Fedora People

Episode 190 - Building a talent "ecosystem"

Posted by Open Source Security Podcast on April 05, 2020 11:45 PM
Josh and Kurt talk about building a talent ecosystem. What starts out as an attempt by Kurt to talk about Canada evolves into a discussion about how talent can evolve, or be purposely grown. Canada's entertainment industry and Unit 8200 are good examples of this.


Show Notes


    Custom WiFi enabled nightlight with ESPHome and Home Assistant

    Posted by Christopher Smart on April 05, 2020 04:29 AM

    I built this custom night light for my kids as a fun little project. It’s pretty easy, so I thought someone else might be inspired to do something similar.

    [Image: Custom WiFi connected nightlight]

    Hardware

    The core hardware is just an ESP8266 module and an Adafruit NeoPixel Ring. I also bought a 240V bunker light and took the guts out to use as the housing, as it looked nice and had a diffuser (you could pick anything that you like).

    [Image: Removing existing components from bunker light]

    While the data pin of the NeoPixel Ring can pretty much connect to any GPIO pin on the ESP, bitbanging can cause flickering. It’s better to use pins 1, 2 or 3 on an ESP8266 where we can use other methods to talk to the device.

    These methods are exposed in ESPHome’s support for NeoPixel.

    • ESP8266_DMA (default for ESP8266, only on pin GPIO3)
    • ESP8266_UART0 (only on pin GPIO1)
    • ESP8266_UART1 (only on pin GPIO2)
    • ESP8266_ASYNC_UART0 (only on pin GPIO1)
    • ESP8266_ASYNC_UART1 (only on pin GPIO2)
    • ESP32_I2S_0 (ESP32 only)
    • ESP32_I2S_1 (default for ESP32)
    • BIT_BANG (can flicker a bit)

    I chose GPIO2 and use the ESP8266_UART1 method in the code below.

    So, first things first, solder up some wires to 5V, GND and GPIO pin 2 on the ESP module. These connect to the 5V, GND and data pins on the NeoPixel Ring respectively.

    It’s not very neat, but I used a hot glue gun to stick the ESP module into the bottom part of the bunker light, and fed the USB cable through for power and data.


    I hot-glued the NeoPixel Ring in-place on the inside of the bunker light, in the centre, shining outwards towards the diffuser.


    The bottom can then go back on and screws hold it in place. I used a hacksaw to create a little slot for the USB cable to sit in and then added hot-glue blobs for feet. All closed up, it looks like this underneath.


    Looks a bit more professional from the top.


    Code using ESPHome

    I flashed the ESP8266 using ESPHome (see my earlier blog post) with this simple YAML config.

    esphome:
      name: nightlight
      build_path: ./builds/nightlight
      platform: ESP8266
      board: huzzah
      esp8266_restore_from_flash: true
    
    wifi:
      ssid: !secret wifi_ssid
      password: !secret wifi_password
    
    # Enable logging
    logger:
    
    # Enable Home Assistant API
    api:
      password: !secret api_password
    
    # Enable over the air updates
    ota:
      password: !secret ota_password
    
    mqtt:
      broker: !secret mqtt_broker
      username: !secret mqtt_username
      password: !secret mqtt_password
      port: !secret mqtt_port
    
    light:
      - platform: neopixelbus
        pin: GPIO2
        method: ESP8266_UART1
        num_leds: 16
        type: GRBW
        name: "Nightlight"
        effects:
          # Customize parameters
          - random:
              name: "Slow Random"
              transition_length: 30s
              update_interval: 30s
          - random:
              name: "Fast Random"
              transition_length: 4s
              update_interval: 5s
          - addressable_rainbow:
              name: Rainbow
              speed: 10
              width: 50
          - addressable_twinkle:
              name: Twinkle Effect
              twinkle_probability: 5%
              progress_interval: 4ms
          - addressable_random_twinkle:
              name: Random Twinkle
              twinkle_probability: 5%
              progress_interval: 32ms
          - addressable_fireworks:
              name: Fireworks
              update_interval: 32ms
              spark_probability: 10%
              use_random_color: false
              fade_out_rate: 120
          - addressable_flicker:
              name: Flicker
    

    The esp8266_restore_from_flash option is useful because the light remembers its state across power cycles: if the light is on and it loses power (someone accidentally switches it off at the wall, say), it comes back on in the same state when power is restored. It does wear the flash out more quickly, however.

    The important settings are the light component with the neopixelbus platform, which is where all the magic happens. We specify which GPIO on the ESP the data line on the NeoPixel Ring is connected to (pin 2 in my case). The method we use needs to match the pin (as discussed above) and in this example is ESP8266_UART1.

    The number of LEDs must match the actual number on the NeoPixel Ring, in my case 16. This is used when talking to the on-chip LED driver and calculating effects, etc.

    Similarly, the LED type is important as it determines which order the colours are in (swap around if colours don’t match). This must match the actual type of NeoPixel Ring; in my case I’m using an RGBW model, which has a separate white LED and uses the order GRBW.

    Finally, you get all sorts of effects for free; you just need to list the ones you want and any options for them. These show up in Home Assistant under the advanced view of the light (screenshot below).

    Now it’s a matter of plugging the ESP module in and flashing it with esphome.

    esphome nightlight.yaml run
    

    Home Assistant

    After a reboot, the device should automatically show up in Home Assistant under Configuration -> Devices. From here you can add it to the Lovelace dashboard and make Automations or Scripts for the device.

    [Image: Nightlight in Home Assistant with automations]

    Adding it to the Lovelace dashboard looks something like this, which lets you easily turn the light on and off and set the brightness.


    You can also get advanced settings for the light, where you can change brightness, colours and apply effects.

    [Image: Nightlight options]

    Effects

    One of the great things about using ESPHome is all the effects which are defined in the YAML file. To apply an effect, choose it from the advanced device view in Home Assistant (as per screenshot above).

    This is what rainbow looks like.

    [Image: Nightlight running Rainbow effect]

    The kids love to select the colours and effects they want!

    Automation

    So, once you have the nightlight showing up in Home Assistant, you can create a simple automation to turn it on at sunset and off at sunrise.

    Go to Configuration -> Automation and add a new one. You can fill in any name you like and there’s an Execute button there when you want to test it.


    The trigger uses the Sun module and runs 10 minutes before sunset.


    I don’t use Conditions, but you could. For example, only do this when someone’s at home.


    The Actions are set to call the homeassistant.turn_on service and specify the device(s). Note this takes a comma-separated list, so if you have more than one nightlight you can handle them all with one automation rule.


    That’s it! You can create another one for sunrise, but instead of calling homeassistant.turn_on just call homeassistant.turn_off and use Sunrise instead of Sunset.
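
    For reference, here is roughly what the same pair of automations looks like written directly in YAML (a sketch only; I’m assuming the light shows up as light.nightlight, so adjust the entity id and offset to suit):

    - alias: "Nightlight on at sunset"
      trigger:
        - platform: sun
          event: sunset
          offset: "-00:10:00"  # 10 minutes before sunset
      action:
        - service: homeassistant.turn_on
          entity_id: light.nightlight

    - alias: "Nightlight off at sunrise"
      trigger:
        - platform: sun
          event: sunrise
      action:
        - service: homeassistant.turn_off
          entity_id: light.nightlight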

    20200404: What I did this week

    Posted by Ankur Sinha "FranciscoD" on April 04, 2020 09:49 AM
    [Image: "Report" written on a blackboard in chalk, on a desk with a calculator, a pen, and clips]

    "Report" by GotCredit on Flickr (CC-BY 2.0 license)

    I seem to be getting busier every day, and the only way I seem to be able to do all the work I want to is by staying extremely organised. An important part of organisation is being able to look back at the work that was done, and whether it could be done better---perhaps prioritised better---to make it all a little more efficient. So, given that I haven't been blogging frequently over the last year, I am trying to track the time I spend working more religiously and write a short weekly work report. It gives me the chance to review the past week.

    So, in the weeks leading up to today, 4th April, this is what I have been up to.

    Dissertation writing

    I'm working on writing up my dissertation, at the end of my PhD now. I've made good progress. I have another chapter or so to go before I should be able to submit. My supervision team has been reviewing and editing individual chapters as I finish them. We are already working on a paper, and the pre-print for this had gone through multiple rounds of review. So the chapters that came from the paper didn't need too much work. Another chapter is based on the reports I wrote for my assessments (back in 2015!), but having it all written down also made it easier to edit and add to the dissertation. I am extremely glad that my supervision team got me to maintain a daily lab journal. It makes writing the dissertation so much easier.

    Research fellow at the Silver Lab at University College London

    I was fortunate enough to secure a research fellow job at the Silver Lab while working on my dissertation. Generally, we PhD candidates submit our dissertations and then spend a few months hunting for positions. From all accounts, it's not meant to be an easy transition to a post-doc position. The number of PhDs entering the job market easily outnumbers the number of available research positions. So, I was also preparing for a few months of unemployment---saving money primarily---while I hunted for positions after I had submitted. In my case as an international migrant/expat/job stealer, my employers would also have to sponsor my visa, and not all employers do that. If I hadn't managed to find a position before my current student visa expired, I'd have to return to India and continue applying from there. That makes it even harder, and all the time one is not working on research, one is already falling behind. So, transitioning on to a position while still writing up was a very very lucky break for me.

    The research group does a lot of experimental work, but it is also where the Open Source Brain project is based. Given my computing background, and experience with FOSS in Fedora, a large component of my role is to work on the development of the Open Source Brain platform, and liaise with MetaCell who do most of the core development. Along with that, I get to work on modelling and other research projects. I was looking to work in a group that included experimentalists. I think that it is important for me to develop as an independent researcher in neuroscience.

    It's early days yet. I am only two weeks in, so I am still settling down. Since I am on a Tier 4 student visa at the moment, I can only work twenty hours a week. That's about two days a week, and is really not enough to get a lot done, especially given that it is meant to include the various meetings that I am to attend. HR are working on getting me my Certificate of Sponsorship (CoS) so that I can apply to transition on to a Tier 2 work visa. It is similar to the Certificate of Acceptance of Studies (CAS) that international students must get from their universities before they can apply for their Tier 4 student visa. Sponsoring organisations can sponsor a limited number of internationals each year, so we're waiting for the new cycle to start this month in April when UCL will be able to sponsor more of us.

    In the two weeks that I've been in, I've been learning the development process that Open Source Brain follows: attending sprint meetings, and the like. I'll write a post dedicated to this in the near future. It is on my to-do list. This week, we diagnosed and fixed an issue with the current deployment. Sendmail was blocking the server because the Docker container didn't have an FQDN as its hostname. The things we learn. If you do find any issues with the Open Source Brain platform, please file an issue (or e-mail me if you prefer).

    There's more work to be done: the deployment is being moved off AWS on to the Google Cloud Platform. It's simple enough, but of course, the deployment needs to be tested and validated before it can be declared live and the AWS instance torn down.

    I've also been learning how the research group works: getting to know the people, what their interests are; attending the group discussions and journal clubs; keeping an eye on various journals to share new science that may be interesting to us. I've already learned quite a bit from the discussion. Given the COVID situation, we're all working from home, so everything is happening over Slack and video calls. I had managed to go to UCL for my induction in my first week. I was the only one there for that particular session. Again, I was lucky, since I expect that was the last induction session before UCL decided to limit access.

    Fedora

    We're nearing the Fedora 32 release, so I worked on the bits remaining for the new CompNeuroFedora lab image. Based on the discussion at the NeuroFedora meeting, I passed all the information needed to set up a page for the lab to the Websites team.

    The general package updates continue. I just updated Brian2 to the new version this morning and pushed an update with a test case. The test case takes one through the tutorial, so if one is looking to learn how to use Brian2, this is a good way of doing it while contributing to NeuroFedora. Another few bugs were fixed and updates pushed too. I've got to work on packaging a few new tools that are on the list.

    On the Fedora-Join front, we've had a few more folks join the community to help out. It was lovely chatting with new folks and discussing where and how they'd like to work with the community. Needless to say, lots of cookie giving has occurred in the IRC channel.

    I've also been thinking about the lack of a process for Community Changes in Fedora. Why isn't there something similar to the Change process that we use for dev changes? I finally filed a ticket with the Council. It's being discussed on the council-discuss mailing list. I've also asked Mindshare and CommOps to weigh in this morning. Please feel free to jump in and discuss how we should go about this. A change process that focusses on community is important, in my book.

    The Git forge discussion continues on the -devel mailing list, so I've been keeping up with that. I would prefer Pagure myself, and I do understand the CPE team's view even if I don't necessarily agree with it.

    Organisation for Computational Neuroscience: OCNS

    Things are quite quiet in OCNS. The Board has been discussing how best to handle the CNS*2020 conference. An announcement will be made once a decision has been reached.

    Review comments

    This turned out a lot longer than I'd expected. As I settle down to a weekly post, it should get shorter. However, I do see that I've got lots going on, and perhaps I do need to be more disciplined when accepting/volunteering for tasks, and prioritising them once I've taken them up. I'm not cookie-licking at the moment, so that's quite good. Still, lots to do.

    High resolution wheel scrolling in the desktop stack

    Posted by Peter Hutterer on April 04, 2020 04:00 AM

    This is a follow up from the kernel support for high-resolution wheel scrolling which you totally forgot about because it's already more than a year in the past and seriously, who has the attention span these days to remember this. Anyway, I finally found time and motivation to pick this up again and I started lining up the pieces like cans, for it only to be shot down by the commentary of strangers on the internet. The Wayland merge request lists the various pieces (libinput, wayland, weston, mutter, gtk and Xwayland) but for the impatient there's also a Fedora 32 COPR. For all you weirdos inexplicably not running the latest Fedora, well, you'll have to compile this yourself, just like I did.

    Let's recap: in v5.0 the kernel added new axes REL_WHEEL_HI_RES and REL_HWHEEL_HI_RES for all devices. On devices that actually support high-resolution wheel scrolling (Logitech and Microsoft mice, primarily) you'll get multiple hires events before the now-legacy REL_WHEEL events. On all other devices those two are in sync.

    Integrating this into the userspace stack was a bit of a mess at first, but I think the solution is good enough, even if it has a rather verbose explanation on how to handle it. The actual patches to integrate ended up being relatively simple. So let's see why it's a bit weird:

    When Wayland started, back in WhoahReallyThatLongAgo, scrolling was specified as the wl_pointer.axis event with a value in pixels. This works fine for touchpads, not so much for wheels. The early versions of Weston decreed that one wheel click was 10 pixels [1] and, perhaps surprisingly, the world kept on turning. When libinput was forked from Weston an early change was that wheel events would have two values - degrees of movement and click count ("discrete steps"). The wayland protocol was expanded to include the discrete steps as wl_pointer.axis_discrete as well. Then backwards compatibility reared its ugly head and Mutter, Weston, GTK all basically said: one discrete step equals 10 pixels so we multiply the discrete value by 10 and, perhaps surprisingly, the world kept on turning.

    This worked out well enough for a few years but with high resolution wheels we ran into a problem. Discrete steps are integers, so we can't send partial values. And the protocol is defined in a way that any tweaking of the behaviour would result in broken clients which, perhaps surprisingly, is a Bad Thing. This led to the current proposal of separate events: LIBINPUT_EVENT_POINTER_AXIS_WHEEL for libinput and, for Wayland, the wl_pointer.axis_v120 event linked to above. These events are (like the kernel events) a parallel event stream to the previous events and effectively replace the LIBINPUT_EVENT_POINTER_AXIS and Wayland wl_pointer.axis/axis_discrete pair for wheel events (not so for touchpad or button scrolling though).

    The compositor side of things is relatively simple: take the events from libinput and pass the hires ones as v120 events and the lowres ones as v120 events with a value of zero. The client side takes the v120 events and uses them over wl_pointer.axis/axis_discrete unless one is zero, in which case you can discard all axis events in that wl_pointer.frame. Since most client implementations already have support for smooth scrolling (because, well, touchpads do exist) it's relatively simple to integrate - the new events just feed into the smooth scrolling code. And since you already have to do wheel emulation for that (because, well, old clients exist) wheel emulation is handled easily too.
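
    To make that frame-handling rule concrete, here is a small self-contained sketch of the client-side logic. This is not the real wl_pointer API: event delivery is simulated with plain function calls and all the names are made up.

    #include <stdio.h>
    #include <stdbool.h>
    #include <string.h>

    struct scroll_state {
        double axis;    /* accumulated wl_pointer.axis (smooth) value this frame */
        int v120;       /* accumulated wl_pointer.axis_v120 value this frame */
        bool have_v120; /* did this frame carry any v120 event? */
    };

    /* one wl_pointer.axis_v120 event: 120 per detent, fractions for hires wheels */
    static void on_axis_v120(struct scroll_state *s, int value)
    {
        s->have_v120 = true;
        s->v120 += value;
    }

    /* one legacy wl_pointer.axis event */
    static void on_axis(struct scroll_state *s, double value)
    {
        s->axis += value;
    }

    /* wl_pointer.frame: pick one of the two parallel streams */
    static void on_frame(struct scroll_state *s)
    {
        if (s->have_v120) {
            /* a zero v120 marks wheel motion already delivered as hires events,
             * so the legacy axis/axis_discrete values for this frame are dropped */
            if (s->v120 != 0)
                printf("scroll by %.2f wheel detents\n", s->v120 / 120.0);
        } else {
            printf("scroll by %.1f (legacy smooth path, e.g. touchpad)\n", s->axis);
        }
        memset(s, 0, sizeof(*s));
    }

    int main(void)
    {
        struct scroll_state s = {0};
        on_axis_v120(&s, 30);  /* a quarter detent from a hires wheel */
        on_axis(&s, 2.5);
        on_frame(&s);
        on_axis(&s, 12.0);     /* touchpad frame: no v120 events at all */
        on_frame(&s);
        return 0;
    }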

    All that to provide buttery smooth [2] wheel scrolling. Or not, if your hardware doesn't support it. In which case, well, live with the warm fuzzy feeling that someone else has a better user experience now. Or soon, anyway.

    [1] with, I suspect, the scientific measurement of "yeah, that seems about alright"
    [2] like butter out of a fridge, so still chunky but at least less so than before

    Onlykey review

    Posted by Kevin Fenzi on April 04, 2020 12:22 AM

    I’ve been busy and remiss in blogging and this is something I was hoping to publish a while ago, so I give you… a review of the onlykey.

    The onlykey is a USB2 connectable hardware security key. Many of you may know yubikeys; this is a competitor to those, with some advantages and disadvantages.

    Ordering was simple, I just ordered one via amazon and had it 2 days later. You can also order directly from the onlykey website.

    <figure class="wp-block-image size-large"></figure>

    Right away you will note that this doesn’t have just one key like the yubikeys do, but instead has 6 of them. Additionally, it’s got a colored LED underneath, which shines green when it’s unlocked, flashes red when you enter an incorrect pin, etc. The idea here is that the key will be completely locked and useless to others until it’s plugged in and a correct pin is entered (yes, there’s more than one correct pin 🙂). This key only comes in USB2 form as they say USB-C would be too fragile. You can of course use a USB2 -> USB-C adapter.

    There are several ways to manage the key:

    • A chrom(imum) “app”. (Although chrome is discontinuing these)
    • A “snap”
    • A debian .deb package

    So, that’s not great for Fedora. I tried to get the snap working, but it failed. I tried to use the chrome app, but that’s how I found that they are dropping those, so I went with the debian package unpacked and just using the application from there. The app is open source if anyone wants to package it up: https://github.com/trustcrypto/OnlyKey-App (It’s npm based)

    Right off the bat I hit an issue. My onlykey had old firmware and it was old enough that the app was unable to update it, so I had to monkey around with “Teensy loader” to upgrade the firmware. At one point the key stopped even lighting up and I asked for help on the onlykey discussion group. They had someone answer me pretty quickly and I found that I wasn’t properly shorting the two contacts on each end of the key to put it in ‘upload firmware mode’. After I did that I managed to update to the new firmware and everything was smooth sailing after that. I really hope they have all their existing stock updated now so no one else should have to go through this. 🙂 Just look at these little contacts you have to short together while pressing the upload button on Teensy Loader! I managed to do it with a hanger from an xmas tree ornament finally:


    There’s a sort of soft rubber case around the key; you can get all kinds of colors (I just stuck with black). It also comes with a handy little carabiner to attach it to your keychain or whatever.

    So, once you have the firmware somewhat up to date, you can run the app. It will also update firmware as long as it’s not too old. The firmware is open source: https://github.com/trustcrypto/OnlyKey-Firmware

    On your first run (or if you factory wipe it), you have to do a bit of setup. You can enter 2 profile pins (sequences of buttons). They suggest that this might be ‘work’ and ‘home’, but you could use them for whatever you like. You can also enter a ‘self destruct’ profile pin, which wipes back to factory settings if you enter it. You can also tell it to do this if someone enters the wrong pin 10 times, but it will flash red and stop taking input after 3 failed pins. So to wipe it this way you have to enter 3 wrong pins, remove, insert, 3 more wrong pins, remove, insert 3 more wrong pins, remove, insert, 1 more wrong pin. You can also load a firmware called the “International Travel Edition” that has no encryption at all (it’s only protected by the pin).

    Once you have your profiles setup you can configure what you want in the slots. There’s actually 12 slots because (just like yubikeys) there is long press and short press on each button. You can assign whatever you like to those 24 slots (12 for each profile). You can do TOTP, U2F, Yubikey HOTP, username/password, tabs or returns, all kinds of things:


    You can do encrypted backups of the contents of the key and of course restore them. There’s also some misc settings like how bright the LED is or how fast the keyboard types or what keyboard layout it uses. There’s some integration with keybase.io to do encrypted files/messages transfer.

    Finally, there are some advanced prefs about yubikeys and U2F tokens. I was confused by these at devconf, but the manual explains it (https://docs.crp.to/usersguide.html#Yubico-one-time-password): basically you have to run yubikey-personalize and have it generate the “Public Identity, Private Identity, and Secret Key” which you then enter into the onlykey app. I can only assume they couldn’t just do this due to legal concerns. The U2F prefs are there because:

    For the attestation certificates OnlyKey comes with a default attestation certificate and signing key and also allows users or enterprises to import their own attestation certificate/key. This feature allows organizations to only permit FIDO2 keys issued by the organization to be used. Importing attestation certificates and signing keys can be done in the OnlyKey app.

    The manual is pretty easy to read and covers all this better than I probably can. So, on to the actual reviewing:

    First the good:

    • Open source firmware and app!
    • Nice to know that if someone stole it, they would not be able to access anything using it.
    • Seems to work fine for U2F/webauthn, TOTP, HOTP and user/pass.
    • The LED light is nice to know what state it’s in.
    • I like being able to do encrypted backups.
    • Having a bunch more slots is nice (If you forget which is which, you can press and hold down the 2 button for 5+ seconds and it will print out the labels you gave the slots).

    Now the not so good:

    • You can set a timeout where the key will lock after X minutes. It works and the LED goes off, but the application (if you have it open) will happily think it’s still talking to the unlocked key until you try to do something, at which point it errors.
    • You can sort of use it for keeping your ssh private key, but not easily. You have to import a key (using only very specific ecc curves) and then use a ‘onlykey-agent’ instead of normal ssh agent. This is a fork of another project and hasn’t been updated from that fork for like 4 years, which is really not encouraging. 🙁
    • It’s a bit odd trying to type in your pin at say a coffee shop or table with a bunch of co-workers around. It seems like you can’t really hide what you are pressing. Still, it is more secure than a token that is always unlocked.

    For me personally, the lack of a nice way to use it for storing my ssh private key is somewhat of a deal breaker. I really really don’t want that key to get out (even though it is passphrase protected). My current yubikey was able to generate it on key and keep it always stored there. If I didn’t care so much about ssh keys I might well move to using the onlykey day to day. If I traveled much I definitely would look at using it while traveling.

    Linux in the Time of COVID-19

    Posted by Michel Alexandre Salim on April 04, 2020 12:00 AM

    Can’t believe I managed to spend more than a year without updating this blog! In hindsight, perhaps sticking to open-source communication channels should have been in my 2019 resolutions…

    I’ve been meaning to write a comprehensive status update, but perhaps starting small and updating more often is the way to go. So here goes…

    On WFH and moving

    Thanks to the #COVID19 situation, everyone at Facebook has been working from home for a few weeks now. It’s a blessing and a curse – my wife and I were in the process of relocating, so while not having to commute means I have more time to pack, it also means I was working from a tiny apartment where everything was in the process of being boxed up!

    The move itself was, luckily, relatively uneventful. If you don’t count finding out the day before, just as we were about to check in, that the airline had canceled all flights on our departure date, and having to scramble to rebook and fly out on the same day. On the bright side, the airport was virtually deserted, everyone was doing the physical distancing thing, and our household goods and vehicle were both also delivered early. Even the cable guy showed up early to set up our Internet access!

    I still don’t have my home office set up yet - another week for that - but my blood pressure monitor has been much happier with me this week.

    On Fedora, CentOS, and that conference talk

    My team had a presentation accepted for SCaLE 18x in early March, but unfortunately had to call it off - we were already on site in Pasadena, but between our company’s updated travel guidance, and not wanting to be sick (or worse, infecting others) just before an expensive house move, I figured caution was warranted. The talk was going to be an update on our Flock 2019 talk (video here), so since the material has already been approved for public use, here goes.

    The upgrade firehose problem

    Note: This is of increasing relevance as everyone is working from home, and problems that can be easily fixed on site – worse comes to worst, you can just reimage a machine to a known good state – can leave an employee without access to the tools they need to perform their job.

    I’ve quipped at work that “move fast, break things” (which to be fair we abandoned as a slogan years ago) should really be “move slow, fix things” - not my phrasing, I’m borrowing it from someone on Mastodon but will need to dig up my old toots to see who I got it from.

    Sadly some Linux components are just updated too fast and do not get enough quality assurance. Not throwing anyone under the bus here; as long as hardware manufacturers don’t consider Linux a primary platform, which is the case on desktops and laptops, there’ll never be enough eyes to catch regressions. But it does seem to have gotten worse recently – in short order we got hit by:

    • trackpad issues, affecting some but not all new ThinkPad T490s (because of course within the same SKU you might still get different components)
    • Intel video lock-ups in kernel 5.4
    • ThinkPads with Nvidia GPUs locking up if you connect an external monitor, again on kernel 5.4
    • built-in microphone not recognized if you have an updated ALSA library but the latest stable Pulseaudio
    • a pre-release Pulseaudio pushed to Fedora that fixed that issue but broke Bluetooth audio on some hardware

    Ideally we would have at least one Fedora tester for each hardware type in our fleet, but that’s not going to be practical in the short term. So what to do instead?

    Move slow

    Rather than consuming the latest upstream kernel within roughly a month of it coming out (when Fedora releases its build), why not use the CentOS kernel? It’s stable (only critical fixes are backported), and since CentOS 8 is relatively new it happens to be the newest kernel officially supported by Nvidia anyway.

    For Chef users, we open sourced cpe_kernel_channel, our cookbook for opting to use the CentOS kernel instead of the regular Fedora kernel.

    The next obvious step is to run CentOS itself rather than Fedora. Happily CentOS 8 runs well enough even on most recent ThinkPad laptops (let’s forget about that Yoga with a suspend issue). The one notable exception is Bluetooth audio support - bouncing bluetooth and pulseaudio repeatedly to get A2DP working is nobody’s idea of fun. We might need to ship backported Fedora components to address this (ironic, yes). If you see recent commits to our IT-CPE repo adding CentOS support, that’s why.

    Fix things

    Apart from moving slower on updates to reduce the chance of regressions being rolled out, the flip side of the coin is being able to quickly revert such breakages.

    Switching to Btrfs would help here - being able to snapshot before every Chef run, and rolling back in case a bad change is deployed, would be a huge time saver. There’s some work to do in kickstarting a system with both LUKS and Btrfs, but that’s not an intractable problem; more interesting would be getting Btrfs re-added to CentOS.
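
    As a rough sketch of that idea (assuming the root filesystem is a Btrfs subvolume and that snapshots live under /.snapshots; this is an illustration, not our actual tooling):

    sudo btrfs subvolume snapshot -r / /.snapshots/pre-chef-$(date +%Y%m%d-%H%M%S)
    sudo chef-client || echo "bad run: restore or boot the last good snapshot to roll back"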

    What’s next?

    One resolution I failed to keep is to learn a new programming language - being stuck using Python and Ruby at work starts to feel constraining after a while, and somehow Go never really appealed to me personally. Hoping to work on my Rust personal project in the next few weeks!

    Automating a Custom Install of Fedora CoreOS

    Posted by Dusty Mabe on April 04, 2020 12:00 AM
    Introduction

    With Fedora CoreOS we currently have two ways to do a bare metal install and get our disk image onto the spinning rust of a “non-cloud” server. You can use coreos.inst* kernel arguments to automate the install, or you can boot the Live ISO and get a bash prompt where you can then run coreos-installer directly after doing whatever hardware/network discovery that is necessary. This means you either do a simple automated install where you provide all of the information up front or you are stuck doing something interactive.
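
    To give a rough idea of the first path, the installer image's kernel command line carries arguments along these lines (illustrative only: the URL is a placeholder and the exact set of coreos.inst.* arguments depends on the Fedora CoreOS release):

    coreos.inst.install_dev=sda coreos.inst.ignition_url=https://example.com/config.ign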

    Turning the sdbm hash method into an hmac version

    Posted by Jon Chiappetta on April 03, 2020 11:33 PM

    I increased the number of hash rounds from 3 to 4 also:

    import os,sys
    
    def tt(ll):
    	return (ll & 0xffffffff)
    
    def sdbm(inpt, leng):
    	hshs = 0
    	for x in range(0, leng):
    		hshs = tt(ord(inpt[x]) + tt(hshs << 6) + tt(hshs << 16) - hshs)
    	return hshs
    
    def sdbm_hash(inpt, leng):
    	mixs = [1, 6, 16, 13, 33, 27, 67, 55, 123]
    	hshs = [0, 0, 0, 0, 0, 0, 0, 0, 0]
    	more = 0
    	rnds = len(hshs)
    	for z in range(0, rnds*4):
    		hshs[0] = tt((hshs[0] + mixs[z%rnds]) * mixs[z%rnds])
    		for x in range(0, leng):
    			hshs[0] = (hshs[0] & 0xffff)
    			hshs[0] = tt(ord(inpt[x]) + (hshs[0] << 6) + (hshs[0] << 16) - hshs[0])
    			more = (more ^ (hshs[rnds-1] >> 16))
    			for y in range(rnds-1, 0, -1):
    				hshs[y] = tt((hshs[y] << 16) | (hshs[y-1] >> 16))
    				hshs[y-1] = (hshs[y-1] & 0xffff)
    			hshs[0] = (hshs[0] ^ more)
    	o = ""
    	for h in hshs[1:]:
    		for x in range(3, -1, -1):
    			o += chr((h>>(x*8))&0xff)
    	return o
    
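    # Standard HMAC construction on top of sdbm_hash: pad (or hash) the key to the
    # 64-byte block size, XOR it with the inner and outer pads, then compute
    # H(okey + H(ikey + message)).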
    def sdbm_hmac(mesg, mlen, skey, klen):
    	inner_pad = 0x36 ; outer_pad = 0x5C
    	block_size = 64 ; ikey = "" ; okey = ""
    	tkey = skey ; tlen = klen
    	if (klen > block_size):
    		tkey = sdbm_hash(skey, klen)
    		tlen = len(tkey)
    	for x in range(0, block_size):
    		c = 0
    		if (x < tlen):
    			c = ord(tkey[x])
    		ikey += chr(inner_pad ^ c)
    		okey += chr(outer_pad ^ c)
    	ihsh = sdbm_hash(ikey+mesg, block_size+mlen)
    	ilen = len(ihsh)
    	ohsh = sdbm_hash(okey+ihsh, block_size+ilen)
    	return ohsh
    
    def stoh(s):
    	return "".join([hex(ord(c))[2:].rjust(2, '0') for c in s])
    
    m = sys.argv[1] ; l = len(m)
    k = sys.argv[2] ; n = len(k)
    print(stoh(sdbm_hmac(m, l, k, n)), sdbm(m, l), m, l, sdbm(k, n), k, n)
    
    
    $ python hash.py "b" "b"
    ('eaf66aa0c49321e49e434648c9f62ae9b6216066e44a2d195386094f8bc4de45', 98, 'b', 1, 98, 'b', 1)
    
    $ python hash.py "b" "c"
    ('f99c3a84fe4f13595b36d0076e01d6834f3469d188c4c657e30eb69cc2511511', 98, 'b', 1, 99, 'c', 1)
    
    $ python hash.py "c" "b"
    ('aeba37d948452ed646dc1b636d0613a5e5e0fdb617a06e1c0ed57f188efff1af', 99, 'c', 1, 98, 'b', 1)
    
    $ python hash.py "c" "c"
    ('6b4670a1140f22e417e8f252e1a01f3402aa10497c6f887a4a3692f70a409b2b', 99, 'c', 1, 99, 'c', 1)
    
    
    $ ./hash "this is a test" "b"
    [7382434f9620f4d903d320e6a1f0d711288ae80fb0788ab3e3ed7e2243b1e61f] [1655286693] [this is a test] [14] [98] [b] [1]
    
    $ ./hash "this is a test" "c"
    [0340ae9e91f834e0d43d49916b94f9a04360d4a65b5012f7bb2584a83366347b] [1655286693] [this is a test] [14] [99] [c] [1]
    
    $ ./hash "this is a tesu" "b"
    [a8be21bfc55ac784db79a71ec17d8bfa38b122f856153ca696a3f5dbdce512e2] [1655286694] [this is a tesu] [14] [98] [b] [1]
    
    $ ./hash "this is a tesu" "c"
    [9180b7ef2cddc71530ccf20b351d0a6473c9cd14f9bdac24411935f1160a7a4f] [1655286694] [this is a tesu] [14] [99] [c] [1]
    
    #include <stdio.h>
    #include <string.h>
    #include <strings.h> /* for bcopy() and bzero() */
    
    unsigned int sdbm(char *inpt, int leng) {
    	unsigned int hshs = 0;
    	for (int x = 0; x < leng; ++x) {
    		hshs = (inpt[x] + (hshs << 6) + (hshs << 16) - hshs);
    	}
    	return hshs;
    }
    
    void sdbm_hash(unsigned char *outp, unsigned char *inpt, int leng) {
    	unsigned int mixs[] = {1, 6, 16, 13, 33, 27, 67, 55, 123};
    	unsigned int hshs[] = {0, 0, 0, 0, 0, 0, 0, 0, 0};
    	unsigned int more = 0;
    	int rnds = 9;
    	for (int z = 0; z < rnds*4; ++z) {
    		hshs[0] = ((hshs[0] + mixs[z%rnds]) * mixs[z%rnds]);
    		for (int x = 0; x < leng; ++x) {
    			hshs[0] = (hshs[0] & 0xffff);
    			hshs[0] = (inpt[x] + (hshs[0] << 6) + (hshs[0] << 16) - hshs[0]);
    			more = (more ^ (hshs[rnds-1] >> 16));
    			for (int y = rnds-1; y > 0; --y) {
    				hshs[y] = ((hshs[y] << 16) | (hshs[y-1] >> 16));
    				hshs[y-1] = (hshs[y-1] & 0xffff);
    			}
    			hshs[0] = (hshs[0] ^ more);
    		}
    	}
    	for (int x = 1, y = 0; x < rnds; ++x) {
    		for (int z = 3; z > -1; --z, ++y) { 
    			outp[y] = ((hshs[x] >> (z * 8)) & 0xff);
    		}
    	}
    }
    
    void sdbm_hmac(unsigned char *outp, unsigned char *mesg, int mlen, unsigned char *skey, int klen) {
    	int block_size = 64, hash_size = 32;
    	unsigned char inner_pad = 0x36, outer_pad = 0x5C;
    	unsigned char ikey[block_size], okey[block_size], ihsh[hash_size], thsh[hash_size];
    	unsigned char buff[block_size+mlen+hash_size];
    	unsigned char *tkey = skey; int tlen = klen;
    	if (klen > block_size) {
    		sdbm_hash(thsh, skey, klen);
    		tkey = thsh; tlen = hash_size;
    	}
    	for (int x = 0; x < block_size; ++x) {
    		unsigned char padc = 0;
    		if (x < tlen) { padc = tkey[x]; }
    		ikey[x] = (inner_pad ^ padc);
    		okey[x] = (outer_pad ^ padc);
    	}
    	bcopy(ikey, buff, block_size);
    	bcopy(mesg, buff+block_size, mlen);
    	sdbm_hash(ihsh, buff, block_size+mlen);
    	bcopy(okey, buff, block_size);
    	bcopy(ihsh, buff+block_size, hash_size);
    	sdbm_hash(outp, buff, block_size+hash_size);
    }
    
    void stoh(char *outp, unsigned char *inpt) {
    	char *hexs = "0123456789abcdef";
    	for (int x = 0, y = 0; x < 32; ++x) {
    		outp[y] = hexs[(inpt[x] >> 4) & 0xf]; ++y;
    		outp[y] = hexs[inpt[x] & 0xf]; ++y;
    	}
    }
    
    int main(int argc, char *argv[]) {
    	char *m = argv[1]; int l = strlen(m);
    	char *k = argv[2]; int n = strlen(k);
    	unsigned char h[32]; char o[65]; bzero(o, 65);
    	sdbm_hmac(h, (unsigned char *)m, l, (unsigned char *)k, n); stoh(o, h);
    	printf("[%s] [%u] [%s] [%d] [%u] [%s] [%d]\n", o, sdbm(m, l), m, l, sdbm(k, n), k, n);
    	return 0;
    }
    
    

    Kiwi TCMS 8.2

    Posted by Kiwi TCMS on April 03, 2020 07:25 PM

    We're happy to announce Kiwi TCMS version 8.2!

    IMPORTANT: this is a small release which updates 3rd party libraries, provides minor improvements, minor API changes and some new translations. You can explore everything at https://public.tenant.kiwitcms.org!

    Supported upgrade paths:

    5.3   (or older) -> 5.3.1
    5.3.1 (or newer) -> 6.0.1
    6.0.1            -> 6.1
    6.1              -> 6.1.1
    6.1.1            -> 6.2 (or newer)
    

    Docker images:

    kiwitcms/kiwi       latest  7c1b947b9a43    561 MB
    kiwitcms/kiwi       6.2     7870085ad415    957 MB
    kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955 MB
    kiwitcms/kiwi       6.1     b559123d25b0    970 MB
    kiwitcms/kiwi       6.0.1   87b24d94197d    970 MB
    kiwitcms/kiwi       5.3.1   a420465852be    976 MB
    

    Changes since Kiwi TCMS 8.1

    Improvements

    • Update bleach from 3.1.1 to 3.1.4
    • Update django from 3.0.4 to 3.0.5
    • Update django-colorfield from 0.2.1 to 0.2.2
    • Update pygithub from 1.46 to 1.47
    • Update python-gitlab from 2.0.1 to 2.1.2
    • Update marked(js) to version 0.8.2
    • Change default MariaDB charset and collation to utf8mb4. Will only affect new installations. Closes Issue #327
    • Document TCMS_PLAN_ID ENV variable supported by automation framework plugins
    • Test case Search page now allows searching for records containing the specified text. Closes #1209 @Schwarzkrieger
    • Provide ../site-packages/tcms_settings_dir/ when installing Kiwi TCMS which is an empty pkgutil-style namespace where other packages can drop their configuration
    • Hide empty values in Execution trends chart tooltips

    API

    • Remove Auth.login_krbv() method
    • Method TestRun.update() will now accept %Y-%m-%d %H:%M:%S timestamp format. The previous format %Y-%m-%d is also supported
    • Method TestExecution.create() now defaults to first neutral status instead of searching for the hard-coded IDLE. That means newly created test executions which do not specify status will be created with the first neutral status found in the database

    Refactoring

    • Fix pylint errors. Closes Issue #1510 (@cmbahadir)
    • Add tests for TestRunAdmin.delete_view() (Mariyan Garvanski)
    • Revert "[l10n] Add Serializer class which returns untranslated models"

    social-auth-kerberos v0.2.4

    A new version of our Kerberos authentication backend has been released as well. For more info check https://github.com/kiwitcms/python-social-auth-kerberos#changelog. This version is included with Kiwi TCMS Enterprise.

    tcms-api v8.2.0

    New version of our tcms-api library has been released as well. Notable changes include the bug-fixes for Kerberos support and the ability to use Kerberos on Windows. For more information see https://github.com/kiwitcms/tcms-api/#changelog.

    tap-plugin & junit.xml-plugin v8.2

    Both plugins are now using the latest version of tcms-api library and include additional improvements like being able to specify existing TestPlan and setting stop_date for the automated TestRun. For more information see https://github.com/kiwitcms/tap-plugin#changelog and https://github.com/kiwitcms/junit.xml-plugin/#changelog

    How to upgrade

    Backup first! If you are using Kiwi TCMS as a Docker container then:

    cd path/containing/docker-compose/
    docker-compose down
    docker pull kiwitcms/kiwi
    docker pull centos/mariadb-103-centos7
    docker-compose up -d
    docker exec -it kiwi_web /Kiwi/manage.py migrate
    

    WHERE: docker-compose.yml has been updated from your private git repository! The file provided in our GitHub repository is an example. Not for production use!

    WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

    # starting from an older Kiwi TCMS version
    docker-compose down
    docker pull kiwitcms/kiwi:<next_upgrade_version>
    edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
    docker-compose up -d
    docker exec -it kiwi_web /Kiwi/manage.py migrate
    # repeat until you have reached latest
    

    Happy testing!

    Take back your dotfiles with Chezmoi

    Posted by Fedora Magazine on April 03, 2020 08:00 AM

    In Linux, dotfiles are hidden text files that are used to store configuration settings for everything from simple tools such as Bash and Git to more complex applications like i3 or VSCode.

    Most of these files are contained in the ~/.config directory or right in the home directory. Editing these files allows you to customize applications beyond what a settings menu may provide, and they tend to be portable across devices and even other Linux distributions. But one talking point across the Linux enthusiast community is how to manage these dotfiles and how to share them.

    We will be showcasing a tool called Chezmoi that does this task a little differently from the others.

    The history of dotfile management

    If you search GitHub for dotfiles, what you will see are over 100k repositories with one goal: storing people’s dotfiles in a shareable and repeatable manner. However, other than using Git, they all store their files differently.

    While Git solves code management problems in a way that also translates to config file management, it does not solve how to handle differences between distributions, roles (such as home vs. work computers), secrets management, and per-device configuration.

    Because of this, many users decide to craft their own solutions, and the community has responded with multiple answers over the years. This article will briefly cover some of the solutions that have been created.

    Experiment in an isolated environment

    Do you want to quickly try the solutions below in a contained environment? Run:

    $ podman run --rm -it fedora

    … to create a Fedora container to try the applications in. This container will automatically delete itself when you exit the shell.

    The install problem

    If you store your dotfiles in a Git repository, you will want to make it easy for your changes to be applied automatically inside your home directory. At first glance, the easiest way to do this is to use a symlink, such as ln -s ~/.dotfiles/bashrc ~/.bashrc. This allows your changes to take place instantly when your repository is updated.

    The problem with symlinks is that managing them can be a chore. Stow and RCM (covered here on Fedora Magazine) can help you manage those, but these are not seamless solutions. Files that are private will need to be modified and chmoded properly after download. If you revamp your dotfiles on one system and download your repository to another system, you may get conflicts that require troubleshooting.

    Another solution to this problem is writing your own install script. This is the most flexible option, but has the tradeoff of requiring more time to build a custom solution.

    The secrets problem

    Git is designed to track changes. If you store a secret such as a password or an API key in your git repository, you will have a difficult time and will need to rewrite your git history to remove that secret. If your repository is public, there is no taking the secret back once someone else has downloaded your repository. This problem alone will prevent many individuals from sharing their dotfiles with the public world.

    The multi-device config problem

    The problem is not pulling your config onto multiple devices; the problem is when you have multiple devices that require different configuration. Most individuals handle this either by having different folders or by using different forks, which makes it difficult to share configs across different devices and role sets.

    How Chezmoi works

    Chezmoi is a tool to manage your dotfiles with the above problems in mind. It doesn’t blindly copy or symlink files from your repository; instead, it acts more like a template engine that generates your dotfiles based on system variables, templates, secret managers, and Chezmoi’s own config file.

    Getting Started with Chezmoi

    Currently Chezmoi is not in the default repositories. You can install the current version of Chezmoi as of this writing with the following command.

    $ sudo dnf install https://github.com/twpayne/chezmoi/releases/download/v1.7.17/chezmoi-1.7.17-x86_64.rpm

    This will install the pre-packaged RPM to your system.

    Let’s go ahead and create your repository using:

    $ chezmoi init

    It will create your new repository in ~/.local/share/chezmoi/. You can easily cd to this directory by using:

    $ chezmoi cd

    Let’s add our first file:

    $ chezmoi add ~/.bashrc

    … to add your bashrc file to your chezmoi repository.

    Note: if your bashrc file is actually a symlink, you will need to add the -f flag to follow it and read the contents of the real file.

    You can now edit this file using:

    $ chezmoi edit ~/.bashrc

    Now let’s add a private file, meaning a file with permissions 600 or similar. I have a file at ~/.ssh/config that I would like to add using:

    $ chezmoi add ~/.ssh/config

    Chezmoi uses special filename prefixes to keep track of which files are hidden and which are private, to work around Git’s limitations. Run the following command to see them:

    $ chezmoi cd
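
    Inside the source directory, the file names carry those prefixes: dot_ marks a dotfile and private_ marks a file or directory whose permissions should be kept restricted. A listing will look roughly like this (the exact names depend on what you have added and on your Chezmoi version):

    $ ls
    dot_bashrc  private_dot_ssh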

    Do note that files that are marked as private are not actually private; they are still saved as plain text in your git repo. More on that later.

    You can apply any changes by using:

    $ chezmoi apply

    and inspect what is different by using

    $ chezmoi diff

    Using variables and templates

    To export all of your data Chezmoi can gather, run:

    $ chezmoi data

    Most of these are information about your username, arch, hostname, os type and os name. But you can also add your own variables.

    Go ahead and run:

    $ chezmoi edit-config

    … and input the following:

    [data]
             email = "fedorauser@example.com"
             name = "Fedora Mcdora"

    Save your file and run chezmoi data again. You will see at the bottom that your email and name are now added. You can now use these in templates with Chezmoi. Run:

    $ chezmoi add  -T --autotemplate ~/.gitconfig

    … to add your gitconfig as a template into Chezmoi. If Chezmoi successfully infers the template, you should get the following:

    [user]
             email = "{{ .email }}"
             name = "{{ .name }}"

    If it does not, you can change the file to this instead.

    Inspect your file with:

    $ chezmoi edit ~/.gitconfig

    Then use:

    $ chezmoi cat ~/.gitconfig

    … to see what chezmoi will generate for this file. My generated example is below:

    [root@a6e273a8d010 ~]# chezmoi cat ~/.gitconfig 
     [user]
         email = "fedorauser@example.com"
         name = "Fedora Mcdora"
     [root@a6e273a8d010 ~]# 

    It will generate a file filled in with the variables from our Chezmoi config.
    You can also use the variables to perform simple logic statements. One example is:

    {{- if eq .chezmoi.hostname "fsteel" }}
    # this will only be included if the host name is equal to "fsteel"
    {{- end }}

    Do note that for this to work, the file has to be a template. You can check this by seeing whether the file has “.tmpl” appended to its name in the source directory (chezmoi cd), or by re-adding the file using the -T option.

    Keeping secrets… secret

    To troubleshoot your setup, use the following command.

    $ chezmoi doctor 

    What is important here is that it also shows you the password managers it supports.

    [root@a6e273a8d010 ~]# chezmoi doctor
     warning: version dev
          ok: runtime.GOOS linux, runtime.GOARCH amd64
          ok: /root/.local/share/chezmoi (source directory, perm 700)
          ok: /root (destination directory, perm 550)
          ok: /root/.config/chezmoi/chezmoi.toml (configuration file)
          ok: /bin/bash (shell)
          ok: /usr/bin/vi (editor)
     warning: vimdiff (merge command, not found)
          ok: /usr/bin/git (source VCS command, version 2.25.1)
          ok: /usr/bin/gpg (GnuPG, version 2.2.18)
     warning: op (1Password CLI, not found)
     warning: bw (Bitwarden CLI, not found)
     warning: gopass (gopass CLI, not found)
     warning: keepassxc-cli (KeePassXC CLI, not found)
     warning: lpass (LastPass CLI, not found)
     warning: pass (pass CLI, not found)
     warning: vault (Vault CLI, not found)
     [root@a6e273a8d010 ~]# 

    You can use any of these clients, a generic client, or your system's keyring.

    For GPG, you will need to add the following to your config using:

    $ chezmoi edit-config
    [gpg]
       recipient = "<Your GPG key's recipient>"

    You can use:

    $ chezmoi add --encrypt

    … to add files; these will be encrypted in your source repository and not exposed to the world as plain text. Chezmoi will automatically decrypt them when applying.

    We can also use secrets in templates. For example, you can use a secret token stored in pass (covered previously on Fedora Magazine). Go ahead and generate your secret.

    In this example, it’s called “githubtoken”:

    rwaltr@fsteel:~] $ pass ls
     Password Store
     └── githubtoken
     [rwaltr@fsteel:~] $ 

    Next, edit your template, such as the .gitconfig we created earlier, and add this line:

    token = {{ pass "githubtoken" }}

    Then let's inspect it using:

    $ chezmoi cat ~/.gitconfig
    [rwaltr@fsteel:~] $ chezmoi cat ~/.gitconfig 
     This is Git's per-user configuration file.
     [user]
               name = Ryan Walter
               email = rwalt@pm.me
               token = mysecrettoken
     [rwaltr@fsteel:~] $ 

    Now that your secrets are properly secured in your password manager, your config can be shared publicly without risk!

    Final notes

    This is only scratching the surface. Please check out Chezmoi's website for more information. The author also keeps his dotfiles public if you are looking for more examples of how to use Chezmoi.

    Experiment – Turning the SDBM mixing algorithm into a hash function

    Posted by Jon Chiappetta on April 03, 2020 04:47 AM

    From this page: http://www.cse.yorku.ca/~oz/hash.html

    import os,sys
    
    def tt(ll):
    	return (ll & 0xffffffff)
    
    def sdbm(inpt, leng):
    	hshs = 0
    	for x in range(0, leng):
    		hshs = tt(ord(inpt[x]) + tt(hshs << 6) + tt(hshs << 16) - hshs)
    	return hshs
    
    def sdbm_hash(inpt, leng):
    	mixs = [1, 6, 16, 13, 33, 27, 67, 55, 123]
    	hshs = [0, 0, 0, 0, 0, 0, 0, 0, 0]
    	more = 0
    	rnds = len(hshs)
    	for z in range(0, rnds*3):
    		hshs[0] = tt((hshs[0] + mixs[z%rnds]) * mixs[z%rnds])
    		for x in range(0, leng):
    			hshs[0] = (hshs[0] & 0xffff)
    			hshs[0] = tt(ord(inpt[x]) + (hshs[0] << 6) + (hshs[0] << 16) - hshs[0])
    			more = (more ^ (hshs[rnds-1] >> 16))
    			for y in range(rnds-1, 0, -1):
    				hshs[y] = tt((hshs[y] << 16) | (hshs[y-1] >> 16))
    				hshs[y-1] = (hshs[y-1] & 0xffff)
    			hshs[0] = (hshs[0] ^ more)
    	o = ""
    	for h in hshs[1:]:
    		for x in range(3, -1, -1):
    			o += chr((h>>(x*8))&0xff)
    	return o
    
    def stoh(s):
    	return "".join([hex(ord(c))[2:].rjust(2, '0') for c in s])
    
    m = sys.argv[1] ; l = len(m)
    print(stoh(sdbm_hash(m, l)), sdbm(m, l), m, l)
    
    
    $ python hash.py "b" 
    ('197f907989061db579beba252025bfbf7f4848a47da0e5a9bc8eb73c6fdb8904', 98, 'b', 1)
    
    $ python hash.py "c" 
    ('f39b31f66506067376f8f88f620e731cd4292a6aeb71da43d297fffc3f5f92c4', 99, 'c', 1)
    
    $ python hash.py "bb" 
    ('d427d6ba1edaabab2ba3c6ddd70ccf3d0ceff23fcd6de3d2c0af7d0ac95dd62d', 6428800, 'bb', 2)
    
    $ python hash.py "bc" 
    ('4bfdd409d3de82913ba7405894f444d8858a539abe98e312d64474abf6b68e3a', 6428801, 'bc', 2)
    
    $ ./hash "this is a test"
    [dc053c4426ce396ace93a2817b388c465bb5188336d5f8816384a1cef0a2039d] [1655286693] [this is a test] [14]
    
    $ ./hash "this is a tesu"
    [cfbb3301ae39658cf1a1874dd48d8dbf9da4aee0831f781e30066be5f67eb819] [1655286694] [this is a tesu] [14]
    
    #include <stdio.h>
    #include <string.h>
    #include <strings.h> /* for bzero() */
    
    unsigned int sdbm(char *inpt, int leng) {
    	unsigned int hshs = 0;
    	for (int x = 0; x < leng; ++x) {
    		hshs = (inpt[x] + (hshs << 6) + (hshs << 16) - hshs);
    	}
    	return hshs;
    }
    
    void sdbm_hash(unsigned char *outp, unsigned char *inpt, int leng) {
    	unsigned int mixs[] = {1, 6, 16, 13, 33, 27, 67, 55, 123};
    	unsigned int hshs[] = {0, 0, 0, 0, 0, 0, 0, 0, 0};
    	unsigned int more = 0;
    	int rnds = 9;
    	for (int z = 0; z < rnds*3; ++z) {
    		hshs[0] = ((hshs[0] + mixs[z%rnds]) * mixs[z%rnds]);
    		for (int x = 0; x < leng; ++x) {
    			hshs[0] = (hshs[0] & 0xffff);
    			hshs[0] = (inpt[x] + (hshs[0] << 6) + (hshs[0] << 16) - hshs[0]);
    			more = (more ^ (hshs[rnds-1] >> 16));
    			for (int y = rnds-1; y > 0; --y) {
    				hshs[y] = ((hshs[y] << 16) | (hshs[y-1] >> 16));
    				hshs[y-1] = (hshs[y-1] & 0xffff);
    			}
    			hshs[0] = (hshs[0] ^ more);
    		}
    	}
    	for (int x = 1, y = 0; x < rnds; ++x) {
    		for (int z = 3; z > -1; --z, ++y) { 
    			outp[y] = ((hshs[x] >> (z * 8)) & 0xff);
    		}
    	}
    }
    
    void stoh(char *outp, unsigned char *inpt) {
    	char *hexs = "0123456789abcdef";
    	for (int x = 0, y = 0; x < 32; ++x) {
    		outp[y] = hexs[(inpt[x] >> 4) & 0xf]; ++y;
    		outp[y] = hexs[inpt[x] & 0xf]; ++y;
    	}
    }
    
    int main(int argc, char *argv[]) {
    	char *m = argv[1]; int l = strlen(m);
    	unsigned char h[32]; char o[65]; bzero(o, 65);
    	sdbm_hash(h, (unsigned char *)m, l); stoh(o, h);
    	printf("[%s] [%u] [%s] [%d]\n", o, sdbm(m, l), m, l);
    	return 0;
    }
    
    

    Refactoring in Ansible: extract Variable

    Posted by Adam Young on April 02, 2020 07:37 PM

    “Let the complexity emerge.” Probably the best advice I ever got in coding. Do something in as straightforward a manner as possible. When you find yourself repeating code, extract it. Here’s an example from an Ansible playbook I’m working on.

    I’ve gotten a couple of tasks written so far.

    ---
    - hosts: localhost
      tasks:
    
        - name: Creates directory
          file:
            path:  /home/ayoung/ocp-ansible/stage
            state: directory
    
        - name: regen install-config 
          copy:
            src:  /home/ayoung/ocp-ansible/files/install-config.yaml.orig
            dest: /home/ayoung/ocp-ansible/stage/install-config.yaml
    
        - name: validate install-config
          command: /home/ayoung/apps/ocp4.4/openshift-install create install-config  --dir /home/ayoung/ocp-ansible/stage
          environment:
            OS_CLOUD: fsi-moc
    

    I’ve copied the directory name a few times. Let’s start by introducing a variable section into the playbook. I’ll use the first task. Ah…but first…git! This project is following the pattern I wrote about several years ago. Here is what I have in my first commit:

    Generate the install-config
    
    # Please enter the commit message for your changes. Lines starting
    # with '#' will be ignored, and an empty message aborts the commit.
    #
    # Date:      Thu Apr 2 14:13:09 2020 -0400
    #
    # On branch master
    #
    # Initial commit
    #
    # Changes to be committed:
    #       new file:   .gitignore
    #       new file:   bin/regen.sh
    #       new file:   files/install-config.yaml.orig
    #       new file:   inventory/rhfsi.yaml
    #       new file:   playbooks/regen-install-config.yaml
    

    The playbook I am refactoring is playbooks/regen-install-config.yaml but it will soon be sharing values with others. However, the first step is just to pull the variable out into its own section.

    $ git diff
    diff --git a/playbooks/regen-install-config.yaml b/playbooks/regen-install-config.yaml
    index 77e285c..b0c50c9 100644
    --- a/playbooks/regen-install-config.yaml
    +++ b/playbooks/regen-install-config.yaml
    @@ -1,10 +1,12 @@
     ---
     - hosts: localhost
    +  vars:
    +    stage_dir: /home/ayoung/ocp-ansible/stage
       tasks:
     
         - name: Creates directory
           file:
    -        path:  /home/ayoung/ocp-ansible/stage
    +        path:  "{{ stage_dir }}"
             state: directory
     
         - name: regen install-config
    

    To test this out, I rerun the playbook…see if you can find the mistake:

    $ bin/regen.sh 
    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] *****************************************************************************************************************************************************************************
    
    TASK [Gathering Facts] ***********************************************************************************************************************************************************************
    ok: [localhost]
    
    TASK [Creates directory] *********************************************************************************************************************************************************************
    ok: [localhost]
    
    TASK [regen install-config] ******************************************************************************************************************************************************************
    changed: [localhost]
    
    TASK [validate install-config] ***************************************************************************************************************************************************************
    changed: [localhost]
    
    PLAY RECAP ***********************************************************************************************************************************************************************************
    localhost                  : ok=4    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
    
    [ayoung@ayoungP40 ocp-ansible]$ ls stage/
    install-config.yaml
    

    Did you spot the mistake? I didn’t know what I had prior to running the playbook. I might have fooled myself. So, I remove the stage directory by hand and rerun.


    $ rm -rf stage/
    $ bin/regen.sh 
    ...
    $ ls stage/
    install-config.yaml            .openshift_install.log         .openshift_install_state.json  
    [ayoung@ayoungP40 ocp-ansible]$ diff files/install-config.yaml.orig stage/install-config.yaml 
    23c23
    <   - cidr: 192.0.2.0/24 
    ---
    >   - cidr: 192.0.2.0/24
    41d40
    < 
    

    A minor tweak in formatting, as I expected. But removing directories by hand is dangerous. Let's create a new playbook to clean up. I'll call it playbooks/cleanup.yaml.

    - hosts: localhost
      vars:
        stage_dir: /home/ayoung/ocp-ansible/stage
      tasks:
        - name: Creates directory
          file:
            path:  "{{ stage_dir }}"
            state: absent
    

    And hey… there is that duplicated code again. Commit this, but continue to refactor.

    This is a case of "manual testing" that I think is OK. Essentially, my test script is (a quick automation sketch in Python follows the list):

    • check that stage/ does not exist
    • run regen.sh
    • check that stage/install-config.yaml exists
    • run cleanup.sh
    • check that stage/ does not exist
    • run regen.sh
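
    Here is a quick sketch of automating that loop (my own addition, not part of the original playbooks), assuming it is run from the repository root where bin/regen.sh and bin/cleanup.sh live:

    import os
    import subprocess

    def run(script):
        # Run a helper script and fail loudly if it returns non-zero.
        subprocess.run([script], check=True)

    assert not os.path.exists("stage"), "stage/ should not exist before regen"
    run("bin/regen.sh")
    assert os.path.isfile("stage/install-config.yaml"), "regen should create install-config.yaml"
    run("bin/cleanup.sh")
    assert not os.path.exists("stage"), "cleanup should remove stage/"
    run("bin/regen.sh")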

    Once this works, commit to git.

    Now let's further extract that variable into its own file. First, create a new directory, vars, and a YAML file to store the variable:

    $ cat vars/main.yaml 
    ---
    stage_dir: /home/ayoung/ocp-ansible/stage
    

    And remove the variable from the regen and cleanup playbooks. Run them to ensure that they fail.

    [ayoung@ayoungP40 ocp-ansible]$ bin/regen.sh 
    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] *************************************************************************************************
    
    TASK [Gathering Facts] *******************************************************************************************
    ok: [localhost]
    
    TASK [Creates directory] *****************************************************************************************
    fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'stage_dir' is undefined\n\nThe error appears to be in '/home/ayoung/ocp-ansible/playbooks/regen-install-config.yaml': line 6, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n - name: Creates directory\n ^ here\n"}
    
    PLAY RECAP *******************************************************************************************************
    localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0
    
    [ayoung@ayoungP40 ocp-ansible]$ bin/cleanup.sh 
    [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
    
    PLAY [localhost] *************************************************************************************************
    
    TASK [Gathering Facts] *******************************************************************************************
    ok: [localhost]
    
    TASK [Creates directory] *****************************************************************************************
    fatal: [localhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'stage_dir' is undefined\n\nThe error appears to be in '/home/ayoung/ocp-ansible/playbooks/cleanup.yaml': line 5, column 7, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n tasks:\n - name: Creates directory\n ^ here\n"}
    
    PLAY RECAP *******************************************************************************************************
    localhost

    Now run them using the external variable file. Here is the git diff:

    $ git diff HEAD
    diff --git a/bin/cleanup.sh b/bin/cleanup.sh
    index 61a2d28..ccc532d 100755
    --- a/bin/cleanup.sh
    +++ b/bin/cleanup.sh
    @@ -1,2 +1,2 @@
     #!/bin/sh 
    -ansible-playbook playbooks/cleanup.yaml
    +ansible-playbook   -e @vars/main.yaml  playbooks/cleanup.yaml
    diff --git a/bin/regen.sh b/bin/regen.sh
    index 2b7af04..75b3ef6 100755
    --- a/bin/regen.sh
    +++ b/bin/regen.sh
    @@ -1,2 +1,2 @@
     #!/bin/sh 
    -ansible-playbook playbooks/regen-install-config.yaml
    +ansible-playbook -e @vars/main.yaml playbooks/regen-install-config.yaml
    diff --git a/playbooks/cleanup.yaml b/playbooks/cleanup.yaml
    index 213ab37..c48fd7a 100644
    --- a/playbooks/cleanup.yaml
    +++ b/playbooks/cleanup.yaml
    @@ -1,7 +1,6 @@
     ---
     - hosts: localhost
       vars:
    -    stage_dir: /home/ayoung/ocp-ansible/stage
       tasks:
         - name: Creates directory
           file:
    diff --git a/playbooks/regen-install-config.yaml b/playbooks/regen-install-config.yaml
    index b0c50c9..e6103ce 100644
    --- a/playbooks/regen-install-config.yaml
    +++ b/playbooks/regen-install-config.yaml
    @@ -1,7 +1,6 @@
     ---
     - hosts: localhost
       vars:
    -    stage_dir: /home/ayoung/ocp-ansible/stage
       tasks:
     
         - name: Creates directory
    diff --git a/vars/main.yaml b/vars/main.yaml
    new file mode 100644
    index 0000000..5ac011f
    --- /dev/null
    +++ b/vars/main.yaml
    @@ -0,0 +1,2 @@
    +---
    +stage_dir: /home/ayoung/ocp-ansible/stage
    
    

    Commit this, and continue to extract the variable from other portions of the file:

    $ git diff
    diff --git a/playbooks/regen-install-config.yaml b/playbooks/regen-install-config.yaml
    index e6103ce..30e76f1 100644
    --- a/playbooks/regen-install-config.yaml
    +++ b/playbooks/regen-install-config.yaml
    @@ -11,10 +11,10 @@
         - name: regen install-config 
           copy:
             src:  /home/ayoung/ocp-ansible/files/install-config.yaml.orig
    -        dest: /home/ayoung/ocp-ansible/stage/install-config.yaml
    +        dest: "{{ stage_dir }}/install-config.yaml"
     
         - name: validate install-config
    -      command: /home/ayoung/apps/ocp4.4/openshift-install create install-config  --dir /home/ayoung/ocp-ansible/stage
    +      command: /home/ayoung/apps/ocp4.4/openshift-install create install-config  --dir "{{ stage_dir }}"
           environment:
             OS_CLOUD: fsi-moc
    

    Continue the process with other repeated strings. Let's extract a base_dir variable that will be used both for finding the source directory and the binaries.

    [ayoung@ayoungP40 ocp-ansible]$ git diff
    diff --git a/playbooks/regen-install-config.yaml b/playbooks/regen-install-config.yaml
    index 30e76f1..5a95ef5 100644
    --- a/playbooks/regen-install-config.yaml
    +++ b/playbooks/regen-install-config.yaml
    @@ -1,6 +1,7 @@
     ---
     - hosts: localhost
       vars:
    +    base_dir: /home/ayoung/ocp-ansible
       tasks:
     
         - name: Creates directory
    @@ -10,11 +11,11 @@
     
         - name: regen install-config 
           copy:
    -        src:  /home/ayoung/ocp-ansible/files/install-config.yaml.orig
    +        src:  "{{ base_dir }}/files/install-config.yaml.orig"
             dest: "{{ stage_dir }}/install-config.yaml"
     
         - name: validate install-config
    -      command: /home/ayoung/apps/ocp4.4/openshift-install create install-config  --dir "{{ stage_dir }}"
    +      command: "{{ base_dir }}/bin/openshift-install create install-config --dir {{ stage_dir }}"
           environment:
             OS_CLOUD: fsi-moc
    

    And keep going. Move the variable to the common variables file so we can use base_dir to build the stage_dir variable.

    $ git diff
    diff --git a/playbooks/regen-install-config.yaml b/playbooks/regen-install-config.yaml
    index 5a95ef5..78dbfdc 100644
    --- a/playbooks/regen-install-config.yaml
    +++ b/playbooks/regen-install-config.yaml
    @@ -1,7 +1,6 @@
     ---
     - hosts: localhost
       vars:
    -    base_dir: /home/ayoung/ocp-ansible
       tasks:
     
         - name: Creates directory
    diff --git a/vars/main.yaml b/vars/main.yaml
    index 5ac011f..01de10c 100644
    --- a/vars/main.yaml
    +++ b/vars/main.yaml
    @@ -1,2 +1,3 @@
     ---
    -stage_dir: /home/ayoung/ocp-ansible/stage
    +base_dir: /home/ayoung/ocp-ansible
    +stage_dir: "{{ base_dir }}/stage"
    

    The basic rules: Test often. Small steps. Commit successes to Git. Extract repetition. Have fun.

    PHP version 7.3.17RC1 and 7.4.5RC1

    Posted by Remi Collet on April 02, 2020 01:03 PM

    Release Candidate versions are available in the testing repositories for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation (the perfect solution for such tests), and also as base packages.

    RPMs of PHP version 7.4.5RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 32 or in the remi-php74-test repository for Fedora 30-31 and Enterprise Linux 7-8.

    RPMs of PHP version 7.3.17RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 30-31 or in the remi-php73-test repository for Enterprise Linux.

    PHP version 7.2 is now in security-only mode, so no more RCs will be released.

    Installation: read the Repository configuration and choose your version.

    Parallel installation of version 7.4 as Software Collection:

    yum --enablerepo=remi-test install php74

    Parallel installation of version 7.3 as Software Collection:

    yum --enablerepo=remi-test install php73

    Update of system version 7.4:

    yum --enablerepo=remi-php74,remi-php74-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module reset php
    dnf module enable php:remi-7.4
    dnf --enablerepo=remi-modular-test update php\*

    Update of system version 7.3:

    yum --enablerepo=remi-php73,remi-php73-test update php\*

    or, the modular way (Fedora and RHEL 8):

    dnf module reset php
    dnf module enable php:remi-7.3
    dnf --enablerepo=remi-modular-test update php\*

    Notice: version 7.4.5RC1 is in Fedora rawhide for QA.

    EL-8 packages are built using RHEL-8.1

    EL-7 packages are built using RHEL-7.7

    The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

    Software Collections (php73, php74)

    Base packages (php)

    Fedora Council January 2020 in-person meeting

    Posted by Fedora Community Blog on April 02, 2020 06:00 AM

    The Fedora Council stuck around Brno the day after DevConf.CZ to have a day-long working session. This is part of our newly-adopted regular cadence of in-person meetings. We mostly used this day to follow up on some items from the November meeting, including the vision statement.

    Updated vision statement

    When we published our draft vision statement in January, we got a lot of great feedback from the Fedora community. The Council discussed this and ended up with a streamlined version:

    The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.

    We published this last month for another round of discussion and the Council recently approved this as the final version. I’m proud of the work we did to come up with something simple that reflects the values of our community. Thank you to everyone who provided feedback, you really helped us make this better.

    Objective closeouts

    At the meeting, Dominik Perpeet told us that he considers the CI Objective complete. The team has done amazing work in the last few releases. Rawhide can now gate single- and multi-package updates, which gives us a powerful tool to make Rawhide a more usable platform for our users. Now it’s up to us to use this tooling. Dominik will be writing a Community Blog post to wrap up the Objective.

    We also agreed that the Modularity Objective, as currently scoped, is complete. I wrote about this in more detail earlier this month, but to reiterate: the Modularity development within Red Hat has moved to a new team. This team will pick up the work and continue to improve Modularity within Fedora.

    Video meetings

    We agreed in November to bring back our regular video meetings, so in January we put a plan into action. The video meetings will happen on the second Wednesday of each month at the normal Council meeting time. We will have a featured guest each month, and if time permits will also conduct routine business.

    We’ve done two of these so far. In February, our new FCAIC introduced herself. If you haven’t met Marie Nordin yet, check out her video. In March, Aoife Moloney from the Community Platform Engineering (CPE) team shared the team’s workflow and took questions from the community.

    The April and May meetings already have guests scheduled. If you have suggestions for future meetings, please add them to the wish list on the wiki and Ben Cotton will work to get them scheduled. You may notice that we switched from Jitsi to Bluejeans for these meetings. While we want to use open source tools whenever we can, the experience on Jitsi made it very difficult to have conversations. We chose Bluejeans because it does not require users to create an account, which we know is important to our community.

    Community Platform Engineering

    Speaking of Aoife Moloney, she also stopped by our meeting in January since she was in town for DevConf.CZ. Aoife gave us a preview of the presentation she gave in the March video meeting and took questions from the Council. The primary goals of the changes in CPE’s workflow are to increase the visibility of work and to make sure the team isn’t overloaded.

    The Fedora Council will be the contact point for large infrastructure work coming from the Fedora community. We have a great relationship with Aoife and the team, so while we’re going to have some bumps as everyone figures out the new process, I’m confident that the end result will be a better, more reliable infrastructure for building Fedora.

    Council initiatives

    Ben Cotton presented a plan for focusing our event strategy on recruiting new contributors. He is working on developing this into something our Ambassadors and Advocates can put into action. Look for a Community Blog post soon.

    Marie Nordin is putting her design skills to use to improve the org chart. This is not just about the visual look, which needs improvement, but also includes rethinking some of the teams under Mindshare. Some of the teams are overfragmented compared to the number of contributors, which adds overhead and can prevent collaboration. Marie is working with the Mindshare Committee to explore combining some teams to revitalize them. Again, look for a Community Blog post soon.

    Lastly, some people have been asking about the logo redesign from last year. You may recall that the Council approved a new design, but we were too late to get it in Red Hat legal’s budget. With a new fiscal year, we’re ready to move forward on that. Mo Duffy is making a few minor tweaks to the approved version, and once that’s done, we will give it to the legal team for registration globally. Expect to see the new logo sometime this year.

    The post Fedora Council January 2020 in-person meeting appeared first on Fedora Community Blog.

    Changing or rename Oracle user schema

    Posted by Robbi Nespu on April 02, 2020 01:33 AM

    Renaming or changing a schema is not an easy task in Oracle, but if you really want to rename a schema, go the traditional way: export the existing schema and import it into a new one.

    The steps shown in this tutorial use Oracle 11g; they may not work for newer versions.

    > select * from v$version;
    |================================================================================|
    |BANNER                                                                          |
    |================================================================================|
    |Oracle Database 11g Release 11.2.0.4.0 - 64bit Production                       |
    |PL/SQL Release 11.2.0.4.0 - Production                                          |
    |CORE 11.2.0.4.0 Production                                                      |
    |TNS for Linux: Version 11.2.0.4.0 - Production                                  |
    |NLSRTL Version 11.2.0.4.0 - Production                                          |
    
    

    Data Pump Mapping to the imp Utility

    Please take note: Data Pump import often doesn't have a one-to-one mapping to the legacy utility parameters. Data Pump import automatically provides many features of the old imp utility.

    For example, COMMIT=Y isn't required because Data Pump import automatically commits after each table is imported. The table below describes how legacy import parameters map to Data Pump import.

    Original imp Parameter    Similar Data Pump impdp Parameter
    FROMUSER                  REMAP_SCHEMA
    TOUSER                    REMAP_SCHEMA

    How I am going to rename my Oracle schema

    Let's say I accidentally created the user schema HOST_USER when it was supposed to be HOST1.

    -- creating table space first (it must exist before it can be used as a default)
    create TABLESPACE host_sk datafile 'host_sk.dbf' size 1G autoextend on maxsize 8G;
    -- Create user schema (mistaken username here..shit)
    create user HOST_USER IDENTIFIED BY password4sk default TABLESPACE host_sk;
    -- create database role
    create role HOST_SK_ROLE;
    -- granting some privilage to role we created
    grant  
       CREATE SESSION, ALTER SESSION, CREATE MATERIALIZED VIEW, CREATE PROCEDURE, 
       CREATE SEQUENCE, CREATE SYNONYM, CREATE TABLE, CREATE TRIGGER, CREATE TYPE, 
       CREATE VIEW, DEBUG CONNECT SESSION
    to HOST_SK_ROLE;
    -- grant that role to user (that I mistaken created previously) 
    -- and give tablespace quota to them
    GRANT HOST_SK_ROLE TO HOST_USER;
    ALTER USER HOST_USER QUOTA unlimited ON host_sk;
    -- create table
    CREATE TABLE "HOST_USER"."STOCK_BALANCE_WS" 
       ("TRANSFERID" NUMBER(9,0), "ARTICLE_ID" VARCHAR2(14), 
      "QUANTITY" NUMBER(6,0)) SEGMENT CREATION IMMEDIATE 
      PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
     NOCOMPRESS LOGGING
      STORAGE(INITIAL 1048576 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "HOST_SK" ;
    -- insert some data inside
    INSERT INTO HOST_USER.ARTICLE_SW 
           (TRANSFERID, ARTICLE_ID, ARTICLE_NAME, DESCRIPTION, WEIGHT) 
    VALUES (1, '1003', 'CONDOM DUREX', 'Super studs', 10);
    

    As you can see, I have already done a lot with my database, and only then do I realize the schema should be HOST1 instead of HOST_USER! I want to rename the schema. Unfortunately, Oracle doesn't let you change a schema name easily.

    There is a trick: export and then import with a remap to the HOST1 schema. (If you follow my steps, please don't just copy and paste. Create the target user if you don't have one; as long as that user has the same privileges and tablespace, it will be fine.)

    Export with Oracle Data Pump

    $ expdp HOST_USER/password4sk directory=tmp schemas=HOST_USER dumpfile=old_schema_to_remap.dmp  LOGFILE=exp_schema_to_remap.log
    Export: Release 11.2.0.4.0 - Production on Thu Apr 2 04:17:08 2020
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
    Starting "HOST_USER"."SYS_EXPORT_SCHEMA_01":  HOST_USER/******** directory=tmp schemas=HOST_USER dumpfile=old_schema_to_remap.dmp LOGFILE=exp_schema_to_remap.log
    Estimate in progress using BLOCKS method...
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    Total estimation using BLOCKS method: 0 KB
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/COMMENT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
    Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
    Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
    . . exported "HOST_USER"."ARTICLE_SW"                     0 KB       0 rows
    Master table "HOST_USER"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
    ******************************************************************************
    Dump file set for HOST_USER.SYS_EXPORT_SCHEMA_01 is:
      /tmp/old_schema_to_remap.dmp
    Job "HOST_USER"."SYS_EXPORT_SCHEMA_01" successfully completed at Thu Apr 2 04:17:21 2020 elapsed 0 00:00:13
    
    

    Import to the target user via the remap_schema parameter

    $ impdp userid=host1/password4sk directory=tmp dumpfile=old_schema_to_remap remap_schema=HOST_USER:host1 LOGFILE=imp_schema_to_remap.log
    Import: Release 11.2.0.4.0 - Production on Thu Apr 2 04:19:07 2020
    
    Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
    
    Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
    Master table "HOST1"."SYS_IMPORT_FULL_01" successfully loaded/unloaded
    Starting "HOST1"."SYS_IMPORT_FULL_01":  userid=host1/******** directory=tmp dumpfile=old_schema_to_remap remap_schema=HOST_USER:host1 LOGFILE=imp_schema_to_remap.log
    Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
    Processing object type SCHEMA_EXPORT/TABLE/TABLE
    Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
    . . imported "HOST1"."ARTICLE_SW"                            0 KB       0 rows
    Job "HOST1"."SYS_IMPORT_FULL_01" successfully completed at Thu Apr 2 04:19:09 2020 elapsed 0 00:00:01
    
    

    After importing via Data Pump, the next step is just to drop the old schema.

    These steps are much easier than redoing everything, IMHO. Anyway, please be careful. I am a novice Oracle DBA, and my steps may not be suitable for you. :smile:

    editorconfig-geany available for Fedora via Copr

    Posted by Dominic "dmaphy" Hopf on April 01, 2020 09:59 PM
    Just a small note for interested Geany users on Fedora: The Geany plugin for EditorConfig [1] is now available as an RPM package via Copr. You can pull it from [2] or install it directly with the following commands:

    sudo dnf copr enable dmaphy/editorconfig-geany
    sudo dnf -y install editorconfig-geany

    I'm still working on getting it into the official Fedora repositories and will let you know once that's done.

    [1] https://github.com/editorconfig/editorconfig-geany
    [2] https://copr.fedorainfracloud.org/coprs/dmaphy/editorconfig-geany/

    Experiment: Running a UDP socket [client/server] ARP broadcaster for each WiFi AP (better client association tracking)

    Posted by Jon Chiappetta on April 01, 2020 08:43 PM

    If you are running multiple WiFi APs that are part of the same flat network and bridged together, it can be tricky to keep a current list of which client is connected to which AP (sometimes clients just stop responding on one AP and then wake up and appear on another). In addition, the one common question a router needs answered is: is this client connected to me, or is it connected through one of my peer routers, and which one is the best path? I wrote a small service which allows an AP to periodically broadcast out, relay/forward, and receive its current WiFi client associations; the best association is then chosen from amongst all the APs so that a common set of ARP entries can be kept on each access point. The tables are way cleaner and way more consistent with each other now that I have this thing running. I wrote it in both C and Python, which are supported on OpenWrt routers, and the instances communicate with each other using a common key/password. You simply list which radio interfaces the AP has and the IP address you gave the AP on the flat network, and it should be ready to send, forward, and receive ARP-related messages. The program writes out a standard file with the following information formatted in it, and it can also be used with the relayd service modification from the post linked below:

    /tmp/arp.txt: [dev] [mac/peer] [ip]
    wlan1 00:11:22:dd:ee:ff 192.168.100.123
    
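    As an illustration (not part of the project itself), here is a small Python sketch that another script could use to consume this file, assuming the [dev] [mac/peer] [ip] format shown above:

    # Parse /tmp/arp.txt into a list of {dev, mac, ip} entries.
    entries = []
    with open("/tmp/arp.txt") as f:
        for line in f:
            parts = line.split()
            if len(parts) == 3:
                dev, mac, ip = parts
                entries.append({"dev": dev, "mac": mac, "ip": ip})

    for e in entries:
        print("{ip} is reachable via {mac} on {dev}".format(**e))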

    The code can be found here if you are interested!:

    C: https://github.com/stoops/broadarp/blob/init/barp.c
    Py2: https://github.com/stoops/broadarp/blob/python/barp.py
    relayd-mod: https://github.com/stoops/relayd/compare/master...stoops:static [post]

    🙂
    Jon

    PAM testing using pam_wrapper and dbusmock

    Posted by Bastien Nocera on April 01, 2020 04:53 PM
    On the road to libfprint and fprintd 2.0, we've been fixing some long-standing bugs, including one that required porting our PAM module from dbus-glib to sd-bus, systemd's D-Bus library implementation.

    As you can imagine, I have confidence in my ability to write bug-free code at the first attempt, but the foresight to know that this code will be buggy if it's not tested (and to know there's probably a bug in the tests if they run successfully the first time around). So we will have to test that PAM module, thoroughly, before and after the port.

    Replacing fprintd

    First, to make it easier to run and instrument, we needed to replace fprintd itself. For this, we used dbusmock, which is both a convenience Python library and a way to write instrumentable D-Bus services, and wrote a template. There are a number of existing templates for a lot of session and system services, in case you want to test the integration of your code with NetworkManager, low-memory-monitor, or any number of other services.

    We then used this to write tests for the command-line utilities, so we can both test our new template and test the command-line utilities themselves.
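
    In case you have not used dbusmock before, a test against one of its stock templates looks roughly like the following (a minimal sketch using the upower template as a stand-in; the fprintd template we wrote follows the same pattern, and the names here are only illustrative):

    import subprocess
    import unittest

    import dbus
    import dbusmock

    class TestOnBattery(dbusmock.DBusTestCase):
        @classmethod
        def setUpClass(cls):
            # Start a private system bus so the real one is never touched.
            cls.start_system_bus()
            cls.dbus_con = cls.get_dbus(system_bus=True)

        def setUp(self):
            # Spawn a mock UPower service from the stock template.
            (self.p_mock, self.obj_upower) = self.spawn_server_template(
                'upower', {'OnBattery': True}, stdout=subprocess.PIPE)

        def tearDown(self):
            self.p_mock.stdout.close()
            self.p_mock.terminate()
            self.p_mock.wait()

        def test_on_battery(self):
            # Query the mocked property over D-Bus, just like real client code would.
            props = dbus.Interface(self.obj_upower, dbus.PROPERTIES_IFACE)
            self.assertTrue(props.Get('org.freedesktop.UPower', 'OnBattery'))

    if __name__ == '__main__':
        unittest.main()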

    Replacing gdm

    Now that we've got a way to replace fprintd and a physical fingerprint reader, we should write some tests for the (old) PAM module to replace sudo, gdm, or the login authentication services.

    Co-workers Andreas Schneider and Jakub Hrozek worked on pam_wrapper, an LD_PRELOAD library to mock the PAM library, and Python helpers to write simple PAM services. This LWN article explains how to test PAM applications and PAM modules.

    After fixing a few bugs in pam_wrapper, and combining with the fprintd dbusmock work above, we could wrap and test the fprintd PAM module like it never was before.
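
    To give a flavour of what such a test looks like, here is a rough sketch assuming the pypamtest module that ships with pam_wrapper; the service name, user, and password are placeholders, and the exact API may differ between versions:

    # Run under the wrapper, e.g.:
    #   LD_PRELOAD=libpam_wrapper.so PAM_WRAPPER=1 \
    #   PAM_WRAPPER_SERVICE_DIR=./services python3 test_pam_auth.py
    # where ./services/mytest is a PAM service file using the module under test.
    import pypamtest

    # Expect one successful authentication conversation.
    tc = pypamtest.TestCase(pypamtest.PAMTEST_AUTHENTICATE)

    # "testuser" and "secret" are placeholders for the wrapped test environment.
    res = pypamtest.run_pamtest("testuser", "mytest", [tc], ["secret"])
    print(res)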

    Porting to sd-bus

    Finally, porting the PAM module to sd-bus was pretty trivial, a loop of 1) writing tests that work against the old PAM module, 2) porting a section of the code (like the fingerprint reader enumeration, or the timeout support), and 3) testing against the new sd-bus based code. The result was no regressions that we could test for.

    Conclusion

    Both dbusmock and pam_wrapper are useful tools in your arsenal for writing tests, and given the (fairly) easy-to-use CI in GNOME's and freedesktop.org's GitLab instances, it would be a shame not to use them.

    You might also be interested in umockdev, to mock a number of device types, and mocklibc (which, combined with dbusmock, powers polkit's unattended CI).

    Cockpit 216

    Posted by Cockpit Project on April 01, 2020 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 216.

    SELinux: Automatic application of solutions that set booleans

    The SELinux page can now automatically apply solutions that set SELinux booleans.

    Apply this solution

    In addition, command lines in solution details are now formatted properly.

    Machines: Drop virsh backend support

    This release drops virsh backend support from the Machines page and switches to using libvirt-dbus exclusively. As a result, as of this release, cockpit-machines is no longer supported on Ubuntu 18.04.

    Overview: New “last login” banner

    Similar to text logins at the console, or via SSH, Cockpit now displays a banner showing the last successful login. If there have been failed login attempts, the number of attempts and the time of the most recent failed attempt will be shown as well.

    Last login banner

    A side-effect of this work is that Cockpit now correctly updates /var/log/lastlog (the record of last login times) and btmp (the record of failed login attempts).

    Try it out

    Cockpit 216 is available now:

    Away from home

    Posted by Jonathan Dieter on March 31, 2020 09:09 PM

    It’s been months since my last post, so I was planning to sit down and write a post about how we’re using podman and ostree in my work. But, as I sat down, I realized that I just can’t write about that right now.

    This month has been difficult on many levels. Here in Ireland, as in much of the world, we’re unable to leave our homes except to buy necessities and exercise (within a 2km radius of our home). We’ve lived in Ireland for almost two years now, and it has become home for us. I’ve worked remotely before and normally enjoy the quiet of working from home, but, given our current circumstances, I just want to leave.

    You see, nine days ago, Kristina Dieter, my sister-in-law, passed away from cancer. She was 37. She married my brother, Jason, when they were both nineteen and next year would have been their 20th anniversary. She was an amazing sister-in-law, did an incredible job of raising their four kids (though I suppose my brother gets credit for that too), and was a light of encouragement to those around her. There’s so much more that I could say, but I think it’s best to just link to what my brother wrote on Instagram.

    At a time like this, I just want to be home, where my family is. I can’t. Traveling back to the States would be exposing my family there and my family here to danger. So, at this point, all I can do is pray and grieve here, at home and yet so far from it. And cling to the promise that “God himself will be with them. He will wipe every tear from their eyes, and there will be no more death or sorrow or crying or pain. All these things are gone forever.”

    PHP 7.4 and NextCloud 18 on OpenMediaVault 4

    Posted by Guillaume Kulakowski on March 31, 2020 07:10 PM

    I have had a NAS running OpenMediaVault for several years. Although I apply updates as they come out, and even though I use the latest stable version of OMV, it is still based on Debian 9. And Debian 9 means PHP 7.0. However, this version is not […]

    The post PHP 7.4 and NextCloud 18 on OpenMediaVault 4 appeared first on Guillaume Kulakowski's blog.

    Sandboxing WebKitGTK Apps

    Posted by Michael Catanzaro on March 31, 2020 03:56 PM

    When you connect to a Wi-Fi network, that network might block your access to the wider internet until you’ve signed into the network’s captive portal page. An untrusted network can disrupt your connection at any time by blocking secure requests and replacing the content of insecure requests with its login page. (Of course this can be done on wired networks as well, but in practice it mainly happens on Wi-Fi.) To detect a captive portal, NetworkManager sends a request to a special test address (e.g. http://fedoraproject.org/static/hotspot.txt) and checks to see whether the content has been replaced. If so, GNOME Shell will open a little WebKitGTK browser window to display http://nmcheck.gnome.org, which, due to the captive portal, will be hijacked by your hotel or airport or whatever to display the portal login page. Rephrased in security lingo: an untrusted network may cause GNOME Shell to load arbitrary web content whenever it wants. If that doesn’t immediately sound dangerous to you, let’s ask me from four years ago why that might be bad:

    Web engines are full of security vulnerabilities, like buffer overflows and use-after-frees. The details don’t matter; what’s important is that skilled attackers can turn these vulnerabilities into exploits, using carefully-crafted HTML to gain total control of your user account on your computer (or your phone). They can then install malware, read all the files in your home directory, use your computer in a botnet to attack websites, and do basically whatever they want with it.

    If the web engine is sandboxed, then a second type of attack, called a sandbox escape, is needed. This makes it dramatically more difficult to exploit vulnerabilities.

    The captive portal helper will pop up and load arbitrary web content without user interaction, so there’s nothing you as a user could possibly do about it. This makes it a tempting target for attackers, so we want to ensure that users are safe in the absence of a sandbox escape. Accordingly, beginning with GNOME 3.36, the captive portal helper is now sandboxed.

    How did we do it? With basically one line of code (plus a check to ensure the WebKitGTK version is new enough). To sandbox any WebKitGTK app, just call webkit_web_context_set_sandbox_enabled(). Ta-da, your application is now magically secure!
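
    For what it’s worth, the same call is reachable from Python through GObject introspection. A minimal sketch, assuming WebKitGTK 2.26 or newer with the WebKit2 4.0 typelib installed (this is illustrative, not code from the captive portal helper):

    import gi
    gi.require_version('WebKit2', '4.0')
    from gi.repository import WebKit2

    # Enable the sandbox on the default context before creating any web views.
    context = WebKit2.WebContext.get_default()
    context.set_sandbox_enabled(True)

    # If a web extension needs extra directories, allow them explicitly,
    # e.g. read-only access to a hypothetical application data directory:
    # context.add_path_to_sandbox('/usr/share/myapp', True)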

    No, really, that’s all you need to do. So if it’s that simple, why isn’t the sandbox enabled by default? It can break applications that use WebKitWebExtension to run custom code in the sandboxed web process, so you’ll need to test to ensure that your application still works properly after enabling the sandbox. (The WebKitGTK sandbox will become mandatory in the future when porting applications to GTK 4. That’s thinking far ahead, though, because GTK 4 isn’t supported yet at all.) You may need to use webkit_web_context_add_path_to_sandbox() to give your web extension access to directories that would otherwise be blocked by the sandbox.

    The sandbox is critically important for web browsers and email clients, which are constantly displaying untrusted web content. But really, every app should enable it. Fix your apps! Then thank Patrick Griffis from Igalia for developing WebKitGTK’s sandbox, and the bubblewrap, Flatpak, and xdg-desktop-portal developers for providing the groundwork that makes it all possible.

    False Dichotomy

    Posted by Russel Doty on March 31, 2020 02:28 PM

    A previous article suggested that the two ways to deal with potentially disruptive technologies are to either invest in all potentially disruptive technologies or to ignore all potentially disruptive technologies.

    One of the most powerful ways to go wrong is the False Dichotomy, which is also known as false choice, black and white thinking, or either/or thinking. Typically this involves a situation which is presented as a binary choice of either A or B, with the implicit or explicit assumption that these are the only possible choices – there are no alternatives C, D, or E.

    In real world situations this is almost never the case. There are always options, or at least variations on the two choices presented. In many cases the alternatives presented are the extreme positions, ignoring the many alternatives between them.

    Further, in the case of disruptive innovation the best alternative may be neither A nor B but “kumquat” – something completely unexpected and entirely outside the range of alternatives being considered! A popular saying is “On a scale of 1 to 10, what is your favorite color in the alphabet?”. While these examples appear nonsensical, they illustrate the need to consider alternatives that may not be obvious – an approach often called thinking outside the box.

    Two points are worth making: first, there is almost never The Right Answer – that is, a single correct answer that solves all problems and where any other answer is Wrong. Instead, there are a range of alternatives that can be made to work with various levels of effort and trade-offs. Part of product planning is to explore these alternatives and to determine the benefits, cost, and risk associated with each.

    Second, interesting problems are invariably multi-variate. Instead of a single parameter that can be optimized, several interacting parameters must be considered. Any real world situation is going to involve a series of trade-offs, typically between capabilities, cost, investment, resources, integration, and time. Other factors include side effects and consequences: for example, one material being considered for a product may have all desired physical properties but be toxic.

    Also important is understanding whether a constraint is an absolute constraint like the speed of light¹ or is flexible. In the example above, a toxic material might be used if it is carefully packaged.

    Looking for The Right Answer can lead to ignoring acceptable solutions and approaches that can be made to work. A better approach is to consider multiple potential solutions, determine the strengths and weaknesses of each – including what can be done to address these weaknesses – and choose the best overall solution. Note that the best overall solution will often include elements from multiple approaches – even elements from both halves of the false dichotomy!

    For product development the challenge is to understand customer needs well enough to provide a product that meets their needs at a price they are willing to pay. Note that the customer has the final word on what their needs are – if a feature is not wanted or used by a customer, that feature does not meet their needs. The ideal is a product that meets current customer needs and can be extended to meet future needs.

    While product definition is often done informally, there are structured approaches that can be used. A powerful technique is Design Thinking.

    Design Thinking uses a five step process of: defining (or redefining) the problem, needfinding and benchmarking, ideating, building, and testing. Design Thinking is a team based approach, best done with a multidisciplinary team bringing different knowledge, expertise, and viewpoints to the project.

    Much of the power of Design Thinking comes from applying a structured methodology to complex problems – much more than a blog post is needed to really understand it, much less apply it. Fortunately there are many resources available, including books and courses. Both edX and Coursera offer courses on the subject, with edX even offering a five course “micromasters” program.


    ¹ This is something of a trick example. The speed of light in a vacuum can’t be exceeded. However light travels through other materials at different (slower) speeds. For example, light in a fibre optic cable is roughly 30% slower than in a vacuum. There is a lot of interesting work going on around quantum entanglement that may allow information exchange to exceed the speed of light – this would definitely be a disruptive technology! Thus this is actually an example of the importance of understanding your constraints and exploring novel options to possibly work around fixed constraints!

    Introduction to the Python HTTP header

    Posted by Peter Czanik on March 31, 2020 11:24 AM

    You can create your own custom headers for the HTTP destination using the Python HTTP header plugin of syslog-ng and Python scripts. The included example configuration just adds a simple counter to the headers, but with a bit of coding you can resolve authentication problems or fine-tune how data is handled by cloud-based logging and SIEM platforms, like Sumologic.

    How does it work?

    Like all other Python bindings in syslog-ng, the Python HTTP header has two parts. One is the syslog-ng configuration; in this case, it is part of the HTTP destination configuration. The other part is the actual Python code, which can either be included in the syslog-ng configuration or stored in an external Python script.

    Configuration

    The python-http-header configuration is part of the HTTP destination configuration. Here is an example:

    destination d_http {
        http(
            python_http_header(
                class("TestCounter")
                options("header", "X-Test-Python-Counter")
                options("counter", 11)
            # this means that syslog-ng will try to send the HTTP request even when this module fails
                mark-errors-as-critical(no)
            )
            url("http://127.0.0.1:8888")
        );
    };

    From the listed options, only class() is mandatory. This defines the name of the Python class that implements the Python HTTP header.

    Values defined in options() are passed to the __init__() method of the class. This way you can pass initial values from the syslog-ng configuration to the Python code.

    The mark-errors-as-critical() option defines what happens if the Python code fails for whatever reason. If set to “yes”, messages are discarded. Use this if you use the Python HTTP header for authentication, as after a failed authentication you cannot send messages anyway. Set it to “no” if the headers carry useful but not mandatory information. For example, when you send logs to Sumologic, the headers can contain extra information, but the messages are still useful and can reach the destination even without the extra fields. In either case, an error message is generated so you can see what happened.

    Python code

    You can store the Python code either in the syslog-ng configuration or externally. There are many advantages to storing code externally, but it also adds complexity. For shorter, easy-to-understand code, including the code in the configuration is easier. In this case, it needs to be enclosed in a python {} block:

    python {
    from syslogng import Logger
    logger = Logger()
    class TestCounter():
        def __init__(self, options):
            self.header = options["header"]
            self.counter = int(options["counter"])
            logger.debug(f"TestCounter class instantiated; options={options}")
        def get_headers(self, body, headers):
            logger.debug(f"get_headers() called, received body={body}, headers={headers}")
           
            response = ["{}: {}".format(self.header, self.counter)]
            self.counter += 1
            return response
        def __del__(self):
            logger.debug("Deleting TestCounter class instance")
    };

    Here you can see the same class name as defined in the configuration part. Below that there are three methods, of which only get_headers() is mandatory. It receives both the existing headers and the body from syslog-ng. The latter is useful when you need to create a header based on the content of the body. The get_headers() method returns one or more headers to syslog-ng. The above code implements a very simple counter whose value is incremented each time a new HTTP request is sent.

    The optional __init__() method initializes the python-http-header instance based on the options received from the syslog-ng configuration. In the above code, __init__() configures the name of the header and sets the initial value of the counter.

    The optional __del__() method is run when syslog-ng is stopped or reloaded. Here it only creates a debug-level message indicating that it was executed.

    Note: the above code only works when you have a single HTTP worker configured in syslog-ng. If you have multiple workers, the Python code will run concurrently. In that case you need to set up locks around the counter manipulation, as in the sketch below.
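
    A minimal sketch of such a locked counter (my addition, not part of the original example), assuming the workers share a single instance of the class:

    import threading
    from syslogng import Logger

    logger = Logger()

    class TestCounter():
        def __init__(self, options):
            self.header = options["header"]
            self.counter = int(options["counter"])
            self.lock = threading.Lock()

        def get_headers(self, body, headers):
            # Only one worker at a time may read and increment the counter.
            with self.lock:
                value = self.counter
                self.counter += 1
            return ["{}: {}".format(self.header, value)]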

    Testing

    All you need for testing:

    • syslog-ng 3.26 (or later) with Python support installed

    • two open terminal windows

    • netcat

    Preparations

    If your operating system does not have syslog-ng 3.26 (or later) available, check our third-party repositories page. On most Linux distributions, features requiring extra dependencies are available in sub-packages. For Python support, this means that you also need to install a package called syslog-ng-python, syslog-ng-mod-python, or similar. In the case of openSUSE, the netcat (nc) command is installed by default and is part of the netcat-openbsd package.

    The example configuration is based on the configuration snippets from above. We also add a network source and connect the two using a log statement. On most Linux distributions, you can create configuration snippets under /etc/syslog-ng/conf.d with a .conf extension. Otherwise, append the following to syslog-ng.conf:

    python {
    from syslogng import Logger
    logger = Logger()
    class TestCounter():
        def __init__(self, options):
            self.header = options["header"]
            self.counter = int(options["counter"])
            logger.debug(f"TestCounter class instantiated; options={options}")
        def get_headers(self, body, headers):
            logger.debug(f"get_headers() called, received body={body}, headers={headers}")
           
            response = ["{}: {}".format(self.header, self.counter)]
            self.counter += 1
            return response
        def __del__(self):
            logger.debug("Deleting TestCounter class instance")
    };
    source s_network {
      network(port(5555));
    };
    destination d_http {
        http(
            python_http_header(
                class("TestCounter")
                options("header", "X-Test-Python-Counter")
                options("counter", 11)
                # this means that syslog-ng will try to send the HTTP request even when this module fails
                mark-errors-as-critical(no)
            )
            url("http://127.0.0.1:8888")
        );
    };
    log {
        source(s_network);
        destination(d_http);
        flags(flow-control);
    };

    Once you saved your configuration and reloaded syslog-ng, you are ready for testing.

    Testing

    In this simplified example we only receive the first HTTP message from syslog-ng. The reason is simple: netcat is easily available, but it cannot properly respond to HTTP requests; it just logs what it receives. Start it listening on port 8888:

    nc -l 8888

    Now send a syslog message to port 5555 of syslog-ng using logger:

    logger --rfc3164 -T -n 127.0.0.1 -P 5555 this is a test message1

    Once you hit enter, you should see a similar output in the terminal window where netcat is running:

    POST / HTTP/1.1
    Host: 127.0.0.1:8888
    User-Agent: syslog-ng 3.26.1/libcurl 7.60.0
    Accept: */*
    X-Syslog-Host: localhost
    X-Syslog-Program: root
    X-Syslog-Facility: user
    X-Syslog-Level: notice
    X-Test-Python-Counter: 11
    Content-Length: 23
    Content-Type: application/x-www-form-urlencoded
    
    this is a test message1

    While the above example is not even a fully working demo (you do not see the counter increment, as netcat only captures this first request), it is more than enough to see that adding headers from Python code works. The extra header is there with the name and initial value as set in the configuration.
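    If you do want to watch the counter increment, one option (not part of the original demo) is a small throwaway HTTP responder that always returns 200 OK, so syslog-ng keeps sending further requests. Here is a minimal sketch using only the Python standard library, listening on the same 127.0.0.1:8888 address:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class PrintingHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # print the headers and body of each request, then acknowledge it
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            print(self.headers)
            print(body.decode())
            self.send_response(200)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8888), PrintingHandler).serve_forever()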

    What is next?

    Now that you have seen the Python HTTP header feature in action, it’s time to develop your own code. The above example is a good starting point: it has all the possible components and it also has debug logging enabled. If you start syslog-ng in the foreground (syslog-ng -Fvde), you can see on screen what initial options the Python code received, as well as the headers and body of the HTTP request sent by syslog-ng.

    If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

    Defining home automation devices in YAML with ESPHome and Home Assistant, no programming required!

    Posted by Christopher Smart on March 31, 2020 09:49 AM

    Having built the core of my own “dumb” smart home system, I have been working on making it smart these past few years. As I’ve written about previously, the smart side of my home automation is managed by Home Assistant, which is an amazing, privacy focused open source platform. I’ve previously posted about running Home Assistant in Docker and in Podman.

    <figure class="wp-block-image size-large"><figcaption>Home Assistant, the privacy focused, open source home automation platform</figcaption></figure>

    I do have a couple of proprietary home automation products, including LIFX globes and Google Home. However, the vast majority of my home automation devices are ESP modules running open source firmware which connect to MQTT as the central protocol. I’ve built a number of sensors and lights and been working on making my light switches smart (more on that in a later blog post).

    I already had experience with Arduino, so I started experimenting with that and it worked quite well. I then had a play with MicroPython and really enjoyed it, but then I came across ESPHome and it blew me away. I have since migrated most of my devices to ESPHome.

    <figure class="wp-block-image size-large"><figcaption>ESPHome provides simple management of ESP devices</figcaption></figure>

    ESPHome is smart in making use of PlatformIO underneath, but its beauty lies in the way it abstracts away the complexities of programming for embedded devices. In fact, no programming is necessary! You simply have to define your devices in YAML and run a single command to compile the firmware blob and flash a device. Loops, initialising and managing multiple inputs and outputs, reading and writing to I/O, PWM, functions and callbacks, connecting to WiFi and MQTT, hosting an AP, logging and more are all taken care of for you. Once up, the devices support mDNS and unencrypted over the air updates (which is fine for my local network). ESPHome supports both the Home Assistant API and MQTT (over TLS for ESP8266) as well as lots of common components. There is even an addon for Home Assistant if you prefer using a graphical interface, but I like to do things on the command line.

    When combined with Home Assistant, new devices are automatically discovered and appear in the web interface. When using MQTT, the topics are published with the retain flag, so that the devices themselves and their last known states are not lost on reboots (you can disable this for testing).

    That’s a lot of things you get for just a little bit of YAML!

    Getting started

    Getting started is pretty easy: just install esphome using pip.

    pip3 install --user esphome

    Of course, you will need a real physical ESP device of some description. Thanks to PlatformIO, lots of ESP8266 and ESP32 devices are supported. Although built on similar SoCs, different devices break out different pins and can have different flashing requirements. Therefore, specifying the exact device is helpful, but it’s not strictly necessary.

    It’s not just bare ESP modules that are supported. These days a number of commercial products are being built using ESP8266 chips which we can flash, like Sonoff power modules, Xiaomi temperature sensors, Brilliant Smart power outlets and Mirabella Genio light bulbs (I use one of these under my stairs).

    For this post though, I will use one of my MH-ET Live ESP32Minikit devices as an example, which has the device name of mhetesp32minikit.

    <figure class="wp-block-image size-medium"><figcaption>MH-ET Live ESP32Minikit</figcaption></figure>

    Managing configs with Git

    Everything revolves around your device’s YAML config file, including configuration, flashing, accessing logs, clearing out MQTT messages and more.

    ESPHome has a wizard which will prompt you to enter your device details and WiFi credentials. It’s a good way to get started; however, it only creates a skeleton file and you have to continue configuring the device manually to actually do anything anyway. So, I think it’s ultimately easier to just create and manage your own files, which we’ll do below. (If you want to give it a try, you can run the command esphome example.yaml wizard, which will create an example.yaml file.)

    I have two Git repositories to manage my ESPHome devices. The first one is for my WiFi and MQTT credentials, which are stored as variables in a file called secrets.yaml (store them in an Ansible vault, if you like). ESPHome automatically looks for this file when compiling firmware for a device and will use those variables.

    Let’s create the Git repo and secrets file, replacing the details below with your own. Note that I am including the settings for an MQTT server, which is unencrypted in the example. If you’re using an MQTT server online, you may want to use an ESP8266 device instead and enable TLS fingerprints for a more secure connection. I should also mention that MQTT is not required; devices can also use the Home Assistant API, and if you don’t use MQTT those variables can be ignored (or you can leave them out).

    mkdir ~/esphome-secrets
    cd ~/esphome-secrets
    cat > secrets.yaml << EOF
    wifi_ssid: "ssid"
    wifi_password: "wifi-password"
    api_password: "api-password"
    ota_password: "ota-password"
    mqtt_broker: "mqtt-ip"
    mqtt_port: 1883
    mqtt_username: "mqtt-username"
    mqtt_password: "mqtt-password"
    EOF
    git init
    git add .
    git commit -m "esphome secrets: add secrets"

    The second Git repo has all of my device configs and references the secrets file from the other repo. I name each device’s config file the same as its name (e.g. study.yaml for the device that controls my study). Let’s create the Git repo and link to the secrets file and ignore things like the builds directory (where builds will go!).

    mkdir ~/esphome-configs
    cd ~/esphome-configs
    ln -s ../esphome-secrets/secrets.yaml .
    cat > .gitignore << EOF
    /.esphome
    /builds
    /.*.swp
    EOF
    git init
    git add .
    git commit -m "esphome configs: link to secrets"

    Creating a config

    The config file contains different sections with core settings. You can leave some of these sections out, such as api, which will disable that feature on the device (the esphome section is required).

    • esphome – device details and build options
    • wifi – wifi credentials
    • logger – enable logging of device to see what’s happening
    • ota – enables over the air updates
    • api – enables the Home Assistant API to control the device
    • mqtt – enables MQTT to control the device

    Now that we have our base secrets file, we can create our first device config! Note that settings with !secret are referencing the variables in our secrets.yaml file, thus keeping the values out of our device config. Here’s our new base config for an ESP32 device called example in a file called example.yaml which will connect to WiFi and MQTT.

    cat > example.yaml << EOF
    esphome:
      name: example
      build_path: ./builds/example
      platform: ESP32
      board: mhetesp32minikit
    
    wifi:
      ssid: !secret wifi_ssid
      password: !secret wifi_password
    
    logger:
    
    api:
      password: !secret api_password
    
    ota:
      password: !secret ota_password
    
    mqtt:
      broker: !secret mqtt_broker
      username: !secret mqtt_username
      password: !secret mqtt_password
      port: !secret mqtt_port
      # Set to true when finished testing to set MQTT retain flag
      discovery_retain: false
    EOF

    Compiling and flashing the firmware

    First, plug your ESP device into your computer, which should bring up a new TTY, such as /dev/ttyUSB0 (check dmesg). Now that you have the config file, we can compile it and flash the device (you might need to be in the dialout group). The run command actually does a number of things, including a sanity check, compiling, flashing and tailing the log.

    esphome example.yaml run

    This will compile the firmware in the specified build dir (./builds/example) and prompt you to flash the device. As this is a new device, an over the air update will not work yet, so you’ll need to select the TTY device. Once the device is running and connected to WiFi you can use OTA.

    INFO Successfully compiled program.
    Found multiple options, please choose one:
      [1] /dev/ttyUSB0 (CP2104 USB to UART Bridge Controller)
      [2] Over The Air (example.local)
    (number): 

    Once it is flashed, the device is automatically rebooted. The terminal should now be automatically tailing the log of the device (we enabled logger in the config). If not, you can tell esphome to tail the log by running esphome example.yaml logs.

    INFO Successfully uploaded program.
    INFO Starting log output from /dev/ttyUSB0 with baud rate 115200
    [21:30:17][I][logger:156]: Log initialized
    [21:30:17][C][ota:364]: There have been 0 suspected unsuccessful boot attempts.
    [21:30:17][I][app:028]: Running through setup()...
    [21:30:17][C][wifi:033]: Setting up WiFi...
    [21:30:17][D][wifi:304]: Starting scan...
    [21:30:19][D][wifi:319]: Found networks:
    [21:30:19][I][wifi:365]: - 'ssid' (02:18:E6:22:E2:1A) ▂▄▆█
    [21:30:19][D][wifi:366]:     Channel: 1
    [21:30:19][D][wifi:367]:     RSSI: -54 dB
    [21:30:19][I][wifi:193]: WiFi Connecting to 'ssid'...
    [21:30:23][I][wifi:423]: WiFi Connected!
    [21:30:23][C][wifi:287]:   Hostname: 'example'
    [21:30:23][C][wifi:291]:   Signal strength: -50 dB ▂▄▆█
    [21:30:23][C][wifi:295]:   Channel: 1
    [21:30:23][C][wifi:296]:   Subnet: 255.255.255.0
    [21:30:23][C][wifi:297]:   Gateway: 10.0.0.123
    [21:30:23][C][wifi:298]:   DNS1: 10.0.0.1
    [21:30:23][C][ota:029]: Over-The-Air Updates:
    [21:30:23][C][ota:030]:   Address: example.local:3232
    [21:30:23][C][ota:032]:   Using Password.
    [21:30:23][C][api:022]: Setting up Home Assistant API server...
    [21:30:23][C][mqtt:025]: Setting up MQTT...
    [21:30:23][I][mqtt:162]: Connecting to MQTT...
    [21:30:23][I][mqtt:202]: MQTT Connected!
    [21:30:24][I][app:058]: setup() finished successfully!
    [21:30:24][I][app:100]: ESPHome version 1.14.3 compiled on Mar 30 2020, 21:29:41

    You should see the device boot up and connect to your WiFi and MQTT server successfully.

    Adding components

    Great! Now we have a basic YAML file, let’s add some components to make it do something more useful. Components are high level groups, like sensors, lights, switches, fans, etc. Each component is divided into platforms which is where different devices of that type are supported. For example, two of the different platforms under the light component are rgbw and neopixelbus.

    One thing that’s useful to know is that platform devices with the name property set in the config will appear in Home Assistant. Those without a name will be local to the device only and just have an id. This is how you can link multiple components together on the device, then present a single device to Home Assistant (like the garage remote below).

    Software reset switch

    The first thing we can do is add a software switch which will let us reboot the device from Home Assistant (or by publishing manually to MQTT or the API). To do this, we add the restart platform from the switch component. It’s as simple as adding this to the bottom of your YAML file.

    switch:
      - platform: restart
        name: "Example Device Restart"
    

    That’s it! Now we can re-run the compile and flash. This time you can use OTA to flash the device via mDNS (but if it’s still connected via TTY then you can still use that instead).

    esphome example.yaml run

    This is what OTA updates look like.

    INFO Successfully compiled program.
    Found multiple options, please choose one:
      [1] /dev/ttyUSB0 (CP2104 USB to UART Bridge Controller)
      [2] Over The Air (example.local)
    (number): 2
    INFO Resolving IP address of example.local
    INFO  -> 10.0.0.123
    INFO Uploading ./builds/example/.pioenvs/example/firmware.bin (856368 bytes)
    Uploading: [=====================================                       ] 62% 

    After the device reboots, the new restart switch should automatically show up in Home Assistant as a device, under Configuration -> Devices, with the name example.

    <figure class="wp-block-image size-large"><figcaption>Home Assistant with auto-detected example device and reboot switch</figcaption></figure>

    Because we set a name for the restart switch, it is visible and called Example Device Restart. If you want to make it visible on the main Overview dashboard, you can do so by selecting ADD TO LOVELACE.

    <figure class="wp-block-image size-large"></figure>

    Go ahead and toggle the switch while still tailing the log of the device and you should see it restart. If you’ve already disconnected your ESP device from your computer, you can tail the log using MQTT.
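    For example, with the mosquitto clients installed, something along these lines should work (assuming ESPHome’s default MQTT log topic of <device name>/debug, and substituting your own MQTT server details from secrets.yaml):

    mosquitto_sub -h mqtt-ip -u mqtt-username -P mqtt-password -t 'example/debug'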

    LED light switch

    OK, so rebooting the device is cute. Now what if we want to add something more useful for home automation? Well that requires some soldering or breadboard action, but what we can do easily is use the built-in LED on the device as a light and control it through Home Assistant.

    On the ESP32 module, the built-in LED is connected to GPIO pin 2. We will first define that pin as an output component using the ESP32 LEDC platform (supports PWM). We then attach a light component using the monochromatic platform to that output component. Let’s add those two things to our config!

    output:
      # Built-in LED on the ESP32
      - platform: ledc
        pin: 2
        id: output_ledpin2
    
    light:
      # Light created from built-in LED output
      - platform: monochromatic
        name: "Example LED"
        output: output_ledpin2

    Build and flash the new firmware again.

    esphome example.yaml run

    After the device reboots, you should now be able to see the new Example LED automatically in Home Assistant.

    <figure class="wp-block-image size-large"><figcaption>Example device page in Home Assistant showing new LED light</figcaption></figure>

    If we toggle this light a few times, we can see the built-in LED on the ESP device fading in and out at the same time.

    <figure class="wp-block-image size-large"></figure>

    Other components

    As mentioned previously, there are many devices we can easily add to a single board like relays, PIR sensors, temperature and humidity sensors, reed switches and more.

    <figure class="wp-block-image size-large"><figcaption>Reed switch, relay, PIR, temperature and humidity sensor (from top to bottom, left to right)</figcaption></figure>

    All we need to do is connect them up to appropriate GPIO pins and define them in the YAML.

    PIR sensor

    A PIR sensor connects to ground and 3-5V, with data connecting to a GPIO pin (let’s use 34 in this example). We read the GPIO pin and can tell when motion is detected because the voltage on the pin goes high. Under ESPHome we can use the binary_sensor component with the gpio platform. If needed, pulling the pin down is easy: just set the pin mode. Finally, we set the class of the device to motion, which will set the appropriate icon in Home Assistant. It’s as simple as adding this to the bottom of your YAML file.

    binary_sensor:
      - platform: gpio
        pin:
          number: 34
          mode: INPUT_PULLDOWN
        name: "Example PIR"
        device_class: motion

    Again, compile and flash the firmware with esphome.

    esphome example.yaml run

    As before, after the device reboots again we should see the new PIR device appear in Home Assistant.

    <figure class="wp-block-image size-large"><figcaption>Example device page in Home Assistant showing new PIR input</figcaption></figure>

    Temperature and humidity sensor

    Let’s do another example, a DHT22 temperature sensor connected to GPIO pin 16. Simply add this to the bottom of your YAML file.

    sensor:
      - platform: dht
        pin: 16
        model: DHT22
        temperature:
          name: "Example Temperature"
        humidity:
          name: "Example Humidity"
        update_interval: 10s

    Compile and flash.

    esphome example.yaml run

    After it reboots, you should see the new temperature and humidity inputs under devices in Home Assistant. Magic!

    <figure class="wp-block-image size-large"><figcaption>Example device page in Home Assistant showing new temperature and humidity inputs</figcaption></figure>

    Garage opener using templates and logic on the device

    Hopefully you can see just how easy it is to add things to your ESP device and have them show up in Home Assistant. Sometimes, though, you need to make things a little more tricky. Take opening a garage door, for example, which only has one button to start and stop the motor in turn. To emulate pressing the garage opener’s button, you need to apply voltage to the opener’s push button input for a short while and then turn it off again. We can do all of this easily on the device with ESPHome and present a single button to Home Assistant.

    Let’s assume we have a relay connected up to a garage door opener’s push button (PB) input. The relay control pin is connected to our ESP32 on GPIO pin 22.

    <figure class="wp-block-image size-large is-resized"><figcaption>ESP32 device with relay module, connected to garage opener inputs</figcaption></figure>

    We need to add a couple of devices to the ESP module and then expose only the button to Home Assistant. Note that the relay only has an id, so it is local only and not presented to Home Assistant. However, the template switch which uses the relay has a name, and it has an action which causes the relay to be turned on and off, emulating a button press.

    Remember how we already added a switch component for the restart platform? Now we need to add the new platform devices to that same section (don’t create a second switch entry).

    switch:
      - platform: restart
        name: "Example Device Restart"
    
      # The relay control pin (local only)
      - platform: gpio
        pin: GPIO22
        id: switch_relay
    
      # The button to emulate a button press, uses the relay
      - platform: template
        name: "Example Garage Door Remote"
        icon: "mdi:garage"
        turn_on_action:
        - switch.turn_on: switch_relay
        - delay: 500ms
        - switch.turn_off: switch_relay

    Compile and flash again.

    esphome example.yaml run

    After the device reboots, we should now see the new Garage Door Remote in the UI.

    <figure class="wp-block-image size-large"><figcaption>Example device page in Home Assistant showing new garage remote inputs</figcaption></figure>

    If you actually cabled this up and toggled the button in Home Assistant, the UI button would turn on, you would hear the relay click on and then off, and the UI button would go back to the off state. Pretty neat!

    There are many other things you can do with ESPHome, but this is just a taste.

    Commit your config to Git

    Once you have a device to your liking, commit it to Git. This way you can track the changes you’ve made and can always go back to a working config.

    git add example.yaml
    git commit -m "adding my first example config"

    Of course it’s probably a good idea to push your Git repo somewhere remote, perhaps even share your configs with others!

    Creating automation in Home Assistant

    Of course, once you have all these devices it’s great to be able to use them in Home Assistant, but ultimately the point of it all is to automate the home. Thus, you can use Home Assistant to set up automations and react to things that happen. That’s largely beyond the scope of this particular post, as I really wanted to introduce ESPHome and show how you can easily manage devices and integrate them with Home Assistant, but there is pretty good documentation online. Enjoy!
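    Just to give a taste, a minimal automation could turn on the LED when the PIR detects motion. This is only a sketch and assumes Home Assistant created the entities binary_sensor.example_pir and light.example_led from the device above (check the actual entity IDs in your own instance):

    automation:
      # turn the example LED on whenever the example PIR reports motion
      - alias: "Turn on example LED when motion detected"
        trigger:
          - platform: state
            entity_id: binary_sensor.example_pir
            to: "on"
        action:
          - service: light.turn_on
            entity_id: light.example_led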

    Overriding PlatformIO

    As a final note, if you need to override something from PlatformIO, for example specifying a specific version of a dependency, you can do that by creating a modified platformio.ini file in your configs dir (copy it from one of your build dirs and modify as needed). This way esphome will pick it up and apply it for you automatically.
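    Purely for illustration, a hypothetical override pinning one library version might look something like the sketch below. In practice, copy the generated platformio.ini from ./builds/example and only change the bits you need; the library name and version here are just placeholders.

    [env:example]
    platform = espressif32
    board = mhetesp32minikit
    framework = arduino
    ; hypothetical example: pin a library to a specific version
    lib_deps =
        AsyncMqttClient@0.8.2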

    Fedora Join SIG 2019 retrospective

    Posted by Fedora Community Blog on March 31, 2020 06:26 AM

    SIG members

    There are five active members animating the SIG, and one new contributor asked to join the SIG in 2019. There are also other people who are not formally part of the SIG but who welcome new people and hang around in the Telegram group, proposing new ideas and giving feedback on various topics.

    We get in touch with new people practically every day.

    The majority of newcomers get in touch via Telegram, some via IRC, and fewer via the mailing list.

    What we have accomplished

    2019 was a year of novelties for the Fedora Join SIG: the community now has a modern forum platform hosted on Discourse, and we have set up a new way for newcomers and wannabe contributors to get in touch with the community.

    Ask Fedora

    Around April, Fedora opened a new forum which replaced the old Askbot instance. You can read the announcement article on the Community Blog. The new forum is powered by Discourse. We spent a lot of time thinking about the best setup for the categories. The goal was to keep their number low and related to the distribution lifecycle, so we ended up with two main categories (for each supported language): installation/upgrade and daily usage. For a more precise categorization of posts, people can use tags (by using the existing ones or by creating new ones).

    Obviously, as usual, we received some ranting about the change at first, mainly because of the broken links people could encounter on search engines still pointing to the old Ask Fedora instance. But it was our intention to collect useful and frequently asked questions from Askbot and convert them to quick-docs (maybe with the help of some volunteers). So, apart from the initial criticism, Ask Fedora is now well established and there are new discussions every day, with generally polite and constructive behaviour that leads to many issues being solved. Another goal of the new forum was/is to invite and foster new contributions to the community, and not to be only a helpdesk channel.

    For more data and statistics about Ask Fedora, we will probably publish a dedicated article in the near future.

    The new welcome workflow

    In collaboration with Mindshare and other bodies of the Project, we have set up a path for community newcomers. You can read about the idea in the following article.

    <figure class="wp-block-embed-wordpress wp-block-embed is-type-wp-embed is-provider-fedora-community-blog">
    Fedora Join is trying a new people focused workflow for newcomers
    <iframe class="wp-embedded-content" data-secret="KpHxoKVCFq" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://communityblog.fedoraproject.org/fedora-join-is-trying-a-new-people-focused-workflow-for-newcomers/embed/#?secret=KpHxoKVCFq" style="position: absolute; clip: rect(1px, 1px, 1px, 1px);" title="“Fedora Join is trying a new people focused workflow for newcomers” — Fedora Community Blog" width="600"></iframe>
    </figure>

    This workflow is still a work in progress, in the sense that we are trying to improve it every day.

    Classrooms

    Between the last months of 2018 and the first months of 2019 we were able to organize six Fedora Classrooms.

    This is the list:

    • Containers 101 with Podman hosted by Alessandro Arrichiello
    • LaTeX 101 hosted by Ankur Sinha “FranciscoD”
    • Building container images with Buildah hosted by Dan Walsh (we suffered big technical issues)
    • L10N 101 hosted by Silvia Sánchez
    • Fedora Silverblue hosted by Micah Abbott
    • Creating Fedora Badges hosted by Riecatnor

    We found that, from a technical point of view, it is difficult to host a live classroom, especially when there are many attendees. For instance, in the Containers 101 with Podman classroom, where the Bluejeans video platform was used, there were 90 attendees. When we used Jitsi, we realized that it is not the right tool when there are more than 10 attendees.

    So we figured that in the future it could be better to run classrooms asynchronously: the presenter records the video, we upload it to the Fedora YouTube channel, and we create a dedicated Q&A post on discussion.fedoraproject.org in the classroom category.

    Documentation

    The team did a lot of work documenting its SOPs (Standard Operating Procedures) and those for Ask Fedora, moving and updating some of the documents from the wiki to Docs.

    We documented how the workflow for newcomers should work at https://docs.fedoraproject.org/en-US/fedora-join/welcome/welcome/, and thanks to the Design Team we also have a nice banner that you can see at the top of this article.

    For Ask Fedora we wrote

    What about the future

    There is always room for improvement.

    We should find ways of encouraging more community members involved in Fedora Join to help newcomers, and encourage more community members from other teams to hang around the Fedora Join SIG community channels in order to help newcomers find their path and discover where they can be helpful in the Project.

    We should also revive the Fedora Classroom program, and again, we need more community members interested in hosting live lessons or audio/video recordings.

    We need your help. We need community members’ involvement:

    • to help with the new process
    • to help with hanging out in channels to speak to newbies
    • to help with classrooms
    • to work on quick docs, and docs in general

    The post Fedora Join SIG 2019 retrospective appeared first on Fedora Community Blog.

    Introducing ManualBox project

    Posted by Kushal Das on March 31, 2020 06:02 AM

    One of the major security features of QubesOS is the file vaults, where access to specific files can only happen via user input in the GUI applet. The same goes for split-ssh, where the user has to allow access to the ssh key (which actually lives on a different VM).

    I was hoping to have similar access control over important dotfiles with passwords, ssh private keys, and other similar files on my regular desktop system. I am introducing ManualBox, which can provide similar access control on normal Linux desktops or even on a Mac.

    GIF of usage

    How to install?

    Follow the installation guide for Mac in the wiki. For Linux, we are yet to package the application, so you can run it directly from the source (without installing).

    git clone https://github.com/kushaldas/manualbox.git
    cd manualbox
    

    On Fedora

    sudo dnf install python3-cryptography python3-qt5 python3-fusepy python3-psutil fuse -y
    

    On Debian

    sudo apt install python3-cryptography python3-pyqt5 python3-fusepy python3-psutil fuse
    

    Usage guide

    To start the application from source:

    On Linux:

    ./devscripts/manualbox
    

    On Mac:

    Click on the App icon like any other application.

    If you are running the tool for the first time, it will create a new manualbox and mount it in the ~/secured directory. It will also give you the password; please store it somewhere securely, as you will need it to mount the filesystem next time.

    initial screen

    After selecting (or directly typing) the mount path (which must be an empty directory), type in the password and then click on the Mount button.

    File system mounted

    Now, if you try to access any file, the tool will show a system notification, and you can either Allow or Deny via the following dialog.

    Allow or deny access

    Every time you allow file access, it shows the notification message via the system tray icon.

    Accessing file msg

    To exit the application, first click on Unmount, then right-click on the systray icon and click on Exit, or close via the window close button.

    How to exit from the application

    Usage examples (think about your important dotfiles with passwords/tokens)

    Note: If you open the mounted directory path from a GUI file browser, you will get too many notifications, as these browsers open the file many times separately. It is better to have your GUI application or command line tool use those files as required.

    Thunderbird

    You can store part of your Thunderbird profile in this tool. That way, Thunderbird needs your permission for access when you start the application.

    ls -l ~/.thunderbird/
    # now find your right profile (most people have only one)
    mv ~/.thunderbird/xxxxxx.default/logins.json ~/secured/
    ln -s ~/secured/logins.json ~/.thunderbird/xxxxxx.default/logins.json
    

    SSH private key

    mv ~/.ssh/id_rsa ~/secured/
    ln -s ~/secured/id_rsa ~/.ssh/id_rsa
    

    If you have any issues, please file issues or even better a PR along with the issue :)

    DevConf.CZ 2020

    Posted by Felipe Borges on March 30, 2020 02:32 PM

    Once again, DevConf.CZ is our meeting-while-freezing winter conference in Brno. For this year I cooked up two talks:

    An hour-long talk about Portals during the first day of the conference. The room was almost full and the questions were very relevant. A few attendees met me after the talk seeking help to make their apps start using Portals and with ideas for new Portals.  You can watch the recordings below:

    <iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="295" src="https://www.youtube.com/embed/3rCIEzfZw1I?feature=oembed" title="Application sandboxing with Flatpak Portals - DevConf.CZ 2020" width="525"></iframe>

    On the last conference day, I had a quick twenty-minute talk about GNOME Boxes in the virtualization track. The audience wasn’t our known faces from the desktop talks, so I got the chance to show Boxes for the first time to a bunch of people. I did a quick presentation with live demos and Q&A. It was a success IMHO. Check the recordings below:

    <iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="allowfullscreen" frameborder="0" height="295" src="https://www.youtube.com/embed/uRjY2plEymQ?feature=oembed" title="GNOME Boxes: Virtualization made simple - DevConf.CZ 2020" width="525"></iframe>

    Besides, I participated in the “Diversity and Inclusion” and “Women in Open source” meetups. It was a good opportunity to see what other teams are doing to be more diverse and also to share my personal experiences with mentoring with Outreachy.

    Langdon White had a talk on Fedora Silverblue raising important questions about the development workflow in it. I was glad some of their issues were already addressed and fixed, but I recommend that those who didn’t attend this talk watch the recording. It is important feedback.

    I felt honored to be mentioned in Rebecca Fernandez’s talk about “Growing your career via open source contributions”, where she had slides showing people’s stories, including mine.

    I managed to catch up with the developments of the virgil driver on Windows in order to support Direct3D, and discuss other future developments with folks from the SPICE team.

    Other than that, I attended many podman/containers talks to better understand their development workflows and how we could accommodate these workflows in Silverblue. I spoke to Red Hatters from other teams that need CodeReadyContainers to test their applications, and how we could improve their workflow in Fedora Workstation.

    Lastly, I had a great time with [delicious] food and drinks at the DevConf Party in Fleda, which is 200 meters away from our flat. :-)

    What’s new in the Fedora Security Lab?

    Posted by Fabian Affolter on March 30, 2020 08:33 AM

    Unlike other security distributions, the Fedora Security Lab (speaking about the live media here) does not stand alone. The Fedora Security Lab is a package set inside the Fedora Package Collection, and a part of that package set is available as live media.

    Everything, I mean everything, that is present in this package set can be used on a regular Fedora installation (some parts are also available for EPEL). You don’t have to switch to a different distribution to perform a security test or an assessment, or to do forensics; simply use your day-to-day system.

    tl;dr

    DNS

    • massdns – High-performance DNS stub resolver for bulk lookups and reconnaissance
    • shuffledns – Wrapper around massdns
    • aiodnsbrute – DNS asynchronous brute force utility
    • dnstwist – Domain name permutation engine

    amass is on the list.

    Fuzzer

    • wfuzz – Web fuzzer
    • ffuf – Fast web fuzzer written in Go
    • gobuster – Directory/File, DNS and VHost busting tool

    patator and gospider are on their way.

    Slowloris

    • goloris – Slowloris for NGINX DoS
    • slowloris – Low bandwidth DoS tool
    • python-friendlyloris – A Slow Loris package for Python

    Android

    • adb-enhanced – Swiss-army knife for Android testing and development
    • python-adb-shell – Python implementation for ADB shell and file sync

    python-adb, andriller, androguard and androwarn are just around the corner.

    Reverse engineering

    • aeskeyfind – Locate 128-bit and 256-bit AES keys in a captured memory image

    rsakeyfind, binee and angr are work-in-progress.

    Misc

    • kerberoast – Kerberos security toolkit for Python
    • httprobe – Probing tool for working HTTP and HTTPS servers
    • python-nessus-file-reader – Python file reader for nessus files

    Using data from spreadsheets in Fedora with Python

    Posted by Fedora Magazine on March 30, 2020 08:00 AM

    Python is one of the most popular and powerful programming languages available. Because it’s free and open source, it’s available to everyone — and most Fedora systems come with the language already installed. Python is useful for a wide variety of tasks, but among them is processing comma-separated value (CSV) data. CSV files often start off life as tables or spreadsheets. This article shows how to get started working with CSV data in Python 3.

    CSV data is precisely what it sounds like. A CSV file includes one row of data at a time, with data values separated by commas. Each row is defined by the same fields. Short CSV files are often easily read and understood. But longer data files, or those with more fields, may be harder to parse with the naked eye, so computers work better in those cases.

    Here’s a simple example where the fields are Name, Email, and Country. In this example, the CSV data includes a field definition as the first row, although that is not always the case.

    Name,Email,Country
    John Q. Smith,jqsmith@example.com,USA
    Petr Novak,pnovak@example.com,CZ
    Bernard Jones,bjones@example.com,UK

    Reading CSV from spreadsheets

    Python helpfully includes a csv module that has functions for reading and writing CSV data. Most spreadsheet applications, both native like Excel or Numbers, and web-based such as Google Sheets, can export CSV data. In fact, many other services that can publish tabular reports will also export as CSV (PayPal for instance).

    The Python csv module has a built-in reader class called DictReader that can deal with each data row as an ordered dictionary (OrderedDict). It expects a file object to access the CSV data. So if our file above is called example.csv and is in the current directory, this code snippet is one way to get at this data:

    f = open('example.csv', 'r')
    from csv import DictReader
    d = DictReader(f)
    data = []
    for row in d:
        data.append(row)

    Now the data object in memory is a list of OrderedDict objects:

    [OrderedDict([('Name', 'John Q. Smith'),
                   ('Email', 'jqsmith@example.com'),
                   ('Country', 'USA')]),
      OrderedDict([('Name', 'Petr Novak'),
                   ('Email', 'pnovak@example.com'),
                   ('Country', 'CZ')]),
      OrderedDict([('Name', 'Bernard Jones'),
                   ('Email', 'bjones@example.com'),
                   ('Country', 'UK')])]

    Referencing each of these objects is easy:

    >>> print(data[0]['Country'])
    USA
    >>> print(data[2]['Email'])
    bjones@example.com

    By the way, if you have to deal with a CSV file with no header row of field names, the DictReader class lets you define them. In the example above, add the fieldnames argument and pass a sequence of the names:

    d = DictReader(f, fieldnames=['Name', 'Email', 'Country'])

    A real world example

    I recently wanted to pick a random winner from a long list of individuals. The CSV data I pulled from spreadsheets was a simple list of names and email addresses.

    Fortunately, Python also has a helpful random module good for generating random values. The randrange function in the Random class from that module was just what I needed. You can give it a regular range of numbers — like integers — and a step value between them. The function then generates a random result, meaning I could get a random integer (or row number!) back within the total number of rows in my data.

    So this small program worked well:

    from csv import DictReader
    from random import Random
    
    d = DictReader(open('mydata.csv'))
    data = []
    for row in d:
        data.append(row)
    
    r = Random()
    winner = data[r.randrange(0, len(data), 1)]
    print('The winner is:', winner['Name'])
    print('Email address:', winner['Email'])

    Obviously this example is extremely simple. Spreadsheets themselves include sophisticated ways to analyze data. However, if you want to do something outside the realm of your spreadsheet app, Python may be just the trick!


    Photo by Isaac Smith on Unsplash.

    Resolving mDNS across VLANs with Avahi on OpenWRT

    Posted by Christopher Smart on March 30, 2020 02:52 AM

    mDNS, or multicast DNS, is a way to discover devices on your network at .local domain without any central DNS configuration (also known as ZeroConf and Bonjour, etc). Fedora Magazine has a good article on setting it up in Fedora, which I won’t repeat here.

    If you’re like me, you’re using OpenWRT with multiple VLANs to separate networks. In my case this includes my home automation (HA) network (VLAN 2) from my regular trusted LAN (VLAN 1). Various untrusted home automation products, as well as my own devices, go into the HA network (more on that in a later post).

    In my setup, my OpenWRT router acts as my central router, connecting each of my networks and controlling access. My LAN can access everything in my HA network, but generally only established/related TCP traffic is allowed back from HA to LAN. There are some exceptions, though, for example my Pi-hole DNS servers which are accessible from all networks, but otherwise that’s the general setup.

    With IPv4, mDNS communicates by sending IP multicast UDP packets to 224.0.0.251 with source and destination ports both using 5353. In order to receive requests and responses, your devices need to be running an mDNS service and also allow incoming UDP traffic on port 5353.
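    If you want to watch this traffic for yourself, a quick capture along these lines on the router (or any Linux host with tcpdump installed) will show the mDNS queries and responses; br-lan here is my LAN bridge interface, so adjust it for your setup:

    tcpdump -i br-lan -n udp port 5353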

    As multicast is local only, mDNS doesn’t work natively across routed networks. Therefore, this prevents me from easily talking to my various HA devices from my LAN. In order to support mDNS across routed networks, you need a proxy in the middle to transparently relay requests and responses back and forth. There are a few different options for a proxy, such as igmpproxy, but I prefer to use the standard Avahi daemon on my OpenWRT router.

    Keep in mind that doing this will also mean that any device in your untrusted networks will be able to send mDNS requests into your trusted networks. We could stop those mDNS requests with an application layer firewall (which iptables is not), or perhaps with connection tracking, but we’ll leave that for another day. Even if untrusted devices discover addresses in the LAN, the firewall is stopping them from actually communicating (at least on my setup).

    Set up Avahi

    Log onto your OpenWRT router and install Avahi.

    opkg update
    opkg install avahi-daemon

    There is really only one thing that must be set in the config, and that is to enable reflector (proxy) support. This goes under the [reflector] section and looks like this.

    [reflector]
    enable-reflector=yes

    While technically not required, you can also set which interfaces to listen on. By default it will listen on all networks, which includes WAN and other VLANs, so I prefer to limit this just to the two networks I need.

    On my router, my LAN is the br-lan device and my home automation network on VLAN 2 is the eth1.2 device. Your LAN is probably the same, but your other networks will most likely be different. You can find these in your router’s Luci web interface under Network -> Interfaces. The interfaces option goes under the [server] section and looks like this.

    [server]
    allow-interfaces=br-lan,eth1.2

    Now we can start and enable the service!

    /etc/init.d/avahi-daemon start
    /etc/init.d/avahi-daemon enable

    OK that’s all we need to do for Avahi. It is now configured to listen on both LAN and HA interfaces and act as a proxy back and forth.

    Firewall rules

    As mentioned above, devices need to have incoming UDP port 5353 open. In order for our router to act as a proxy, we must enable this on both LAN and HA network interfaces (we’ll just configure for all interfaces). As mDNS multicasts to a specific address with source and destination ports both using 5353, we can lock this rule down a bit more.

    Log onto your firewall Luci web interface and go to Network -> Firewall -> Traffic Rules tab. Under Open ports on router add a new rule for mDNS. This will be for UDP on port 5353.

    <figure class="wp-block-image size-large"></figure>

    Find the new rule in the list and edit it so we can customise it further. We can set the source to be any zone, the source port to be 5353, the destination zone to be Device (input), and the destination address and port to 224.0.0.251 and 5353. Finally, the action should be set to accept. If you prefer not to allow all interfaces, then create two rules instead and restrict the source zone of one to LAN and of the other to your untrusted network. Hit Save & Apply to make the rule!

    <figure class="wp-block-image size-large"></figure>
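    If you prefer the command line over Luci, a roughly equivalent rule can be added to /etc/config/firewall by hand. This is only a sketch of what the Luci steps above should produce, so double check it against your own generated config:

    config rule
        option name 'Allow-mDNS'
        option src '*'
        option src_port '5353'
        option proto 'udp'
        option dest_ip '224.0.0.251'
        option dest_port '5353'
        option target 'ACCEPT'

    Reload the firewall afterwards with /etc/init.d/firewall reload.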

    We should now be able to resolve mDNS from LAN into the untrusted network.

    Testing

    To test it, ensure your Fedora computer is configured for mDNS and can resolve its own .local name. Now, try to ping a device in your untrusted network. For me, this will be study.local, which is one of my home automation devices in my study (funnily enough).

    ping study.local

    When my computer in LAN tries to discover the device running in the study, the communication flow looks like this.

    • My computer (192.168.0.125) on LAN tries to ping study.local but needs to resolve it.
    • My computer sends out the mDNS UDP multicast to 224.0.0.251:5353 on the LAN, requesting address of study.local.
    • My router (192.168.0.1) picks up the request on LAN and sends the same multicast request out on the HA network (10.0.0.1).
    • The study device on HA network picks up the request and multicasts the reply of 10.0.0.202 back to 224.0.0.251:5353 on the HA network.
    • My router picks up the reply on HA network and re-casts it on LAN.
    • My computer picks up the reply on LAN and thus learns the address of the study device on HA network.
    • My computer successfully pings study.local at 10.0.0.202 from LAN by routing through my router to HA network.

    This is what a packet capture looks like.

    16:38:12.489582 IP 192.168.0.125.5353 > 224.0.0.251.5353: 0 A (QM)? study.local. (35)
    16:38:12.489820 IP 10.0.0.1.5353 > 224.0.0.251.5353: 0 A (QM)? study.local. (35)
    16:38:12.696894 IP 10.0.0.202.5353 > 224.0.0.251.5353: 0*- [0q] 1/0/0 (Cache flush) A 10.0.0.202 (45)
    16:38:12.697037 IP 192.168.0.1.5353 > 224.0.0.251.5353: 0*- [0q] 1/0/0 (Cache flush) A 10.0.0.202 (45)

    And that’s it! Now we can use mDNS to resolve devices in an untrusted network from a trusted network with zeroconf.

    Episode 189 - Video game hackers - speedrunning

    Posted by Open Source Security Podcast on March 30, 2020 12:08 AM
    Josh and Kurt talk about video games and hacking. Specifically how speed runners are really just video game hackers.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/13751699/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes


      Modifying the OpenWRT relayd package C source code to help set better static ARP entries (ATF_PERM)

      Posted by Jon Chiappetta on March 29, 2020 05:43 PM

      If you are bridging 2+ wireless networks together, it can get tricky for the ARP mappings to stay accurate and efficient throughout the network as you move around from AP to AP. I wrote a small shell script that runs on each router and reports its WiFi-AP associations & DHCP entries that are part of the overall /20 network. This client mapping data is sent around to each router to determine who has the proper client and where/how to find them. I noticed that the relayd bridging app was putting in ARP entries that were becoming stale/incorrect over time as you moved around, so I modified its source code to outsource this work to a file that holds an updated master set of static interface entries based on each DHCP/WiFi association for each router on the network.

      (I lost access to my fossjon github account due to 2-factor auth being lost)

      https://github.com/stoops/relayd/compare/master…stoops:static

      (the steps to compile this on a armv7l router itself are in the README I added)

      For example, a shell script running on the router (with all the relevant network information) can determine and write out a small mapping file that this modified framework can use to keep a cleaner and more accurate ARP table (/tmp/arp.txt):

      wlan1 00:2b:3d:5a:4a:9e 192.168.18.103
      

      The added capability in relayd will then ensure that, for any of the bridged interfaces listed, the only ARP entry that exists is the one specified in the file, and that it is set to permanent status (with deletion of incorrect entries added in).
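      Purely as an illustration (not the script actually used here), a mapping file in that format could be generated from dnsmasq’s lease file, assuming the default /tmp/dhcp.leases location on OpenWRT and that all of the clients sit behind wlan1:

      # dnsmasq lease lines look like: <expiry> <mac> <ip> <hostname> <client-id>
      awk '{ print "wlan1", $2, $3 }' /tmp/dhcp.leases > /tmp/arp.txt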

      I am running this as a test on my parents’ home network to see how long it lasts for; modifying C is tricky for me! 🙂

      Making a git forge decision

      Posted by Fedora Community Blog on March 27, 2020 06:19 PM

      After evaluating over 300 user stories from multiple stakeholders, the Community Platform Engineering (CPE) team have aligned on a decision for the git forge that CPE will operate for the coming years. We are opting for GitLab for our dist git and project hosting and will continue to run pagure.io with community assistance.

      A lot of comments and concerns were raised about the suitability of GitHub as a forge of choice. The preference from all stakeholders (Fedora, CentOS, RHEL, CPE) is that GitHub is not a contender and not a preference. With that in mind, we have decided to not analyse it as an option and respect the wider wishes of our stakeholders. Therefore the rest of this analysis focuses on Pagure versus GitLab as our choice.

      Looking at the user story list, we have a picture of a standard set of practices that users expect from a git forge. The basics of storing code, accessing it, merging, forking and the traditional git workflow are satisfied by both forges under investigation.

      A key requirement coming to us is security. The need for HTTPS pushes and the need for more stringent branch control via protected & private branches are key operating requirements of the CentOS stakeholders. The need to interface with internal and external users in a private capacity whereby embargoed content can be worked on in private is a necessary requirement. 

      Another key requirement is usability and accessibility. It is clear that our current forge solution is used as a mixture of ticket tracker, work planning, code repository, and storage of documents and other artifacts. The barrier to usage needs to be low to attract drive-by users, and a strong representation was made for the need to have more accessible ways to interface with the system, from a GUI to a command line client.

      Developer-centric needs came from multiple sources. Integrations with the daily workflow, integrations within the IDE, integrations in an always-ready and always-on approach (SLA requirements were high), the ability to use the forge as a means to improve the code base (auto notifications of issues, interactive PR reviews, etc.) and ways of working informed by analytical output were all raised.

      A big factor in the decision needs to be the immediate usability to meet stakeholder needs, which includes an immovable deliverable for CentOS Stream that CPE must deliver by the end of the year.

      Another major factor is the stability, availability and responsiveness of the platform chosen. While no forge meets the full suite of requirements, the issue of stability, availability and some of the richer features that were requested are currently not available in Pagure. GitLab provides the most feature-rich experience out of the box and the recommendation of the CPE management is to opt for GitLab as our chosen forge for dist-git and general project hosting. For pagure.io, we want to offer it to the community to maintain. CPE would provide “power and ping” and the rest of it will be up to the community willing to do the work. If no one steps up to pick the maintenance of pagure.io, it will be a candidate application to sunset. Some top level requirements which helped us arrive at this decision:

      • There is a need for CentOS Stream to integrate with a kernel workflow that is an automated bot driven merging solution (merge trains). This allows for richer CI capabilities and minimises the need for human interaction
      • GitLab provides subgroups allowing for more granular permissions for repos
      • GitLab allows for project planning capability which could make multiple trackers such as Taiga redundant, allowing for the planning and tracking to reside within the repo. It would enrich the current ticket based solution that Pagure has evolved into for some groups
      • 24/7 availability in an SLA model and not hosted by the CPE team freeing up resourcing and removing the need to staff a dedicated team for a git forge SLA which would necessitate a follow-the-sun ops model and a heavy investment in stability and observability of the Pagure solution.

      The opportunity cost to invest our finite resources into bringing Pagure up to the minimum standard that we require by the end of the year would mean feature starving both Fedora and CentOS for the next 18-24 months as we strive for the optimal standard. As a team, we spend 40% of our available resources on keeping the lights on day to day with a very small amount of that improving our technical debt situation. We are spending 30% of our team on delivering CentOS Stream. The available bandwidth for the team is not at a point that we could safely and with confidence deliver the required features to make Pagure work as our forge of choice.

      It additionally would have a longer term impact with our lights on work needing to expand to move Pagure to an SLA, tilting our resourcing plan for that body of work towards 60% of our capacity. We feel this is not a responsible decision that we can make as the inward investment in a forge is not something that we can do at the expense of planned initiatives that are on our backlog. Some of them include a better packager workflow, more investment in CI/CD to remove CPE from manual work and empower the community to do more things in our infrastructure, more observability and monitoring of our infra and services, movement of services towards the Cloud to make use of a modern tech stack and that’s before we consider immovable service progression that we simply have to undertake, for example, the new AAA system.

      However, we do not want to abandon Pagure and our plan going forward is thus:

      • Offer the maintenance of pagure.io to anyone in the community interested in leading it.
      • Engage with GitLab on the possibility of a SaaS offering so that CPE can attain key requirements of uptime, availability and throughput as well as ensuring tooling integrations (such as Fedora Messaging among others) are preserved. Legal considerations with respect to control of code will be our first discussion point with them enabling us to make a SaaS versus self-hosted decision.
      • Keep Pagure running with our oversight while we analyse a sunset timeline which will give a minimum of 12 months notice once we have a plan firmed up. We will fix blocker bugs, address critical vulnerabilities and keep the lights on in the same manner that we have committed to over the last 14 months where Pagure has not been a staffed and supported initiative.
      • Where possible, when we have to update our tooling, we will attempt to refactor our tooling to be forge-agnostic, allowing our communities the choice of storing their code on GitLab or continuing to use pagure.io
      • Watch closely for collaboration with other Communities on Pagure and provide them with guidance and oversight to help the Pagure community grow. We recognise that this is a growing and unique ecosystem and we genuinely want to see it succeed and will do our best to support it in that capacity. To that end we will publish the roadmap difference between Pagure and GitLab to allow the Community to focus on feature enhancements to bridge that gap.
• Facilitate our Communities and assist them in standing up a version of Pagure that can be driven and maintained by the community, allowing a pure open source principles approach for those who seek it.

We recognise how difficult a decision this is and we empathise with the emotional attachment to Pagure. That is why we want a mutually beneficial approach that ultimately allows Pagure to grow and flourish and allows our community members to set up and work with any forge they wish. It also allows the CPE team to focus on adding value to a greater range of initiatives. This approach lets us focus on value-added services and initiatives that will benefit a large percentage of our communities, instead of focusing on a singular foundational service that would ultimately consume our finite resourcing and limit our impact on both communities.

      — Jim and Leigh

      The post Making a git forge decision appeared first on Fedora Community Blog.

      Kiwi TCMS is Open Source Seed Award winner

      Posted by Kiwi TCMS on March 27, 2020 08:47 AM

      Kiwi TCMS is the proud winner of a $10,000 award from Mozilla, Indeed, Open Collective, Ford Foundation & Simply Secure. Read below for the full story!

At the end of January Zahari alerted our team about the Open Source Speed Dating FOSDEM 2020 event and Alex was very swift in filing the application form. Just as we landed in Brussels, ready to host the Testing and Automation devroom and the Open Source Test Management stand, we got the news: Kiwi TCMS had been selected as a participant.

What followed was a very hasty day of preparing a 5-minute pitch and rehearsing it as much as possible so we could be ready to present our project. Alex prepared the pitch and did the final review and polishing together with Anton. For the record, everything was written down on paper, including important facts about the project and the schedule: when and where our slot was, how Alex was going to get there, when he needed to leave to be on time, etc. We believe that preparation was key here, and that's why our team always tries to be prepared when we participate in events! It was as good as it could get, no more changes!

On Feb 1st all hell broke loose - it was day #1 of FOSDEM. The Testing and Automation devroom was full of amazing speakers and packed with people (watch the videos here), there was barely time to eat or drink water, and at 5PM Alex had to rush across town to pitch Kiwi TCMS!

Then everything went like clockwork - the weather was warm for the season, and Alex decided to walk from ULB to La Tricoterie, both so he wouldn't get stuck in traffic and to regulate his stress level and be clear-minded for what came next. He arrived just in time to meet new folks and have a glass of wine before taking his turn with the judges.

Open Source Speed Dating is a format where projects pitch to a team of 3 judges who then follow up with various questions. Their goal is to assess how suitable your project is for the money they are giving away, but also how actually receiving an award would help the project. You do get guidance on how to prepare and what sort of information the judges are looking for. However, you have no idea who the other participants are and who you are competing against! All you have is a 15-minute slot where you have to give your best and hope it is enough.

Afterwards we reunited, did even more walking, played the SPACESHIP at the Let Me Out escape room and finished with a mandatory team dinner in the heart of Brussels.

      Following an internal selection process and due diligence we finally received the award. $10,000 for open source!

As a side note, we also got to know who the other winners were, which can be seen in the Open Source Speed Dating records: F-Droid, ossia, MNT Research GmbH and Kiwi TCMS!

      We’re giving all of it to our community

All money from the Kiwi TCMS Collective will be going towards funding development tasks. As Alex told the judges, this will help us get more hands working on Kiwi TCMS and complete pending work faster. Stay tuned for our bounty program announcement!

      Happy testing!

      Fedora 32 Upgrade Test Day 2020-04-02

      Posted by Fedora Community Blog on March 27, 2020 06:12 AM
      Fedora 32 Upgrade Test Day

      Thursday 2020-04-02 through Monday 2020-04-06, is the Fedora 32 Upgrade Test Day(s)! As part of the preparation for Fedora 32, we need your help to test if everything runs smoothly!

      Why Upgrade Test Day?

We’re approaching the Final Release date for Fedora 32. Most users will be upgrading to Fedora 32, and this test day will help us understand if everything is working perfectly. This test day will cover both a GNOME graphical upgrade and an upgrade done using DNF.

      We need your help!

      All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

      Share this!

      Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

      The post Fedora 32 Upgrade Test Day 2020-04-02 appeared first on Fedora Community Blog.

      All systems go

      Posted by Fedora Infrastructure Status on March 27, 2020 01:15 AM
      New status good: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

      Major service disruption

      Posted by Fedora Infrastructure Status on March 27, 2020 01:06 AM
      New status major: Everything seems to be working. for services: Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, Fedora pastebin service, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Module Build Service, Mirror List, Mirror Manager, Fedora Packages App, Pagure, Fedora People, Package maintainers git repositories, Fedora Container Registry, ABRT Server, Fedora websites, Fedora Wiki, Zodbot IRC bot

      Part 6: What do we do now?

      Posted by Josh Bressers on March 26, 2020 04:46 PM

      Well, we’ve made it to the end. What started out as a short blog post ended up being 7 posts long. If you made it this far I commend you for your mental fortitude.

      I’m going to sum everything up with these 4 takeaways.

      1. Understand the problem we want to solve
      2. Push back on scanner vendors
      3. Work with your vendors
      4. Get involved in open source

      Understand the problem we want to solve

In security it’s sometimes easy to lose sight of what we’re really trying to do. Running a scanner isn’t a goal in itself; the goal is to improve security, or at least it should be. Make sure you never forget what’s really happening. Sometimes, in the excitement of security, the real reason we’re doing what we do can be lost.

      I always hate digging out the old trope “what’s the problem we’re trying to solve” but in this instance I think it’s a good question to ask yourself. Defining problems is really hard. Staying on goal is even harder.

      If we think our purpose is to run the scanners, what becomes our goal? The goal will be to have a clean scan. We know a clean scan is impossible, so what really happens is our purpose starts to twist itself around a disfigured version of reality. I’ve said many times the problem is really insecure applications, or at least that’s the problem I tend to think about. You have to figure this out for yourself. If you have a scanner running make sure you know why.

      Push back on scanner vendors

When a scan has 80% false positives, that’s not because your project isn’t built well; it’s because the scanner has a lot of serious bugs. High false positive rates mean the product is broken; they don’t mean your project is broken. Well, it might be, but probably not. The security industry has come to accept incredibly high false positive rates as normal. We have to break this cycle. It holds the industry back and it makes our jobs horrible. If you are a scanner vendor, go make a sign that says “ZERO FALSE POSITIVES” and hang it on the wall. That’s your new purpose.

Set up a weekly or monthly call with your vendor. Make sure they understand your purpose and goals (remember, your purpose isn’t just to run a scanner). Make them help you through rough patches. If you feel pain because of their product, they should feel it with you. A vendor who won’t work with you is a vendor who needs to be replaced. Good vendors are your partners; your success is their success. Your pain is their pain.

      Work with your vendors

      Now, when you find a scanner that has a lot of bugs, you basically have two choices. You can give up (and sometimes this is an acceptable decision). Or you can work with the vendor. At this stage in the technology, it’s very important we start working with these vendors. Report every false positive as a bug. Make them answer hard questions about the results you see. If nobody pushes back we’re going to see worse results in the future, not better. Products improve because of feedback. They don’t improve if we all just pretend everything is fine. I do think part of the reason code and application scanning seems to have plateaued is because we accepted poor results as normal.

If you are a vendor, remember that reports about false positives are gifts. Make sure you treat false positives like they are important bugs. Like all new markets, there will be winners and there will be losers. If your scanner reports the most results, but most of those are false positives, that’s not progress. The scanners with the most false positives will be on the losing side of history.

      Get involved in open source

      And lastly, help. This sounds like the old “patches welcome” we all love to throw around, but in this case I’m quite serious. Your product is basically open source. Some of the projects you are working with could use a hand to fix some of these findings. As annoying as you think a huge scan report is, imagine getting one when you’re working for free. It’s insulting and degrading. If you have actual real scan results that need fixing in an open source project, don’t dump it over the fence, get in there and help fix problems. If you include a dependency in your project, sending a patch upstream is the same as patching your own application.

If you’ve never contributed to open source it can be terrifying. I just spent longer than I want to admit trying to find a nice “getting started in open source” guide. I wasn’t terribly impressed with anything that came up (if you know of one, let me know, I’ll link it at the bottom). I’m going to start writing a post titled “How to get involved in open source for security people”, but until then, my advice is just go help. You know how GitHub works, get in there and help. Be patient and kind, apologize when you mess up and don’t be afraid to ask stupid questions. Some say there are no stupid questions. There totally are, but that’s not a reason to be a jerk when asking or answering.

      What now?

The one ask I have of everyone reading this is to help educate others on this extremely complicated and important topic of security scanners. It’s important we approach others with empathy and understanding. Security has a long history of being ill-tempered and hard to work with. If someone is misunderstanding how a security scanner works or what it does, it’s our opportunity to help them understand. These scanners are too important to ignore, and they need a lot of work. We don’t change an industry by being annoying idiots. We change it by being respected partners.

      I think we have an opportunity to see composition scanners make things better in the future. Composition security has been a taboo topic for a very long time. Many of us knew what was happening but we didn’t do anything because we didn’t have a solution. Security through obscurity works, until it doesn’t. There’s a lot of work to do, and we have to do it together.

      Now go and teach someone something new.

      Come Socialize at the Fedora Social Hour!

      Posted by Fedora Community Blog on March 26, 2020 03:36 PM

      COVID-19 is getting pretty real, with social distancing, shelter-in-place, and lockdown orders in effect in areas around the world. Some of us are perhaps getting sick of the company we are stuck with, and others of us are feeling pretty isolated without any company at all.

      Fedora Project Leader Matthew Miller and contributor Neal Gompa had the idea for a Fedora Social Hour where folks could video chat in and get a little (virtual) human contact and conversation.

      Sound like a welcome break from isolation to you? Check out the details below!

      Fedora Social Hour

      Thursday, April 2nd at 11 PM UTC

      ( Convert to your timezone )

      How to join:

      We will be hosting this social meetup on matrix.org! No need to download a client, although you’ll need to sign up for an account to participate if you do not already have one. You can view the chat before signing up to see if you want to participate. Here is the URL:

      https://riot.im/app/#/room/#fedora-social-hour:matrix.org

      Last week we did a trial run using Mozilla Hubs. It’s a fun little VR-based chat system with some interesting quirks! One of the requests we had was if there could be some kind of music playing in the background. It looks like Riot.im has a built-in Spotify integration, as well as Jitsi and Etherpad integration. So we’ll be playing around with and testing these goodies out!

      <figure class="wp-block-image size-large"><figcaption>A little preview of how the Riot.im room will look, with Jitsi video chat in the upper right, an etherpad in the upper left, and the main chat below (anonymized here for privacy)</figcaption></figure>

      Do know that Riot.im is an open source client for matrix.org, which is an open source chat protocol (kind of like a next-gen IRC.) If you would prefer to join up with us using IRC, though, here’s how you can do that:

      See you there!

      The post Come Socialize at the Fedora Social Hour! appeared first on Fedora Community Blog.

      Tech Training Outreach – SCaLE 18x (2020)

      Posted by Fedora Community Blog on March 26, 2020 07:00 AM

      Executive Summary

      Our team delivered tech training, help, outreach, and swag items during the SCaLE 18x 2020 event.

      At a Glance:

      • What: An open source software conference in Pasadena, California
      • Where: Pasadena Convention Center
      • When: 5 – 8 March 2020

      Our Team in the Field

      This report is for the following Ambassadors / Red Hat Staff:

      <figure class="wp-block-image size-medium"><figcaption>Brian and Scott welcomed guests to SCaLE 18x.</figcaption></figure>

      What is SCaLE?

SCaLE is an awesome four-day tech conference with open source training and general presentations. This year also marks the eighteenth year in the LA area. Wonder what happened last year? Check out SCALE 17x.

This convention spans the gamut of Linux and technology topics. The Conference Chair, Mr. Ilan Rabinovitch, and Technical Committee Chair, Hriday Balachandran, guided our team through check-in. The entire SCaLE team arranged an all-around info-filled show.

      Conference Highlights

      Day 1: Thursday 5 March

      We met in Pasadena to bring supplies for the expo portion of the conference.

      First of all, our team found exhibit booth number 512.

      Brian and Scott furnished the Fedora event box, banners and banner stands (thanks Veronica!), and tablecloth in the morning. Later that afternoon, Perry dropped off additional supplies.

      After that, Scott and Perry returned in the early evening for pre-setup. We observed a little dust on the tablecloth and some folds on the top of one of the banners. Neither, however, resulted in showstoppers.

      <figure class="wp-block-image size-medium"><figcaption>Our team inspected banners and took care of booth pre-setup tasks</figcaption></figure>

      We also checked in (as applicable) to our various hotels. The room appeared clean and comfortable. It had a lovely wall view 🙂 .

      Iván and Alex arrived in the evening after a lengthy day of customs and travel.

      Day 2: Friday 6 March

      On the following morning, we set up additional computer equipment and swag in our Exhibit Hall booth.

We finished setting up well before our start time of 2 P.M. Even better, the conference supplied us a booth in the center of a high-traffic area, a stone’s throw from the Red Hat booth. Thanks to this prime spot we estimated 300-400 guest visits.

Because we had a good number of ambassadors, we took shifts as needed for grabbing a meal, taking breaks, or learning new things.

      <figure class="wp-block-image size-medium"><figcaption>Our staff learned how to make things do stuff in expo workshops</figcaption></figure>
We met many guests:

1. With really great questions and feedback
2. Who clearly live and breathe Fedora
3. Who provided the secret code words from Fedora social networking streams.

      Additionally, our team helped connect guests with answers and results.

A few guests who had never used Fedora were surprised to learn that it is free to use. Furthermore, some had no idea that some of our Spins, for example Security, even exist.

We also kept the general questions we asked guests open-ended (rather than questions with direct “yes” or “no” answers) to encourage discussion, such as:

      • So tell me what brings you here today?
      • How do you use Fedora?
      • If you’re not using Fedora, what do you use and why?
      • Do you have any suggestions or comments for us to pass upstream to Fedora?
      <figure class="wp-block-image size-medium"><figcaption>The Fedora and Red Hat Teams get together for a delicious feast</figcaption></figure>

      Day 3: Saturday 7 March

      Similarly, we returned the next morning to resume meeting with guests.

      Our second peak day drew in an estimated 350-450 guests.

Later that evening, Perry, Alex, and Iván presented Fedora 31 Highlights. This tech training workshop focused on what’s new in the current release. Almost 20 guests attended this live outreach talk.

Afterward, we assembled for a fun Game Night.

      <figure class="wp-block-image size-medium"><figcaption>Some guests built sculptures during Game Night</figcaption></figure>

      Day 4: Sunday 8 March

The last day, in contrast, brought 200 guests to our booth. Many returned to provide feedback and suggestions at our table.

      <figure class="wp-block-image size-large"><figcaption>Ben Cotton (right) chats with a guest about Fedora</figcaption></figure>

      Ultimately, Ben, Perry, and Veronica packed up the booth for transport.

      Suggestion / Feedback Box Items

      At our booth, we had a sign-in sheet for visitors to offer feedback, suggestions, and comments about Fedora.

      From the data collected, we pulled out some key highlights.

It might help to study whether making more youth-friendly stickers/swag and encouraging under-18 participation in conferences would increase youth attendance.

      There were 38 respondents on the sign-in sheet. Of those,

1. Down-rev Distro: 7 guests appeared to be using very dated versions of Fedora (Fedora 29 or earlier) or had not used it since Fedora 29. One still uses Fedora 8 (?).
      2. Marketing/Demographics: 13 have not used Fedora yet or do not use Fedora. 1 only uses CentOS/RHEL due to work.
3. Marketing/Design Team: 7 had positive booth feedback, including “Best booth ever!,” “Cool booth” (twice), “Cool booth swag,” “I love the stickers,” and “STICKER (sic) ARE GREAT!”

Moreover, various guests had general positive feedback on the Fedora distribution, such as:

      1. All: “Keep up the great work!” “Love it” “Keep doing great!” “FEDORA!” “FEDORA BECAUSE REASON!” “Thanks for fedora (sic) ♡” “#LoveFedora” “#Fedora” “Keep up the great work!” “I teach it at Santa Monica College. It’s good!” “Go on with the good work” “Awesome vers! Thx.” “Occassional (sic) user, love it” “Thanks 🙂” “I’ll definitely try it!”
      2. Marketing/Design: “STICKERS/SUPPORT”
      3. Environment: “Great desktop and GNOME support!”
      4. Spins: “Thank you for keeping the Cinnamon spin”

Guests also provided suggestions, such as:

      1. FAS: “Please fix [FAS] user [account problem] (account name suppressed)”
      2. Stability/Architecture: “Awesome! Dell XPS 15 (7590 [2019 version]) freezes a bit though”
      3. Tool request: “Photo”
      4. Accessibility: 1 felt accessibility is very important.
      5. USB Sticks: 10 would like USB sticks for installation

In addition, guests recommended swag ideas, including:

      1. “Shirts”
      2. “Stickers”
      3. “Hex Stickers”

      What Worked

      <figure class="wp-block-image size-large"><figcaption>Preview of this year’s Fedora ribbon. Design by FAS: duffy</figcaption></figure>
      • The ribbons and lanyards definitely proved eye-catching with Fedora fans.
      • Having our table nearby the Red Hat Team certainly helped.
      • The super-key keycap stickers, shiny “POWERED BY fedora” chassis stickers, “Women in Fedora” stickers, and colorful kid-friendly stickers delighted guests.

      Future Event Box Swag Suggestions

• T-shirts: 25 of each size. Also, having a small set of shirts for our ambassador team to wear would help promote the Fedora wordmark and double as a uniform.
      • About 100 thumb drives available for imaging with Fedora so people new to the distro can easily try it out.
      • Fedora ribbons of different designs
      • Hex stickers of different designs
      • Youth-friendly stickers

      Other General Suggestions

      • Have Fedora sponsor a Fedora Day at SCaLE.
      • Game Night sponsorship might prove advantageous.

      Final Thoughts

In closing, guests appeared to leave the expo with a feeling of community. Reps from other distros also stopped by to say hello because we sparked open conversation.

Overall, the whole team was excited. Most notably, feedback from guests indicated that the talk was helpful.

      In conclusion, that’s it for news from SCaLE 18x. We hope to see you next year!

      <figure class="wp-block-image size-medium"><figcaption>Perry, Alex, and the Team say, “See you next year!”</figcaption></figure>

      The post Tech Training Outreach – SCaLE 18x (2020) appeared first on Fedora Community Blog.

      On being part of the Fedora community

      Posted by Fedora Community Blog on March 25, 2020 08:22 PM

      Hi, everyone. As I am sure you know, I often say that the “Friends” value of the Fedora Foundations is the one that’s personally most important to me. I want to remind everyone that when you are a Fedora contributor — a developer, a writer, an advocate, or any other role in our community — it’s important to keep the spirit of “be excellent to each other” in mind.

      Our Code of Conduct says: members of the Fedora community should be respectful when dealing with other contributors as well as with people outside the Fedora community and with users of Fedora. Please be extra-aware of how your actions even outside of our mailing lists, forums, and channels reflect upon Fedora as a whole.

      We just adopted a new vision statement: The Fedora Project envisions a world where everyone benefits from free and open source software built by inclusive, welcoming, and open-minded communities.  We are continually working to make Fedora an inclusive place where all are welcome. I wish it did not need to be said, but here it is: personal attacks, innuendo, and inciting language are examples of things that do not create a welcoming community, and will not be tolerated in Fedora. We understand that even friends can disagree at times, and that emotions can lead to escalation. The Code of Conduct ticket queue is a safe place where folks can open up an issue to resolve difficult situations. Please make use of it if you ever feel it is warranted.

As I mentioned on the Magazine, these are uncertain times in the face of COVID-19. It is more important than ever that we care for and treat each other well, as we are working virtually on the internet more than ever. On a final note, I am sending more well wishes for the health and safety of our Fedora family. Remember to be excellent to each other. Thanks!

      — Matthew Miller, Fedora Project Leader

      The post On being part of the Fedora community appeared first on Fedora Community Blog.

      Hacking the video stream for BlueJeans on Linux

      Posted by Tom 'spot' Callaway on March 25, 2020 08:02 PM
      Like most of the rest of the world, I'm working from home and stuck inside. I saw some folks who had virtual backgrounds setup on Zoom, and I wondered if something like that was possible for the videoconferencing service that my employer (Red Hat) uses, BlueJeans. The short answer is: No. Bluejeans has no native support for anything other than a regular video cam stream.

      But this is Linux. We don't stop at the short answer.

      I started thinking, surely, it has to be possible to "man in the middle" the video stream. And indeed, it is. I did all of this on Fedora 32 (x86_64), but it should work anywhere else.

      Step 1: v4l2loopback

      v4l2loopback is a kernel module which creates virtual V4L2 loopback video devices. V4L2 devices are what most (all?) webcams supported by Linux are.
      This module is not in the upstream kernel, so you need to pull the sources from git and build it locally. The github home is: https://github.com/umlaeute/v4l2loopback.

      Don't forget to install kernel-devel and kernel-headers that correspond to the running kernel on your system:
      dnf install kernel-devel kernel-headers

      Now, we need to clone the v4l2loopback source code, build it as a module for our kernel, and then install it:
      [spot@localhost ~]$ git clone https://github.com/umlaeute/v4l2loopback.git
      Cloning into 'v4l2loopback'...
      remote: Enumerating objects: 65, done.
      remote: Counting objects: 100% (65/65), done.
      remote: Compressing objects: 100% (40/40), done.
      remote: Total 1771 (delta 28), reused 43 (delta 19), pack-reused 1706
      Receiving objects: 100% (1771/1771), 811.39 KiB | 8.11 MiB/s, done.
      Resolving deltas: 100% (991/991), done.
      [spot@localhost ~]$ cd v4l2loopback
      [spot@localhost v4l2loopback]$ make
      Building v4l2-loopback driver...
      make -C /lib/modules/`uname -r`/build M=/home/spot/v4l2loopback modules
      make[1]: Entering directory '/usr/src/kernels/5.5.0-0.rc7.git1.2.fc31.x86_64'
        CC [M]  /home/spot/v4l2loopback/v4l2loopback.o
        Building modules, stage 2.
        MODPOST 1 modules
        CC [M]  /home/spot/v4l2loopback/v4l2loopback.mod.o
        LD [M]  /home/spot/v4l2loopback/v4l2loopback.ko
      make[1]: Leaving directory '/usr/src/kernels/5.5.0-0.rc7.git1.2.fc31.x86_64'
      [spot@localhost v4l2loopback]$ sudo make install
      make -C /lib/modules/`uname -r`/build M=/home/spot/v4l2loopback modules_install
      make[1]: Entering directory '/usr/src/kernels/5.5.0-0.rc7.git1.2.fc31.x86_64'
        INSTALL /home/spot/v4l2loopback/v4l2loopback.ko
      At main.c:160:
      - SSL error:02001002:system library:fopen:No such file or directory: crypto/bio/bss_file.c:69
      - SSL error:2006D080:BIO routines:BIO_new_file:no such file: crypto/bio/bss_file.c:76
      sign-file: certs/signing_key.pem: No such file or directory
        DEPMOD  5.5.0-0.rc7.git1.2.fc31.x86_64
      make[1]: Leaving directory '/usr/src/kernels/5.5.0-0.rc7.git1.2.fc31.x86_64'
      
      SUCCESS (if you got 'SSL errors' above, you can safely ignore them)
      [spot@localhost v4l2loopback]$ sudo depmod -a

      Now, we can load the v4l2loopback module to create a virtual V4L2 video device:
      [spot@localhost v4l2loopback]$ sudo modprobe v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1

      You can change the card label string to whatever you want. This creates /dev/video10 and labels it as OBS Cam.
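If you want the virtual device to come back after a reboot, one option (not part of the original steps, so treat it as an optional extra) is to have the module loaded at boot with the same parameters via the standard modules-load.d/modprobe.d config files:

# /etc/modules-load.d/v4l2loopback.conf - ask systemd to load the module at boot
v4l2loopback

# /etc/modprobe.d/v4l2loopback.conf - same options as the modprobe command above
options v4l2loopback devices=1 video_nr=10 card_label="OBS Cam" exclusive_caps=1

Keep in mind the module is built out-of-tree, so it needs to be rebuilt and reinstalled after a kernel update.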
      At this point, I played with pushing content to it via ffmpeg and seeing the result via ffplay, but while fun, this was not what I was going for.
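If you want to try the same sanity check, something along these lines should work (test.mp4 is just a placeholder for any video file you have handy):

# push a looping test video into the virtual device at its native frame rate
ffmpeg -re -stream_loop -1 -i test.mp4 -vf format=yuv420p -f v4l2 /dev/video10

# in another terminal, view what the fake "webcam" is producing
ffplay -f v4l2 /dev/video10

Anything that can open a V4L2 capture device should see the same stream, which is exactly what BlueJeans will do later.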

      Step 2: obs-v4l2sink

      obs-v4l2sink is a plugin for OBS (Open Broadcaster Software) Studio that allows it to write video output to a V4L2 device. In order to build this, you need some more dependencies (and you need to have rpmfusion enabled):
      sudo dnf install qt5-qtbase-devel obs-studio obs-studio-devel

      Now, pull the source down from github (https://github.com/CatxFish/obs-v4l2sink):
      [spot@localhost ~]$ git clone https://github.com/CatxFish/obs-v4l2sink.git
      Cloning into 'obs-v4l2sink'...
      remote: Enumerating objects: 94, done.
      remote: Total 94 (delta 0), reused 0 (delta 0), pack-reused 94
      Unpacking objects: 100% (94/94), 40.07 KiB | 683.00 KiB/s, done.
      [spot@localhost ~]$ cd obs-v4l2sink/

      Next, I had to hack one of the .cmake files so it would find the OBS cmake files from the rpmfusion package:
      diff --git a/external/FindLibObs.cmake b/external/FindLibObs.cmake
      index ab0a3de..7758ee3 100644
      --- a/external/FindLibObs.cmake
      +++ b/external/FindLibObs.cmake
      @@ -95,7 +95,7 @@ if(LIBOBS_FOUND)
       
              set(LIBOBS_INCLUDE_DIRS ${LIBOBS_INCLUDE_DIR} ${W32_PTHREADS_INCLUDE_DIR})
              set(LIBOBS_LIBRARIES ${LIBOBS_LIB} ${W32_PTHREADS_LIB})
      -       include(${LIBOBS_INCLUDE_DIR}/../cmake/external/ObsPluginHelpers.cmake)
      +       include(/usr/lib64/cmake/LibObs/ObsPluginHelpers.cmake)
       
              # allows external plugins to easily use/share common dependencies that are often included with libobs (such as FFmpeg)
              if(NOT DEFINED INCLUDED_LIBOBS_CMAKE_MODULES)
      

      With that change, now, I could build this from source:
      [spot@localhost obs-v4l2sink]$ mkdir build && cd build
      [spot@localhost build]$ cmake -DLIBOBS_INCLUDE_DIR="/usr/include/obs" -DCMAKE_INSTALL_PREFIX=/usr ..
      -- The C compiler identification is GNU 10.0.1
      -- The CXX compiler identification is GNU 10.0.1
      -- Check for working C compiler: /usr/lib64/ccache/cc
      -- Check for working C compiler: /usr/lib64/ccache/cc -- works
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Check for working CXX compiler: /usr/lib64/ccache/c++
      -- Check for working CXX compiler: /usr/lib64/ccache/c++ -- works
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Libobs: /usr/bin/../lib64/libobs.so  
      -- Configuring done
      -- Generating done
      -- Build files have been written to: /home/spot/obs-v4l2sink/build
      [spot@localhost build]$ make -j4
      Scanning dependencies of target v4l2sink_autogen
      [ 20%] Automatic MOC and UIC for target v4l2sink
      [ 20%] Built target v4l2sink_autogen
      Scanning dependencies of target v4l2sink
      [ 40%] Building CXX object CMakeFiles/v4l2sink.dir/src/v4l2sink.cpp.o
      [ 60%] Building CXX object CMakeFiles/v4l2sink.dir/v4l2sink_autogen/mocs_compilation.cpp.o
      [ 80%] Building CXX object CMakeFiles/v4l2sink.dir/src/v4l2sinkproperties.cpp.o
      /home/spot/obs-v4l2sink/src/v4l2sink.cpp: In function ‘bool v4l2device_close(void*)’:
      /home/spot/obs-v4l2sink/src/v4l2sink.cpp:217:1: warning: no return statement in function returning non-void [-Wreturn-type]
        217 | }
            | ^
      [100%] Linking CXX shared module v4l2sink.so
      [100%] Built target v4l2sink
      [spot@localhost build]$ sudo make install
      [ 20%] Automatic MOC and UIC for target v4l2sink
      [ 20%] Built target v4l2sink_autogen
      [100%] Built target v4l2sink
      Install the project...
      -- Install configuration: ""
      -- Installing: /usr/lib/obs-plugins/v4l2sink.so
      -- Up-to-date: /usr/share/obs/obs-plugins/v4l2sink/locale
      -- Installing: /usr/share/obs/obs-plugins/v4l2sink/locale/zh-TW.ini
      -- Installing: /usr/share/obs/obs-plugins/v4l2sink/locale/en-US.ini
      -- Installing: /usr/share/obs/obs-plugins/v4l2sink/locale/de-DE.ini
      

      If you're paying close attention, you'll notice that it installed the plugin into /usr/lib, and we need it to be in /usr/lib64. Move that file on over.
      [spot@localhost build]$ sudo mv /usr/lib/obs-plugins/v4l2sink.so /usr/lib64/obs-plugins/v4l2sink.so
      


      Step 3: OBS
Now, open OBS. You should see a selection under the "Tools" menu for V4L2 Video Output; this means the plugin is loaded:

      Set the path to the virtual device we created in Step 1 (/dev/video10):

      Click Start. The dialog doesn't go away, but it is running. You can close the dialog.
      At this point, you need to add some sources. The first source you should add is a Video Capture Source. Click the "+" under Sources, and select Video Capture Device (V4L2). Create new, name it whatever you want. Hit OK. In the Properties dialog that follows, change the Device to your built in camera. NOT YOUR VIRTUAL DEVICE. Change any of the tunables you need here and hit OK.
      Now, you should see the live feed from your webcam in the main OBS window. On my system it didn't take up the full space, so I dragged that box (from the bottom right corner) to fill the space:

      You might also want to lock this source (click the lock next to the Video Capture Source), so that you don't accidentally move it around.

      Step 4: Effects
If you skip this step entirely, you should be able to get BlueJeans working EXACTLY like it would normally, just with OBS in the middle. But that's not why we're here, so let's add two effects. First, I want to add a Red Hat watermark. I downloaded a PNG of the logo with transparency, then I added an additional Source, this time an Image type. I renamed it to Red Hat Logo and hit OK. The Properties dialog prompts me to select the file, I do, then hit OK. You should see it on top of your webcam video feed now (if not, reorder the Sources list so that it is above the Video Capture device in the list). You can lock the logo's position or make it invisible by clicking the lock or eye next to it in the Sources list. BlueJeans adds a small grey overlay to the bottom of all video windows, so I put the logo on the top of mine, but you can move it around until you have it where you like it.

Now, I wanted to add a fun effect, so I downloaded a short video of a butterfly flying across a black background (video used under license from Freestock.com). Once downloaded, I added a third source, this time a Media source. I renamed it to Butterfly and hit OK. In the Properties dialog, I selected the file. I also told it to loop and use hardware decoding when available, then hit OK. You should see the butterfly flying happily in the top left of your feed.

We need to do one more thing: set the black background in the video to be invisible. We do this by adding a Color Key Effect Filter. First, stretch and move the butterfly source so that it is where you want it. Then, right click on the Butterfly source and select Effects. Hit the plus under Effect Filters and select Color Key. Now, change the Key Color Type to "Custom Color" and select black (#ff000000). You should now just see the butterfly in the preview window. Make sure you have the Opacity at 100%, we want to see our Video Capture source behind the butterfly! Hit Close and you should see the butterfly flying by!


      I set it to loop because I want to be able to turn the butterfly on (and off) as needed, and I can do that by toggling the visibility (clicking the eye next to the Butterfly source). If it was not looping, it would play once and stop, whether or not it is visible.

      Step 5: BlueJeans
NOTE: I could not get this to work with Google Chrome/Chromium. There are lots of people who have posted to the internet trying to get help to reliably change the webcam from the default (first found device) in Chrome/Chromium without much success. So, I used Firefox. You need to be sure you haven't given BlueJeans blanket permission to use the webcam, or it will keep trying to use your built-in device. Click on the little "toggle" next to the URL and make sure there are no permissions for the camera under "Permissions".

      Now, when you open BlueJeans, it will prompt you to give it permission to use the camera and microphone. Be sure to select OBS Cam (or whatever you named your virtual device). You can check the remember box now. I did not, because there are some meetings where the other participants may not appreciate a water mark (or they're all Mac users and none of this works for them). Hit Allow, and you're off! Everything else is BlueJeans as normal.


      Step 6: Notes
      When BlueJeans shows you your webcam video, it flips it. This is not how anyone else sees it, don't panic or flip it in OBS. On my system, this works in near realtime, and it only uses about 9% of the CPU. I keep the OBS window open on a separate monitor, so that I can trigger effects during the call if I want to.

      The butterfly video isn't perfect, the Color Key eats out a bit of the body. A better video (with transparency/color key in mind) would solve this issue.

      You don't need to hit "Start Streaming" or "Start Recording" in OBS. The plugin we built, installed, configured, and enabled is piping the video from OBS to /dev/video10. If you want to stop, you just go back to Tools->V4L2 Video Output and hit Stop (or close OBS).

      Oh, remember when I mentioned a virtual background? Well, you can totally do that with this technique, but you really need a solid background, ideally in a color that never shows up on you or your clothes. In the film business, they use "green screen" (aka chroma key or color key) to accomplish this. I don't have a green screen or even a single color wall in my office, so I couldn't do anything else here, but if you do, you can add a Color Key Effect filter to the Video Capture source (right click it) to remove the "background". Then it will become transparent and you can add an image or video source and layer it appropriately.

      If anyone comes up with a clever way to create a virtual background without the need for a "green screen", please share it!

      Language detection in @redken_bot

      Posted by Pablo Iranzo Gómez on March 25, 2020 07:00 PM

      Introduction

Before the move to Python 3, redken had per-group language configuration using i18n. With the upgrade/rewrite to Python 3 there were some issues and I had to remove that support, defaulting everything to English (most of the output was already in English, so not a great loss).

On the other hand, having to manually configure each channel could be problematic, as most users just add the bot to their groups and don’t care about other settings that might be useful, like the welcome message, inactivity timeouts for kicking out inactive users, etc.

      Telegram’s approach

The initial version expanded the regular function that processes the Telegram server message to also take the user-indicated language into consideration and store it in a new field in the database, but that came with problems:

• Not all users had a language configured, so most of the time it was empty (“”)
• When talking in a group, the id was the group’s, so if one user had a language set and then another user wrote in the chat, it just stored the ‘last used’ interface language.

      Python to the rescue

As the bot is written in Python, I did a quick search for language detection and found langdetect, a port of a Java library to Python that looks at certain words, characters, etc. in the text and gives a hint about the language.

So, instead of using the almost-always-empty user-defined interface language, the bot started using langdetect to get the message language based on the text.

This, of course, solved one of the problems (users with no configured language), but it still left the ‘last-language-used’ one and introduced a new one:

• langdetect makes a guess based on common words, accents, characters, etc., but it’s just that: a guess.

      The approach then was to use something that was introduced in @descuenbot and was described in this article: calculate averages based on prior count and new value.
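The rule itself is tiny; here is a minimal sketch of the idea (the helper name is just for illustration, not the exact bot code, which appears in full further down):

def rolling_percent(prev_percent, count, hit):
    # prev_percent: percentage this language had before the new message
    # count: total number of messages, including the new one
    # hit: True if the new message was detected as this language
    target = 100 if hit else 0
    return round(prev_percent + (target - prev_percent) / count, 2)

For example, a language sitting at 75% after 4 messages that misses message number 5 drops to 75 + (0 - 75) / 5 = 60%, which is exactly 3 hits out of 5.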

In this way, the language moved from being just a string to being a dictionary, storing count and lang: % values, for example:

      {"count": 272, "en": 2.2, "es": 75.36, "it": 3.74, "ca": 3.48, "fi": 2.3, "fr": 1.64, "pt": 1.73, "et": 0.97, "ro": 0.68, "de": 1.11, "hr": 0.68, "sw": 1.09, "tl": 0.96, "lt": 0.68, "sk": 1.22, "so": 1.16, "da": 1.18, "sv": 0.67, "tr": 0.58, "hu": 0.46, "vi": 0.43, "sl": 0.41, "no": 0.37}
      

This is my status: out of 272 messages, the library has detected Spanish 75% of the time, plus some other languages. As you might infer, there are a lot of languages listed there that I never used, so it’s really important to keep these values refreshing as the message count grows.

For groups, this becomes even more interesting, as the language stats get updated for each user that speaks in the channel, giving faster, better results on the language actually being used:

      English group:

      {"count": 56, "tr": 1.79, "en": 80.36, "da": 1.79, "af": 1.79, "sl": 1.79, "ca": 1.79, "es": 1.79, "fi": 1.79, "nl": 1.79, "it": 1.79, "sq": 1.79, "so": 1.79}
      

      Spanish group:

      {"count": 140, "es": 61.42, "fi": 0.68, "ca": 1.46, "it": 5.72, "tr": 0.68, "sw": 1.46, "pt": 5.74, "so": 2.12, "en": 8.58, "pl": 0.68, "sv": 1.48, "hr": 0.68, "sk": 0.67, "cy": 3.58, "tl": 1.43, "sl": 0.68, "no": 1.44, "de": 0.69, "da": 0.7}
      

Of course, before integrating this code into @redken_bot, I wrote a small program to validate it:

      from langdetect import detect
      import json
      
      # Add some sentences to an array to test
      text = []
      text.append("It could be that your new system is not getting as much throughput to your hard disks as it should be")
      text.append("Il mio machina e piu veloce",)
      text.append("Je suis tres desolè",)
      text.append("El caballo blando de santiago era blanco",)
      text.append("My tailor is rich",)
      text.append("En un lugar de la Mancha de cuyo nombre no quiero acordarme",)
      text.append("Good morning, hello, good morning hello")
      text.append("No es cierto angel de amor que en esta apartada orilla no luce el sol sino brilla")
      text.append("Tears will be falling under the heavy rain")
      text.append("Que'l heure est il?")
      text.append("Caracol, col col, saca los cuernos al sol")
      
      
      # Create dictionary empty for this to work
      language = {}
      language["count"] = 0
      
      # Process each line in text
      for line in text:
          # As we'll be iterating later over a dictionary, prepare updates in a different one
          updates = {}
      
          # Detec language in the line received
          language_code = detect(line)
      
          print(" ")
          print("Processing Line with detected language ", language_code, "|", line)
      
          updates["count"] = language["count"] + 1
      
          # Check if we need to add key to language
          if language_code not in language:
              print("New key in language, preparing updates")
              # New language % is 100 over the total number of updates received before, so 100% for the first message in a group
              updates[language_code] = 100 / updates["count"]
      
          # Process each key we already had in language
          for key in language:
              # If the new language matches the one detected, give it a 100%, else, 0% , so that we work on % for each language
              if key == language_code:
                  value = 100
              else:
                  value =0
      
              # As we store message count in the same dictionary, we just skip it
              if key != "count" :
      
                  print("Processing key %s in language for average updates" % key)
                  updates[key] = float("{0:.2f}".format(language[key] + ((value - language[key]) / updates["count"])))
                  print("New average: %s for language %s" % (updates[key], key))
      
          print("Updates: %s" % updates)
          language.update(updates)
      
          print(language)
      
          # Validate that final sum of % is close to 100% (consider rounding problems)
          accum = 0
          for key in language:
              if key != 'count':
                  accum = accum + language[key]
      
          print(float("{0:.2f}".format(accum)))
      
      # Dump the json of the final detected languages
      print(json.dumps(language))
      

      Which, when executed, gives as final results:

      Processing Line with detected language  fr | Que'l heure est il?
      Processing key en in language for average updates
      New average: 40.0 for language en
      Processing key it in language for average updates
      New average: 10.0 for language it
      Processing key fr in language for average updates
      New average: 20.0 for language fr
      Processing key es in language for average updates
      New average: 30.0 for language es
      Updates: {'count': 10, 'en': 40.0, 'it': 10.0, 'fr': 20.0, 'es': 30.0}
      {'count': 10, 'en': 40.0, 'it': 10.0, 'fr': 20.0, 'es': 30.0}
      100.0
      
      Processing Line with detected language  es | Caracol, col col, saca los cuernos al sol
      Processing key en in language for average updates
      New average: 36.36 for language en
      Processing key it in language for average updates
      New average: 9.09 for language it
      Processing key fr in language for average updates
      New average: 18.18 for language fr
      Processing key es in language for average updates
      New average: 36.36 for language es
      Updates: {'count': 11, 'en': 36.36, 'it': 9.09, 'fr': 18.18, 'es': 36.36}
      {'count': 11, 'en': 36.36, 'it': 9.09, 'fr': 18.18, 'es': 36.36}
      99.99
      {"count": 11, "en": 36.36, "it": 9.09, "fr": 18.18, "es": 36.36}
      

      Conclusion

The above code was adapted into redken, so when a new user message is received, both the group where the user wrote the sentence and the user itself get an updated dictionary of detected languages.

This ‘moving %’ approach requires only the prior value for each language and the item count to calculate the new one, reducing the information that needs to be stored to a minimum.

      In the future, when I’m adding back strings for languages, I can automate how redken reacts per channel (unless overridden) so that it provides messages in a more natural way for users.

      Enjoy!

      AMD Ryzen - PBO, overclocking and undervolting

      Posted by Radka Janek on March 25, 2020 06:51 PM

      Previously in what’s becoming a Ryzen (3rd gen) min-maxing series, we learned about memory limitations. This time around I’d like to introduce you to three different things that you can do with the clock and voltage. Tweaking PBO for gaming/workstation use, undervolting for server workloads, or overclocking to (not) destroy your processor.

      No matter what you read here, go and enable XMP for your memory :D BAM 10% free performance right there.

      Terminology

           
• PBO: Precision Boost Overdrive
• OC: Overclock
• VRM: Voltage Regulator Module
• VCORE: VDDCR CPU voltage (the stuff we’ll be talking about as just “voltage”)

      PBO and the other alternatives

PBO (Precision Boost Overdrive) is essentially the thing that makes your Ryzen CPU boost above its standard clock. Now, if you google what it is, AMD describes it as a “powerful new feature designed to improve multithreaded performance.” That is far from the truth; in fact, it boosts single to low-thread-count workloads much more than all-thread-count workloads. You’re actually going to get much better high-thread-count performance if you disable PBO and set your clock manually at the maximum safe voltage. (More on that later.) However, this will decrease your single to low-thread-count performance. And as such we’ve got two main use-cases here.

      • Workstation - Use tweaked PBO.
      • Server - Use manual all core scalar. Some people call it “overclocking” but we’re not really increasing the clock beyond what we would experience with PBO. We just fix it to that value.
        • Maxed out performance - use maximum safe voltage (We will learn about it below.)
        • Undervolting - use low voltage, and a bit lower clock. Recommended way to go for servers of any kind.

      What happens if you run a heavy workload on a server with PBO enabled and stock? Well, take for example the Folding@Home initiative to fight COVID-19. They struggle with giving people work units, and if you split your CPU into multiple smaller workers, you will experience very high temperatures, too high to run 24/7. You should really undervolt it. Here is what I observed with a really good case airflow and Noctua D15 on my 3950X and 3900X alike.

      PBO Enabled

Workload                 Temperature
All threads at 10%       60°C
Single thread at 100%    85°C
Half threads at 100%     80°C
All threads at 100%      70°C

      PBO Disabled - undervolted to 1.1V @ 4GHz

Workload                 Temperature
All threads at 10%       45°C
Single thread at 100%    52°C
Half threads at 100%     65°C
All threads at 100%      70°C

      OC Max would be similar to the above undervolting, just add 15°C more to it.

As you can probably deduce without me, PBO is great for gaming and other workstation use. Not so much for servers.

      PBO Tweaking

You can tweak PBO a little bit to gain a tiny bit more performance. It varies from chip to chip; it can be anywhere between 1% and 10% depending on the workload.

      You can follow Buildzoid’s guide on YouTube but he rambles a lot so here is TLDR:

      Find PBO settings in your bios and set it all to manual values:

      • PPT Limit: 300
      • TDC Limit: 230
      • EDC Limit: 230
      • Scalar manual 4x
      • Max boost clock +200MHz
• Thermal throttle to your liking (I’m running with 85, but it never even hits 75°C with my cooling)

      (Hint: it’s in the AI-Tweaker section for Asus mainboards.)

      OC - Maximum safe voltage (aka FIT voltage)

      I like doing these things on Winblows, however it should be possible on Linux with some user experience differences. The tools should be available.

To find out the limit above which you should never go, keep PBO enabled, and grab some tools:

Now fire up prime95 with “small fft” and all of your threads. Next, assuming you’re using the winblows GUI, at the bottom of the hwinfo sensors window you can click a little “reset min/max” button.

      Let it cook for a few minutes. Your maximum voltage is called CPU Core Voltage (SVI2 TFN) (the max column) - exceeding it even by a small amount will cause degradation over several months. Surprisingly AMD engineers with PhDs actually know what they’re doing and the chip is already running at its maximum possible performance out of the box. This is why I said that it’s not exactly overclocking, we don’t want to break the chip.

      OC Tweaking

      And now the fun part. Don’t actually push this with bad mainboards. Generally, B450-chipset mainboards are not fit to run a Ryzen 39xx: often, even at stock, they will cook their VRMs struggling to power it. On the other hand, pretty much all X570 boards have good VRMs and will be fine. You can consult the mainboard master sheet or listen to Buildzoid ramble about mainboards (time table in the comments).

      If you really like to run Buildzoid’s rambling in the background, you can also check out this one on chip degradation. (He does say a few things wrong here and there, but ain’t nobody perfect and the point stands.)

      Beware: stay away from the SoC voltage. Leave it. Don’t touch it. That’s not what we’re doing here. (The SoC voltage drives the I/O die, and the integrated graphics on chips that have them.)

      Undervolting

      Undervolting is simple: set some nice low voltage and start at some reasonable all-core multiplier. This varies from chip to chip of course; for a 3950X you can start with 40.5x at 1.15V (set the vcore via an offset: 1.1V + 0.05V).

      Now you’ve gotta test it. As I said, I like to use Winblows for this purpose; I run a few loops of Intel Burn Test on High. You can also use stress-ng on Linux, but your mileage may vary. I noticed it doesn’t generate as much heat, for example (around 10°C less).

      If it’s absolutely stable and you’re happy with the temperature, you’re done. If it ain’t stable, decrease the multiplier by 0.25. If it’s stable but too hot, decrease the voltage by about 0.02V.
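
      If you want to do the stability and heat check on Linux, here is a rough sketch (mine, not from the post) that launches stress-ng on all cores and records the peak CPU temperature via lm-sensors while it runs. The “Tctl”/“Tdie” labels come from the k10temp driver on Ryzen and are an assumption, so check your own sensors output, and adjust the run time to taste.

      #!/usr/bin/env python3
      # Sketch: load all cores with stress-ng and record the peak temperature.
      import json
      import subprocess
      import time

      def cpu_temperature():
          """Return the hottest Tctl/Tdie reading reported by lm-sensors, if any."""
          data = json.loads(subprocess.check_output(["sensors", "-j"]))
          temps = []
          for features in data.values():
              if not isinstance(features, dict):
                  continue
              for label, subfeatures in features.items():
                  if isinstance(subfeatures, dict) and label.lower() in ("tctl", "tdie"):
                      temps += [float(v) for k, v in subfeatures.items() if k.endswith("_input")]
          return max(temps) if temps else None

      # Load every core with a heavy method for 10 minutes ("--cpu 0" means all CPUs).
      load = subprocess.Popen(["stress-ng", "--cpu", "0",
                               "--cpu-method", "matrixprod", "--timeout", "600s"])
      peak = 0.0
      while load.poll() is None:
          current = cpu_temperature()
          if current is not None:
              peak = max(peak, current)
          time.sleep(2)

      print(f"stress-ng exit code {load.returncode}, peak temperature {peak:.1f}°C")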

      “Overclocking”

      Now the other way around, eh? Let’s start with slightly higher values, say 42x and 1.2V (again, set the vcore via an offset: 1.1V + 0.1V).

      Now test it the same way as with the undervolting above. If it’s unstable, increase the voltage (while keeping it below our safe value). If it’s too hot, you’ve gotta decrease it or improve your cooling. If it’s stable and you’re nowhere near your maximum safe voltage, you can bump the multiplier up by 0.25.

      Have fun!

      In the next blog post we will look at cooling and case airflow, and I will introduce you to my OCD workstation :)

      Part 5: Which of these security problems do I need to care about?

      Posted by Josh Bressers on March 25, 2020 01:01 PM

      If you just showed up here, go back and start at the intro post, you’ll want the missing context before reading this article. Or not, I mean, whatever.

      I’ve spent the last few posts going over the challenges of security scanners. I think the most important takeaway is that we need to temper our expectations. Even a broken clock is right twice a day. So, assuming some of the reported security flaws are real, how can we figure out which ones we should be paying attention to?

      I ran the scan

      If you ran a security scanner, running it was the easy part. What you do with the results of your scan is the challenge. I’ve seen teams just send the scan along to the developers without even looking at it. Never do this. It tells your developers two very important things: 1) you think your time is worth more than theirs, and 2) you aren’t smart enough to parse the scan. Even if one or both of these are true, don’t just dump these scans on someone else. If you ran it, you own it. Suddenly that phone book of a scan is more serious.

      When you have the results of any security report, automated or human-created, how you deal with them depends on a lot of factors. Every organization has different processes, different resources, and different goals. It’s super important to keep in mind the purpose of your organization; resolving security scan reports probably isn’t one of them. Why did you run this scan in the first place? If you did it because everyone else is doing it, reading this blog series isn’t going to help you. Fundamentally, we run these scanners to make our products and services more secure. That’s the context in which we should read these reports: which of these findings make my product or service less secure, and which findings should I fix to make it more secure?

      I was given a scan

      If you were given a scan, good luck. As I mentioned in the previous section, if you were given one of these scans and it’s pretty clear the person giving it to you didn’t read it, there’s nothing wrong with pushing back and asking for some clarification. There’s nothing more frustrating than someone handing you a huge scan with the only comment being “please fix”. As we’ve covered at length, a lot (almost all) of these results are going to be false positives. Now you have to weed through someone else’s problem and try to explain what’s happening.

      I’ve seen cases where a group claims they can’t run an application unless the scan comes back clean. That’s not a realistic goal. I would compare it to only buying computers that don’t crash. You can have it as a requirement, but you aren’t going to find one no matter how hard you try. Silly requirements lead to silly results.

      Falsifying false positives

      If you ran the scan or you were handed one, one of the biggest jobs will be figuring out which results are false positives. I don’t know of a way to do this that isn’t backbreaking manual labor. For a finding to matter, you have to be able to answer “yes” to every one of these questions:

      1. Do you actually include the vulnerable dependency?
      2. Is the version you’re using affected by the issue?
      3. Do you use the affected feature in your application?
      4. Can attackers exploit the vulnerability?
      5. Can attackers use the vulnerability to cause actual harm?

      These steps are hard, manual work, and it’s likely you can’t do them all by yourself. Find some help; don’t try to do everything yourself.

      One really important thing to do as you answer these questions is to document your work. Write down as much detail as you can, because in three months you won’t remember any of this. Also, don’t use whatever scanner ID you get from the vendor; use the CVE ID. Every scanner should be reporting CVE IDs (if they don’t, that’s a bug you should report). Then, if you run a second scanner, you can tell right away whether something has already been investigated, since you’ve already documented the CVE ID. Scanner-specific IDs aren’t useful across vendors.
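
      To make the documentation habit concrete, here is a minimal sketch (mine, not from the post) of a triage record keyed by CVE ID. The fields mirror the five questions above, the CVE IDs are placeholders, and this is just one way to do it; a spreadsheet works fine too.

      from dataclasses import dataclass, field, asdict
      import json

      @dataclass
      class TriageRecord:
          cve_id: str                       # key on the CVE ID, not the vendor's ID
          scanner_ids: list = field(default_factory=list)  # vendor-specific IDs seen
          included: bool = False            # 1. do we actually ship the dependency?
          version_affected: bool = False    # 2. is our version affected?
          feature_used: bool = False        # 3. do we use the affected feature?
          exploitable: bool = False         # 4. can an attacker actually reach it?
          harmful: bool = False             # 5. can exploitation cause real harm?
          notes: str = ""                   # write down *why*; future you will thank you

          def is_real(self) -> bool:
              # A finding only matters if every question above is answered "yes".
              return all([self.included, self.version_affected, self.feature_used,
                          self.exploitable, self.harmful])

      def merge(known: dict, report: list) -> dict:
          """Fold a new scanner's report into what has already been triaged."""
          for record in report:
              if record.cve_id in known:
                  known[record.cve_id].scanner_ids.extend(record.scanner_ids)
              else:
                  known[record.cve_id] = record
          return known

      # Placeholder CVE IDs, for illustration only.
      known = {"CVE-2019-0000": TriageRecord("CVE-2019-0000", ["scannerA-123"],
                                             notes="dependency not shipped in the product")}
      merge(known, [TriageRecord("CVE-2019-0000", ["scannerB-987"])])
      print(json.dumps({cve: asdict(rec) for cve, rec in known.items()}, indent=2))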

      Parsing the positive positives

      Let’s make the rather large leap from running a scan to having some positive positives to deal with. The false positives have been understood, or maybe the scanners have all been fixed so there aren’t any false positives! (har har har) Now it’s time to deal with the actual findings.

      The first and most important thing to understand is that not all of the findings are critical. There is going to be a cornucopia of results. Some will be critical, some will be low. Part of our job is to rank everything in an order that makes sense.

      Don’t trust the severity the scanner gives you. A lot of scanners will assign a severity rating to their findings, but they have no idea how you’re using a particular piece of code or dependency. Their severity ratings should be treated with extreme suspicion. They can be an easy way to do a first-pass ranking, but those ratings shouldn’t be used for anything after the first pass. I’ll write a bit more on where these severities come from in a future post; the short version is the sausage is made with questionable ingredients.

      It makes a lot of sense to fix the critical findings first; nobody will argue this point. A point that is a bit more contentious is not fixing low and moderate findings, at least not at first. You have finite resources. If fixing the critical issues consumes all of your resources, that’s OK. You can mark low findings in a way that says you’re not fixing them now but might fix them later. If your security team comes back claiming that’s not acceptable and you have to fix everything, I suggest a very hearty “patches welcome” be sent their way. In typical software development, minor bugs don’t always get fixed. Security bugs are just bugs: fix the important stuff first, and don’t be afraid to WONTFIX silly things.
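
      As a small illustration of that workflow (my own sketch, not the author’s), the snippet below uses the scanner’s severity only to order the first review pass and then splits the backlog by the severity you assigned during triage. The severity labels and CVE IDs are placeholders.

      # Scanner severities are only good enough to order the first review pass.
      SCANNER_ORDER = {"critical": 0, "high": 1, "moderate": 2, "low": 3}

      def first_pass_queue(findings):
          """findings: list of (cve_id, scanner_severity) tuples."""
          return sorted(findings, key=lambda f: SCANNER_ORDER.get(f[1], len(SCANNER_ORDER)))

      def split_backlog(our_ratings):
          """our_ratings: cve_id -> severity *we* assigned after triage."""
          fix_now = [cve for cve, sev in our_ratings.items() if sev in ("critical", "high")]
          later = [cve for cve, sev in our_ratings.items() if sev not in ("critical", "high")]
          return fix_now, later

      # Placeholder data: review the "critical" finding first, defer the low one.
      queue = first_pass_queue([("CVE-2019-0001", "low"), ("CVE-2019-0002", "critical")])
      fix_now, later = split_backlog({"CVE-2019-0002": "high", "CVE-2019-0001": "low"})
      print(queue, fix_now, later, sep="\n")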

      It’s also really important to avoid trying to “fix” everything just to make the scanner be quiet. If your goal is a clean report, you will suffer other consequences due to this. Beware the cobra effect.

      Can’t we all just get along

      The biggest takeaway from all of this is to understand intent and purpose. If you are running a scanner, understand why. If you’re receiving a report, make sure you ask why it was run and what the expectations of whoever gave it to you are. It’s generally a good idea not to assume malice; these scanners are very new and there is a huge knowledge gap, even among the people who would historically consider themselves security experts. It can get even more complicated because there’s a lot of open source thrown into the mix. The amount of knowledge needed for this problem is enormous, so don’t be afraid to ask lots of questions and seek out help.

      If you are doing the scanning, be patient.

      If you are receiving a scan, be patient.

      Remember, it’s nice to be nice.

      Part 6: What do we do now?

      No. Internet voting is still a No Go.

      Posted by Harish Pillay 9v1hp on March 25, 2020 11:38 AM

      I was asked by a friend why it is that we can’t vote over the Internet. With all of the digitisation happening globally, and the ongoing COVID-19 situation, shouldn’t Singapore – the Smart Nation – hold the general elections (due no later than April 2021) over the Internet?

      One word answer: No.

      Yes, you have done plenty of Internet banking transactions. You’ve sent money to phone numbers, you’ve received money, etc. You’ve bought stuff using your credit card over the Internet and received the goods. And yes, Amazon, Alibaba, PayPal, eBay, etc. are multi-billion-dollar businesses that accept payments over the Internet. It is safe and it works.

      Why? Because of the simple transaction involved: you know what you paid – you can check the ledger and the recipient can check as well. E-commerce sites can see the transactions just as clearly as those involved in the transactions.

      There is no secrecy within a transaction here. There is secrecy across all transactions, but each participant in a transaction knows all the details.

      When you transfer $100 to a bank account over the Internet, you can check that it was delivered/received. You can check that your account was reduced by $100 and the recipient’s increased by $100.

      But if you are NOT part of a transaction, you have no idea what happened. So, global secrecy is enforced and that’s all well and good (hence money laundering, bribery, etc. thrive).

      The democratic process of voting has one critical thing that is different from the usual electronic transactions: the participants of the transaction DON’T KNOW WHAT TRANSPIRED because of vote secrecy.

      I can tell the person who I voted for that I did vote for that person, but there is NO WAY for that person to check that A VOTE did indeed come from me. That person will only see a vote.

      The only country to have gone down the path of Internet voting is Estonia. Even there, participation is not 100%. You cannot e-vote ON THE DAY OF THE VOTE. Here is a page that discusses the software one can use for e-voting. Note that the site says votes from mobile phones are not possible.

      Why is Internet or e-voting a hard problem to solve? It is the conflict between two fundamental requirements: Trust and Secrecy.

      In order to vote, you need to TRUST. You need to trust that the system you are using is indeed secure and safe from manipulation. Open source software is a necessary but insufficient condition for that trust to be established. I could inspect the code, I could compile the code, I could install the code on the voting machine, but there could be something else running in the machine that I can’t check, something that could negate what I’ve done by way of software. I would need a fully trustable piece of hardware. Bunny spoke about building trustable hardware at 36C3 last December. Spoiler alert: No.

      On the assumption that we do trust the hardware, the software, and the standards-based Internet connection that will carry my vote to the vote aggregator, can we trust that end device (hardware and software)? As the vote traverses the Internet, we also have to guard against man-in-the-middle attacks, among many other forms of attack.

      How would I keep my vote secret, as that is a tenet of voting? Voting behaviour can be modified and affected by coercion, intimidation and threats (CIT) – real or perceived – hence the need for secrecy. The Estonian i-Voting model mitigates CIT to an extent because you can i-vote AS MANY TIMES AS YOU WANT before polling day and only the last i-Vote will be counted, and/or you can go to the polling station on the day of the vote and cast your vote there, which will then override all of your i-Votes.

      The struggle between Trust and Secrecy is what is holding back Internet or e-voting. Estonian i-Voting has been reviewed by many people, and this report from 2014 recommends that it not continue.


      Release 5.2.2

      Posted by Bodhi on March 25, 2020 08:31 AM

      v5.2.2

      This is a bugfix release.

      Bug fixes

      • Only pass scalar argument to celery (part 2). Avoid the celery enqueuer
        emitting SQL queries to resolve attributes, and therefore opening new
        transactions (8b30a825).

      Contributors

      The following developers contributed to this release of Bodhi:

      • Clement Verna