
Fedora People

fun with laptops

Posted by Kevin Fenzi on 2024-09-19 01:12:06 UTC

So, rewind to earlier this year: There were 2 laptop announcements of interest to me.

First was the Snapdragon X ARM laptops that were going to come out. Qualcomm was touting that they would have great Linux support and that they were already working on merging things upstream. Nothing is ever that rosy, but I did pick up a Lenovo Yoga Slim 7x that I have been playing with. Look for a more detailed review and status on that one in a bit. The short summary is that it’s a pretty cool laptop and mainstream Linux support is coming along, but it’s not yet ready to be a daily laptop IMHO.

The second was Framework announcing that a new batch of laptops would be coming out with some nice upgrades, so I pre-ordered one of the Ryzen ones. But reader, you may ask: “don’t you already have a Framework Ryzen laptop? And aren’t they supposed to be upgradable? So why would you order another one?” To which I answer: yes, and yes, and… because I wanted so many new things it seemed easier to just order a new one and get a spare/second laptop out of it.

I have one of the very first generation Framework 13 laptops. It was originally ordered with an Intel 11th gen CPU/mainboard and shipped in July of 2021, almost 3.5 years ago. So what’s in the newer/latest version that I wanted?

  • Better hinges (the old ones are kinda weak, and the display can ‘flop’ if you carry the laptop by it).
  • New top cover. The old one is the multipart design; the new ones are a single-piece aluminum part.
  • New camera that’s supposedly better.
  • New battery (ok, I replaced the battery in my old one a while back, but it’s always nice to have a new battery).
  • Replacement input cover (the thing with the keyboard/touchpad). After hammering on mine for 3.5 years, the tab and/or alt keys stick, which makes moving between windows frustrating. Also, the new one has no windows key, just a ‘super’ key.
  • Higher resolution / refresh rate display (120Hz, 2880×1920, and matte vs 60Hz, 2256×1504, and glossy). In particular, the glossy finish is very annoying in highly reflective areas.

So, I could have replaced all those things, but at that point it seemed like it would be easier to just move to a new chassis and have a spare.

Of course things didn’t go as planned. The laptop arrived, I swapped my memory and NVMe drive over to it, and… it didn’t boot. I spent a fair bit of time going back and forth with Framework support. They wanted videos/pictures of most everything and had me do a bunch of things to isolate the problem. They decided it was a bad monitor/display cable and input cover, so they shipped replacements to me (they had to replace the display because the cable is attached to it). Unfortunately, they shipped them USPS, so it took about 9 days, and because we don’t get USPS delivery here I had to go rescue the package from the local post office before they sent it back. Today I swapped in the display and input cover and everything worked like a charm. A quick switch of memory and NVMe and I am now booted on the new laptop.

How to rebase to Fedora Silverblue 41 Beta

Posted by Fedora Magazine on 2024-09-18 17:39:08 UTC

Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. This article provides the steps to upgrade to the newly released Fedora Linux 41 Beta, and how to revert if anything unforeseen happens.

Before attempting an upgrade to the Fedora Linux 41 Beta, apply any pending upgrades.
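From a terminal, applying the pending updates and rebooting into them looks like this:

```shell
# Apply any pending updates to the current deployment
rpm-ostree update
# Reboot so you rebase from the fully updated deployment
systemctl reboot
```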

Updating using terminal

Because the Fedora Linux 41 Beta is not available in GNOME Software, the whole upgrade must be done through a terminal.

First, check if the 41 branch is available, which should be true now:

$ ostree remote refs fedora

You should see the following line in the output:

fedora:fedora/41/x86_64/silverblue

If you want to pin the current deployment (this deployment will stay as an option in GRUB until you remove it), you can do it by running:

# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment, use the following command (2 corresponds to the entry position in the output from rpm-ostree status):

$ sudo ostree admin pin --unpin 2

Next, rebase your system to the Fedora 41 branch.

$ rpm-ostree rebase fedora:fedora/41/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora Silverblue 41 Beta.

How to revert

If anything bad happens — for instance, if you can’t boot to Fedora Silverblue 41 Beta at all — it’s easy to go back. Pick the previous entry in the GRUB boot menu (you need to press ESC during the boot sequence to see the GRUB menu in newer versions of Fedora Silverblue), and your system will start in its previous state. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase to Fedora Silverblue 41 Beta and fall back. So why not do it today?

Known Issues

FAQ

Because similar questions appear in the comments on each blog post about rebasing to a newer version of Silverblue, I will try to answer them in this section.

Question: Can I skip versions during rebase of Fedora Linux? For example from Fedora Silverblue 39 to Fedora Silverblue 41?

Answer: Although it may sometimes be possible to skip versions during a rebase, it is not recommended. You should always update one version at a time (39->40, for example) to avoid unnecessary errors.

Question: I have rpm-fusion layered and I got errors during rebase. How should I do the rebase?

Answer: If you have rpm-fusion layered on your Silverblue installation, you should do the following before rebase:

rpm-ostree update --uninstall rpmfusion-free-release --uninstall rpmfusion-nonfree-release --install rpmfusion-free-release --install rpmfusion-nonfree-release

After doing this you can follow the guide in this blog post.

Question: Could this guide be used for other ostree editions (Fedora Atomic Desktops) as well like Kinoite, Sericea (Sway Atomic), Onyx (Budgie Atomic),…?

Answer: Yes, you can follow the Updating using terminal part of this guide for every ostree edition of Fedora. Just use the corresponding branch. For example, for Kinoite use fedora:fedora/41/x86_64/kinoite.
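For example, a Kinoite system would rebase with:

```shell
rpm-ostree rebase fedora:fedora/41/x86_64/kinoite
```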

It’s okay to not do things

Posted by Ben Cotton on 2024-09-18 12:00:00 UTC

This month’s Todoist newsletter was about not doing things. That might seem an odd thing for a todo list manager to promote, but here we are. Naomi, the human behind the newsletter, wrote:

The only thing more liberating than checking something off is deciding that you don’t actually have to do it in the first place. Press that sacred delete button, and thank me later. 😌

You see, when you’re in the (good) habit of recording every single task or reminder that pops into your head, your task list can end up pretty full. And this is exactly what we want because now you have a clear mind and can look at all of your tasks objectively.

However, sometimes we get attached to those tasks and put pressure on ourselves to do every single thing that ever crossed our minds or graced our task list.

Rarely have I felt so called out by a newsletter. But it’s true: sometimes you just can’t do all of the things. That’s okay. Last week, I was really busy. I was physically and mentally tired and I decided that writing a new post for Duck Alignment Academy could wait. I like to keep to my regular Wednesday publication cadence, if for no other reason than to become universally adored and sell more books. But if I’m being honest with myself, there’s no real harm in skipping a week when I need to. None of you even said anything!

As much as we might hate to admit it to ourselves, this rule applies to our project’s processes and schedule tasks. They’re all there for a reason, and it’s good if all of the tasks get done. But not all of them are critical and sometimes they have to happen late or not at all. The key is to understand which tasks are critical and which aren’t.

As a general rule, the broader the impact or harder to reverse the effect of skipping a task, the more likely it is that you need to consider it critical. Skipping the release notes on a major version release might make it difficult for users to upgrade. Skipping the blog post on a patch release doesn’t hurt much. A month delay in removing inactive committers probably doesn’t do much harm, but a month delay in granting privileges to a new committer means they can’t fully contribute.

Some tasks get more critical the more they’re skipped. Skipping the inactive committer cleanup for one release cycle is probably fine. Skipping it for several means you have a lot of vectors for an account takeover attack. But if you find a task getting skipped every time with no ill effects, it might be time to reconsider if it’s necessary to have it on the list at all.

This post’s featured photo by Priscilla Du Preez 🇨🇦 on Unsplash.

The post It’s okay to not do things appeared first on Duck Alignment Academy.

Why sudo 1.9.16 enables secure_path by default?

Posted by Peter Czanik on 2024-09-17 14:21:40 UTC

Sudo 1.9.16 is now out, containing mostly bug fixes. However, there are also some new features, like the json_compact option I wrote about a while ago. The other major change is, secure_path is now enabled by default in the sudoers file, and there is a new option to fine-tune its content.

Read more at https://www.sudo.ws/posts/2024/09/why-sudo-1.9.16-enables-secure_path-by-default/

Announcing Fedora Linux 41 Beta

Posted by Fedora Magazine on 2024-09-17 13:46:55 UTC

Today, the Fedora Project is pleased to announce the availability of Fedora Linux 41 Beta. While we’ll have more to share with the general availability of Fedora Linux 41 in about a month, there is plenty in the beta to get excited about now.

Get the prerelease of any of our editions from our project website:

You can also update an existing system to the beta using DNF system-upgrade.
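As a sketch, the DNF system-upgrade flow for a traditional install looks like this (run from a terminal; see the linked instructions for the authoritative steps):

```shell
# Fully update the current release first
sudo dnf upgrade --refresh
# Make sure the system-upgrade plugin is available
sudo dnf install dnf-plugin-system-upgrade
# Download the Fedora Linux 41 Beta packages
sudo dnf system-upgrade download --releasever=41
# Reboot into the offline upgrade step
sudo dnf system-upgrade reboot
```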

What is a Fedora Beta release?

Fedora beta releases are code-complete and will very closely resemble the final release. While the Fedora Project community will be testing this release intensely, we also want you to check and make sure the features you care about are working as intended. The bugs you find and report help make the experience better for you as well as for millions of Fedora Linux users worldwide! Together, we can help not only make Fedora Linux stronger, but, as these fixes and tweaks get pushed upstream to the kernel community, we can contribute to the betterment of the Linux ecosystem and free software holistically.

Some changes of note

Valkey replaces Redis

As Redis recently changed to a proprietary license, we have replaced Redis with Valkey. All software shipped by Fedora is open source and free software, in line with our Freedom foundation. If you are currently using Redis, see How to move from Redis to Valkey for migration help.

Goodbye, Python 2!

Starting with Fedora Linux 41, there will be no Python 2 in Fedora, other than PyPy. Packages requiring Python 2.7 at runtime will need to upgrade to a newer version or be retired as well. Developers who still need to test their software on Python 2, or users of software that cannot be updated, can use containers with older Fedora releases.
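For example, a container based on an older Fedora release can still provide Python 2. A sketch, assuming the python2.7 package name that shipped in earlier releases (verify it for the release you choose):

```shell
# Start a Fedora 39 container and install the legacy interpreter
podman run --rm -it registry.fedoraproject.org/fedora:39 \
    bash -c "dnf install -y python2.7 && python2.7 --version"
```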

Proprietary Nvidia driver installation with Secure Boot support

Although it can’t be part of Fedora Linux, we know that the Nvidia binary driver is pragmatically essential for many people. Previously, Nvidia driver installation had been removed from GNOME Software because it didn’t support Secure Boot, which is increasingly often enabled by default on laptops. This change brings the option back for Fedora Workstation users, with Secure Boot supported. This is good news for folks who want to use Fedora Linux for gaming and CUDA. The change also helps Fedora stay relevant for AI/ML workloads.

DNF 5 is here

In Fedora Linux 41, the dnf package management command will be updated to version 5. This release is faster, smaller, and better. (Pick all three!) You won’t need to change habits — the command is still just dnf, and the basic syntax isn’t different. As one might expect with a major version, there are some incompatible changes. See the DNF 5 documentation for details.

DNF and bootc in Image Mode Fedora Variants

DNF5 and bootc will be available on image-based Fedora variants such as Atomic desktops and Fedora IoT. The new packages will make it simpler to build and update bootable container images based on these variants.

RPM 4.20

Under the hood, our lower-level package management tool is RPM, which also gets a new release, bringing new features for Fedora development. Users won’t see a direct impact immediately, but this update will help us make the distro better over time.

Reproducible-builds progress

A post-build cleanup is integrated into the RPM build process, making most Fedora packages now reproducible. That is, you can re-build a package from source and expect the package contents to be exactly identical. If this is interesting to you, check out Fedora Reproducible Builds for more.

New fedora-repoquery tool

fedora-repoquery is a small command line tool for doing repoqueries of the Fedora, EPEL, ELN, and CentOS Stream package repositories. It wraps dnf repoquery, separating the cached repo data under separate repo names for faster cached querying. Repoqueries are frequently used by Fedora developers and users, so a more powerful tool like this is generally useful.
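A typical invocation is short; for example (illustrative, check fedora-repoquery --help for the exact syntax):

```shell
# Query what version of firefox is in the Fedora 41 repos
fedora-repoquery 41 firefox
```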

KDE Plasma Mobile Spin

KDE Plasma Mobile brings the KDE Plasma Desktop to a flexible, mobile format in Fedora 41 as a Spin. This promises to work on phones, tablets, and 2-in-1 laptops.

LXQt 2.0

LXQt in Fedora will be upgraded to v2.0, which notably ports the whole desktop to Qt 6 and adds experimental Wayland support. 

New “Fedora Miracle” spin

The Miracle window manager is a tiling window manager based on the Mir compositor library. While it is a newer project, it contains many useful features such as a manual tiling algorithm, floating window manager support, support for many Wayland protocols, proprietary Nvidia driver support, and much more. Miracle will provide Fedora Linux with a high-quality Wayland experience built with support for all kinds of platforms, including low-end ARM and x86 devices. On top of this, Fedora Linux will be the first distribution to provide a Miracle-based spin, ensuring that it will become the de facto distribution for running Miracle. 

Missing Spins?

A technical glitch means that a few of our spins didn’t build correctly for the beta release — Robotics, Jam, Design Suite, and ARM live images. We still expect these to be part of our final release. If you’re interested in testing these, watch the Nightly Compose Finder for good builds.

Let’s test Fedora 41 Beta together

Since this is a beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the test mailing list or in the #quality:fedoraproject.org channel on Fedora Chat (Matrix). As testing progresses, common issues are tracked in the “Common Issues” category on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

Ruff my dirty code

Posted by Jakub Kadlčík on 2024-09-17 00:00:00 UTC

Static analysis tools have their limitations, but they nevertheless help us quickly discover many types of bugs. That is not a controversial take. What may be disputed is whether enabling them for large projects with a long-standing history is worth the effort, and I would say it most definitely is. In this article we will take a look at how to report problems only for new code, lowering the barrier to entry to the minimum.

The dilemma

Creating a new project from scratch is so enjoyable, isn’t it? No technical debt, no backward compatibility, no compromises, all the code is beautifully formatted and brilliantly architected. We don’t really need to run any static analysis tools but we do it anyway just so that they can say that “All checks passed!” and that we are awesome. Obviously, I am being sarcastic, but the point is that enabling such tools for new projects is easy.

Now, let’s consider projects that have been developed in the span of decades. Everything is a mess. Running pylint, mypy or ruff overwhelms you with hundreds or thousands of reports and leaves you with the following dilemma: Should you just abandon all hope and pretend that this never happened? Should you pollute your codebase with a bunch of # pylint: disable=foo comments? Or should you devote the next month of your life to rewriting all the problematic code while risking introducing even more bugs in the process?

There is one more option that has worked quite well for our team for years now: running static analysis tools for the whole project but reporting only newly introduced problems.

Reporting only new problems

There is a tool called csdiff which takes two lists of defects (formatted errors from static analysis tools), compares them, and prints only the defects that newly appeared or that are missing from the second list. These can be understood as newly added or fixed defects, respectively.
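The core idea, diffing two defect lists keyed on stable attributes, can be sketched in a few lines of Python (a toy illustration of the concept, not csdiff’s actual implementation):

```python
def diff_defects(old, new):
    """Return (added, fixed) defects between two analysis runs.

    Each defect is keyed as a (file, checker, message) tuple; line
    numbers are left out so unrelated code motion produces no noise.
    """
    old_set, new_set = set(old), set(new)
    added = sorted(new_set - old_set)  # introduced by the current branch
    fixed = sorted(old_set - new_set)  # resolved by the current branch
    return added, fixed

main_branch = [
    ("app.py", "pylint", "unused-import"),
    ("db.py", "mypy", "attr-defined"),
]
feature_branch = [
    ("app.py", "pylint", "unused-import"),      # pre-existing, not reported
    ("api.py", "ruff", "F821 undefined-name"),  # newly introduced
]

added, fixed = diff_defects(main_branch, feature_branch)
print("added:", added)
print("fixed:", fixed)
```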

We created a tool called vcs-diff-lint which does the obvious thing. It runs pylint, mypy, and ruff for the main branch of your git repository, and then runs them again for your current branch. There we have our two lists of defects which get internally passed to csdiff. The output looks like this.

$ vcs-diff-lint
Error: RUFF_WARNING:
fedora_distro_aliases/__init__.py:23:20: F821[undefined-name]: Undefined name `requests`

Error: MYPY_ERROR:
fedora_distro_aliases/__init__.py:11: mypy[error]: "None" has no attribute "append"  [attr-defined]

We can clearly see that our code introduced two new errors. Sometimes it may be useful to also see how many existing errors our code fixed. In that case, use vcs-diff-lint --print-fixed-errors.

Please follow the installation instructions here.

GitHub Action

I don’t trust myself (or anyone else for that matter) to run the vcs-diff-lint tool manually for every proposed change. And neither should you. There is an easy-to-use GitHub action that runs the tool automatically for every pull request. It tries to be as user-friendly as possible and reports the problems as comments directly in your “Files changed” section.

Please follow the installation instructions here or take a look at our setup as an example.

Ruff support

Ruff is all the rage nowadays, and rightfully so. It checks our whole codebase in under a second (20ms, actually) while mypy takes its sweet time and finishes around the one-minute mark. Up until recently, the vcs-diff-lint tool supported only pylint and mypy, but since the last release, ruff is supported as well. Please give it a try.

As a matter of fact, I am writing this article as a celebration of the new vcs-diff-lint release.

But what about diff-cover?

Speaking about differential static analysis, you may have already heard about diff-cover. It has many more contributors and GitHub stars so why would I recommend trying vcs-diff-lint instead?

  • diff-cover runs static analysis for the whole project but reports only problems on lines changed by the patch. It fundamentally can’t catch problems caused by the changed code but lying outside of it, for example when you add a parameter to a function definition but forget to update all of its calls.
  • diff-cover doesn’t provide a GitHub action
  • vcs-diff-lint supports ruff and ruff is cool now
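A tiny Python illustration of that first bullet: the patched line (the function definition) is clean in isolation, while the breakage sits at an unchanged call site that a line-based differential tool never re-checks (names are made up for the example):

```python
# The patch adds a required parameter to an existing function:
def greet(name, greeting):
    return f"{greeting}, {name}!"

# This call site lives in an untouched file, so a tool that only
# re-checks changed lines never sees the now-broken call:
try:
    greet("World")  # missing the new 'greeting' argument
except TypeError as err:
    print("caught:", err)
```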

Kuala Lumpur APRS Digipeaters Maintenance and Installation by 9M2PJU, 9W2VHF, 9W2YFS and 9M2EDU

Posted by Piju 9M2PJU on 2024-09-16 15:40:47 UTC

Recently, I had the opportunity to join fellow amateur radio enthusiasts 9W2VHF, 9W2YFS, and 9M2EDU for a crucial maintenance and installation session at Bukit Dinding. We aimed to ensure that our communication systems were in top shape and to expand the reach of APRS digipeaters in the area. Here’s a look at what we accomplished between 11 am and 3 pm.

Clearing the Path: Vegetation Management

Our first task was to clear the area around the site. Over time, bushes and trees had grown around our equipment, potentially impacting signal quality. Armed with various tools, we got to work, cutting back the overgrowth to create a safer environment for both the existing equipment and the new installations. This step was essential not just for immediate access but also for the long-term health of the antennas and cables.

Thorough Inspection of RF Cables and Antennas

Once the site was cleared, we moved on to inspecting the RF cables and antennas. This is always a critical part of maintenance because any issues here can severely affect signal transmission. We checked each cable for signs of wear and tear, ensuring continuity and verifying that the antennas were securely in place and properly aligned. We made any necessary repairs on the spot to ensure optimal performance.

Solar Power System Check

Our next focus was the solar power system that powers the site. We inspected the solar panels, ensuring they were clean and unobstructed, and checked the wiring and battery storage system. This system is vital for the digipeaters to function reliably, especially in off-grid locations like Bukit Dinding. A well-maintained solar system means that we can count on our equipment to operate even in less-than-ideal weather conditions.

Installing New APRS Digipeaters

The main highlight of the day was installing two new APRS digipeaters:

  • 9W2SQ-2 APRS Digipeater: We installed this 1-watt digipeater to provide stronger signals and broader coverage. Proper calibration and positioning were key to maximizing its performance. This addition will greatly enhance our ability to relay real-time data such as location and weather updates, which are crucial for the amateur radio community.
  • 9W2SQ-3 LoRa APRS Digipeater: Alongside the higher-powered unit, we set up a 100mW LoRa APRS digipeater. LoRa technology is perfect for long-distance communication with minimal power usage, making it a valuable component in our setup. This digipeater will help us extend APRS data transmission over challenging terrain, ensuring reliable coverage in even the most remote areas.

Testing and Final Checks

After installing the new equipment, we performed thorough tests to ensure everything was functioning correctly. We monitored signal strength and data transmission quality, and I’m pleased to report that the results were excellent. By 3 pm, we had wrapped up the day’s work, confident that the site was now better equipped to serve the amateur radio community.

Looking Back on a Productive Day

Our maintenance and installation work at Bukit Dinding was a great success. By clearing the area, ensuring the integrity of our existing equipment, and adding new APRS digipeaters, we’ve significantly improved the site’s capabilities. It’s rewarding to know that our efforts help maintain and expand reliable communication for both hobbyists and those who may rely on these systems in critical situations.

Being a part of this project was a reminder of the dedication and passion that drives the amateur radio community. It’s hands-on work like this that keeps our network strong and resilient, and I’m proud to contribute alongside such skilled and committed operators. Bukit Dinding is now better prepared to handle the communication needs of our community, and I’m looking forward to seeing the positive impact these improvements will have.

The post Kuala Lumpur APRS Digipeaters Maintenance and Installation by 9M2PJU, 9W2VHF, 9W2YFS and 9M2EDU appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

Inviting testers for Git forge usecases

Posted by Fedora Magazine on 2024-09-16 08:00:00 UTC

Following @t0xic0der and @humaton’s talk on the Git forge ARC (Advance Reconnaissance Crew) investigation during the Fedora Linux Release Party 40 and, more recently, @humaton’s talk on the topic during Flock To Fedora 2024, we have opened up our ARC investigation to all contributors within the Fedora Project. Please refer to the ARC initiative page to create or retrieve the use case requirements for the Git forge replacement. As part of that, we created community deployments for GitLab and Forgejo.

Testing GitLab instance

The GitLab instance has limited access and needs manual approval of the account. To get an account on this instance please follow the instructions below:

  1. Create an account on the GitLab instance
  2. Open a ticket on the Fedora Infrastructure issue tracker
    1. Use “Approve my user on GitLab test instance” as the summary
    2. In Types, choose the gitlab_testing_request template
    3. Fill in the template
  3. Wait for us to approve your account

Our test GitLab instance doesn’t allow SSH pushes. You need to use HTTPS to push changes to the repository.

Testing Forgejo instance

The Forgejo test instance is much more straightforward for account creation, as it is connected to our staging Fedora Account System. To get an account, follow the instructions below:

  1. Create an account in the staging Fedora Account System (you can skip this step if you already have a staging Fedora account)
  2. Log in to the Forgejo test instance with your staging Fedora account using the Sign In with Fedora Accounts button

Our test Forgejo instance is hosted in the same environment as the GitLab test instance, so it doesn’t allow SSH pushes either. You need to use HTTPS to push changes to the repository.

Since the Forgejo test instance uses a staging Fedora account, you can use the same username for HTTPS pushes, but for the password you need to generate a token in Settings->Application->Access Tokens.
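With a token generated, an HTTPS push is then a normal git push, entering the token where a password would go (the host below is a made-up placeholder for the test instance URL):

```shell
# Point the remote at the HTTPS URL of the test instance (placeholder host)
git remote set-url origin https://forgejo.example.org/youruser/yourrepo.git
# When prompted, use your staging FAS username and the access token as password
git push origin main
```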

Sharing feedback with us

To share your testing results use this discussion thread and let us know if either Forgejo or GitLab is suitable for tested use cases.

How to contact us

You can reach us with any question either in this discussion thread or in our Matrix ARC investigation room.

Fedora Mentored Projects – Council Hackfest

Posted by Fedora Community Blog on 2024-09-16 08:00:00 UTC

In February 2024, after FOSDEM, the Fedora Council met for a couple of days to discuss the strategy and future of Fedora. As part of this discussion, the Council also discussed the proposed initiatives Community Ops 2024 Reboot and Mentored Projects 2024. We are glad to announce that the Council approved both of them!

What is the Fedora Mentored Projects initiative?

Fedora is a place where people with very different backgrounds and skills come together, from experienced professionals to students just starting out on their journey, and spanning various sectors such as engineering, marketing, design, and more. This diversity makes it the perfect place to promote mentoring programs.

Historically, the Fedora community has participated in the Outreachy and GSoC mentoring programs. However, we aim to simplify the process for mentors and mentees, making it easier to apply to existing mentoring programs under Fedora or even coordinate new ones!

From our initiative wiki, our mission is:

The Mentored Projects team improves the overall onboarding experience for
Fedora Mentored Projects by equipping participants with role handbooks,
by creating community spaces for mentors to connect with each other,
and by advocating the new role handbooks and community spaces. 

What have we already accomplished?

The initiative was created in November 2023, and since then, we have made significant progress. Here’s an update on our work:

  • We presented the initiative to the Fedora Council, receiving their approval and feedback.
  • We identified past mentors and mentees and conducted several interviews to gather feedback on their experiences.
  • We applied to Outreachy, Google Summer of Code, and Google Season of Docs and secured a budget for three Outreachy interns.
  • We’ve begun drafting role handbooks, although they are not yet complete.
  • We created a new private Matrix room for mentored project mentors to collaborate and discuss. We used this actively during the project application period with Outreachy.
  • We facilitated the use of a public Matrix room for all project applicants and mentors to ask general questions about participating in Fedora.

What do we want to do next?

There is still plenty of work to do to consider the initiative completed.

  • Finish the role handbooks and translate the one for accepted interns into three major languages.
  • Present the Mentored Projects Showcase at Flock 2024
  • Present the Mentored Projects Initiative at Flock 2024
  • Work with the Mindshare committee to properly schedule future mentorship opportunities’ timeline
  • Create custom swag for mentee and mentor recognition.

End goals

We envision a future where mentors and mentees know what is expected of them when entering a Fedora Mentored Project and have a smooth onboarding process. In addition, we want a culture of recognition for all the effort they put into making the projects successful.

The post Fedora Mentored Projects – Council Hackfest appeared first on Fedora Community Blog.

Fedora Operations Report

Posted by Fedora Community Blog on 2024-09-15 23:13:00 UTC

I’ve stopped calling it weekly…until it’s actually weekly again. Read on for a little roundup of some of the things happening around the Project!

Fedora Linux Development

Fedora Linux 40

We hope you are enjoying this release of Fedora! To shine a light on some of the folks who helped in the quality department for this release, make sure to read the blog post Heroes of Fedora Linux 40. If you would like to help out with any Fedora release, make sure to get involved with Test Days. We are always happy to have more testers!

Fedora Linux 41

The F41 Beta will be arriving on Tuesday 17th September! We are really looking forward to hearing how the Beta performs. We know it won’t be perfect, but it’s important to get some feedback so we can make sure F41 Final continues the trend of ‘the best version yet!’. Keep an eye out for announcements about the Beta in Fedora Magazine and in the usual places like Discourse and the mailing lists, and if you find a bug, please report it via the Blocker Bugs app.

We are going into Final Freeze on 15th October, so please make sure you have made any necessary changes or updates to your work by then, as we are still on track to release F41 Final on 5th November 2024. Keep an eye on the schedule for a more detailed view of tasks, and remember that F39 will go EOL on 19th November 2024.

Fedora Linux 42 & Co.

A.K.A. the answer to life, the universe, and everything. But this is also a fast-approaching Fedora release! Development is now underway and change requests are starting to come in. We also have some changes accepted for this release, so for a preview of what the release should include, check out the Change Set page. Here is also a quick reminder of some important dates for changes if you are hoping to land one in time for this release:

  • December 18th – Changes requiring infrastructure changes
  • December 24th – Changes requiring mass rebuild
  • December 24th – System Wide changes
  • January 14th – Self Contained changes
  • February 4th – Changes need to be Testable
  • February 4th – Branching
  • February 18th – Changes need to be Complete

Be sure to keep an eye on the Fedora Linux 42 release schedule, and for those who are super-duper organized (I am very jealous), the Fedora Linux 43 and Fedora Linux 44 schedules are now live to bookmark too.

Hot Topics

Git Forge Replacement

The git forge replacement effort is still going strong, with members of the ARC team working through testing these user stories against instances of GitLab and Forgejo that are deployed in the Communishift app. We are asking anyone who would like to help out with testing to get involved by requesting access to these instances and working through the use cases in the investigation tracker. The team are collecting results as comments in the discussion thread for now and are working with Fedora QA on creating Test Days in the coming weeks.

At Flock this was a general topic of conversation, and as someone who is overseeing this work on behalf of the council, I was able to meet with Tomas Hckra, who is leading this investigation, and work on a proposed timeline. We are aiming to adhere to this high level timeline:

  • End of October ’24 – the report comparing the two git forges against the list of user stories/requirements is finished and submitted to Fedora Council for review.
  • Around the same time – the report is also made public
  • Mid – End of November ’24 – A decision is made on what git forge the project will migrate to
  • December ’24 – A migration plan for the project is created and shared publicly
  • January ’25 – A Giant Change Proposal, likely spanning many releases, is created and proposed.
  • Sometime a little later (probably Feb/Mar ’25) – The change proposal is iterated on based on feedback if appropriate, and the Change is broken down into smaller changes targeting specific Fedora releases so the migration can be rolled out slowly and with care

I expect next year we will see the git forge evaluation well concluded, a migration plan in place, and the project will begin to migrate to the new platform. The ideal milestone is that by F44 we will successfully be using the new git forge to build and release Fedora Linux, and by F45 we will look back on pagure with great fondness for the service it did provide us all for so many years 🙂

You can get involved with and follow the conversation by using the #git-forge-future tag on discussions.fpo and joining the ARC matrix room.

The post Fedora Operations Report appeared first on Fedora Community Blog.

The Power of Chirp Spread Spectrum (CSS): History, Techniques, and Applications

Posted by Piju 9M2PJU on 2024-09-15 07:45:12 UTC

Introduction

Chirp Spread Spectrum (CSS) is a unique modulation technique renowned for its robustness, long-range capabilities, and interference resistance. Though less commonly known than other modulation methods, CSS has significantly impacted wireless communication, particularly in the Internet of Things (IoT). This article explores the history of CSS, how it works, its various uses, benefits, and further resources for understanding its significance.

1. The History of Chirp Spread Spectrum

CSS has its roots in mid-20th-century military and aerospace research. During World War II, the need for secure and interference-resistant communication led to the development of various spread spectrum techniques, including CSS. The U.S. military explored these methods to create communication systems that were difficult to jam or intercept, enhancing operational security.

During the Cold War, CSS was further developed and used in military applications such as radar and sonar systems. In radar, CSS enabled accurate range detection and target identification, while in sonar, it helped in detecting and identifying underwater objects. The technology was also utilized in military satellite communications, where its ability to maintain signal integrity over long distances and resist interference proved invaluable.

2. Understanding CSS Modulation Techniques

CSS employs chirps—signals that sweep in frequency either up or down over time. Each symbol in a CSS-modulated signal is represented by a chirp, which can sweep linearly or exponentially. Key aspects of CSS modulation include:

  • Frequency Sweeping: The signal sweeps across a wide bandwidth, making it less susceptible to interference.
  • Spread Spectrum: CSS spreads the signal over a broad range, enhancing its resistance to noise and interference.
  • Processing Gain: The wide bandwidth of chirp signals provides a processing gain, allowing receivers to detect signals even below the noise floor.
  • Doppler Tolerance: CSS can handle frequency shifts caused by the relative motion between transmitter and receiver, making it suitable for mobile and long-distance applications.
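The cyclic-shift flavor of CSS used by LoRa illustrates these properties nicely: each symbol is an up-chirp whose frequency sweep starts at a different offset, and the receiver recovers it by "dechirping" (multiplying by the conjugate base chirp) and taking an FFT. A minimal complex-baseband sketch in Python (the 128-sample symbol length and 0.1 noise level are illustrative choices, not from this article):

```python
import numpy as np

def css_modulate(symbol: int, N: int = 128) -> np.ndarray:
    """One CSS symbol: an up-chirp cyclically shifted by `symbol`
    (complex baseband, LoRa-style; N is the samples per symbol)."""
    n = np.arange(N)
    # quadratic phase = linear frequency sweep; adding symbol*n/N
    # offsets where the sweep starts (and wraps) within the band
    return np.exp(2j * np.pi * (n**2 / (2 * N) + symbol * n / N))

def css_demodulate(rx: np.ndarray, N: int = 128) -> int:
    """Dechirp with the conjugate base chirp; an FFT then piles all
    the symbol energy into one bin -- its index is the symbol."""
    n = np.arange(N)
    base = np.exp(2j * np.pi * n**2 / (2 * N))
    return int(np.argmax(np.abs(np.fft.fft(rx * base.conj()))))

# round trip survives additive noise thanks to the processing gain
rng = np.random.default_rng(0)
noisy = css_modulate(42) + 0.1 * (rng.standard_normal(128)
                                  + 1j * rng.standard_normal(128))
print(css_demodulate(noisy))  # prints 42
```

The dechirp step is where the processing gain shows up: all N samples of the chirp add coherently into a single FFT bin, so the symbol peak stands far above the per-sample noise floor.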

3. Applications of CSS

CSS has been adapted for various uses beyond its military origins, including:

  • LoRa Technology: One of the most prominent applications of CSS is in LoRa (Long Range) technology, used in IoT networks for long-range, low-power communication between devices like sensors and trackers.
  • Radar Systems: CSS is used in radar for object detection and ranging, providing high-resolution measurements resistant to interference.
  • Smart Agriculture: In smart farming, LoRa-enabled CSS sensors monitor environmental factors like soil moisture and temperature across vast fields, transmitting data over long distances.
  • Smart Cities: LoRa networks using CSS are employed in smart city infrastructure for managing street lighting, waste collection, and environmental monitoring.
  • Asset Tracking: CSS is utilized in logistics and supply chain management to track goods and assets over wide areas.

4. Benefits of CSS

CSS offers several advantages that make it ideal for a range of communication needs:

  • Long-Range Communication: Its ability to transmit data over long distances with low power makes CSS suitable for applications like IoT, where devices may be spread across wide areas.
  • Interference Resistance: The spread spectrum nature of CSS makes it highly resistant to interference and noise, ensuring reliable communication even in congested environments.
  • Low Power Consumption: Suitable for battery-operated devices, CSS enables extended operational lifetimes by allowing low-power transmission.
  • Robustness to Multipath Fading: CSS is less affected by multipath fading, which occurs when signals take multiple paths to reach the receiver, making it effective in urban or indoor environments.

5. CSS in Modern Technology

In addition to military applications, CSS’s transition to civilian use has led to its implementation in various technologies. It has been particularly influential in IoT networks, where LoRa technology leverages CSS for connecting low-power devices across long distances.

Further Reading

  • Understanding LoRa and Its Applications in IoT: LoRa, which stands for Long Range, is a wireless communication technology that utilizes CSS for transmitting data over long distances with low power consumption. This reading delves into how LoRa works, its use cases in IoT, and the role CSS plays in enabling its long-range capabilities.
  • The Evolution of Spread Spectrum Techniques: Spread spectrum techniques, including CSS, have evolved over time, finding applications in both military and civilian communication systems. This resource explores the history and development of spread spectrum methods, their various forms (like Frequency Hopping Spread Spectrum and Direct Sequence Spread Spectrum), and their impact on modern wireless communication.
  • How Chirp Spread Spectrum Works: A Technical Breakdown: For those interested in the technical aspects of CSS, this reading provides a detailed explanation of how chirp signals are generated, modulated, and demodulated. It covers the mathematical underpinnings of CSS, including frequency sweeping and processing gain, offering a deeper understanding of why CSS is so effective in long-range, low-power communication.

Conclusion

Chirp Spread Spectrum is a versatile and powerful modulation technique that has enabled a wide range of applications requiring long-range, low-power, and interference-resistant communication. From its early use in military communications to its current role in IoT networks like LoRa, CSS remains a critical technology in the field of wireless communication, driving the future of connected devices.


The post The Power of Chirp Spread Spectrum (CSS): History, Techniques, and Applications appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

Fixing the volume control in an Alesis M1Active 330 USB Speaker System

Posted by Evgeni Golov on 2024-09-14 18:38:18 UTC

I've a set of Alesis M1Active 330 USB speakers on my desk to listen to music. They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.

They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound. Well, almost any. If you'd move the volume knob long enough you might have found a position where the left speaker would work a bit, but it'd be quieter than the right one and stop working again after some time. Pretty unacceptable when you want to listen to music.

Given the right speaker was working just fine and the left would work a bit when the volume knob is moved, I was quite certain which part was to blame: the potentiometer.

So just open the right speaker (it contains all the logic boards, power supply, etc), take out the broken potentiometer, buy a new one, replace, done. Sounds easy?

Well, to open the speaker you gotta loosen 8 (!) screws on the back. At least it's not glued, right? Once the screws are removed you can pull out the back plate, which will bring the power supply, USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver, one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today", one with a 6 pin plug, one with a 5 pin one.

Unplug all of these! Yes, they are plugged, nice. Nope, still no friggin' idea how to get to the potentiometer. If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure. So we have to pull the thing from the front?

Okay, let's remove the plastic part of the knob. Right, this looks like a potentiometer. Unscrew it. No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).

(Photo: the right Alesis M1Active 330 USB speaker with a Makita wrench where the volume knob is)

Still, no movement. Let's look again from the inside! Oh ffs, there are six more screws inside, holding the front. Away with them! Just need a very long PH1 screwdriver.

Now you can slowly remove the part of the front where the potentiometer is. Be careful, the top tweeter is mounted to the front, not the main case and so is the headphone jack, without an obvious way to detach it. But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.

(Photo: the right Alesis M1Active 330 USB speaker, opened)

Great, this was the easy part!

The only thing printed on the potentiometer is "A10K". 10K is easy -- 10kOhm. A?! Wikipedia says "A" means "logarithmic", but only if made in the US or Asia. In Europe that'd be "linear". "B" in US/Asia means "linear", in Europe "logarithmic". Do I need to tap the sign again? (The sign is a print of XKCD#927.) My multimeter says in this case it's something like logarithmic. On the right channel anyway, the left one is more like a chopping board. And what's this green box at the end? Oh right, this thing also turns the power on and off. So it's a power switch.

Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch? And then in the exact right size too?!

Of course not at any of the big German electronics pharmacies. But AliExpress saves the day, again. It's even the same color!

Soldering without pulling the cable out of the case was a bit challenging, but I've managed it and now have stereo sound again. Yay!

PS: Don't operate this thing open to try it out. 230V are dangerous!

Infrastructure happenings, second half of aug – first half of sept 2024

Posted by Kevin Fenzi on 2024-09-14 18:07:01 UTC

So, I was going to try and do these posts more regularly, but of course that’s hard to do. After Flock there were a bunch of things I wanted to post, then a bunch of fires, and so things got behind. Such is life, so here’s a few things I wanted to talk about in more detail from the last month or so. As always, I do still post on mastodon daily; happy to answer questions or comments there as things happen, and to expand on things in posts like this.

Fedora 41 branched off rawhide! This I think went much more smoothly than the last cycle. I like to hope it’s because we documented all the things that were not right last time and did them this time. There were a few more things to adjust, it wasn’t perfect, but it was much better!

We upgraded our OpenShift clusters from 4.15 to 4.16. I continue to be very happy with how smooth OpenShift upgrades are. Not 100% seamless, but pretty good. This time we had some storage stuff that caused the upgrade to not finish, but it wasn’t too hard to work around. So much nicer than the old 3.x days.

We landed a bunch of koji/kiwi changes before Beta freeze. Kudos to Neal Gompa and Adam Williamson for working through all those. It was nice to mostly get everything lined up before Freeze so we didn’t have to be doing a lot of churn. We got everything working in rawhide first, then merged the f41 changes.

Had a really annoying IPA outage. I was running our main playbook (runs over everything) on a Thursday night, just to make sure everything was in sync for the freeze, and… our playbook thought all our ipa servers were not configured right and tried to uninstall and resync them all. Luckily the server that was the CA master refused to uninstall, so we were still up on one server. From that we were able to reinstall/resync the other 2 and get things back up and working. I am still not sure why the playbook saw no dirserv running on the servers (and thus thought they were unconfigured). We are going to adjust that playbook to definitely not try and do that, and instead move setting up a replica to a manual playbook only run by humans as needed.

Thanks to a bunch of work from Stephen Gallagher and Carl George, eln and epel10 are now doing composes just like we do for rawhide and branched. This should allow us to retire our old ODCS (on demand compose service) setup, as it’s not really maintained upstream anymore and is on EOL OS versions. Great to get things all running the same way, but of course we will probably change everything next year or something.

We managed to sign off on Fedora 41 Beta being released next week. I was pretty amazed, as it didn’t seem like we had enough time to really shake out all the bugs, but testing coverage ended up being pretty good. Looking forward to Beta next week and end of Beta freeze.

Fedora plymouth boot splash not showing on systems with AMD GPUs

Posted by Hans de Goede on 2024-09-14 13:38:51 UTC
Recently there have been a number of reports (bug 2183743, bug 2276698, bug 2283839, bug 2312355) about the plymouth boot splash not showing properly on PCs using AMD GPUs.

The problem with plymouth and AMD GPUs is that the amdgpu driver is a really, really big driver, which easily takes up to 10 seconds to load on older PCs. The delay caused by this may cause plymouth to time out while waiting for the GPU to be initialized, causing it to fall back to the 3 dot text-mode boot splash.

There are 2 workarounds for this, depending on the PC's configuration:

1. With older AMD GPUs the radeon driver is actually used to drive the GPU, but even though it is unused the amdgpu driver still loads, slowing things down.

To check if this is the case for your PC, start a terminal in a graphical login session and run "lsmod | grep -E '^radeon|^amdgpu'". This will output something like this:

amdgpu 17829888 0
radeon 2371584 37

The second number after each module name is the usage count. As you can see in this example the amdgpu driver is not used. In this case you can disable the loading of the amdgpu driver by adding "modprobe.blacklist=amdgpu" to your kernel command line:

sudo grubby --update-kernel=ALL --args="modprobe.blacklist=amdgpu"


2. If the amdgpu driver is actually used on your PC, then plymouth not showing can be worked around by telling plymouth to use the simpledrm drm/kms device created from the EFI framebuffer early on boot, rather than waiting for the real GPU driver to load. Note this depends on your PC booting in EFI mode. To do this run:

sudo grubby --update-kernel=ALL --args="plymouth.use-simpledrm"


After using one of these workarounds plymouth should show normally again on boot (and booting should be a bit faster).
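The usage-count check from workaround 1 is easy to script. A small sketch (not from the post; it just parses lsmod's standard name/size/used-by columns, as in the sample output above):

```python
def amdgpu_is_blacklist_candidate(lsmod_output: str) -> bool:
    """True if the amdgpu module is loaded but has a usage count of 0,
    i.e. radeon is doing the real work and workaround 1 applies."""
    for line in lsmod_output.splitlines():
        fields = line.split()
        # lsmod columns: module name, size, usage count, [users]
        if len(fields) >= 3 and fields[0] == "amdgpu":
            return int(fields[2]) == 0
    return False  # amdgpu not loaded at all: nothing to blacklist

sample = "amdgpu 17829888 0\nradeon 2371584 37\n"
print(amdgpu_is_blacklist_candidate(sample))  # prints True
```

On a live system you would feed it the real output, e.g. `subprocess.run(["lsmod"], capture_output=True, text=True).stdout`.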


Hackintosh: A Comprehensive Guide to Building Your Custom macOS PC

Posted by Piju 9M2PJU on 2024-09-14 03:28:22 UTC

Introduction

In the world of technology enthusiasts, few projects are as captivating as building a Hackintosh. Combining the allure of macOS with the flexibility and cost-effectiveness of custom PC building, Hackintoshing opens up a realm of possibilities for users who want the best of both worlds. This blog post will guide you through the history, benefits, challenges, and resources associated with Hackintosh, providing a detailed roadmap for those who wish to embark on this exciting journey.

What is a Hackintosh?

A Hackintosh is a non-Apple computer that runs macOS, the operating system exclusively developed by Apple for its Macintosh computers. The term “Hackintosh” is a portmanteau of “hack” and “Macintosh,” where “hack” refers to the modifications necessary to make macOS run on hardware not officially supported by Apple. This practice allows users to experience the macOS ecosystem without purchasing an Apple machine, often resulting in a system that is both powerful and cost-effective.

A Deep Dive into the History of Hackintosh

The roots of Hackintosh can be traced back to 2005, when Apple announced its transition from PowerPC processors to Intel’s x86 architecture. This monumental shift allowed macOS to run on the same type of processors used in most consumer PCs. This transition made it feasible, albeit with some technical challenges, to run macOS on non-Apple hardware.

Early Days and Challenges

Initially, running macOS on non-Apple hardware was a highly technical endeavor, requiring a deep understanding of both hardware and software. The first public Hackintosh installations were based on Mac OS X 10.4 Tiger. Early adopters had to patch the macOS kernel and create custom drivers, known as kexts (kernel extensions), to get the system to boot and work with the various hardware components found in a typical PC.

At this time, there were no user-friendly tools or guides, and the community was small. Enthusiasts had to manually modify the macOS installation to bypass the hardware checks that Apple had put in place. This process was time-consuming and required a good understanding of how macOS interacted with the hardware. The success rate was low, and even when it worked, it was often unstable and riddled with compatibility issues.

The Rise of Hackintosh Communities

As more tech-savvy users experimented with Hackintosh builds, online communities began to form. These communities became a hub for sharing knowledge, guides, and tools. One of the earliest and most influential communities was InsanelyMac, which provided forums and resources for users to discuss their experiences and troubleshoot problems.

The introduction of tools like Boot-132 and Chameleon made it easier to create bootable media and install macOS on a wider range of hardware. These tools simplified the process of getting macOS to recognize and utilize PC hardware, making Hackintosh more accessible to a growing number of enthusiasts.

Modern Era: OpenCore and Clover Bootloader

The Hackintosh scene has evolved significantly over the years, with the development of more sophisticated tools and bootloaders. Two of the most important modern tools are Clover and OpenCore.

  • Clover: Introduced around 2012, Clover became one of the most popular bootloaders for Hackintosh builds. It offered a graphical interface for selecting different boot options and included features like UEFI support and the ability to inject kexts on the fly. Clover also made it easier to install macOS updates without breaking the system, which was a common issue with earlier Hackintosh setups.
  • OpenCore: Released in 2019, OpenCore represents a new era in Hackintosh bootloaders. It is designed to be more lightweight and modular compared to Clover, offering better performance and security. OpenCore provides more fine-grained control over the boot process, allowing for greater customization and compatibility. It is considered the most advanced and reliable way to build a Hackintosh today.

Benefits of Building a Hackintosh

Creating a Hackintosh comes with several benefits that make it an appealing project for technology enthusiasts and professionals alike.

1. Cost-Effectiveness

One of the most significant advantages of building a Hackintosh is the potential cost savings. Apple products are known for their premium pricing, which often includes a markup for the brand and design. By building a Hackintosh, you can select each component individually, often resulting in a system that offers comparable or superior performance to a similarly priced Mac.

For example, if you need a high-performance machine for video editing or software development, a Mac Pro can cost thousands of dollars. With a Hackintosh, you can build a system with equivalent specifications at a fraction of the cost. This cost-effectiveness is particularly appealing to students, freelancers, and small businesses that require powerful computing on a budget.

2. Hardware Customization and Upgradability

Apple computers are known for their elegant design and build quality, but they often lack the flexibility and upgradability that many PC enthusiasts crave. When you build a Hackintosh, you have the freedom to choose your hardware components, including the CPU, GPU, motherboard, RAM, and storage. This level of customization allows you to tailor the system to your specific needs, whether you’re building a workstation for 3D rendering or a gaming rig.

Moreover, Hackintoshes offer the advantage of upgradability. Unlike many Apple computers, which are often designed with limited upgrade options, a Hackintosh allows you to swap out components as technology advances. This means you can keep your system up-to-date with the latest hardware without having to purchase an entirely new machine.

3. Learning Experience and Technical Knowledge

Building a Hackintosh is not just about getting a cheaper or more customizable macOS machine; it’s also an educational experience. The process involves understanding the intricacies of computer hardware, the macOS operating system, and how they interact. You’ll learn about UEFI vs. BIOS, bootloaders, kexts, and the macOS boot process.

For many, this learning experience is one of the most rewarding aspects of building a Hackintosh. It provides a deep dive into the inner workings of both hardware and software, enhancing your technical skills and knowledge. Whether you’re an IT professional, a computer science student, or simply a tech enthusiast, building a Hackintosh can be a valuable hands-on project.

4. Performance and Optimization

With a Hackintosh, you have the ability to build a high-performance machine optimized for your specific workflow. By carefully selecting compatible hardware, you can create a system that rivals or even surpasses the performance of a Mac in certain tasks. This is particularly useful for power users who require a lot of processing power, such as video editors, graphic designers, and software developers.

For instance, you can choose a high-end graphics card, a multi-core processor, and fast SSDs to build a workstation capable of handling intensive tasks like 4K video editing, 3D rendering, or machine learning. Additionally, you can configure your Hackintosh to dual-boot macOS and Windows, giving you the flexibility to use both operating systems on the same machine.

Challenges and Legal Considerations

While the benefits of building a Hackintosh are compelling, there are also significant challenges and legal considerations to be aware of.

1. Compatibility and Stability Issues

One of the most common challenges in building a Hackintosh is hardware compatibility. macOS is designed to work with a limited range of hardware configurations found in Apple’s official product line. When you build a Hackintosh, you’re using hardware that may not be officially supported by macOS, which can lead to compatibility issues.

Common issues include:

  • Graphics Card Support: Not all graphics cards are natively supported by macOS. NVIDIA GPUs, in particular, have limited support in recent versions of macOS. AMD GPUs tend to have better compatibility.
  • Wi-Fi and Bluetooth: Many PC Wi-Fi and Bluetooth adapters are not supported by macOS, requiring the use of compatible hardware or additional drivers.
  • Audio and Networking: Getting onboard audio and Ethernet to work can sometimes be tricky, depending on the motherboard’s chipset.

Stability can also be an issue. Official macOS updates may introduce changes that break compatibility with your Hackintosh. This means you may need to wait for the community to develop patches or new versions of tools like OpenCore or Clover before updating your system.

2. Legal Considerations

Running macOS on non-Apple hardware is a violation of Apple’s End User License Agreement (EULA). According to the EULA, macOS is licensed to be used only on Apple-branded hardware. This means that while building a Hackintosh is not illegal in the traditional sense, it does breach Apple’s software license terms.

Apple has not pursued legal action against individual Hackintosh users, likely because the practice remains a niche hobbyist activity that doesn’t significantly impact Apple’s bottom line. However, it’s important to be aware that Hackintoshing exists in a legal gray area, and using macOS on non-Apple hardware is not officially sanctioned by Apple.

3. Software Updates and Maintenance

Keeping a Hackintosh up-to-date can be a challenging process. While Apple makes updating macOS on official hardware a seamless experience, the same is not true for Hackintoshes. Each macOS update has the potential to break your Hackintosh, requiring careful preparation and sometimes complex troubleshooting to get everything working again.

Before applying any system updates, it’s recommended to:

  • Backup your system using Time Machine or a disk cloning tool.
  • Check community forums like tonymacx86 or r/Hackintosh for reports on the latest updates.
  • Ensure you have the latest version of your bootloader and any necessary patches.

In some cases, you may need to wait for the Hackintosh community to develop and release updated kexts or bootloader versions compatible with the new macOS version.

Getting Started with Hackintosh: Essential Resources

If you’re ready to take on the challenge of building a Hackintosh, you’ll find a wealth of resources and tools to guide you through the process. Here are some of the most valuable resources available to the Hackintosh community:

1. tonymacx86

tonymacx86 is one of the most popular and comprehensive resources for Hackintosh enthusiasts. It offers a wide range of guides, tools, and a highly active forum where users can share their experiences and seek help.

  • UniBeast and MultiBeast: These are two essential tools developed by tonymacx86 to simplify the Hackintosh installation process. UniBeast helps you create a bootable macOS USB installer, while MultiBeast provides post-installation tools to configure drivers and bootloaders.
  • Buyer’s Guide: tonymacx86 regularly updates its Hackintosh Buyer’s Guide, recommending compatible hardware for different types of builds, from budget-friendly systems to high-end workstations.
  • Forum: The tonymacx86 forum is a valuable resource for troubleshooting and community support. Users share their build experiences, solutions to common problems, and updates on the latest developments in the Hackintosh world.
  • tonymacx86

2. Dortania

Dortania is a project focused on providing in-depth guides for building a Hackintosh using OpenCore, the latest and most advanced bootloader for Hackintosh systems. The guides on Dortania are highly detailed and cater to both beginners and experienced users.

  • OpenCore Install Guide: This comprehensive guide walks you through the entire process of building a Hackintosh using OpenCore. It covers everything from selecting compatible hardware to creating the macOS installer and configuring OpenCore.
  • Post-Installation Guides: Dortania also offers post-installation guides to help you fine-tune your Hackintosh for optimal performance and stability. This includes setting up audio, networking, and GPU acceleration.
  • Dortania

3. r/Hackintosh (Reddit)

The Hackintosh subreddit is a vibrant community where users share their builds, discuss compatibility issues, and provide support to one another. It’s a great place to stay up-to-date with the latest developments in the Hackintosh world and to seek advice from experienced users.

  • Build Logs: Many users post detailed build logs on r/Hackintosh, outlining the hardware they used, the installation process, and any challenges they encountered. These logs can be an invaluable resource for those looking to build a similar system.
  • Troubleshooting and Support: If you run into issues with your Hackintosh, the r/Hackintosh community is often quick to help. Whether you’re dealing with kernel panics, boot issues, or hardware compatibility problems, you can find support here.
  • r/Hackintosh

4. InsanelyMac

InsanelyMac is one of the oldest and most comprehensive Hackintosh communities on the internet. It offers forums, guides, and a vast repository of tools and resources for Hackintosh builders.

  • Forum: The InsanelyMac forum is a treasure trove of information. It covers a wide range of topics, including installation guides, hardware compatibility lists, and troubleshooting tips. The community is diverse and includes both beginners and advanced users.
  • Downloads: InsanelyMac provides a download section where you can find various tools, kexts, and utilities to assist with your Hackintosh build.
  • InsanelyMac

Step-by-Step Guide to Building a Hackintosh

Now that we’ve covered the background and resources, let’s go through a basic overview of the steps involved in building a Hackintosh.

1. Planning and Selecting Hardware

The first step in building a Hackintosh is selecting compatible hardware. The success of your build depends largely on choosing components that are known to work well with macOS.

  • CPU: Intel processors are generally more compatible with macOS than AMD processors, although recent developments have made AMD builds possible with additional patches.
  • Motherboard: Choose a motherboard with a chipset known to work well with macOS, such as Intel Z370, Z390, or Z490. Gigabyte motherboards are often recommended due to their compatibility.
  • Graphics Card: For the best compatibility, choose an AMD Radeon GPU, as macOS provides native support for many Radeon models. NVIDIA GPUs are not recommended for recent versions of macOS due to the lack of driver support.
  • Storage: Use an SSD for faster boot times and overall performance. NVMe SSDs are supported but may require additional configuration.

2. Creating a Bootable macOS Installer

Once you’ve selected your hardware, the next step is to create a bootable USB installer for macOS.

  • Download macOS: Use a Mac or an existing Hackintosh to download the macOS installer from the App Store.
  • Create Bootable USB: Use a tool like UniBeast (for Clover) or follow the Dortania OpenCore guide to create a bootable USB installer. This process involves formatting the USB drive and copying the macOS installer to it, along with the necessary bootloader files.

3. Configuring the BIOS

Before installing macOS, you’ll need to configure your motherboard’s BIOS settings. This includes:

  • Setting the boot mode to UEFI.
  • Disabling Secure Boot and Fast Boot.
  • Enabling AHCI mode for SATA.
  • Configuring the integrated graphics settings if using onboard graphics.

4. Installing macOS

With the bootable USB installer prepared and the BIOS configured, you can begin the macOS installation process.

  • Boot from USB: Insert the USB installer and boot from it. Use the bootloader (Clover or OpenCore) to start the macOS installer.
  • Install macOS: Follow the on-screen instructions to install macOS. This process is similar to installing macOS on an official Apple device.
  • Post-Installation: Once macOS is installed, you’ll need to perform post-installation tasks, such as configuring drivers and setting up the bootloader on the system’s hard drive.

5. Post-Installation Tweaks and Optimization

After installing macOS, you’ll need to perform some post-installation tweaks to ensure your Hackintosh runs smoothly.

  • Install Kexts: Use tools like MultiBeast or OpenCore Configurator to install the necessary kexts for audio, networking, and other hardware components.
  • Configure Bootloader: Set up the bootloader to load macOS from the system’s drive without needing the USB installer.
  • Fine-Tuning: Make adjustments for power management, GPU acceleration, and other system optimizations.

Conclusion

Building a Hackintosh is a rewarding endeavor that offers the opportunity to run macOS on custom hardware tailored to your needs. It provides significant cost savings, hardware flexibility, and a deep learning experience. However, it also comes with challenges, including hardware compatibility, legal considerations, and the need for ongoing maintenance.

By utilizing the wealth of resources available within the Hackintosh community, including guides from tonymacx86, Dortania, and forums like InsanelyMac and r/Hackintosh, you can navigate these challenges and build a system that offers a unique blend of power, customization, and the macOS experience. Whether you’re a seasoned tech enthusiast or a curious beginner, the journey into Hackintoshing is one that can deepen your understanding of both hardware and software, providing a satisfying and educational project.

The post Hackintosh: A Comprehensive Guide to Building Your Custom macOS PC appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

Convert Your Heltec Wireless Stick Into a 433 MHz LoRa APRS iGate or Digipeater

Posted by Piju 9M2PJU on 2024-09-13 19:34:35 UTC

In this guide, we’ll walk you through the steps to turn a Heltec Wireless Stick into a 433 MHz LoRa APRS iGate or digipeater. All you’ll need is the Heltec Wireless Stick, a Windows computer, Visual Studio Code, and a few other tools.

🛠 Prerequisites

  • Heltec Wireless Stick: A compact device that includes a plastic case and an antenna.
  • Computer: A Windows machine for development.
  • USB Type-C Cable: To connect your Heltec Wireless Stick to your computer.

💻 Step 1: Set Up Your Computer

  1. Download and Install Visual Studio Code:
    Visit the Visual Studio Code website, download the installer for Windows, and follow the installation instructions.
  2. Install PlatformIO Extension for Visual Studio Code:
    Open Visual Studio Code, click on the Extensions icon in the sidebar (or press Ctrl+Shift+X), search for “PlatformIO IDE,” and install the extension.

🔧 Step 2: Install the Heltec Wireless Stick Driver

  1. Visit the Heltec website and go to the Support section under “Resources Download.”
  2. Download the CP210x Universal Driver for your Heltec Wireless Stick.
  3. Follow the installation instructions to install the driver on your Windows machine.

📦 Step 3: Download and Prepare the LoRa APRS iGate Code

  1. Navigate to the LoRa APRS iGate GitHub repository and download the code as a ZIP file.
  2. Extract the contents of the ZIP file to a folder on your computer.
  3. Open Visual Studio Code, go to File > Open Folder…, and select the folder where you extracted the LoRa APRS iGate code.

📝 Step 4: Configure the iGate/Digipeater

  1. In Visual Studio Code, navigate to the data folder within the LoRa APRS iGate project.
  2. Open the igate_conf.json file.
  3. Edit the file to include your specific details, such as your callsign, SSID, and other necessary parameters.
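Before flashing, it is worth sanity-checking that the edited file is still valid JSON, since a malformed config is easy to produce by hand. A quick check using Python's built-in json.tool (the path matches the data folder mentioned above; adjust if yours differs):

```shell
# Validate the edited iGate config; a syntax error here is a common cause of
# the device ignoring your settings. Run from the project root.
check_json() {
  if python3 -m json.tool "$1" > /dev/null 2>&1; then
    echo "$1: valid JSON"
  else
    echo "$1: INVALID JSON, re-check your edits"
  fi
}

check_json data/igate_conf.json
```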

⚙ Step 5: Update the PlatformIO Environment

  1. In Visual Studio Code, open the platformio.ini file located in the root directory of the LoRa APRS iGate project.
  2. Ensure the following line is included to set the environment for the Heltec Wireless Stick:
   [platformio]  
   default_envs = heltec_wireless_stick

🛠 Step 6: Build and Upload the Firmware

  1. Build the Firmware:
    Press Ctrl+Alt+B in Visual Studio Code to compile the firmware for your Heltec Wireless Stick.
  2. Connect the Heltec Wireless Stick:
    Use a USB Type-C cable to connect your Heltec Wireless Stick to the computer. While connecting, hold down the USER button on the device to enter upload mode.
  3. Upload the Firmware:
    Press Ctrl+Alt+U in Visual Studio Code to upload the firmware to your device.

📂 Step 7: Upload the Filesystem Image

  1. After successfully uploading the firmware, click the PlatformIO icon in the Visual Studio Code sidebar.
  2. Under the PlatformIO tab, locate “Heltec Wireless Stick.”
  3. Click on Platform > Upload Filesystem Image to upload the necessary configuration files to your device.

🌐 Step 8: Configure via Web Browser

Once you’ve flashed the firmware and uploaded the filesystem image, the Heltec Wireless Stick will broadcast a Wi-Fi SSID. You can connect to this Wi-Fi network and access the device’s configuration page by opening a web browser and navigating to 192.168.4.1. This allows you to reconfigure the device easily from the web interface.

🎉 Step 9: Complete the Setup

Congratulations! 🎊 Your Heltec Wireless Stick is now configured as a 433 MHz LoRa APRS iGate or digipeater. You’re all set to start relaying APRS packets!

Enjoy your new LoRa APRS iGate/Digipeater and happy tracking! 📡

P.S.: There is still a known bug: the display on the Heltec Wireless Stick does not work at the moment.


The post Convert Your Heltec Wireless Stick Into a 433 MHz LoRa APRS iGate or Digipeater appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

PHP version 8.2.24RC1 and 8.3.12RC1

Posted by Remi Collet on 2024-09-13 08:59:00 UTC

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS / Alma / Rocky and other clones) to allow more people to test them. They are available as Software Collections, for a parallel installation (the perfect solution for such tests), and also as base packages.

RPMs of PHP version 8.3.12RC1 are available

  • as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

RPMs of PHP version 8.2.24RC1 are available

  • as base packages in the remi-modular-test for Fedora 39-41 and Enterprise Linux ≥ 8
  • as SCL in remi-test repository

The packages are available for x86_64 and aarch64.

PHP version 8.1 is now in security mode only, so no more RCs will be released.

Installation: follow the wizard instructions.

Announcements:

Parallel installation of version 8.3 as Software Collection:

yum --enablerepo=remi-test install php83

Parallel installation of version 8.2 as Software Collection:

yum --enablerepo=remi-test install php82

Update of system version 8.3:

dnf module switch-to php:remi-8.3
dnf --enablerepo=remi-modular-test update php\*

Update of system version 8.2:

dnf module switch-to php:remi-8.2
dnf --enablerepo=remi-modular-test update php\*

Notice:

  • version 8.3.11RC1 is also in Fedora rawhide for QA
  • version 8.4.0beta5 is also available in the repository
  • EL-9 packages are built using RHEL-9.4
  • EL-8 packages are built using RHEL-8.10
  • oci8 extension uses the RPM of the Oracle Instant Client version 23.5 on x86_64 or 19.24 on aarch64
  • intl extension uses libicu 74.2
  • RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).
  • versions 8.2.24 and 8.3.12 are planned for September 26th, in 2 weeks.

Software Collections (php82, php83)

Base packages (php)

Convert Your 433 MHz Heltec Wireless Tracker into a 433 MHz LoRa APRS Tracker

Posted by Piju 9M2PJU on 2024-09-13 12:51:23 UTC

In this guide, we’ll walk you through the steps to convert a 433 MHz Heltec Wireless Tracker into a 433 MHz LoRa APRS (Automatic Packet Reporting System) tracker. All you need is a Heltec Wireless Tracker, a Windows computer, Visual Studio Code, and a few additional tools. Let’s get started!

🛠 Prerequisites

  • Heltec Wireless Tracker: Comes with a plastic case and an antenna.
  • Computer: A Windows machine is needed.
  • USB Type-C Cable: To connect the Heltec Wireless Tracker to your computer.

💻 Step 1: Set Up Your Computer

  1. Download and Install Visual Studio Code:
    Visit the Visual Studio Code website, download the installer for Windows, and follow the instructions to install it.
  2. Install PlatformIO Extension for Visual Studio Code:
    Open Visual Studio Code, click on the square icon in the sidebar (or press Ctrl+Shift+X), search for “PlatformIO IDE,” and install the extension.

🔧 Step 2: Install the Heltec Wireless Tracker Driver

  • Head over to the Heltec website and navigate to the Support section under “Resources Download.”
  • Download the CP210x Universal Driver for your Heltec Wireless Tracker.
  • Follow the on-screen instructions to install the driver on your computer.

📦 Step 3: Download and Prepare the LoRa APRS Tracker Code

  • Go to the LoRa APRS Tracker GitHub repository and download the code as a ZIP file.
  • Extract the contents of the ZIP file to a folder on your computer.
  • Open Visual Studio Code, navigate to File > Open Folder…, and select the folder where you extracted the LoRa APRS Tracker code.

📝 Step 4: Configure the Tracker

  • In Visual Studio Code, open the data folder within the LoRa APRS Tracker project.
  • Find and open the tracker_config.json file.
  • Edit the file to include your specific details, such as callsign, SSID, and other necessary parameters.

⚙ Step 5: Update the PlatformIO Environment

  • Open the platformio.ini file located in the root directory of the LoRa APRS Tracker project.
  • Make sure it contains the following line to set the correct environment for the Heltec Wireless Tracker:
  [platformio]
  default_envs = heltec_wireless_tracker

🛠 Step 6: Build and Upload the Firmware

  1. Build the Firmware:
    Press Ctrl+Alt+B in Visual Studio Code to compile the firmware for your Heltec Wireless Tracker.
  2. Connect the Heltec Wireless Tracker:
    Use a USB Type-C cable to connect your Heltec Wireless Tracker to the computer. While connecting, hold down the USER button on the device to enter upload mode.
  3. Upload the Firmware:
    Press Ctrl+Alt+U in Visual Studio Code to upload the firmware to your device.

📂 Step 7: Upload the Filesystem Image

  • Once the firmware upload is successful, click on the PlatformIO icon in the Visual Studio Code sidebar.
  • In the PlatformIO tab, locate “Heltec Wireless Tracker.”
  • Click on Platform -> Upload Filesystem Image to upload the required files to the device.

🎉 Step 8: Complete the Setup

Congratulations! 🎊 Your 433 MHz Heltec Wireless Tracker is now a fully functioning 433 MHz LoRa APRS tracker. You’re all set to start using it!

Enjoy your new LoRa APRS tracker and happy tracking! 📡


The post Convert Your 433 MHz Heltec Wireless Tracker into a 433 MHz LoRa APRS Tracker appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

Infra and RelEng Update – Week 37 2024

Posted by Fedora Community Blog on 2024-09-13 10:00:00 UTC

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team, as the CPE initiatives are in most cases tied to I&R work.

We provide both an infographic and a text version of the weekly report. If you just want a quick look at what we did, check the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 09 September – 13 September 2024

I&R infographic

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It is responsible for services running in Fedora and CentOS infrastructure and for preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

Community Design

CPE has a few members working as part of the Community Design Team. This team works on anything related to design in the Fedora community.

Updates

  • Fedora:
    • [Closed] DEI Team Report ComBlog Graphic Revamp #42
    • [Closed] FCOREOS Swag #160
    • SWAG: Fedora Websites & Apps Revamp Community Initiative #53
  • Podman Desktop:
    • #2363: Mockup for extension install web page
    • #8338: UX: Mockups for Docker Compatibility page
  • Other:
    • Posted initial ideas for Ramalama logo #166

ARC Investigations

The ARC (which is a subset of the CPE team) investigates possible initiatives that CPE might take on.

Updates

  • Dist Git Move
    • User stories are updated on the website – We have 79 of them now
    • Work in progress – Seeding of GitLab and Forgejo for workflow testing

If you have any questions or feedback, please respond to this report or contact us in the #redhat-cpe channel on Matrix.

The post Infra and RelEng Update – Week 37 2024 appeared first on Fedora Community Blog.

New Fedora shirts available at HELLOTUX

Posted by Fedora Magazine on 2024-09-13 08:00:00 UTC

We’ve upgraded the embroidered Fedora shirt collection with dark blue and white T-shirts and polo shirts. There’s a coupon code below; read on for more details.

Half of the Fedora backpacks were free

After delivering more than 800 embroidered Fedora garments to more than 50 countries, we’d like to share some numbers about our Fedora shirt project.

Our most popular items are, of course, shirts: 266 T-shirts and 267 polo shirts have been sold, and there are 30 Fedora laptop backpacks around the world. Half of the backpacks were gifts from us to people ordering four or more other items (shirts or sweatshirts). You can still get a backpack too.

The most popular sizes are large (27%) and extra large (26%), followed by medium (19%) and 2XL (16%).

Since we ship worldwide, our Fedora garments are in 53 countries. Most of them are in the United States (33%), Germany (14%), the United Kingdom (5%), and France (5%). But can you find Moldova, Macao, Ghana or Réunion on the map? Fedora users show their commitment to our favorite operating system with Fedora shirts in these countries and territories too.

Currently in the HELLOTUX Fedora collection you can find T-shirts, polo shirts, a jacket, a hoodie and a laptop backpack. All of these are embroidered with the Fedora logo.

The coupon code

You can get a $5 discount on every Fedora item with the coupon code FEDORA5, and there is the Fedora laptop backpack promo as well.

Order now

Order directly from the HELLOTUX website.

Understanding APRS Wide Path Configurations: From Wide1-1 to Wide3-3

Posted by Piju 9M2PJU on 2024-09-13 06:08:06 UTC

Amateur radio enthusiasts and emergency communication specialists often rely on the Automatic Packet Reporting System (APRS) for position reporting and short messaging. One of the key features that make APRS so effective is its ability to relay messages through multiple digipeaters using “wide path” configurations. In this post, we’ll explore the various wide path settings from Wide1-1 to Wide3-3 and understand their implications for your APRS communications.

What is a Wide Path?

Before diving into specific configurations, let’s clarify what a “wide path” means in APRS:

  • A wide path is a routing mechanism that allows APRS packets to be relayed through multiple digipeaters.
  • The format “WIDEn-N” indicates how many hops (n) a packet can take, and how many hops remain (N).
  • Each time a digipeater relays the packet, it decreases the N value by 1.
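To make the decrement rule concrete, here is a tiny shell sketch of what a digipeater does to a WIDEn-N path element (illustrative only; real digipeaters also mark used path elements with an asterisk, which this skips):

```shell
# Illustrative only: how a digipeater decrements the N in a WIDEn-N path element.
hop_once() {
  path=$1
  n=${path##*-}                  # hops remaining, e.g. 2 for WIDE2-2
  if [ "$n" -gt 0 ]; then
    echo "${path%-*}-$((n - 1))" # WIDE2-2 becomes WIDE2-1
  else
    echo "$path (exhausted, not relayed)"
  fi
}

hop_once WIDE2-2   # prints WIDE2-1
hop_once WIDE1-1   # prints WIDE1-0
```

Once N reaches 0, the element is exhausted and the packet is not relayed any further on that path.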

Now, let’s break down each wide path configuration:

Wide1-1

  • Hops: 1
  • Usage: Local area communications
  • Description: This is the most basic wide path. Your packet will be relayed only once by the first compatible digipeater that hears it.
  • Best for: Urban areas with good digipeater coverage or direct communication with nearby stations.

Wide2-1

  • Hops: 1 remaining (of 2 requested)
  • Usage: Extended local area
  • Description: One relay remains: the packet will be digipeated once more, becoming Wide2-0. This is also why the common path WIDE1-1,WIDE2-1 yields two hops in total.
  • Best for: Suburban areas or situations where you need slightly extended range beyond Wide1-1.

Wide2-2

  • Hops: Up to 2
  • Usage: Regional communications
  • Description: Allows for two full hops. After the first relay, it becomes Wide2-1, permitting one more hop.
  • Best for: Covering larger areas or reaching more distant digipeaters.

Wide3-1

  • Hops: 1 remaining (of 3 requested)
  • Usage: Extended regional communications
  • Description: One hop remains: after a single relay it becomes Wide3-0 and is not relayed further.
  • Best for: Reaching distant stations or filling gaps in digipeater coverage.

Wide3-2

  • Hops: 2 remaining (of 3 requested)
  • Usage: Wide area communications
  • Description: Two relays remain. After the first relay it becomes Wide3-1, permitting one final hop.
  • Best for: Covering very large areas or ensuring your signal reaches key digipeaters.

Wide3-3

  • Hops: 3
  • Usage: Maximum coverage
  • Description: Allows for three hops. Each relay decrements it (Wide3-2, then Wide3-1, then Wide3-0).
  • Best for: Maximum theoretical coverage, but use with caution to avoid network congestion.

Choosing the Right Wide Path

Selecting the appropriate wide path depends on several factors:

  1. Local network topology: Understand the digipeater coverage in your area.
  2. Purpose of transmission: Emergency communications might justify wider paths.
  3. Network congestion: Using wider paths unnecessarily can congest the network.
  4. Power and antenna: Consider your station’s effective radiated power.

Best Practices

  1. Start with the minimum path needed (often Wide1-1) and increase only if necessary.
  2. Use Wide3-3 sparingly to avoid network congestion.
  3. Consider using specific digipeater callsigns for more efficient routing in well-known networks.
  4. Regularly review and adjust your settings based on local conditions and feedback.

Conclusion

Understanding APRS wide path configurations is crucial for effective and responsible use of the APRS network. By choosing the right path for your situation, you can ensure your messages reach their intended audience without unnecessarily burdening the system. Remember, the goal is efficient communication, not maximum coverage at all times.

Happy APRSing!

The post Understanding APRS Wide Path Configurations: From Wide1-1 to Wide3-3 appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.

The syslog-ng Insider 2024-09: documentation; TRANSPORT macro; rolling RPMs

Posted by Peter Czanik on 2024-09-11 11:13:13 UTC

The September syslog-ng newsletter is now on-line:

  • You can also contribute to the syslog-ng OSE documentation
  • The $TRANSPORT macro of syslog-ng
  • Rolling RPM platforms added to the syslog-ng package build system

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2024-09-documentation-transport-macro-rolling-rpms


System insights with command line tools: lsof and lsblk

Posted by Fedora Magazine on 2024-09-11 08:00:00 UTC

In our ongoing series on Linux system insights, we have a look into essential command-line utilities that provide information about the system’s hardware and status. Following our previous discussions on lscpu, lsusb, dmidecode and lspci, we now turn our attention to lsof and lsblk. These tools are particularly useful for investigating open files, active network connections, and mounted block devices on your Fedora Linux system.

Exploring open files with lsof

lsof (list open files) is a powerful command-line tool. Since almost everything in Linux is treated as a file, lsof provides detailed insight into many parts of your system by listing what files are being used, which processes are accessing them, and even which network ports are open (see e.g. Wikipedia on Network socket for more information).

Basic usage

To start with, execute the basic lsof command to get an overview of the system’s open files:

$ sudo lsof

sudo is used for extended privileges; it is needed to see files opened by processes that were not started by your user. The command outputs a lot of information, which can be overwhelming, so the following examples narrow the output down to some common use cases.

Example 1: Finding open files by user or process

To identify which files a specific user or process has open, lsof can be very helpful.

To list all files opened by a specific user:

$ sudo lsof -u <username>

This will return a list of open files owned by the given user. For example:

$ sudo lsof -u johndoe

You’ll see details such as the process ID (PID), the file descriptor, the type of file, and the file’s path.

To filter by process, use the -p flag:

$ lsof -p <PID>

This is particularly useful for troubleshooting issues related to specific processes or when you need to check which files a service is holding open. Use sudo if the process is not owned by your user.

Example output:

$ lsof -p 873648
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
bash 873648 user cwd DIR 0,39 8666 257 /home/user
bash 873648 user rtd DIR 0,35 158 256 /
bash 873648 user txt REG 0,35 1443376 12841259 /usr/bin/bash
bash 873648 user mem REG 0,33 12841259 /usr/bin/bash (path dev=0,35)
bash 873648 user mem REG 0,33 14055145 /usr/lib/locale/locale-archive (path dev=0,35)
bash 873648 user mem REG 0,33 14055914 /usr/lib64/libc.so.6 (path dev=0,35)
bash 873648 user mem REG 0,33 13309071 /usr/lib64/libtinfo.so.6.4 (path dev=0,35)
bash 873648 user mem REG 0,33 14059926 /usr/lib64/gconv/gconv-modules.cache (path dev=0,35)
bash 873648 user mem REG 0,33 14055911 /usr/lib64/ld-linux-x86-64.so.2 (path dev=0,35)
bash 873648 user 0u CHR 136,3 0t0 6 /dev/pts/3
bash 873648 user 1u CHR 136,3 0t0 6 /dev/pts/3
bash 873648 user 2u CHR 136,3 0t0 6 /dev/pts/3
bash 873648 user 255u CHR 136,3 0t0 6 /dev/pts/3

Example 2: identifying open network connections via sockets

With its ability to list network connections, lsof also becomes a handy tool for diagnosing network-related issues as it is usually even available on hardened, minimal systems.

To display all open network connections (TCP/UDP sockets), run:

$ sudo lsof -i

This will list active Internet connections along with the associated protocol, port, and process details.

You can filter for specific protocols (like TCP or UDP), include or exclude IPv4 and v6 and combine several values (the example section of man lsof provides a lot of useful information, including negation):

$ sudo lsof -i tcp
$ sudo lsof -i udp
$ sudo lsof -i 4tcp
$ sudo lsof -i 6tcp
$ sudo lsof -i 4tcp@example.com

For connections associated with a particular port:

$ sudo lsof -i :<port_number>

For example, to list connections to port 22 (SSH):

$ sudo lsof -i :22
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sshd 904379 root 3u IPv4 5622530 0t0 TCP *:ssh (LISTEN)
sshd 904379 root 4u IPv6 5622532 0t0 TCP *:ssh (LISTEN)

This information can be critical for identifying unauthorized connections or simply monitoring network activity on a system for debugging.

Investigating block devices with lsblk

Another useful tool is lsblk, which displays information about all available block devices on your system. Block devices include hard drives, SSDs, and USB storage. This command provides a tree-like view, helping you understand the relationships between partitions, devices, and their mount points.

Basic usage

Running lsblk without any options provides a clean hierarchical structure of the block devices:

$ lsblk

This shows all block devices in a tree structure, including their size, type (disk, partition), and mount point (if applicable).

Examples

For a deeper look into the file systems on your block devices, use the -f flag:

$ lsblk -f

This will display not just the block devices, but also details about the file systems on each partition, including the type (e.g., ext4, vfat, swap), the UUID, and the current mount points.

If you want information about just the devices themselves (without their partitions or mount points), the -d option is useful:

$ lsblk -d

There is also a -J or --json option. If used, the command outputs the information in JSON format. This provides a structured view that is particularly useful for scripting and automation.

Example outputs from my laptop (some long information like UUIDs stripped for readability):

$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 0B 0 disk
sdb 8:16 1 0B 0 disk
sdc 8:32 1 0B 0 disk
zram0 252:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 931,5G 0 disk
├─nvme0n1p1 259:1 0 600M 0 part /boot/efi
├─nvme0n1p2 259:2 0 1G 0 part /boot
└─nvme0n1p3 259:3 0 929,9G 0 part
└─luks-84257c20[...] 253:0 0 929,9G 0 crypt /home


$ lsblk -d
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 1 0B 0 disk
sdb 8:16 1 0B 0 disk
sdc 8:32 1 0B 0 disk
zram0 252:0 0 8G 0 disk [SWAP]
nvme0n1 259:0 0 931,5G 0 disk

$ lsblk -f
NAME FSTYPE [...]LABEL UUID FSAVAIL FSUSE% MOUNTPOINTS
sda
sdb
sdc
zram0 [SWAP]
nvme0n1
├─nvme0n1p1 vfat 4C5B-4355 579,7M 3% /boot/efi
├─nvme0n1p2 ext4 30eff827[...] 605M 31% /boot
└─nvme0n1p3 crypto_LUKS 84257c20[...]
└─luks-84257[...] btrfs fe[...] 666f9d6f[...] 303,1G 67% /home
/

$ lsblk -f -J
{
"blockdevices": [
[...],{
"name": "nvme0n1",
"fstype": null,
"fsver": null,
"label": null,
"uuid": null,
"fsavail": null,
"fsuse%": null,
"mountpoints": [
null
],
"children": [
{
"name": "nvme0n1p1",
"fstype": "vfat",
"fsver": "FAT32",
"label": null,
"uuid": "4C5B-4355",
"fsavail": "579,7M",
"fsuse%": "3%",
"mountpoints": [
"/boot/efi"
]
},{
"name": "nvme0n1p2",
"fstype": "ext4",
"fsver": "1.0",
"label": null,
"uuid": "30eff827-[...]",
"fsavail": "605M",
"fsuse%": "31%",
"mountpoints": [
"/boot"
]
},{
"name": "nvme0n1p3",
"fstype": "crypto_LUKS",
"fsver": "2",
"label": null,
"uuid": "84257c20-[...]",
"fsavail": null,
"fsuse%": null,
"mountpoints": [
null
],
"children": [
{
"name": "luks-[...]",
"fstype": "btrfs",
"fsver": null,
"label": "fedora_localhost-live",
"uuid": "666f9d6f-[...]",
"fsavail": "303,1G",
"fsuse%": "67%",
"mountpoints": [
"/home", "/"
]
}
]
}
]
}
]
}
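Since the --json output is designed for scripts, here is a short sketch of consuming it. The embedded sample is abbreviated from the lsblk -J output above; in real use you would pipe live output (lsblk -J -o NAME,MOUNTPOINTS) into the same Python snippet instead:

```shell
# Walk lsblk-style JSON and print each device with its mountpoints,
# indented to show the device/partition hierarchy.
out=$(python3 - <<'EOF'
import json

sample = '''{"blockdevices": [
  {"name": "nvme0n1", "mountpoints": [null],
   "children": [
     {"name": "nvme0n1p1", "mountpoints": ["/boot/efi"]},
     {"name": "nvme0n1p2", "mountpoints": ["/boot"]}]}]}'''

def walk(devs, depth=0):
    for d in devs:
        mounts = [m for m in d.get("mountpoints", []) if m]
        print("  " * depth + d["name"], ",".join(mounts) or "-")
        walk(d.get("children", []), depth + 1)

walk(json.loads(sample)["blockdevices"])
EOF
)
printf '%s\n' "$out"
```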

Conclusion

The lsof and lsblk commands provide insights into file usage, network activity, and block device structures. Whether you’re tracking down open file handles, diagnosing network connections, or reviewing storage devices; whether you’re troubleshooting, optimizing, or simply curious; these tools provide valuable data that can help you better understand and manage your Fedora Linux environment. See you next time, when we will look at more useful listing and information command-line tools and how to use them.

Using Ubuntu as Your Ham Shack Operating System: A Comprehensive Guide for Amateur Radio Enthusiasts

Posted by Piju 9M2PJU on 2024-09-10 17:16:54 UTC

Amateur radio, or “ham” radio, is a hobby that combines electronics, communication technology, and experimentation. It’s a perfect blend for those who enjoy tinkering with both hardware and software. While Windows and macOS are popular choices for many hams, Linux distributions, especially Ubuntu, offer a robust, flexible, and cost-effective alternative for building a ham shack. In this blog post, we’ll explore why Ubuntu is a great choice for ham radio operators and provide a step-by-step guide on setting up a ham shack operating system using Ubuntu.

Why Choose Ubuntu for Your Ham Shack?

Ubuntu, a Debian-based Linux distribution, is known for its user-friendly interface, vast repository of software, and strong community support. Here are a few reasons why Ubuntu is a great choice for amateur radio enthusiasts:

  1. Open Source and Free: Ubuntu is free to download, install, and use. Being open-source means you have full control over the operating system, including the ability to tweak it to suit your specific needs.
  2. Stability and Security: Ubuntu is known for its stability and security. The Linux kernel is less prone to viruses and malware compared to other operating systems, which is crucial when running a reliable ham shack.
  3. Vast Software Repository: Ubuntu has a huge software repository, including a wide variety of applications specifically designed for amateur radio. This makes it easy to find and install the tools you need.
  4. Community Support: Ubuntu has a large, active community. If you run into problems or need help setting up a particular piece of software, you’re likely to find solutions in forums, user groups, or dedicated ham radio communities.
  5. Customization: Ubuntu allows for extensive customization. You can strip down the OS to its bare essentials to maximize performance or build a fully-featured desktop environment with all the tools and utilities you need.

Getting Started: Installing Ubuntu

Step 1: Download Ubuntu

Visit the official Ubuntu website to download the latest version of Ubuntu. You can choose between the Long-Term Support (LTS) version, which is stable and receives updates for five years, or the regular release, which includes newer features but is only supported for nine months.

Step 2: Create a Bootable USB Drive

Once you have downloaded the Ubuntu ISO file, create a bootable USB drive. You can use tools like Rufus (Windows) or Etcher (Linux/macOS) to make a bootable USB stick.
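On Linux, dd is a common alternative to Rufus or Etcher for this step. A guarded sketch (the ISO filename is an example, and /dev/sdX is a placeholder: identify the real device with lsblk before writing, since dd destroys whatever is on the target):

```shell
# Write an ISO to a USB stick, refusing to run unless the target is a real
# block device. /dev/sdX is a deliberate placeholder; edit before use.
write_iso() {
  iso=$1
  dev=$2
  if [ ! -b "$dev" ]; then
    echo "refusing: $dev is not a block device"
    return 1
  fi
  sudo dd if="$iso" of="$dev" bs=4M status=progress conv=fsync
}

write_iso ubuntu-24.04-desktop-amd64.iso /dev/sdX || true
```

The not-a-block-device guard keeps the unedited placeholder from ever reaching dd.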

Step 3: Install Ubuntu

Boot your computer from the USB drive and follow the on-screen instructions to install Ubuntu. You can choose to install Ubuntu alongside your existing operating system or as a standalone OS.

Essential Ham Radio Software for Ubuntu

Now that you have Ubuntu installed, it’s time to set up your ham shack environment. Here are some essential ham radio software packages you should consider:

1. FLDigi

FLDigi (Fast Light Digital Modem Application) is a popular digital mode software suite for Linux, Windows, and macOS. It supports a wide range of digital modes like PSK31, RTTY, MFSK, and more. FLDigi integrates well with other software, making it an essential part of any ham shack setup.

  • Installation: You can install FLDigi directly from the Ubuntu repository using the following command:
  sudo apt-get install fldigi

2. WSJT-X

WSJT-X is a software suite developed by Joe Taylor, K1JT, for weak-signal digital communication. It supports FT8, JT65, JT9, and other popular digital modes. The software is user-friendly and widely used in the ham radio community.

  • Installation: Download the latest WSJT-X package from the official website and follow the installation instructions provided.
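The download arrives as a .deb package. A sketch of verifying and installing it (the filename and checksum below are placeholders; use the real ones from the WSJT-X download page):

```shell
# Placeholder standing in for the downloaded package.
deb=wsjtx_amd64.deb
printf 'placeholder package contents' > "$deb"

# In practice, put the checksum published on the download page here.
sha256sum "$deb" > "$deb.sha256"
sha256sum -c "$deb.sha256"       # prints "wsjtx_amd64.deb: OK" if intact

# Installing via apt (rather than dpkg -i) resolves dependencies automatically:
# sudo apt install "./$deb"
```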

3. CQRLOG

CQRLOG is an advanced logging program for Linux that integrates seamlessly with other ham radio applications. It supports real-time logging, keeps detailed QSO records, and can synchronize with LoTW and eQSL.

  • Installation: Install CQRLOG from the Ubuntu repository:
  sudo apt-get install cqrlog

4. GPredict

GPredict is a satellite tracking application that lets you monitor satellite passes in real time. It’s a must-have for any ham operator interested in satellite communication.

  • Installation: Install GPredict using the following command:
  sudo apt-get install gpredict

5. Hamlib

Hamlib provides a standardized API for controlling radios and other shack equipment. Many ham radio applications rely on Hamlib to interface with various radios. It’s an essential library for integrating different hardware with your Ubuntu system.

  • Installation: Install Hamlib via the terminal:
  sudo apt-get install libhamlib-utils

Setting Up Rig Control and CAT Interfaces

One of the key aspects of setting up a ham shack on Ubuntu is ensuring seamless communication between your computer and radio equipment. This usually involves setting up rig control and Computer-Aided Transceiver (CAT) interfaces. The Hamlib library mentioned earlier is crucial for this.

  • Rig Control Setup: Use rigctl (part of Hamlib) to set up rig control. Run rigctl -l to list the model numbers Hamlib supports, then point rigctl at your rig’s model, serial port, and baud rate:
  rigctl -m <radio_model_number> -r /dev/ttyUSB0 -s <baud_rate>
  • Testing the Interface: Once connected, issue a simple query such as f (get frequency) at the rigctl prompt to confirm the radio answers commands from the computer.
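One Ubuntu-specific gotcha worth noting: serial device nodes such as /dev/ttyUSB0 are group-owned by dialout, so rigctl fails with “Permission denied” until your user joins that group. A quick check:

```shell
# Serial CAT ports on Ubuntu belong to the "dialout" group.
if id -nG | grep -qw dialout; then
    echo "user is in dialout -- serial ports accessible"
else
    # Group membership changes take effect at the next login.
    echo "not in dialout -- run: sudo usermod -aG dialout \$USER, then re-login"
fi
```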

Customizing Ubuntu for Ham Radio Use

To optimize Ubuntu for your ham shack, consider the following:

  1. Disable Unnecessary Services: Disable services that aren’t needed to reduce system load.
  2. Optimize Audio Settings: Properly configure ALSA and PulseAudio (or PipeWire on recent Ubuntu releases) so digital-mode audio levels are clean and consistent.
  3. Set Up a Backup System: Use tools like rsync or Timeshift to set up regular backups of your log files and settings.
  4. Use Virtual Desktops: Take advantage of Ubuntu’s multiple desktops feature to separate your ham radio operations from general computing tasks.

Conclusion

Using Ubuntu as your ham shack operating system offers flexibility, stability, and a wide range of powerful software tools. Whether you’re a digital mode enthusiast, a satellite tracker, or someone who loves experimenting with different radio setups, Ubuntu provides an open, customizable platform that can meet your needs. With a little bit of setup and configuration, you’ll have a robust, reliable ham shack operating system tailored just for you.

Dive in, experiment, and enjoy the freedom that comes with using an open-source operating system like Ubuntu in your amateur radio adventures!

The post Using Ubuntu as Your Ham Shack Operating System: A Comprehensive Guide for Amateur Radio Enthusiasts appeared first on HamRadio.My - Ham Radio, Fun Facts, Open Source Software, Tech Insights, Product Reviews by 9M2PJU.