Technology Tales

Notes drawn from experiences in consumer and enterprise technology

TOPIC: LINUX KERNEL

Windows 11 virtualisation on Linux using KVM and QEMU

5th March 2026

Windows 11 arrived in October 2021 with a requirement that posed a challenge to many virtualisation users: TPM 2.0 was mandatory, not optional. For anyone running Windows in a virtual machine, that meant their hypervisor needed to emulate a Trusted Platform Module convincingly enough to satisfy the installer.

VirtualBox, which had been my go-to choice for desktop virtualisation for years, could not do this in its 6.1.x series. Support arrived only with VirtualBox 7.0 in October 2022, meaning anyone who needed Windows 11 in a VM faced roughly a year with no straightforward path through their existing tool.

That gap prompted a look at KVM (Kernel-based Virtual Machine), which could handle the TPM requirement through software emulation. This article documents what that investigation found, what the rough edges were at the time, and how the situation has developed in the years since.

What KVM Actually Is

KVM is not a standalone application. It is a virtualisation infrastructure built directly into the Linux kernel, and has been since the module was merged into the mainline kernel in late 2006 and shipped with the 2.6.20 release in early 2007. Rather than sitting on top of the operating system as a separate layer, it turns the Linux kernel itself into a hypervisor. This makes KVM a type-1 hypervisor in practice, even when running on a desktop machine, which is part of why its performance characteristics compare favourably with hosted solutions.

In use, KVM operates alongside QEMU for hardware emulation, libvirt for virtual machine management and virt-manager as a graphical front end. The distinction matters because problems and improvements tend to originate in different parts of that stack. KVM itself is rarely the issue; QEMU and libvirt are where the day-to-day configuration lives.

To confirm that the host CPU supports hardware virtualisation before beginning, the following command checks for the relevant flags:

grep -Ec '(vmx|svm)' /proc/cpuinfo

Any result above zero means the hardware is capable. Intel processors expose the vmx flag and AMD processors expose svm.
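The flag count alone does not guarantee KVM will work, since virtualisation can still be disabled in firmware. A follow-up check, sketched below, looks for the /dev/kvm device node that appears once the kernel module has loaded successfully:

```shell
# CPU flags alone do not prove KVM is usable: firmware can still disable
# virtualisation, and /dev/kvm only appears once the kvm_intel or kvm_amd
# module has loaded successfully.
if [ -e /dev/kvm ]; then
    msg="/dev/kvm present: KVM is ready to use"
else
    msg="/dev/kvm missing: enable VT-x/AMD-V in firmware, then modprobe kvm_intel or kvm_amd"
fi
echo "$msg"
```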

Installing the Required Packages

The installation is straightforward on any major distribution.

On Debian and Ubuntu:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager

On Fedora:

sudo dnf install @virtualization

On Arch Linux:

sudo pacman -S qemu-full libvirt virt-manager bridge-utils

After installation, the current user needs to be added to the libvirt and kvm groups before the tools will work without root privileges:

sudo usermod -aG libvirt,kvm $(whoami)

Logging out and back in puts the new group membership into effect.
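Whether the new groups are active in the current session can be confirmed with id; the loop below is a small sketch of that check:

```shell
# Check whether the current session actually carries the two groups the
# libvirt tools expect; a freshly added group only appears after a new login.
status=""
for g in libvirt kvm; do
    if id -nG | grep -qw "$g"; then
        status="$status $g:ok"
    else
        status="$status $g:missing"
    fi
done
echo "group check:$status"
```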

Configuring Network Bridging

The default network configuration in libvirt uses NAT, which is sufficient for most purposes and requires no additional setup. The VM can reach the internet and the host, but the host cannot initiate connections to the VM. For a Windows 11 guest used primarily for application compatibility, NAT works without complaint.

A bridged network, which places the VM on the same network segment as the host, requires a wired Ethernet connection. Wireless interfaces do not support bridging in the standard Linux networking stack due to how 802.11 handles MAC addresses. For those on a wired connection, the bridge device itself must first exist on the host, created with the distribution's networking tools such as nmcli or netplan; libvirt is then pointed at it with a file named bridge.xml:

<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

The bridge is then activated with:

sudo virsh net-define bridge.xml
sudo virsh net-start br0
sudo virsh net-autostart br0

Installing Windows 11

Windows 11 requires TPM 2.0 and Secure Boot. Neither is present in a default KVM configuration, and both need to be added explicitly.

The swtpm package provides software TPM emulation:

sudo apt install swtpm swtpm-tools   # Debian/Ubuntu
sudo dnf install swtpm swtpm-tools   # Fedora

UEFI firmware is provided by the ovmf package, which supplies the file that virt-manager needs for Secure Boot:

sudo apt install ovmf   # Debian/Ubuntu
sudo dnf install edk2-ovmf   # Fedora

In virt-manager, when creating the VM, the firmware should be set to UEFI rather than the default BIOS option; /usr/share/OVMF/OVMF_CODE.fd satisfies the installer's UEFI check, while the Secure Boot-enabled builds (OVMF_CODE.secboot.fd, or OVMF_CODE_4M.ms.fd on Debian and Ubuntu) are needed if Secure Boot is to be enforced rather than merely available. A TPM 2.0 device should be added in the hardware configuration before the VM is started. With those two elements in place, the Windows 11 installer proceeds without complaint about the hardware requirements.
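For reference, the same two settings appear in the domain XML roughly as follows; firmware='efi' asks libvirt to select a suitable OVMF build itself, and the tpm element is what virt-manager's TPM 2.0 device produces:

```xml
<!-- request UEFI firmware; libvirt chooses an appropriate OVMF build -->
<os firmware='efi'>
  <type arch='x86_64' machine='q35'>hvm</type>
</os>

<!-- emulated TPM 2.0, backed by swtpm -->
<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>
```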

The VirtIO drivers ISO should be attached as a second virtual CD-ROM drive during installation. The installer will not find the storage device otherwise because the VirtIO disk controller is not a standard device that Windows recognises without a driver. When prompted to select an installation location and no disks appear, clicking "Load driver" and browsing to the VirtIO ISO resolves it.

During the out-of-box experience, Windows 11 requires a Microsoft account and an internet connection by default. To bypass this and create a local account instead, opening a command prompt with Shift+F10 and running the following works on the Home edition:

oobe\bypassnro

The machine restarts and presents an option to proceed without internet access.

Performance Considerations

KVM performance for a Windows 11 guest is generally good, but one factor specific to Windows 11 is worth understanding. Memory Integrity, also referred to as Hypervisor-Protected Code Integrity (HVCI), is a Windows security feature that uses virtualisation to protect the kernel. Running it inside a virtual machine creates nested virtualisation overhead because the guest is attempting to run its own virtualisation layer inside the host's. The performance impact is more pronounced on processors predating Intel Kaby Lake or AMD Zen 2, where the hardware support for nested virtualisation is less capable.

The CPU type selection in virt-manager also matters more than it might appear. Setting the CPU model to host-passthrough exposes the actual host CPU flags to the guest, which improves performance compared to emulated CPU models, at the cost of reduced portability if the VM image is ever moved to a different machine.
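In the domain XML, that choice is a single element; virt-manager's host-passthrough selection corresponds to:

```xml
<cpu mode='host-passthrough' check='none'/>
```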

Host File System Access and Clipboard Sharing

This was where the experience diverged most noticeably from VirtualBox. VirtualBox Guest Additions handle shared folders and clipboard integration as a single installation, and the result works reliably with minimal configuration. KVM requires separate solutions for each, and in 2022 neither was as seamless as it has since become.

Clipboard Sharing via SPICE

Clipboard sharing uses the SPICE display protocol rather than VNC. The VM needs a SPICE display and a virtio-serial controller, which virt-manager adds automatically when SPICE is selected. Within the Windows guest, the installer for SPICE guest tools provides the clipboard agent. Once installed, clipboard text passes between host and guest in both directions.

The critical dependency that caused problems in 2022 was the virtio-serial channel. Without a com.redhat.spice.0 character device present in the VM configuration, the clipboard agent installs successfully but does nothing. Virt-manager now adds this automatically when SPICE is selected, which removes one of the more common failure points.

Host Directory Sharing via Virtiofs

At the time of this investigation, the practical option for sharing files between the Linux host and a Windows guest was WebDAV, which worked but felt like a workaround. The proper solution, virtiofs, existed but was not yet well-supported on Windows guests. The situation has since improved to the point where virtiofs is now the standard recommended approach.

It requires three components: the virtiofsd daemon on the host (included in recent QEMU packages), the virtiofs driver from the VirtIO Windows drivers package and WinFsp, which is the Windows equivalent of FUSE. Once configured through virt-manager's file system hardware settings, the shared directory appears as a mapped drive in Windows Explorer. The virtiofsd daemon was also rewritten in Rust in the intervening period, improving both its reliability and performance.

To configure a shared directory, shared memory must first be enabled in the VM's memory settings, then a file system device added with the driver set to virtiofs, a source path on the host and an arbitrary mount tag. The corresponding libvirt XML looks like this:

<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <source dir='/home/user/shared'/>
  <target dir='host_share'/>
</filesystem>

This was the area where VirtualBox held a clear practical advantage in 2022. The gap has since narrowed considerably.

Migrating from VirtualBox

Moving existing VirtualBox VMs to KVM is possible using qemu-img, which converts between disk image formats. The straightforward conversion from VDI to QCOW2 is:

qemu-img convert -f vdi -O qcow2 windows11.vdi windows11.qcow2

For large images or where reliability is a concern, converting via an intermediate RAW format reduces the risk of issues:

qemu-img convert -f vdi -O raw windows11.vdi windows11.raw
qemu-img convert -f raw -O qcow2 windows11.raw windows11.qcow2

The resulting QCOW2 file can then be used when creating a new VM in virt-manager, selecting "Import existing disk image" rather than creating a new one.
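It is worth confirming a converted image with qemu-img info before attaching it to a VM. The sketch below demonstrates the check against a freshly created scratch image rather than a real Windows disk, and is guarded so it degrades gracefully where qemu-img is not installed:

```shell
# Verify an image the same way a converted windows11.qcow2 would be verified;
# a 1 GB scratch image stands in here so nothing real is touched.
if command -v qemu-img >/dev/null 2>&1; then
    qemu-img create -f qcow2 scratch.qcow2 1G
    result=$(qemu-img info scratch.qcow2)   # reports format, virtual size, disk size
    rm -f scratch.qcow2
else
    result="qemu-img not installed"
fi
echo "$result"
```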

How the Landscape Has Shifted Since

The investigation described here took place during a specific window: VirtualBox 6.1.x was the current release, Windows 11 had just launched, and KVM was the most practical route to TPM emulation on Linux. That context has changed in several ways worth noting for anyone reading this in 2026.

VirtualBox 7.0 arrived in October 2022 with TPM 1.2 and 2.0 support, Secure Boot and a number of additional improvements. The original reason for investigating KVM was resolved, and for those who had moved across during the gap period, returning to VirtualBox for Windows guests made sense given its more straightforward Guest Additions integration.

QEMU reached version 10.0 in April 2025, a significant milestone reflecting years of accumulated improvements to hardware emulation, storage performance and x86 guest support. Libvirt has kept pace, adding reliable internal snapshots for UEFI-based VMs, evdev input device hot plug and improved unprivileged user support. The virtiofs situation for Windows guests has moved from "technically possible but awkward" to "the recommended approach with good documentation and a rewritten daemon", which addresses the most significant practical shortcoming from 2022 directly.

The broader desktop virtualisation landscape shifted when VMware Workstation Pro became free for all users, including commercial ones, in November 2024. VMware Workstation Player was discontinued as a separate product at the same time, having become redundant once Workstation Pro was available at no cost. This gave desktop users a third credible option alongside VirtualBox and KVM, with VMware's historically strong Windows guest integration now accessible without a licence fee, though users of the free version are not entitled to support through the global support team.

The miniature PC market also expanded considerably from 2023 onwards, with Intel N100-based and AMD Ryzen Embedded systems offering enough performance to run Windows natively at modest cost. For many people, that proves a cleaner solution than any hypervisor, eliminating the integration limitations entirely by giving Windows its own dedicated hardware.

Final Assessment

KVM handled Windows 11 competently during a period when the alternatives could not, and the platform has continued to improve in the years since. The two areas that fell short in 2022, host file sharing and clipboard integration, have been addressed by developments in virtiofs and the SPICE tooling, and a new user starting today may find the experience noticeably smoother.

Whether KVM is the right choice in 2026 depends on the use case. For Linux-native workloads and server-style VM management, it remains the strongest option on Linux. For a Windows desktop guest where ease of integration matters most, VirtualBox 7.x and VMware Workstation Pro are both strong alternatives, with the latter now free to use for both commercial and personal purposes. The question that drove this investigation was answered by VirtualBox itself in October 2022. KVM provided a workable solution in the meantime, and the platform has only become more capable since then.

Additional Reading

How To Convert VirtualBox Disk Image (VDI) to Qcow2 format

How to enable TPM and secure boot on KVM?

Windows 11 on KVM – How to Install Step by Step?

Enable Virtualization-based Protection of Code Integrity in Microsoft Windows

A Practical Linux Administration Toolkit: Kernels, Storage, Filesystems, Transfers and Shell Completion

4th February 2026

Linux command-line administration has a way of beginning with a deceptively simple question that opens into several possible answers. Whether the task is checking which kernels are installed before an upgrade, mounting an NFS share for backup access, diagnosing low disk space, throttling a long-running sync job or wiring up tab completion, the right answer depends on context: the distribution, the file system type, the transport protocol and whether the need is a one-off action or a persistent configuration. This guide draws those everyday administrative themes into a single continuous reference.

Identifying Your System and Installed Kernels

Reading Distribution Information

A sensible place to begin any administration session is knowing exactly what you are working with. One quick approach is to read the release files directly:

cat /etc/*-release

On systems where bat is available (sometimes installed as batcat), the same files can be read with syntax highlighting using batcat /etc/*-release. Typical output on Ubuntu includes /etc/lsb-release and /etc/os-release, with values such as DISTRIB_ID=Ubuntu, VERSION_ID="20.04" and PRETTY_NAME="Ubuntu 20.04.6 LTS". Three additional commands, cat /etc/os-release, lsb_release -a and hostnamectl, each present the same underlying facts in slightly different formats, while uname -r reports the currently running kernel release in isolation. Adding more flags with uname -mrs extends the output to include the kernel name and machine hardware class, which on an older RHEL system might return something like Linux 2.6.18-8.1.14.el5 x86_64.
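Because /etc/os-release is deliberately written as shell variable assignments, a script can source it directly rather than parse it; a minimal sketch:

```shell
# /etc/os-release is shell-sourceable, so its fields become variables directly.
if [ -r /etc/os-release ]; then
    . /etc/os-release
fi
summary="Distribution: ${PRETTY_NAME:-unknown} | Kernel: $(uname -r)"
echo "$summary"
```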

Querying Installed Kernels by Package Manager

On Red Hat Enterprise Linux, CentOS, Rocky Linux, AlmaLinux, Oracle Linux and Fedora, installed kernels are managed by the RPM package database and are queried with:

rpm -qa kernel

This may return entries such as kernel-5.14.0-70.30.1.el9_0.x86_64. The same information is also accessible through yum list installed kernel or dnf list installed kernel. On Debian, Ubuntu, Linux Mint and Pop!_OS the package manager differs, so the command changes accordingly:

dpkg --list | grep linux-image

Output may include versioned packages, such as linux-image-2.6.20-15-generic, alongside the metapackage linux-image-generic. Arch Linux users can query with pacman -Q | grep linux, while SUSE Linux Enterprise and openSUSE users can turn to rpm -qa | grep -i kernel or use zypper search -i kernel, which presents results in a structured table. Alpine Linux takes yet another approach with apk info -vvv | grep -E 'Linux' | grep -iE 'lts|virt', which may return entries such as linux-virt-5.15.98-r0 - Linux lts kernel.
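Since each family needs a different query, a small dispatcher, sketched below with the mappings described above, can probe for whichever package manager is present (on SUSE systems the rpm branch wins, which is still a valid query there):

```shell
# Probe for a known package manager and report the matching kernel query.
found=none
for pm in dpkg rpm pacman zypper apk; do
    if command -v "$pm" >/dev/null 2>&1; then
        found=$pm
        break
    fi
done
case "$found" in
    dpkg)   query='dpkg --list | grep linux-image' ;;
    rpm)    query='rpm -qa kernel' ;;
    pacman) query='pacman -Q | grep linux' ;;
    zypper) query='zypper search -i kernel' ;;
    apk)    query='apk info -vvv | grep -iE "lts|virt"' ;;
    *)      query='none' ;;
esac
line="package manager: $found -> $query"
echo "$line"
```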

Finding Kernels Outside the Package Manager

Package databases do not always tell the whole story, particularly where custom-compiled kernels are involved. A kernel built and installed manually will not appear in any package manager query at all. In that case, /lib/modules/ is a useful place to look, since each installed kernel generally has a corresponding module directory. Running ls -l /lib/modules/ may show entries such as 4.15.0-55-generic, 4.18.0-25-generic and 5.0.0-23-generic. A further check is:

sudo find /boot/ -iname "vmlinuz*"

This may return files such as /boot/vmlinuz-5.4.0-65-generic and /boot/vmlinuz-5.4.0-66-generic, confirming precisely which versions exist on disk.
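Both locations can be swept in one pass; the sketch below reports whatever it finds and degrades gracefully on systems, containers in particular, where neither is populated:

```shell
# Cross-check kernel module trees against kernel images on disk.
modules=$(ls /lib/modules/ 2>/dev/null)
images=$(find /boot -name 'vmlinuz*' 2>/dev/null | sort)
report="module trees: ${modules:-none found}
kernel images: ${images:-none found}"
echo "$report"
```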

A Brief History of vmlinuz

That naming convention is worth understanding because it appears on virtually every Linux system. vmlinuz is the compressed, bootable Linux kernel image stored in /boot/. The name traces back through computing history: early Unix kernels were simply called /unix, but when the University of California, Berkeley ported Unix to the VAX architecture in 1979 and added paged virtual memory, the resulting system, 3BSD, was known as VMUNIX (Virtual Memory Unix) and its kernel images were named /vmunix. Linux inherited vmlinuz as a mutation of vmunix, with the trailing z denoting gzip compression (though other algorithms such as xz and lzma are also supported). The counterpart vmlinux refers to the uncompressed, non-bootable kernel file, which is used for debugging and symbol table generation but is not loaded directly at boot. Running ls -l /boot/ will show the full set of boot files present on any given system.

Examining and Investigating Disk Usage

Why ls Is Not the Right Tool for Directory Sizes

Storage management is an area where a familiar command can mislead. Running ls -l on a directory typically shows it occupying 4,096 bytes, which reflects the directory entry metadata rather than the combined size of its contents. For real space consumption, du is the appropriate tool.

sudo du -sh /var

The above command produces a summarised, human-readable total such as 85G /var. The -s flag limits output to a single grand total and -h formats values in K, M or G units. For an individual file, du -sh /var/log/syslog might report 12M /var/log/syslog, while ls -lh /var/log/syslog adds ownership and timestamps to the same figure.

Drilling Down to Find Where Space Has Gone

When a file system is full and the need is to locate exactly where the space has accumulated, du can be made progressively more revealing. The command sudo du -h --max-depth=1 /var lists first-level subdirectories with sizes, potentially showing 77G /var/lib, 5.0G /var/cache and 3.3G /var/log. To surface the biggest consumers quickly, piping to sort and head works well:

sudo du -h /var/ | sort -rh | head -10

Adding the -a flag includes individual files alongside directories in the same output:

sudo du -ah /var/ | sort -rh | head -10

Apparent Size Versus Allocated Disk Space

There is a subtle distinction that sometimes causes confusion. By default, du reports allocated disk usage, which is governed by the file system block size. A single-byte file on a file system with 4 KB blocks still consumes 4 KB of disk. To see the amount of data actually stored rather than allocated, sudo du -sh --apparent-size /var reports the apparent size instead. The df command answers a different question altogether: it shows free and used space per mounted file system, such as /dev/sda1 at 73 per cent usage or /dev/sdb1 mounted on /data with 70 GB free. In practice, du is for locating what consumes space and df is for checking how much remains on each volume.
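The distinction can be demonstrated on any system with a sparse file, which has a large apparent size but almost no allocated blocks:

```shell
# A 100 MB sparse file: truncate extends the apparent size without allocating
# blocks, so du's two views of the same file diverge completely.
tmp=$(mktemp -d)
truncate -s 100M "$tmp/sparse.img"
apparent=$(du -h --apparent-size "$tmp/sparse.img" | cut -f1)
allocated=$(du -h "$tmp/sparse.img" | cut -f1)
echo "apparent: $apparent, allocated: $allocated"   # typically: apparent: 100M, allocated: 0
rm -rf "$tmp"
```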

gdu: A Faster Interactive Alternative

Some administrators prefer a more modern tool for storage investigations, and gdu is a notable option. It is a fast disk usage analyser written in Go with an interactive console interface, designed primarily for SSDs where it can exploit parallel processing to full effect, though it functions on hard drives too with less dramatic speed gains. The binary release can be installed by extracting its .tgz archive:

curl -L https://github.com/dundee/gdu/releases/latest/download/gdu_linux_amd64.tgz | tar xz
chmod +x gdu_linux_amd64
sudo mv gdu_linux_amd64 /usr/bin/gdu

It can also be run directly via Docker without installation:

docker run --rm --init --interactive --tty --privileged \
  --volume /:/mnt/root ghcr.io/dundee/gdu /mnt/root

In use, gdu scans a directory interactively when run without flags, summarises a target with gdu -ps /some/dir, shows top results with gdu -t 10 / and runs without interaction using gdu -n /. It supports apparent size display, hidden file inclusion, item counts, modification times, exclusions, age filtering and database-backed analysis through SQLite or BadgerDB. The project documentation notes that hard links are counted only once and that analysis data can be exported as JSON for later review.

Unpacking TGZ Archives

A brief note on the tar command is useful here, since it appears throughout Linux administration, including in the gdu installation step above. A .tgz file is simply a GZIP-compressed tar archive, and the standard way to extract one is:

tar zxvf archive.tgz

Modern GNU tar can detect the compression type automatically, so the -z flag is often optional:

tar xvf archive.tgz

To extract into a specific directory rather than the current working directory, the -C option takes a destination path:

tar zxvf archive.tgz -C /path/to/destination/

To inspect the contents of a .tgz file without extracting it, the t (list) flag replaces x (extract):

tar ztvf archive.tgz

The tar command was first introduced in the seventh edition of Unix in January 1979 and its name comes from its original purpose as a Tape ARchiver. Despite that origin, modern tar reads from and writes to files, pipes and remote devices with equal facility.
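The create, list and extract forms combine naturally into a round trip; the following sketch exercises all three against a throwaway archive:

```shell
# Create a .tgz, list its contents without extracting, extract it elsewhere.
work=$(mktemp -d)
mkdir -p "$work/project"
echo "hello" > "$work/project/readme.txt"
tar zcf "$work/project.tgz" -C "$work" project   # c: create
listing=$(tar ztf "$work/project.tgz")           # t: list only
mkdir "$work/out"
tar zxf "$work/project.tgz" -C "$work/out"       # x: extract to a destination
content=$(cat "$work/out/project/readme.txt")
echo "listing: $listing"
echo "extracted content: $content"
rm -rf "$work"
```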

Mounting NFS Shares and Optical Media

Installing NFS Client Tools

NFS remains common on Linux and Unix-like systems, allowing remote directories to be mounted locally and treated as though they were native file systems. Before a client can mount an NFS export, the client packages must be installed. On Ubuntu and Debian, that means:

sudo apt update
sudo apt install nfs-common

On Fedora and RHEL-based distributions, the equivalent is:

sudo dnf install nfs-utils

Once installed, showmount -e 10.10.0.10 can list available exports from a server, returning output such as /backups 10.10.0.0/24 and /data *.

Mounting an NFS Share Manually

Mounting an NFS share follows the same broad pattern as mounting any other file system. First, create a local mount point:

sudo mkdir -p /var/backups

Then mount the remote export, specifying the file system type explicitly:

sudo mount -t nfs 10.10.0.10:/backups /var/backups

A successful command produces no output. Verification is done with mount | grep nfs or df -h, after which the local directory acts as the root of the remote file system for all practical purposes.

Persisting NFS Mounts Across Reboots

Since a manual mount does not survive a reboot, persistent setups use /etc/fstab. An appropriate entry looks like:

10.10.0.10:/backups /var/backups nfs defaults,nofail,_netdev 0 0

The nofail option prevents a boot failure if the NFS server is unavailable when the machine starts. The _netdev flag marks the mount as network-dependent, ensuring the system defers the operation until the network stack is available. Running sudo mount -a tests the entry without rebooting.

Troubleshooting Common NFS Errors

NFS problems are often predictable. A "Permission denied" error usually means the server export in /etc/exports does not include the client, and reloading exports with sudo exportfs -ar is frequently the remedy. "RPC: Program not registered" indicates the NFS service is not running on the server, in which case sudo systemctl restart nfs-server applies. A "Stale file handle" error generally follows a server reboot or a deleted file and is cleared by unmounting and remounting. Timeouts and "Server not responding" messages call for checking network connectivity, confirming that firewall rules permit access to port 111 (rpcbind, required for NFSv3) and port 2049 (NFS itself), and verifying NFS version compatibility using the vers=3 or vers=4 mount option. NFSv4 requires only port 2049, while NFSv2 and NFSv3 also require port 111. To detach a share, sudo umount /var/backups is the standard route, with fuser -m /var/backups helping to identify any processes still holding the mount open.

Mounting Optical Media

CDs and DVDs are less central than they once were, but some systems still need to read them. After inserting a disc, blkid can identify the block device path, which is typically /dev/sr0, and will report the file system type as iso9660. With a mount point created using sudo mkdir /mnt/cdrom, the disc is mounted with:

sudo mount /dev/sr0 /mnt/cdrom

The warning device write-protected, mounted read-only is expected for optical media and can be disregarded. CDs and DVDs use the ISO 9660 file system, a data-exchange standard designed to be readable across operating systems. Once mounted, the disc contents are accessible under /mnt/cdrom, and sudo umount /mnt/cdrom detaches it cleanly when work is complete.

Transferring Files Securely and Efficiently

Copying Files with scp

scp (Secure Copy) transfers files and directories between hosts over SSH, encrypting both data and authentication credentials in transit. Its basic syntax is:

scp [OPTIONS] [[user@]host:]source [[user@]host:]destination

The colon is how scp distinguishes between local and remote paths: a path without a colon is local. A typical upload from a local machine to a remote host looks like:

scp file.txt remote_username@10.10.0.2:/remote/directory

A download from a remote host to the local machine reverses the argument order:

scp remote_username@10.10.0.2:/remote/file.txt /local/directory

Commonly used options include -r for recursive directory copies, -p to preserve metadata such as modification times and permissions, -C for compression, -i for a specific private key, -l to cap bandwidth in Kbit/s and the uppercase -P to specify a non-standard SSH port. It is also possible to copy between two remote hosts directly, routing the transfer through the local machine with the -3 flag.

The Protocol Change in OpenSSH 9.0

There is an important change in modern OpenSSH that administrators should be aware of. From OpenSSH 9.0 onward, the scp command uses the SFTP protocol internally by default rather than the older SCP/RCP protocol, which is now considered outdated. The command behaves identically from the user's perspective, but if an older server requires the legacy protocol, the -O flag forces it. For advanced requirements such as resumable transfers or incremental directory synchronisation, rsync is generally the better fit, particularly for large directory trees.

Throttling rsync to Protect Bandwidth

Even with rsync, raw speed is not always desirable. A backup script consuming all available bandwidth can disrupt other services on the same network link, so --bwlimit is often essential. The basic syntax is:

rsync --bwlimit=KBPS source destination

The value is in units of 1,024 bytes unless an explicit suffix is added. A fractional value is also valid: --bwlimit=1.5m sets a cap of 1.5 MB/s. A local transfer capped at 1,000 KB/s looks like:

rsync --bwlimit=1000 /path/to/source /path/to/dest/

And a remote backup:

rsync --bwlimit=1000 /var/www/html/ backups@server1.example.com:~/mysite.backups/

The man page for rsync explains that --bwlimit works by limiting the size of the blocks rsync writes and then sleeping between writes to achieve the target average. Some fluctuation around the configured rate is therefore normal in practice.
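A capped copy is easy to try locally; the sketch below moves a 2 MB scratch file under a 1,000 KB/s limit and is guarded for systems without rsync installed:

```shell
# Local rsync under a bandwidth cap: 2 MB at ~1000 KB/s takes roughly two seconds.
if command -v rsync >/dev/null 2>&1; then
    src=$(mktemp -d); dst=$(mktemp -d)
    dd if=/dev/zero of="$src/blob" bs=1M count=2 2>/dev/null
    rsync --bwlimit=1000 "$src/blob" "$dst/"
    result="copied $(du -h "$dst/blob" | cut -f1)"
    rm -rf "$src" "$dst"
else
    result="rsync not installed"
fi
echo "$result"
```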

Managing I/O Priority with ionice

Bandwidth is only one dimension of the load a transfer places on a system. Disk I/O scheduling may also need attention, particularly on busy servers running other workloads. The ionice utility adjusts the I/O scheduling class and priority of a process without altering its CPU priority. For instance:

/usr/bin/ionice -c2 -n7 rsync --bwlimit=1000 /path/to/source /path/to/dest/

This runs the rsync process in best-effort I/O class (-c2) at the lowest priority level (-n7), combining transfer rate limiting with reduced I/O priority. The scheduling classes are: 0 (none), 1 (real-time), 2 (best-effort) and 3 (idle), with priority levels 0 to 7 available for the real-time and best-effort classes.

Together, --bwlimit and ionice provide complementary controls over exactly how much resource a routine transfer is permitted to consume at any given time.

Setting Up Bash Tab Completion

On Ubuntu and related distributions, Bash programmable completion is provided by the bash-completion package. If tab completion does not function as expected in a new installation or container environment, the following commands will install the necessary support:

sudo apt update
sudo apt upgrade
sudo apt install bash-completion

The package places a shell script at /etc/profile.d/bash_completion.sh. To ensure it is loaded in shell startup, the following appends the source line to .bashrc:

echo "source /etc/profile.d/bash_completion.sh" >> ~/.bashrc

A conditional form avoids duplicating the line on repeated runs:

grep -wq '^source /etc/profile.d/bash_completion.sh' ~/.bashrc \
  || echo 'source /etc/profile.d/bash_completion.sh' >> ~/.bashrc

The script is typically loaded automatically in a fresh login shell, but source /etc/profile.d/bash_completion.sh activates it immediately in the current session. Once active, pressing Tab after partial input such as sudo apt i or cat /etc/re completes commands and paths against what is actually installed. Bash also supports simple custom completions: complete -W 'google.com cyberciti.biz nixcraft.com' host teaches the shell to offer those three domains after typing host and pressing Tab, which illustrates how the feature can be extended to match the patterns of repeated daily work.
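The effect of that custom completion can be previewed without pressing Tab at all, because compgen is the matching engine that complete uses underneath:

```shell
# compgen filters a word list against a partial token exactly as Tab would.
match=$(bash -c "compgen -W 'google.com cyberciti.biz nixcraft.com' -- goo")
echo "completion for 'goo': $match"   # prints: completion for 'goo': google.com
```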

Installing Snap on Debian

Snap is a packaging format developed by Canonical that bundles an application together with all of its dependencies into a single self-contained package. Snaps update automatically, roll back gracefully on failure and are distributed through the Snap Store, which carries software from both Canonical and independent publishers. The background service that manages them, snapd, is pre-installed on Ubuntu but requires a manual setup step on Debian.

On Debian 9 (Stretch) and newer, snap can be installed directly from the command line:

sudo apt update
sudo apt install snapd

After installation, logging out and back in again, or restarting the system, is necessary to ensure that snap's paths are updated correctly in the environment. Once that is done, install the snapd snap itself to obtain the latest version of the daemon:

sudo snap install snapd

To verify that the setup is working, the hello-world snap provides a straightforward test:

sudo snap install hello-world
hello-world

A successful run prints Hello World! to the terminal. Note that snap is not available on Debian versions before 9. If a snap installation produces an error such as snap "lxd" assumes unsupported features, the resolution is to ensure the core snap is present and current:

sudo snap install core
sudo snap refresh core

On desktop systems, the Snap Store graphical application can then be installed with sudo snap install snap-store, providing a point-and-click interface for browsing and managing snaps alongside the command-line tools.

Increasing the Root Partition Size on Fedora with LVM

Fedora's default installer has used LVM (Logical Volume Manager) for many years, dividing the available disk into a volume group containing separate logical volumes for root (/), home (/home) and swap. This arrangement makes it straightforward to redistribute space between volumes without repartitioning the physical disk, which is a significant advantage over a fixed partition layout. Note that Fedora 33 and later default to Btrfs without LVM for new installations, so the steps below apply to systems that were installed with LVM, including pre-Fedora 33 installs and any system where LVM was selected manually.

Because the root file system is in active use while the system is running, resizing it safely requires booting from a Fedora Live USB stick rather than the installed system. Once booted from the live environment, open a terminal and begin by checking the volume group:

sudo vgs

Output such as the following shows the volume group name, total size and, crucially, how much free space (VFree) is unallocated:

  VG     #PV #LV #SN Attr   VSize    VFree
  fedora   1   3   0 wz--n- <237.28g    0

Before proceeding, confirm the exact device mapper paths for the root and home logical volumes by running fdisk -l, since the volume group name varies between installations. Common names include /dev/mapper/fedora-root and /dev/mapper/fedora-home, though some systems use fedora00 or another prefix.
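As an alternative to scanning fdisk -l output, the LVM tools can list the volumes and their paths directly. This is a sketch, and the exact columns available may vary by version:

```shell
# List logical volumes with their volume group and size, then show the
# device-mapper paths that lvresize expects.
sudo lvs -o lv_name,vg_name,lv_size
ls -l /dev/mapper/
```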

When Free Space Is Already Available

If VFree shows unallocated space in the volume group, the root logical volume can be extended directly and the file system resized in a single command:

sudo lvresize -L +5G --resizefs /dev/mapper/fedora-root

The --resizefs flag instructs lvresize to resize the file system at the same time as the logical volume, removing the need to run resize2fs separately.

When There Is No Free Space

If VFree is zero, space must first be reclaimed from another logical volume before it can be given to root. The most common approach is to shrink the home logical volume, which typically holds the most available headroom. Shrinking a file system involves moving data on disk, so the operation requires the volume to be unmounted, which is why the live environment is essential. To take 10 GB from home:

sudo lvresize -L -10G --resizefs /dev/mapper/fedora-home

Once that completes, the freed space appears as VFree in vgs and can be added to the root volume:

sudo lvresize -L +10G --resizefs /dev/mapper/fedora-root

Both steps use --resizefs so that the file system boundaries are updated alongside the logical volume boundaries. After rebooting back into the installed system, df -h will confirm the new sizes are in effect.

Keeping a Linux System Well Maintained

The commands and configurations covered above form a coherent body of everyday Linux administration practice: where installed kernels are recorded, how to measure real disk usage rather than directory metadata, how to attach local and network file systems correctly, how to extract archives and move data securely without disrupting shared resources, how to make the shell itself more productive, how to extend a Debian system with snap packages, and how to redistribute disk space between LVM volumes on Fedora. Together, these turn a scattered collection of one-liners into a reliable working toolkit. Each topic interconnects naturally with the others: a kernel query clarifies what system you are managing, disk investigation reveals whether a file system has room for what you plan to transfer, NFS mounting determines where that transfer will land, and bandwidth control determines what impact it will have while it runs.

Keyboard remapping on macOS with Karabiner-Elements for cross-platform work

20th November 2025

This is something that I have been planning to share for a while: working across macOS, Linux and Windows poses a challenge to muscle memory when it comes to keyboard shortcuts. Since the macOS setup varies from the others, it was that which I set out to harmonise with them. Though the result is not full compatibility, it is close enough for my needs.

The need led me to install Karabiner-Elements and Karabiner-EventViewer. The latter has its uses for identifying which key is which on a keyboard, which happens to be essential when you are not using a Mac keyboard. While it is not needed all the time, the tool is a godsend when doing key mappings.

Karabiner-Elements is what holds the key mappings and needs to run all the time for them to be active. Some are simple and others are complex; it helps that the website is laden with examples of the latter. Perhaps that is also how an LLM can advise on setting things up. Before we come to the complex ones that I use, here are the simple mappings that are active on my Mac Mini:

left_command → left_control

left_control → left_command

This swaps the left-hand Command and Control keys while leaving their right-hand counterparts alone. That preserves the original behaviour for some cases while changing it for the keys that I use the most. However, I now find that I need to use the Command key in the Terminal instead of the Control counterpart that I used before the change, a counterintuitive situation that I overlook given how often the swap is needed in other places like remote Linux and Windows sessions.
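For reference, a swap like this lives under simple_modifications in Karabiner-Elements' configuration file (~/.config/karabiner/karabiner.json). The fragment below is a sketch of that section; the exact schema can vary between versions:

{
    "simple_modifications": [
        {
            "from": { "key_code": "left_command" },
            "to": [{ "key_code": "left_control" }]
        },
        {
            "from": { "key_code": "left_control" },
            "to": [{ "key_code": "left_command" }]
        }
    ]
}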

grave_accent_and_tilde → non_us_backslash

non_us_backslash → non_us_pound

non_us_pound → grave_accent_and_tilde

It took a while to get this three-way switch figured out, and it is a bit fiddly too. All the effort was in the name of getting the backslash and hash (pound in the US) keys the right way around for me, especially in those remote desktop sessions. What made it really tricky was the need to deal with Shift key behaviour, which necessitated the following script:

{
    "description": "Map grave/tilde key to # and ~ (forced behaviour, detects Shift)",
    "manipulators": [
        {
            "conditions": [
                {
                    "name": "shift_held",
                    "type": "variable_if",
                    "value": 1
                }
            ],
            "from": {
                "key_code": "grave_accent_and_tilde",
                "modifiers": { "optional": ["any"] }
            },
            "to": [{ "shell_command": "osascript -e 'tell application \"System Events\" to keystroke \"~\"'" }],
            "type": "basic"
        },
        {
            "conditions": [
                {
                    "name": "shift_held",
                    "type": "variable_unless",
                    "value": 1
                }
            ],
            "from": {
                "key_code": "grave_accent_and_tilde",
                "modifiers": { "optional": ["any"] }
            },
            "to": [
                {
                    "key_code": "3",
                    "modifiers": ["option"]
                }
            ],
            "type": "basic"
        },
        {
            "from": { "key_code": "left_shift" },
            "to": [
                {
                    "set_variable": {
                        "name": "shift_held",
                        "value": 1
                    }
                },
                { "key_code": "left_shift" }
            ],
            "to_after_key_up": [
                {
                    "set_variable": {
                        "name": "shift_held",
                        "value": 0
                    }
                }
            ],
            "type": "basic"
        },
        {
            "from": { "key_code": "right_shift" },
            "to": [
                {
                    "set_variable": {
                        "name": "shift_held",
                        "value": 1
                    }
                },
                { "key_code": "right_shift" }
            ],
            "to_after_key_up": [
                {
                    "set_variable": {
                        "name": "shift_held",
                        "value": 0
                    }
                }
            ],
            "type": "basic"
        }
    ]
}

Here, I resorted to AI to help get this put in place, and even then there was a good deal of toing and froing before the setup worked well. After that, it was time to get the quote (") and at (@) symbols assigned to where I was used to having them on a British English keyboard:

{
    "description": "Swap @ and \" keys (Shift+2 and Shift+quote)",
    "manipulators": [
        {
            "from": {
                "key_code": "2",
                "modifiers": {
                    "mandatory": ["shift"],
                    "optional": ["any"]
                }
            },
            "to": [
                {
                    "key_code": "quote",
                    "modifiers": ["shift"]
                }
            ],
            "type": "basic"
        },
        {
            "from": {
                "key_code": "quote",
                "modifiers": {
                    "mandatory": ["shift"],
                    "optional": ["any"]
                }
            },
            "to": [
                {
                    "key_code": "2",
                    "modifiers": ["shift"]
                }
            ],
            "type": "basic"
        }
    ]
}

The above was possibly one of the first changes that I made, and it took less time than some of the others that came after it. There was another at the end that was even simpler again: neutralising the Caps Lock key. That came up while I was perusing the Karabiner-Elements website, so here it is:

{
    "manipulators": [
        {
            "description": "Change caps_lock to command+control+option+shift.",
            "from": {
                "key_code": "caps_lock",
                "modifiers": { "optional": ["any"] }
            },
            "to": [
                {
                    "key_code": "left_shift",
                    "modifiers": ["left_command", "left_control", "left_option"]
                }
            ],
            "type": "basic"
        }
    ]
}

That was the simplest of the lot to deploy, being a straightforward copy and paste effort. It also halted mishaps where butter-fingered actions on the keyboard activated capitals when I did not need them. While there are occasions when the facility would have its uses, I have not noticed its absence since putting this in place.

At the end of all the tinkering, I now have a set-up that works well for me. While possible enhancements may include changing the cursor positioning and corresponding highlighting behaviours, I am happy to leave these aside for now. Compatibility with British and Irish keyboards, together with smoother working in remote sessions, was what I sought, and I largely have that. Thus, I have no complaints so far.
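As a practical note, complex rules like the JSON above can be saved where Karabiner-Elements looks for importable rules and then enabled from its Complex Modifications tab. This sketch assumes a rule file name of my own invention:

```shell
# Hypothetical file name; copy a rule file to where Karabiner-Elements looks
# for importable complex modifications, then enable it in the app's
# Complex Modifications tab.
rules_dir="$HOME/.config/karabiner/assets/complex_modifications"
mkdir -p "$rules_dir"
if [ -f grave_hash_remap.json ]; then
    cp grave_hash_remap.json "$rules_dir/"
fi
```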

Generating Git commit messages automatically using aicommit and OpenAI

25th October 2025

One of the overheads of using version control systems like Subversion or Git is the need to create descriptive messages for each revision. Now that GitHub has Copilot, it can generate those messages for you. However, that still leaves anyone with a local Git repository out in the cold, even if you are uploading to GitHub as your remote repo.

One thing that a Homebrew update does is to highlight other packages that are available, which is how I came to learn of a tool that helps with this: aicommit. Installing it is just a simple command away:

brew install aicommit

Once that is complete, you have a tool that generates messages describing every commit using GPT. For it to work, you need to get yourself set up with OpenAI's API services and generate a token that you can use. That token then needs to be set in an environment variable to make it available. On Linux (and Mac), this works:

export OPENAI_API_KEY=<Enter the API token here, without the brackets>
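To avoid setting this in every new session, the export can be appended to the shell profile. A sketch follows, with the profile path depending on the shell in use:

```shell
# Append the export to the shell profile so the key survives new sessions;
# ~/.bashrc is assumed here, with ~/.zshrc being usual on macOS.
profile="$HOME/.bashrc"
echo 'export OPENAI_API_KEY=<Enter the API token here, without the brackets>' >> "$profile"
```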

Because I use this API for Python scripting, that part was already in place. Thus, I could proceed to the next stage: inserting it into my workflow. For the sake of added automation, this uses shell scripting on my machines. The basic sequence is this:

git add .
git commit -m "<a default message>"
git push

The first line above stages everything while the second commits the files with an associated message (git makes this mandatory, much like Subversion) and the third pushes the files into the GitHub repository. Fitting in aicommit then changes the above to this:

git add .
aicommit
git push

There is now no need to define a message because aicommit does that for you, saving some effort. However, token limitations on the OpenAI side mean that the aicommit command can fail, causing the update operation to abort. Thus, it is safer to catch that situation using the following code:

git add .
if ! aicommit 2>/dev/null; then
    echo "  aicommit failed, using fallback"
    git commit -m "<a default message>"
fi
git push

This now informs me of what has happened when the AI option is overloaded, and the scripts fall back to a default option that is always available with Git. While there is more to my Git scripting than this, the snippets included here should get across how things can work. They work well for small push operations, which is what happens most of the time; usually, I do not attempt more than that.
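Putting the pieces together, the whole routine can be wrapped in one shell function; the function name and fallback message here are my own placeholders:

```shell
# A hypothetical wrapper for the whole routine: stage everything, let
# aicommit write the message, fall back to a fixed one if it fails, push.
push_with_ai_message() {
    git add .
    if ! aicommit 2>/dev/null; then
        echo "aicommit failed, using fallback"
        git commit -m "<a default message>"
    fi
    git push
}
```

Calling push_with_ai_message from a repository's root then performs the whole sequence in one step.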

And then there was one...

25th August 2025

Even with the rise of the internet, magazine publishing was in rude health at the end of the last century. That persisted through the first decade of this century before any decline began to be noticed. Now, things are a poor shadow of what once used to be.

Though a niche area with some added interest, Linux magazine publishing has not been able to escape this trend. At the height of the flourish, we had the launch of Linux Voice by former writers of Linux Format. That only continues as a section in Linux Magazine these days, one of the first casualties from a general decline in readership.

Next, it was the turn of Linux User & Developer. This came into the hands of Future when they acquired Imagine Publishing and remained in print for a time after that due to the conditions of the deal set by the competition regulator. Once beyond that, Future closed it to stick with its own more lightweight title, Linux Format, transferring some of the writers across to the latter. This echoed what they did with Web Designer; they closed that in favour of .Net.

In both cases, dispatching the in-house publications might have been a better idea; without content, you cannot have readers. In time, .Net met its demise, and the same fate awaited Linux Format a few months ago, after it had celebrated its silver jubilee. There was no long goodbye this time; the final edition went out with a bang, reminding us of glory days that have passed. The content had been becoming more diverse, with hardware features being included, perhaps reflecting that something else was happening behind the scenes.

After all that, Linux Magazine soldiers on, outliving the others. It perhaps helps that it is published by an operation centred around the title, Linux New Media, which also publishes Admin Magazine. Even so, it too has needed to trim things a bit: for a time, there was a dedicated Raspberry Pi title, which now lives on as a section in Linux Magazine.

It was while sprucing up some of the content on here that I was reminded of the changes. Where do you find surveys of what is available? The options could include desktop environments and Linux distributions as much as code editors, web browsers and other kinds of software. While it is true that operations like Future have portals for exactly this kind of thing, they too have a nemesis in the form of GenAI. Asking ChatGPT may be their undoing as much as web publishing triggered a decline in magazine publishing.

Technology upheaval means destruction as much as creation, and there are occasions when spending quiet time reading a paper periodical is just what you need. We all need time away from screen reading, whatever the device happens to be.

Upheaval and miniaturisation

4th March 2025

The ongoing AI boom got me refreshing my computer assets. One was a hefty upgrade to my main workstation, still powered by Linux. Along the way, I learned a few lessons:

  • Processing with LLMs only works on a graphics card when everything can remain within its onboard memory. Given the limited memory you get on consumer graphics cards, it is all too easy to spill over into system memory and CPU usage. That applies even to the latest and greatest from Nvidia, where the main use case is gaming; things become prohibitively expensive when you go beyond that.
  • Even with water cooling, keeping a top of the range CPU cool and its fans running quietly remains a challenge, more so than when I last went for a major upgrade. It takes time for things to settle down.
  • My Iiyama monitor now feels flaky with input from the latest technology, enough to make me look for a replacement. Waking up from dormancy is the real issue: while it was always slow, unplugging it from mains electricity and plugging it back in is a hack that is needed all too often.
  • KVM switches may need upgrading to work with the latest graphical input. The monitor may have been a culprit with the problems that I was getting, yet things were smoother once I replaced the unit that I had been using with another that is more modern.
  • AMD Ryzen 9 chips now have onboard graphics, a boon when things are not proceeding too well with a dedicated graphics card. Even though that was not the case when the last major upgrade happened, there were no issues then like those I faced this time around.
  • Having LEDs on a motherboard to tell you what might be stopping system start-up is invaluable. This helped in July 2021 and averted confusion this time around as well. While only four of them were on offer, knowing which of CPU, DRAM, GPU or system boot needs attention is a big help.
  • Optical drives are not needed any longer. Booting off a USB drive was enough to get Linux Mint installed, once I got the image loaded onto it properly. Rufus was used, and I needed to select the low-level writing option before things proceeded as I had hoped.

Just like 2021, the 2025 upgrade cycle needed a few weeks for everything to settle down. The previous cycle was more challenging, and this was not just because of an accompanying heatwave. The latest one was not so bedevilled.

Given the above, one might be tempted to go for a less arduous path, like my acquisition of an iMac last year for another place that I own. After all, a Mac Mini packs in quite a lot of power, and it is not the only miniature option. Now that I have one, I have moved image processing off the workstation and onto it. The images are stored on the Linux machine and edited on the Mac, which has plenty of memory and storage of its own. There is also an M4 chip, so processing power is not lacking either.

It could have been used for work affairs, yet I acquired a Geekom A8 for just that. Though seeking work as I write this, my being an incorporated freelancer means that having a dedicated machine that uses my main monitor has its advantages. Virtualisation can allow personal affairs to drift into business matters; that is not so easy when a separate machine is involved. There is no shortage of power either, with an AMD Ryzen 9 8945HS and Radeon 780M graphics on board. Add in 32 GB of memory and 2 TB of storage, and all is commodious. It can be surprising what a small package can do.

The Iiyama's travails also pop up with these smaller machines, though less so with the Geekom than with the Mac. The latter needs the HDMI cable to be removed and reinserted after a delay to sort things out. Maybe that new monitor might not be such an off-the-wall idea after all.

Fixing an Ansible warning about boolean type conversion

27th October 2022

My primary use for Ansible is doing system updates using the inbuilt apt module. Recently, I updated my main system to Linux Mint 21 and a few things like Ansible stopped working. Removing instances that I had added with pip3 sorted the problem, but I then ran playbooks manually, only for various warning messages to appear that I had not noticed before. What follows below is one of these.

[WARNING]: The value True (type bool) in a string field was converted to u'True' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

The message is not so clear in some ways, not least because it had me looking for a boolean value of True when it should have been yes. A search on the web revealed something about the apt module that surprised me: the value of the upgrade parameter is a string, where others like it take boolean values of yes or no. Thus, I had passed a bareword of yes when it should have been declared in quotes as "yes". To my mind, this is an inconsistency, but I have changed things anyway to get rid of the message.
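To illustrate, a task along these lines (a sketch based on the documented apt module options) uses the warning-free form; note the quotes around "yes":

- name: Apply all available package updates
  become: true
  ansible.builtin.apt:
    update_cache: true
    upgrade: "yes"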

Resolving VirtualBox Shared Folders problems in Fedora 32 through a correct Guest Additions installation

26th July 2020

While some Linux distros like Fedora install VirtualBox drivers during installation time, I prefer to install the VirtualBox Guest Additions themselves. Before doing this, it is best to remove the virtualbox-guest-additions package from Fedora to avoid conflicts. After that, execute the following command to ensure that all prerequisites for the VirtualBox Guest Additions are in place before mounting the VirtualBox Guest Additions ISO image and installing from there:

sudo dnf -y install gcc automake make kernel-headers dkms bzip2 libxcrypt-compat kernel-devel perl
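For completeness, the mount-and-install steps usually look something like the following, assuming the Guest Additions ISO has been attached through the VM's Devices menu; the mount point here is an arbitrary example:

```shell
# Mount the attached Guest Additions ISO and run its installer; the
# /media/vboxga path is an example, not a fixed location.
sudo mkdir -p /media/vboxga
sudo mount /dev/cdrom /media/vboxga
sudo sh /media/vboxga/VBoxLinuxAdditions.run
sudo umount /media/vboxga
```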

During the installation, you may encounter a message like the following:

ValueError: File context for /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf already defined

This is generated by SELinux, so the following commands need to be executed before repeating the installation of VirtualBox Guest Additions:

sudo semanage fcontext -d /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf
sudo restorecon /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf

Without doing the above step to fix the preceding error message, I had an issue with the mounting of Shared Folders whereby the mount point was set up, but no folder contents were displayed. This happened even when my user account was added to the vboxsf group, and the SELinux context issue proved to be the cause.

Generating PNG files in SAS using ODS Graphics

21st December 2019

Recently, someone asked me how to create PNG files in SAS using ODS Graphics, so I sought out the answer for them. Normally, the suggestion would have been to create RTF or PDF files instead, but there was a specific requirement that called for a different approach. Adding something like the following lines before an SGPLOT, SGPANEL or SGRENDER procedure should do the needful:

ods listing gpath='E:\';
ods graphics / imagename="test" imagefmt=png;

Here, the ODS LISTING statement declares the destination for the desired graphics file, while the ODS GRAPHICS statement defines the file name and type. In the above example, the file test.png would be created in the root of the E drive of a Windows machine. However, this also works with Linux or UNIX directory paths.

Sorting out sluggish start-up and shutdown times in Linux Mint 19

9th August 2018

The Linux Mint team never forces users to upgrade to the latest version of their distribution, but curiosity often provides a strong enough impulse for me to do so. When I encounter rough edges, the wisdom of leaving things unchanged becomes apparent. Nevertheless, the process brings learning opportunities, which I am sharing in this post. It also allows me to collect various useful titbits that might help others.

Again, I went with the in-situ upgrade option, though the addition of the Timeshift backup tool means that it is less frowned upon than once would have been the case. It worked well too, apart from slow start-up and shutdown times, so I set about tracking down the causes on the two machines that I have running Linux Mint. As it happens, the cause was different on each machine.

On one PC, it was networking that was holding things up. The cause was my specifying a fixed IP address in /etc/network/interfaces instead of using the Network Settings GUI tool. Resetting the configuration file back to its defaults and using the Cinnamon settings interface took away the delays. It was inspecting /var/log/boot.log that highlighted the problem, so that is worth checking if I ever encounter slow start times again.

As I mentioned earlier, the second PC had a very different problem, though it also involved a configuration file. What had happened was that /etc/initramfs-tools/conf.d/resume contained the wrong UUID for my system's swap drive, so I was seeing messages like the following:

W: initramfs-tools configuration sets RESUME=UUID=<specified UUID for swap partition>
W: but no matching swap device is available.
I: The initramfs will attempt to resume from <specified file system location>
I: (UUID=<specified UUID for swap partition>)
I: Set the RESUME variable to override this.
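The value that the file should contain is the swap partition's real UUID, which blkid reports. As a sketch, this hypothetical helper pulls out the UUID currently recorded in the file for comparison:

```shell
# Print the UUID recorded in a resume configuration file, for comparison
# with the real swap UUID from: sudo blkid -t TYPE=swap -s UUID -o value
resume_uuid() {
    sed -n 's/^RESUME=UUID=//p' "$1"
}

# Example: resume_uuid /etc/initramfs-tools/conf.d/resume
```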

Correcting the file and executing the following command resolved the issue by updating the affected initramfs image for all installed kernels and speeded up PC start-up times:

sudo update-initramfs -u -k all

Though it was not a cause of system sluggishness, I also sorted out another message that I kept seeing during kernel updates and removals on both machines. It had been there for a while, causing warnings about my system locale not being recognised. The problem has been described elsewhere as follows: /usr/share/initramfs-tools/hooks/root_locale is expecting to see individual locale directories in /usr/lib/locale, but locale-gen is configured to generate an archive file by default. Issuing the following command sorted that:

sudo locale-gen --purge --no-archive

Following these fixes, my new Linux Mint 19 installations have stabilised, with speedier start-up and shutdown times. That allows me to look at what is on Flathub, to see what applications are available and whether they get updated to the latest versions on an ongoing basis. That may be a topic for another entry on here, but the applications that I have tried work well so far.
