Technology Tales

Notes drawn from experiences in consumer and enterprise technology

Welcome

  • We live in a world where computer software permeates every life and is embedded in nearly every piece of hardware that we have. Everything is digital now, and the scope extends well beyond the computers and cameras that were the main points of encounter when this website was started.

  • As if that were not enough, AI is inexorably encroaching on our daily lives. The possibilities are becoming so endless that one needs to be careful not to get lost in it all. Novelty beguiles us now as much as it did when personal computing became widely available decades ago, and again when the internet followed. Excitement has returned, at least for a while.

  • All this percolates through what is here, so dive in and see what you can find!

Sending email reliably from Linux web servers and WordPress websites

6th March 2026

Setting up a new VPS brought with it the need to get email services working at the operating system level, and that particular endeavour proved stubbornly resistant to resolution. What eventually broke the deadlock was stepping back from the OS entirely and turning to the Post SMTP plugin for WordPress, which handled the complexity of Gmail authentication cleanly and got things moving. Since then, things have moved on again: Proton Mail, with a subscription upgrade that adds a custom domain and an associated address, now handles outgoing mail seamlessly.

Sending email from a server involves block lists, authentication requirements and cloud provider restrictions that have a habit of turning a simple task into an extended troubleshooting session. That applies even to the well-established approaches: Postfix or Sendmail, relaying through Gmail or specialist SMTP providers, continue to serve administrators reliably with minimal overhead, yet the list of things to do and manage at the server level is not a short one.

What follows draws together practical guidance from the Linode documentation on configuring Postfix with external SMTP servers, the Linode guide on relaying through Gmail specifically, the Cloudways walkthrough for Post SMTP on WordPress and the linuxconfig.org guide to Sendmail with Gmail, keeping to fundamentals that have changed little even as the surrounding tooling has moved on. In some ways, the sources are disparate yet complementary, something that is reflected in the rest of what you find below. The whole intent is to have all of this on file for everyone.

Starting with the Environment

A sensible first step is to examine the environment in which the server runs. Some cloud platforms, including Akamai Cloud's Linode Compute Instances for certain new accounts, block outbound SMTP ports 25, 465 and 587 by default to combat abuse. This blocking prevents applications from sending any email until the restrictions are lifted. The remedy is procedural rather than technical: platform documentation explains how to request removal of these restrictions after acknowledging the provider's email policies. Tackling this early avoids fruitless troubleshooting later on.

Alongside the port restriction check, it is worth setting a proper fully qualified domain name (FQDN) on the host and applying all available system updates. A correctly configured hostname reduces delays during mailer start-up and helps ensure that headers and logs appear coherent to downstream systems. Basic checks, such as confirming that you can log in to the mail account you intend to use as a relay, will also spare time later.
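A quick way to see what the mailer will announce is to query the hostname directly. This is a minimal sketch assuming a typical systemd-style host; the FQDN shown in the comment is a placeholder:

```shell
# Show the name Postfix will use as its identity; a result without a
# dot suggests the FQDN has not been set yet.
fqdn=$(hostname -f 2>/dev/null || hostname)
echo "mailer identity: ${fqdn}"
case "$fqdn" in
  *.*) echo "looks fully qualified" ;;
  *)   echo "no domain part: set one, e.g. sudo hostnamectl set-hostname mail.example.com" ;;
esac
```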

Configuring Postfix on Debian and Ubuntu

On Debian or Ubuntu, Postfix offers a straightforward route to sending mail via a relay. Installing the required packages begins with apt-get update followed by apt-get install of libsasl2-modules and postfix. The installer will prompt for a general type of mail configuration, and choosing "Internet Site" is appropriate in this scenario. The System Mail Name should then be set to the domain through which you intend to send.

After installation, verify that the myhostname parameter in /etc/postfix/main.cf reflects the server's FQDN, for example:

myhostname = fqdn.example.com

This setting anchors Postfix's identity and helps downstream receivers interpret messages correctly. The myhostname value is also used in outgoing SMTP greetings, so accuracy matters.

Relaying through Gmail

Relaying through Gmail or Google Workspace adds a layer of security considerations that are worth understanding before proceeding. Google retired its "less secure apps" feature in 2024, which had previously allowed basic username-and-password authentication over SMTP. All third-party SMTP connections now require either OAuth or an app password, and traditional password-based authentication is no longer accepted. Google is also pushing towards passkeys as a replacement for passwords at the account sign-in level, though their practical applicability to server-level SMTP relay remains limited for now. App passwords, whilst still available, are presented by Google as a transitional measure rather than a long-term solution, so OAuth is the more future-proof path where it is supported.

Where two-step verification is enabled on a Google account, the recommended approach for Postfix relay is to generate an app password. Within the Google Account security settings, enabling two-step verification unlocks the ability to create app passwords under the "How you sign in to Google" section. Choosing a descriptive name such as "Postfix" keeps records intelligible, and the resulting 16-character password should be stored securely since it will not be displayed again. This app password is then used in place of your regular account password throughout the Postfix configuration.

Storing SMTP Credentials

With credentials in hand, Postfix needs to know how to authenticate to the relay. Depending on the guide you follow, credentials may be stored at /etc/postfix/sasl/sasl_passwd or at /etc/postfix/sasl_passwd. Either location works as long as the corresponding path is referenced correctly in main.cf. In the credentials file, the entry for Gmail using STARTTLS on port 587 takes this form:

[smtp.gmail.com]:587 username@gmail.com:app-password

The square brackets around the hostname instruct Postfix not to perform MX lookups for that host, ensuring it connects directly to the submission server. After saving the file, create the hash database with postmap, using whichever path you chose:

sudo postmap /etc/postfix/sasl/sasl_passwd

This produces a .db file that Postfix consults at run-time. Because both the plain-text file and the .db file contain credentials, you should tighten permissions so that only root can read or write them:

sudo chown root:root /etc/postfix/sasl/sasl_passwd /etc/postfix/sasl/sasl_passwd.db
sudo chmod 0600 /etc/postfix/sasl/sasl_passwd /etc/postfix/sasl/sasl_passwd.db

Configuring the Gmail Relay

The relay configuration in /etc/postfix/main.cf forms the core of the setup. Setting relayhost to [smtp.gmail.com]:587 instructs Postfix to deliver all mail via Google's submission server. At the end of the file, the following block enables SASL authentication and STARTTLS, points to the hash file created earlier, disallows anonymous mechanisms and specifies the CA bundle for TLS verification:

relayhost = [smtp.gmail.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd
smtp_tls_security_level = encrypt
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt

Restarting Postfix applies the changes:

sudo systemctl restart postfix

A simple test uses Postfix's built-in sendmail implementation. Invoking sendmail recipient@elsewhere.com, then entering From: and Subject: headers followed by a message body and a single dot on a line by itself, is sufficient to trigger a delivery attempt. Watching sudo tail -f /var/log/syslog (or /var/log/mail.log on some distributions) while testing helps confirm that authentication and delivery are succeeding; should you need to abandon a message part-way, Ctrl+C returns you to the shell.
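The interactive session described above can also be scripted, which makes repeat testing less tedious. A sketch follows, with placeholder addresses and a guard so that it does nothing on hosts where no mailer is installed:

```shell
# Non-interactive relay test; both addresses are placeholders.
if command -v sendmail >/dev/null 2>&1; then
  sendmail recipient@example.com <<'EOF'
From: admin@example.com
Subject: Postfix relay test

If this message arrives, the relay is working.
EOF
  echo "message handed to sendmail; watch the mail log"
else
  echo "sendmail not found; configure Postfix first"
fi
```

Because the message body arrives on standard input, no terminating dot is needed here; end-of-file serves the same purpose.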

Using Third-Party SMTP Providers

The same relay pattern works with third-party SMTP providers that specialise in transactional delivery. Service-specific details differ only in hostnames and credentials, while the underlying mechanism remains identical.

For Mandrill (Mailchimp's transactional email service), the credentials file would contain:

[smtp.mandrillapp.com]:587 USERNAME:API_KEY

The relayhost line in main.cf becomes [smtp.mandrillapp.com]:587. Note that the password field takes an API key, not your Mailchimp account password. Running postmap on the credentials file and restarting Postfix completes the switch.

For SendGrid, the entry for credentials is:

[smtp.sendgrid.net]:587 apikey:YOUR_API_KEY

Here, the username is the literal string apikey (not your account name), and the password is the API key generated within your SendGrid account. The relayhost becomes [smtp.sendgrid.net]:587, followed by the same postmap and restart sequence.

One practical point worth noting: some guides place the credentials file directly under /etc/postfix/sasl_passwd, whilst the Linode Gmail relay guide uses the subdirectory path /etc/postfix/sasl/sasl_passwd. Either location is valid, but the path set in smtp_sasl_password_maps within main.cf must match whichever you choose. A mismatch produces unhelpful "file not found" errors at send time that can take some effort to diagnose.
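A small check can catch that mismatch before it bites. This sketch assumes the stock main.cf location and a hash: map, and only reports rather than changing anything:

```shell
# Report whether the map named in main.cf has a compiled .db next to it.
cf=/etc/postfix/main.cf
if [ -r "$cf" ]; then
  map=$(sed -n 's/^smtp_sasl_password_maps *= *hash://p' "$cf" | head -n1)
  if [ -z "$map" ]; then
    echo "no hash: smtp_sasl_password_maps entry found in $cf"
  elif [ -f "${map}.db" ]; then
    echo "OK: ${map}.db present"
  else
    echo "WARNING: ${map}.db missing; run: sudo postmap ${map}"
  fi
else
  echo "$cf not readable; is Postfix installed?"
fi
```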

Configuring Sendmail as an Alternative

Some administrators prefer Sendmail, particularly on distributions where it remains the default. Relaying through Gmail with Sendmail follows its own clear sequence. Installing the required packages varies by distribution: on Ubuntu, Debian and Linux Mint the command is apt install sendmail mailutils sendmail-bin, whilst on CentOS, Fedora, AlmaLinux and Red Hat systems, dnf install sendmail is used instead.

Authentication details live in an authinfo map under /etc/mail/authinfo. Creating that directory with restricted permissions and then adding a file such as gmail-auth allows the following entry to be stored:

AuthInfo: "U:root" "I:YOUR_GMAIL_EMAIL_ADDRESS" "P:YOUR_APP_PASSWORD"

The quotes here are significant: the P: is a literal prefix for the password field, not part of the password itself. Building the database with makemap hash gmail-auth < gmail-auth produces gmail-auth.db in the same directory, which Sendmail will consult when connecting to the smart host.

Sendmail's configuration is macro-driven and centred on /etc/mail/sendmail.mc. Placing the relay and authentication directives just above the first MAILER definition ensures they are processed correctly when sendmail.cf is rebuilt. The key definitions set SMART_HOST to [smtp.gmail.com], force the submission port by defining RELAY_MAILER_ARGS and ESMTP_MAILER_ARGS as TCP $h 587, enable authentication with confAUTH_OPTIONS set to A p, and wire in the authinfo map with:

FEATURE(`authinfo', `hash -o /etc/mail/authinfo/gmail-auth.db')dnl
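Gathered together, the directives just described sit above the first MAILER line in sendmail.mc roughly as follows. This is a sketch assembled from the definitions above; note m4's asymmetric `…' quoting, and check your distribution's template for the exact surroundings:

```m4
define(`SMART_HOST', `[smtp.gmail.com]')dnl
define(`RELAY_MAILER_ARGS', `TCP $h 587')dnl
define(`ESMTP_MAILER_ARGS', `TCP $h 587')dnl
define(`confAUTH_OPTIONS', `A p')dnl
FEATURE(`authinfo', `hash -o /etc/mail/authinfo/gmail-auth.db')dnl
```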

After saving those changes, running make -C /etc/mail regenerates sendmail.cf and systemctl restart sendmail brings the service up with the new configuration. Hosts without a resolvable FQDN may pause briefly at start-up, but the service typically continues after a short delay.

WordPress and the Post SMTP Plugin

Web applications introduce different constraints, particularly where user authentication is delegated to a third party. For WordPress sites, the Post SMTP plugin (originally forked from Postman SMTP) modernises the classic approach and integrates OAuth 2.0 so that Gmail and Google Workspace can be used without storing a mailbox password. With Google having retired basic password authentication for SMTP, an OAuth-based approach is now the standard requirement rather than an optional convenience.

The process begins with installation and activation of the plugin, after which its setup wizard auto-detects smtp.gmail.com and recommends SMTP-STARTTLS with OAuth 2.0 authentication on port 587. At this point, the wizard asks for a Client ID and Client Secret, which are obtained from the Google Cloud Console rather than the Gmail settings page. Creating a project in the console, enabling the Gmail API, and completing the OAuth consent screen with basic application information lays the necessary groundwork. Selecting "Web application" as the application type then allows you to enter the Authorised JavaScript origins and Authorised redirect URIs that the plugin displays during its setup step. Completing this creation reveals the Client ID and Client Secret, which are pasted back into the plugin wizard to proceed.

Before the plugin can authorise fully, the publishing status of the OAuth consent screen must usually be changed from "Testing" to "Production" (or "In production"). This step matters more than it might appear: whilst an app remains in "Testing" status, Google's authorisation tokens expire after just seven days, which means the connection will silently stop working and require reauthorisation on a weekly basis. Moving to "In production" removes this expiry, and refresh tokens then remain valid indefinitely unless revoked. The Google console provides a "Publish App" option on the OAuth consent screen page to make this change. Once published, returning to the WordPress dashboard and clicking "Grant permission with Google" allows you to select the desired account and accept the requested permissions. The plugin's status view then confirms that authorisation has succeeded. A test email through the plugin's own action validates that messages are leaving the site as expected.

This OAuth-based arrangement aligns with Google's current security model, avoids the need for app passwords, and reduces the risk of unauthorised access if a site is compromised. General security hardening of the WordPress installation itself remains essential regardless.

The Underlying Protocols

Underpinning all of these approaches are protocols that remain stable and well understood. SMTP still carries the mail, STARTTLS upgrades plaintext connections to encrypted channels either opportunistically or by policy, and DNS resolves relay hostnames to IP addresses. The role of DNS here is easy to overlook, but it is foundational: as The TCP/IP Guide sets out in its coverage of SMTP and related protocols, correct name resolution underpins every step of message delivery. If a relay hostname cannot be resolved, nothing else will proceed. If the certificate bundle pointed to by smtp_tls_CAfile is missing or outdated, STARTTLS negotiation will fail. Logs record these conditions at the time they occur, which is precisely why watching syslog during tests is more informative than simply checking whether a message arrives in an inbox.

A few operational considerations round out a dependable setup. Permission hygiene on credential files protects against accidental disclosure during audits or backups, and commands that manipulate maps (such as postmap and makemap) must be re-run after any edit to their corresponding source files. Consistency between the port specified in the credentials file and the one set in main.cf's relayhost parameter also matters: mismatches lead to confusing connection attempts. Postfix's postconf command lists all current configuration values, making it a convenient way to verify that paths and flags are set as expected.
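As an illustration, postconf accepts specific parameter names, which makes spot-checking the relay settings quick. The snippet is guarded so that it degrades gracefully on hosts without Postfix:

```shell
# Print the live values of the relay-related parameters.
if command -v postconf >/dev/null 2>&1; then
  postconf relayhost smtp_sasl_password_maps smtp_tls_security_level
else
  echo "postconf not found; is Postfix installed?"
fi
```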

On Reflection

Reliable email from servers involves installing the right supporting software, authenticating in the way the provider expects, encrypting the submission channel, keeping credentials safe and testing with your eyes on the logs. The list makes it sound like the complex endeavour that it is. If your remit extends to a WordPress dashboard, it is better to use a plugin that speaks OAuth 2.0 and complete the corresponding setup in the Google Cloud Console so that everything flows cleanly.

Windows 11 virtualisation on Linux using KVM and QEMU

5th March 2026

Windows 11 arrived in October 2021 with a requirement that posed a challenge to many virtualisation users: TPM 2.0 was mandatory, not optional. For anyone running Windows in a virtual machine, that meant their hypervisor needed to emulate a Trusted Platform Module convincingly enough to satisfy the installer.

VirtualBox, which had been my go-to choice for desktop virtualisation for years, could not do this in its 6.1.x series. Support arrived only with VirtualBox 7.0 in October 2022, meaning anyone who needed Windows 11 in a VM faced roughly a year with no straightforward path through their existing tool.

That gap prompted a look at KVM (Kernel-based Virtual Machine), which could handle the TPM requirement through software emulation. This article documents what that investigation found, what the rough edges were at the time, and how the situation has developed in the years since.

What KVM Actually Is

KVM is not a standalone application. It is a virtualisation infrastructure built directly into the Linux kernel, and has been since the module was merged between 2006 and 2007. Rather than sitting on top of the operating system as a separate layer, it turns the Linux kernel itself into a hypervisor. This makes KVM a type-1 hypervisor in practice, even when running on a desktop machine, which is part of why its performance characteristics compare favourably with hosted solutions.

In use, KVM operates alongside QEMU for hardware emulation, libvirt for virtual machine management and virt-manager as a graphical front end. The distinction matters because problems and improvements tend to originate in different parts of that stack. KVM itself is rarely the issue; QEMU and libvirt are where the day-to-day configuration lives.

To confirm that the host CPU supports hardware virtualisation before beginning, the following command checks for the relevant flags:

egrep -c '(vmx|svm)' /proc/cpuinfo

Any result above zero means the hardware is capable. Intel processors expose the vmx flag and AMD processors expose svm.
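Beyond the CPU flags, it is worth confirming that the kernel side is ready too: the /dev/kvm device node only exists once the kvm module is loaded, so a simple presence check shows whether virtualisation can actually be used:

```shell
# Count virtualisation-capable cores, then check for the KVM device node.
flags=$(grep -Ec '(vmx|svm)' /proc/cpuinfo || true)
echo "virtualisation-capable cores reported: ${flags}"
if [ -e /dev/kvm ]; then
  echo "/dev/kvm present: KVM is usable"
else
  echo "/dev/kvm missing: enable virtualisation in firmware or load the kvm module"
fi
```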

Installing the Required Packages

The installation is straightforward on any major distribution.

On Debian and Ubuntu:

sudo apt install qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils virt-manager

On Fedora:

sudo dnf install @virtualization

On Arch Linux:

sudo pacman -S qemu libvirt virt-manager bridge-utils

After installation, the current user needs to be added to the libvirt and kvm groups before the tools will work without root privileges:

sudo usermod -aG libvirt,kvm $(whoami)

Logging out and back in brings the new group membership into effect.
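After the fresh login, the membership can be confirmed with id; the group names checked here are the two added above:

```shell
# List current groups and flag any that are still missing.
for g in libvirt kvm; do
  if id -nG | tr ' ' '\n' | grep -qx "$g"; then
    echo "in group: $g"
  else
    echo "not yet in group: $g (log out and back in, or check usermod)"
  fi
done
```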

Configuring Network Bridging

The default network configuration in libvirt uses NAT, which is sufficient for most purposes and requires no additional setup. The VM can reach the internet and the host, but the host cannot initiate connections to the VM. For a Windows 11 guest used primarily for application compatibility, NAT works without complaint.

A bridged network, which places the VM on the same network segment as the host, requires a wired Ethernet connection. Wireless interfaces do not support bridging in the standard Linux networking stack due to how 802.11 handles MAC addresses. For those on a wired connection, a bridge can be defined with a file named bridge.xml:

<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0"/>
</network>

The bridge is then activated with:

sudo virsh net-define bridge.xml
sudo virsh net-start br0
sudo virsh net-autostart br0

Installing Windows 11

Windows 11 requires TPM 2.0 and Secure Boot. Neither is present in a default KVM configuration, and both need to be added explicitly.

The swtpm package provides software TPM emulation:

sudo apt install swtpm swtpm-tools   # Debian/Ubuntu
sudo dnf install swtpm swtpm-tools   # Fedora

UEFI firmware is provided by the ovmf package, which supplies the file that virt-manager needs for Secure Boot:

sudo apt install ovmf   # Debian/Ubuntu
sudo dnf install edk2-ovmf   # Fedora

In virt-manager, when creating the VM, the firmware should be set to UEFI x86_64: /usr/share/OVMF/OVMF_CODE.fd rather than the default BIOS option. A TPM 2.0 device should be added in the hardware configuration before the VM is started. With those two elements in place, the Windows 11 installer proceeds without complaint about the hardware requirements.
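For those editing the domain XML directly rather than working through virt-manager's dialogs, the emulated TPM appears as a device element along these lines. This is a sketch of the common form; swtpm must be installed for the emulator backend to function:

```xml
<tpm model='tpm-crb'>
  <backend type='emulator' version='2.0'/>
</tpm>
```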

The VirtIO drivers ISO should be attached as a second virtual CD-ROM drive during installation. The installer will not find the storage device otherwise because the VirtIO disk controller is not a standard device that Windows recognises without a driver. When prompted to select an installation location and no disks appear, clicking "Load driver" and browsing to the VirtIO ISO resolves it.

During the out-of-box experience, Windows 11 requires a Microsoft account and an internet connection by default. To bypass this and create a local account instead, opening a command prompt with Shift+F10 and running the following works on the Home edition:

oobe\bypassnro

The machine restarts and presents an option to proceed without internet access.

Performance Considerations

KVM performance for a Windows 11 guest is generally good, but one factor specific to Windows 11 is worth understanding. Memory Integrity, also referred to as Hypervisor-Protected Code Integrity (HVCI), is a Windows security feature that uses virtualisation to protect the kernel. Running it inside a virtual machine creates nested virtualisation overhead because the guest is attempting to run its own virtualisation layer inside the host's. The performance impact is more pronounced on processors predating Intel Kaby Lake or AMD Zen 2, where the hardware support for nested virtualisation is less capable.

The CPU type selection in virt-manager also matters more than it might appear. Setting the CPU model to host-passthrough exposes the actual host CPU flags to the guest, which improves performance compared to emulated CPU models, at the cost of reduced portability if the VM image is ever moved to a different machine.

Host File System Access and Clipboard Sharing

This was where the experience diverged most noticeably from VirtualBox. VirtualBox Guest Additions handle shared folders and clipboard integration as a single installation, and the result works reliably with minimal configuration. KVM requires separate solutions for each, and in 2022 neither was as seamless as it has since become.

Clipboard Sharing via SPICE

Clipboard sharing uses the SPICE display protocol rather than VNC. The VM needs a SPICE display and a virtio-serial controller, which virt-manager adds automatically when SPICE is selected. Within the Windows guest, the installer for SPICE guest tools provides the clipboard agent. Once installed, clipboard text passes between host and guest in both directions.

The critical dependency that caused problems in 2022 was the virtio-serial channel. Without a com.redhat.spice.0 character device present in the VM configuration, the clipboard agent installs successfully but does nothing. Virt-manager now adds this automatically when SPICE is selected, which removes one of the more common failure points.

Host Directory Sharing via Virtiofs

At the time of this investigation, the practical option for sharing files between the Linux host and a Windows guest was WebDAV, which worked but felt like a workaround. The proper solution, virtiofs, existed but was not yet well-supported on Windows guests. The situation has since improved to the point where virtiofs is now the standard recommended approach.

It requires three components: the virtiofsd daemon on the host (included in recent QEMU packages), the virtiofs driver from the VirtIO Windows drivers package and WinFsp, which is the Windows equivalent of FUSE. Once configured through virt-manager's file system hardware settings, the shared directory appears as a mapped drive in Windows Explorer. The virtiofsd daemon was also rewritten in Rust in the intervening period, improving both its reliability and performance.

To configure a shared directory, shared memory must first be enabled in the VM's memory settings, then a file system device added with the driver set to virtiofs, a source path on the host and an arbitrary mount tag. The corresponding libvirt XML looks like this:

<memoryBacking>
  <source type='memfd'/>
  <access mode='shared'/>
</memoryBacking>

<filesystem type='mount' accessmode='passthrough'>
  <driver type='virtiofs' queue='1024'/>
  <source dir='/home/user/shared'/>
  <target dir='host_share'/>
</filesystem>

This was the area where VirtualBox held a clear practical advantage in 2022. The gap has since narrowed considerably.

Migrating from VirtualBox

Moving existing VirtualBox VMs to KVM is possible using qemu-img, which converts between disk image formats. The straightforward conversion from VDI to QCOW2 is:

qemu-img convert -f vdi -O qcow2 windows11.vdi windows11.qcow2

For large images or where reliability is a concern, converting via an intermediate RAW format reduces the risk of issues:

qemu-img convert -f vdi -O raw windows11.vdi windows11.raw
qemu-img convert -f raw -O qcow2 windows11.raw windows11.qcow2

The resulting QCOW2 file can then be used when creating a new VM in virt-manager, selecting "Import existing disk image" rather than creating a new one.
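Before booting the imported machine, qemu-img can report what the conversion produced. The filename follows the conversion step above, and the guard keeps the snippet harmless where the tool or the file is absent:

```shell
# Inspect the converted image: format, virtual size, and any backing file.
img=windows11.qcow2
if command -v qemu-img >/dev/null 2>&1 && [ -f "$img" ]; then
  qemu-img info "$img"
else
  echo "skipping: qemu-img or $img not present"
fi
```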

How the Landscape Has Shifted Since

The investigation described here took place during a specific window: VirtualBox 6.1.x was the current release, Windows 11 had just launched, and KVM was the most practical route to TPM emulation on Linux. That context has changed in several ways worth noting for anyone reading this in 2026.

VirtualBox 7.0 arrived in October 2022 with TPM 1.2 and 2.0 support, Secure Boot and a number of additional improvements. The original reason for investigating KVM was resolved, and for those who had moved across during the gap period, returning to VirtualBox for Windows guests made sense given its more straightforward Guest Additions integration.

QEMU reached version 10.0 in April 2025, a significant milestone reflecting years of accumulated improvements to hardware emulation, storage performance and x86 guest support. Libvirt has kept pace, adding reliable internal snapshots for UEFI-based VMs, evdev input device hot plug and improved unprivileged user support. The virtiofs situation for Windows guests has moved from "technically possible but awkward" to "the recommended approach with good documentation and a rewritten daemon", which addresses the most significant practical shortcoming from 2022 directly.

The broader desktop virtualisation landscape shifted when VMware Workstation Pro became free for all users, including commercial ones, in November 2024. VMware Workstation Player was discontinued as a separate product at the same time, having become redundant once Workstation Pro was available at no cost. This gave desktop users a third credible option alongside VirtualBox and KVM, with VMware's historically strong Windows guest integration now accessible without a licence fee, though users of the free version are not entitled to support through the global support team.

The miniature PC market also expanded considerably from 2023 onwards, with Intel N100-based and AMD Ryzen Embedded systems offering enough performance to run Windows natively at modest cost. For many people, that proves a cleaner solution than any hypervisor, eliminating the integration limitations entirely by giving Windows its own dedicated hardware.

Final Assessment

KVM handled Windows 11 competently during a period when the alternatives could not, and the platform has continued to improve in the years since. The two areas that fell short in 2022, host file sharing and clipboard integration, have been addressed by developments in virtiofs and the SPICE tooling, and a new user starting today may find the experience noticeably smoother.

Whether KVM is the right choice in 2026 depends on the use case. For Linux-native workloads and server-style VM management, it remains the strongest option on Linux. For a Windows desktop guest where ease of integration matters most, VirtualBox 7.x and VMware Workstation Pro are both strong alternatives, with the latter now free to use for both commercial and personal purposes. The question that drove this investigation was answered by VirtualBox itself in October 2022. KVM provided a workable solution in the meantime, and the platform has only become more capable since then.

Additional Reading

How To Convert VirtualBox Disk Image (VDI) to Qcow2 format

How to enable TPM and secure boot on KVM?

Windows 11 on KVM – How to Install Step by Step?

Enable Virtualization-based Protection of Code Integrity in Microsoft Windows

An unseen arsenal: How web developers can use specialised tools to build better websites

4th March 2026

Modern web development takes place within an ecosystem of tools so precisely suited to individual tasks that they often go unnoticed by anyone outside the profession. These utilities, spanning performance analysers, security checkers and colour palette generators, form the backbone of a workflow that must balance speed, security and visual consistency. For an industry where user experience and technical efficiency are inseparable priorities, such tools are far from optional luxuries.

Performance Testing and Page Speed Analysis

The first hurdle most developers encounter is performance measurement, and several tools have established themselves as essential in this space. GTmetrix, Google PageSpeed Insights and WebPageTest each draw on Google's open-source Lighthouse framework to varying degrees, though each approaches the task differently.

A performance grade alongside separate scores for page speed and structural quality is what GTmetrix produces for any URL submitted to it. It measures Core Web Vitals, including Largest Contentful Paint (LCP), Total Blocking Time (TBT) and Cumulative Layout Shift (CLS), which are the same metrics Google uses as ranking signals in search. The tool can run tests from multiple global server locations and simulates a real browser loading your page, producing a waterfall chart and a video replay of the load process, so developers can identify precisely which elements are causing delays.

Maintained directly by Google, PageSpeed Insights analyses pages against both laboratory data generated through Lighthouse and real-world field data drawn from the Chrome User Experience Report (CrUX). It provides separate performance scores for mobile and desktop, which is significant given that Google confirmed page speed as a ranking factor for mobile searches in July 2018. Both GTmetrix and PageSpeed Insights go well beyond raw figures, mapping out a prioritised list of optimisations so that developers can address the most impactful issues first.

A different position in the toolkit is occupied by WebPageTest, originally created by Patrick Meenan and open-sourced in 2008, and acquired by Catchpoint in 2020. Rather than returning a simple score, it runs tests from a choice of locations across the globe using real browsers at actual connection speeds, and produces detailed waterfall charts that break down every individual network request. This makes it the tool of choice when the question is not just how fast a page is, but precisely why a particular element is slow.

One of the longer-established names in website speed testing, Pingdom offers a free tool that remains widely used for its accessible reporting. Tests can be run from seven global server locations, and results are presented in four sections: a waterfall breakdown, a performance grade, a page analysis and a historical record of previous tests. The page analysis breaks down asset sizes by domain and content type, which is useful for comparing the weight of CDN-served assets against those served directly. Pingdom is based on the YSlow open-source project and does not currently measure the Core Web Vitals metrics that Google uses as ranking signals, so it is best treated as a quick and readable first pass rather than a definitive audit.

Security and Infrastructure Diagnostics

Performance alone cannot sustain a trustworthy website, as a misconfigured certificate, an insecure resource or a flagged IP address can each undermine user confidence and search visibility. One of the most frustrating post-migration problems is the disappearance of the HTTPS padlock despite an SSL certificate being in place, and Why No Padlock? exists specifically to address it. The cause is almost always mixed content, where a page served over HTTPS loads at least one resource (an image, a script or a stylesheet) over plain HTTP. Why No Padlock? scans any HTTPS URL and returns a list of every insecure resource found, along with the HTML element responsible, making it straightforward to trace and resolve the problem. Google has used HTTPS as a ranking signal since 2014, so unresolved mixed content issues carry an SEO cost as well as a security one.

For traffic-level threats, AbuseIPDB operates as a community-maintained IP blacklist. Managed by Marathon Studios Inc., the project allows system administrators and webmasters to report IP addresses involved in malicious behaviour, including hacking attempts, spam campaigns, DDoS attacks and phishing, and to check any IP address against the database before acting on traffic from it. A free API is available for integration with server tools such as Fail2Ban, enabling automatic reporting and real-time checks.
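As an illustration of how such an integration might look, the sketch below builds a lookup against AbuseIPDB's v2 `check` endpoint. The API key is a placeholder, and the request is constructed without being sent, so the structure can be inspected safely.

```python
import json
import urllib.parse
import urllib.request

# Placeholder: a real key comes from a free AbuseIPDB account.
API_KEY = "YOUR_API_KEY"


def build_check_request(ip: str, max_age_days: int = 90) -> urllib.request.Request:
    """Build (but do not send) a GET request to the v2 /check endpoint."""
    query = urllib.parse.urlencode({"ipAddress": ip, "maxAgeInDays": max_age_days})
    return urllib.request.Request(
        f"https://api.abuseipdb.com/api/v2/check?{query}",
        headers={"Key": API_KEY, "Accept": "application/json"},
    )


def abuse_confidence(ip: str) -> int:
    """Send the request and return the 0-100 abuse confidence score."""
    with urllib.request.urlopen(build_check_request(ip)) as response:
        return json.load(response)["data"]["abuseConfidenceScore"]


# Inspect the request that would be sent (no network traffic involved):
print(build_check_request("203.0.113.5").full_url)
```

A wrapper like this is the kind of thing a Fail2Ban action script might call before deciding whether to report or block an address.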

Bot traffic and automated form submissions are a persistent nuisance for any site that accepts user input, and hCaptcha addresses this by presenting challenges that are straightforward for human visitors but reliably difficult for automated scripts. Operated by Intuition Machines, it positions itself as a privacy-focused alternative to reCAPTCHA, collecting minimal data and retaining no personally identifiable information beyond what is necessary to complete a challenge. It is compliant with GDPR, CCPA and several other international privacy frameworks, and holds both ISO 27001 and SOC 2 Type II certifications. A free tier is available, with a Pro plan covering 100,000 evaluations per month, and an Enterprise tier offering additional controls including data localisation and zero-PII processing modes.

Red Sift offers two distinct products that address different aspects of infrastructure security, both relevant to the day-to-day operation of a website. Red Sift OnDMARC automates the configuration and monitoring of DMARC, SPF, DKIM, BIMI and MTA-STS, which are the protocols that collectively prevent attackers from sending spoofed emails that appear to originate from a legitimate domain. This is the basis for most phishing and business email compromise (BEC) attacks, and OnDMARC guides teams to full enforcement typically within six to eight weeks. Red Sift Certificates Lite addresses a separate but equally critical concern, monitoring SSL/TLS certificates for upcoming expiry and alerting administrators seven days ahead of time. It is free for up to 250 certificates and has been formally recommended by Let's Encrypt as its preferred monitoring service, following the retirement of Let's Encrypt's own expiry notification emails. The product was built on the foundation of Hardenize, which Red Sift acquired in 2022, a company founded by Ivan Ristić, creator of SSL Labs.

Colour Management and Visual Design

A website's visual coherence depends heavily on colour consistency, and the distance between a palette sketched on paper and one that functions in code can be significant. With over two million active users, Coolors is a fast and intuitive palette generator built around a simple interaction: pressing the space bar produces a new five-colour palette derived from colour theory algorithms. The platform includes an accessibility checker that calculates contrast ratios against WCAG standards and a colour extractor that derives palettes from uploaded photographs. It also offers interoperability with Figma, Adobe Creative Suite and the Chrome browser. A free tier is available, with a Pro plan at approximately $3 per month for unlimited saving and export options.

A quite different approach is taken by Colormind, which uses a deep learning model based on Generative Adversarial Networks (GANs) to generate harmonious colour schemes. The model is trained on datasets drawn from photographs, films, popular art and website designs, and is updated daily with fresh material. A particularly useful feature allows users to preview how a generated palette would look applied to a website layout, which is a more direct test of practicality than viewing swatches in isolation. A REST API is available for personal and non-commercial use. For converting between colour formats, tools such as Color-Hex, RGBtoHex and the WebFX Hex to RGB converter bridge the gap between design decisions and code implementation, translating colour values in both directions between the hexadecimal and RGB formats that CSS requires.
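The conversion these tools perform is simple enough to sketch in a few lines; the two helpers below translate between the hexadecimal and RGB notations that CSS accepts.

```python
def hex_to_rgb(value: str) -> tuple[int, int, int]:
    """Convert '#rrggbb' (leading '#' optional) to an (r, g, b) tuple."""
    value = value.lstrip("#")
    return tuple(int(value[i:i + 2], 16) for i in (0, 2, 4))


def rgb_to_hex(r: int, g: int, b: int) -> str:
    """Convert an RGB triple to lowercase '#rrggbb' notation."""
    return f"#{r:02x}{g:02x}{b:02x}"


print(hex_to_rgb("#1e90ff"))        # (30, 144, 255)
print(rgb_to_hex(30, 144, 255))     # #1e90ff
```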

Optimisation and Code Utilities

Lean, efficient code is a direct contributor to load speed, and unused CSS is a surprisingly common source of unnecessary page weight. PurifyCSS Online addresses it by scanning a website's HTML and JavaScript source against its stylesheets to identify selectors that are never used. CSS frameworks such as Bootstrap or Tailwind ship with many utility classes, and most websites use only a small fraction of them. Removing the unused rules can reduce stylesheet file size substantially, which in turn shortens the time a browser spends processing styles before rendering a page. The online version requires no build pipeline or command-line tools, making it accessible to developers at any workflow stage.

Image compression is equally important, as unoptimised images are among the most common causes of slow load times. ImageCompressor handles JPEG, PNG, WebP, GIF and SVG files in the browser, applying lossy or lossless algorithms with adjustable quality settings to reduce file sizes without visible degradation. It processes everything locally, which means that no images are uploaded to an external server. Contact forms and directory listings on websites are a persistent target for spam harvesters, and Email Obfuscator addresses this by encoding email addresses into a format that is readable by browsers but opaque to most automated scrapers, generating both a plain HTML entity version and a JavaScript-dependent alternative for stronger protection.
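The entity-encoding technique that such a tool applies can be sketched in a line or two; this illustrates the general approach rather than Email Obfuscator's exact output.

```python
def obfuscate_email(address: str) -> str:
    """Encode every character of an email address as a decimal HTML entity.

    Browsers render the entities back into the original address, while
    naive scrapers searching raw HTML for something@something patterns
    will not match it.
    """
    return "".join(f"&#{ord(ch)};" for ch in address)


print(obfuscate_email("info@example.com"))
```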

For websites that publish mathematical or scientific content, QuickLaTeX provides a practical solution to embedding equations in web pages without a local LaTeX installation. Authors write standard LaTeX expressions directly in their content, and the service renders them as high-quality images that are cached and returned via URL for embedding. Its companion WordPress plugin, WP QuickLaTeX, handles this process automatically within the editor, supporting inline formulas, numbered displayed equations and TikZ graphics.
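With WP QuickLaTeX active, an author types standard LaTeX straight into post content (exact delimiters depend on the plugin's settings); a displayed equation such as the following would be rendered by the service as a cached image in the published page:

```latex
$$\int_0^\infty e^{-x^2}\,\mathrm{d}x = \frac{\sqrt{\pi}}{2}$$
```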

Server Response and Infrastructure Monitoring

Infrastructure performance sits beneath the layer that most visitors ever see, yet it determines how quickly any content reaches a browser at all. Time to First Byte (TTFB) is the metric that captures this most directly: it measures the interval between a browser sending an HTTP request and receiving the first byte of data from the server, and ByteCheck exists solely to measure it. The metric captures the combined effect of DNS resolution time, TCP connection time, SSL negotiation time and server processing time. Google considers a TTFB of 200 ms or below to be good, and ByteCheck breaks the total down into each constituent step, so developers can identify precisely where delays are occurring. A slow TTFB is often a server-side issue, such as inadequate caching, an overloaded database or the lack of a content delivery network (CDN).
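As a rough illustration of what such a check measures, the sketch below times the gap between opening a connection and receiving the first byte of the response. It folds all the constituent steps into one figure rather than producing the per-step breakdown that ByteCheck reports.

```python
import socket
import ssl
import time
from urllib.parse import urlparse


def time_to_first_byte(url: str, timeout: float = 10.0) -> float:
    """Seconds from opening the connection to the first response byte.

    This single figure folds together TCP connection, TLS negotiation
    (for https) and server processing time.
    """
    parts = urlparse(url)
    host, scheme = parts.hostname, parts.scheme
    port = parts.port or (443 if scheme == "https" else 80)
    start = time.perf_counter()
    sock = socket.create_connection((host, port), timeout=timeout)
    if scheme == "https":
        sock = ssl.create_default_context().wrap_socket(sock, server_hostname=host)
    request = f"GET {parts.path or '/'} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    sock.sendall(request.encode("ascii"))
    sock.recv(1)  # blocks until the server sends its first byte
    elapsed = time.perf_counter() - start
    sock.close()
    return elapsed


# Example (requires network access):
#   print(f"TTFB: {time_to_first_byte('https://example.com/'):.3f}s")
```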

Analytics and Content Evaluation

The final layer of tooling concerns understanding what content a site serves and how it performs in context. Dandelion is a natural language processing API developed by SpazioDati that can extract entities, classify text and analyse the semantic content of web pages, which has applications in content tagging, SEO auditing and editorial quality control. A free tier, covering up to 1,000 API units per day, is available without a credit card, making it accessible for developers who need semantic analysis at low to moderate volume.
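A call to the entity-extraction endpoint can be sketched as below. The endpoint path reflects Dandelion's documented datatxt API, though the token is a placeholder and the request is built without being sent.

```python
import urllib.parse
import urllib.request

# Placeholder: a real token comes from a Dandelion account.
TOKEN = "YOUR_TOKEN"


def build_entity_request(text: str) -> urllib.request.Request:
    """Build (but do not send) a request to the datatxt entity extractor."""
    query = urllib.parse.urlencode({"text": text, "lang": "en", "token": TOKEN})
    return urllib.request.Request(f"https://api.dandelion.eu/datatxt/nex/v1?{query}")


req = build_entity_request("The Mona Lisa hangs in the Louvre in Paris.")
print(req.full_url)
```

Sending the request returns JSON listing the entities recognised in the text, each linked to its corresponding knowledge-base entry.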

Quiet Workhorses of the Web

Individually, each of these tools addresses a specific and well-defined problem. Taken together, they form a coherent toolkit that covers the full lifecycle of a web project, from initial performance diagnosis through to deployment of a secure, efficiently coded and visually consistent site. They do not replace professional judgement but extend it, handling time-consuming checks and conversions that would otherwise consume the attention needed for more complex work. As websites grow in complexity and user expectations continue to rise, familiarity with this kind of specialist tooling becomes a practical necessity rather than an optional extra.

Getting to know Jira, its workflows, test management capabilities and the need for governance

3rd March 2026

Developed by Atlassian and first released in 2002 as a straightforward bug and issue tracker aimed at software developers, Jira has since grown into a platform used for project management across a wide range of industries and disciplines. The name itself is a truncation of Gojira, the Japanese word for Godzilla, originating as an internal nickname used by Atlassian developers for Bugzilla, the bug-tracking tool they had previously relied upon.

A Family of Products, Each With a Purpose

The Jira ecosystem has expanded well beyond its original single offering, and it is worth understanding what each product is designed to do. Jira (formerly marketed as Jira Software, now unified with Jira Work Management) remains the flagship, built around agile project management with Scrum and Kanban boards at its core. Jira Service Management serves IT operations and service desk teams, handling ticketing and customer support workflows; it originated as Jira Service Desk in 2013, following Atlassian's discovery that nearly 40 per cent of their customers had already adapted the base product for service requests, and it was rebranded in 2020. At the enterprise level, Jira Align connects team delivery to strategic business goals, while Jira Product Discovery helps product teams capture feedback, prioritise ideas and build roadmaps. Together, these products span the full organisational hierarchy, from individual contributors up to executive portfolio management.

Core Features

Agile Boards and Backlog Management

Jira supports a range of agile methodologies, with two primary project templates available to teams. The Scrum template is designed for teams that deliver work in time-boxed sprints, providing backlog management, sprint planning and capacity tracking in a single view. The Kanban template, by contrast, is built around a continuous flow of work, helping teams visualise tasks as they move through each stage of a process without the constraint of fixed iterations. Both templates support custom configurations for teams whose ways of working do not map neatly to either model.

Reporting and Analytics

Jira's reporting suite provides visibility into project progress through various charts and metrics. The Burndown chart tracks remaining story points against the time left in a sprint, offering an indication of whether the team is on course to complete its committed work. The Burnup chart takes a complementary view, tracking how much work has been completed over time and making it straightforward to compare planned scope against actual delivery. These tools are useful for identifying patterns in team performance, though they are most informative when used consistently over several sprints rather than in isolation.

Custom Workflows

Teams can design workflows that reflect their own processes, defining the states an issue passes through and the transitions between them. Automation rules can be applied to handle repetitive steps without manual intervention, reducing administrative overhead on routine tasks. This flexibility is one of the more frequently cited reasons for adopting Jira, though it does require ongoing governance to prevent workflows from becoming inconsistent or unwieldy as teams and processes evolve.

Jira Query Language

Jira Query Language (JQL) provides a structured way to search and filter issues across projects, enabling teams to construct precise queries based on any combination of fields, statuses, assignees, dates and custom attributes. For organisations that invest time in learning it, JQL is a practical tool for building custom reports and dashboards. It is also the underlying mechanism for many of Jira's more advanced filtering and automation features.
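A feel for JQL's shape, and for how it reaches Jira's REST API, can be had from the sketch below; the instance URL and project key are placeholders, a real deployment would also need authentication, and the request is constructed without being sent.

```python
import urllib.parse
import urllib.request

# Placeholder instance URL.
BASE_URL = "https://example.atlassian.net"


def build_jql_search(jql: str, max_results: int = 50) -> urllib.request.Request:
    """Build (but do not send) a GET against Jira's issue-search endpoint."""
    query = urllib.parse.urlencode({"jql": jql, "maxResults": max_results})
    return urllib.request.Request(
        f"{BASE_URL}/rest/api/2/search?{query}",
        headers={"Accept": "application/json"},
    )


# Open bugs in one project assigned to the current user, newest activity first.
jql = "project = WEB AND issuetype = Bug AND assignee = currentUser() ORDER BY updated DESC"
print(build_jql_search(jql).full_url)
```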

Integration Options

Jira connects with a range of tools both within and outside the Atlassian ecosystem. Confluence handles documentation, Bitbucket manages code repositories and links commits directly to Jira issues, and Loom, acquired by Atlassian in 2023, adds asynchronous video communication. Third-party integrations, including Zoom and a broad catalogue of tools available through the Atlassian Marketplace, extend this further for teams with specific requirements.

Test Management With Xray

Jira does not include dedicated test management functionality by default, and teams that need to manage structured test cases alongside their development work typically turn to the Xray plugin, one of the most widely used additions in the Atlassian Marketplace. Xray operates as a native Jira application, meaning it adds new issue types directly to the Jira instance rather than sitting as a separate external tool. The issue types it introduces include Test, Test Set, Test Plan and Test Execution, all of which behave like standard Jira issues and can be searched, filtered and reported on using JQL.

A key capability is requirements traceability: Xray links test cases directly to the user stories and requirements they cover, and connects those in turn to any defects raised during execution. This gives teams a clear picture of test coverage and release readiness without having to leave Jira or reconcile data from separate systems. Test executions can be manual or automated, and Xray integrates with CI/CD toolchains (including Jenkins and Robot Framework) via a REST API, allowing automated test results to be published back into Jira and associated with the relevant requirements.

Xray also supports Behaviour-Driven Development (BDD), enabling teams to write tests in Gherkin syntax and manage them alongside their other Jira work. For organisations already using Jira as their central project management tool, Xray offers a practical route to bringing QA activities into the same workflow rather than maintaining a separate test management system.

Who is Jira Best Suited For?

Jira is generally considered most suitable for larger teams that require detailed control over workflows, reporting and resource allocation, and that have the capacity to dedicate administrative effort to the platform. Smaller teams or those without a dedicated Jira administrator may find the learning curve significant, particularly when configuring custom workflows or working with more advanced reporting features. Pricing is subscription-based, with tiers determined by user count and deployment model (cloud-hosted or self-managed), which means costs can increase substantially as an organisation grows.

Project Types: Tailoring Access to Needs

Jira divides its project spaces into two categories that serve different audiences. Team-managed projects offer simplified configuration for smaller, autonomous teams that want to get started without involving a Jira administrator. Company-managed projects grant administrators full control over customisation, permissions and settings, making them more appropriate for enterprises with complex requirements and multiple teams operating within the same instance. The two types can coexist within the same deployment, giving organisations the option to apply different governance models to different teams as their needs dictate.

Strengths and Limitations

Jira's scalability is one of its more consistent strengths, in terms of both the size of the user base it can support and the complexity of workflows it can accommodate. Its query functions give teams a precise way to interrogate project data, and its breadth of integrations means it can be connected to most standard development and collaboration toolchains.

A significant consideration for any Jira deployment is the degree of upfront decision-making it requires. Because the platform places few constraints on how it is configured, teams must establish their own conventions around workflow design, issue hierarchy, naming and permissions before adoption begins in earnest. Without that groundwork, it is straightforward for individual teams to configure Jira in incompatible ways, making cross-team reporting difficult and creating inconsistencies that become harder to unpick over time. Organisations that treat Jira as something to be governed, rather than simply installed, tend to get considerably more out of it.

The principal technical limitation is its dependence on the wider Atlassian ecosystem. Advanced portfolio planning, capacity forecasting and cross-programme dependency management typically require either a higher-tier plan or additional tooling. Advanced Roadmaps (now called Plans) are available natively within Jira Premium and Enterprise, providing cross-team timeline planning and scenario modelling. For capacity planning, budget tracking and timesheet management, many organisations turn to third-party Marketplace tools such as Tempo. Teams evaluating Jira should factor in both the cost of the appropriate licence tier and any supplementary tooling they are likely to need.

Where to Go From Here

Jira has grown considerably from the issue tracker it was when first released in 2002, and is now used by over 300,000 organisations worldwide. Its capabilities are broad, and its configurability makes it adaptable to a wide range of team structures and workflows. That same configurability, however, means the platform rewards investment in setup and ongoing administration, and organisations should assess whether they have the resources to realise that potential before committing. For those looking to explore further, Atlassian's official guides, its wider documentation, the support portal, the Atlassian Community and the developer documentation are useful starting points, and there are courses from an independent provider too.

Technology retail in North America: Five retailers worth knowing

2nd March 2026

The technology retail landscape in North America is shaped by a tension between convenience, expertise and competitive pricing. From sprawling big-box chains to specialist online stores, the sector contains a varied mix of established names and niche operators, each competing for customers who expect rapid delivery, accurate product information and reliable after-sales support. Five retailers stand out for the distinctly different approaches they take to serving that audience: Tech-America, Best Buy, Newegg, PC-Canada and Micro Center.

Tech-America

Tech-America presents itself as a direct-to-consumer online retailer covering a broad range of electronics and computer components. Its selling points include a large inventory and an emphasis on prompt shipping, with the site targeting a mix of hobbyists and small businesses. Questions have been raised about the company's legitimacy, with multiple consumer forums and review aggregators reflecting mixed opinions on its reliability and operational structure. Prospective customers are advised to research the retailer carefully before committing to a purchase, as third-party assessments remain inconclusive.

Best Buy

Best Buy is one of the most recognisable names in North American consumer electronics retail, and its history stretches back further than many of its customers might expect. The company was founded by Richard M. Schulze and James Wheeler in 1966 as an audio speciality store called Sound of Music, operating its first location in St. Paul, Minnesota. It was rebranded as Best Buy in 1983, at which point it had seven stores and around $10 million in annual sales, and it subsequently expanded its product range well beyond audio equipment to become a broad-based electronics retailer.

Today, Best Buy operates over 1,000 stores across the United States and Canada, combining physical retail with online sales in a model that the company describes as omnichannel. A key differentiator is its Geek Squad service division, which provides technical support, repairs and installation services across all store locations, and which has become a recognisable brand in its own right since being acquired by Best Buy in 2002. That combination of an extensive retail footprint and in-house technical services has allowed the company to retain a large and varied customer base that includes households, businesses and educational institutions.

Newegg

Newegg occupies a distinct position as a specialist online retailer focused primarily on computer hardware and components. Founded in 2001 by Fred Chang, a Taiwanese-American entrepreneur who had previously run ABS Computer Technologies, the company was established in California and initially targeted PC builders and enthusiasts who wanted detailed product information and user reviews alongside their purchases. The name itself was chosen to suggest new hope for e-commerce at a time when many dot-com businesses were struggling to survive.

Newegg operates a hybrid model that combines first-party sales with a marketplace for third-party sellers, expanding available inventory without the company needing to hold all stock itself. This approach has attracted a loyal community of technically minded buyers who value the depth of product listings on the platform. However, the marketplace model also introduces variability in seller quality, and some customers have noted inconsistencies in their experiences depending on which seller fulfilled their order. Newegg has been publicly listed on the Nasdaq under the ticker NEGG since May 2021, following a reverse merger with a Chinese special-purpose acquisition company.

PC-Canada

PC-Canada is a Waterloo, Ontario-based retailer that has served both individual consumers and business customers since its founding in 2003, making it one of Canada's longer-standing e-commerce technology retailers. The company offers a broad catalogue of IT products and components, and it holds an A+ rating from the Better Business Bureau, having been accredited since December 2015. Customer reviews present a more mixed picture, with some praising competitive pricing and fast shipping, while others have reported issues around order fulfilment and pricing changes after purchase. That gap between institutional accreditation and individual customer experience is a useful reminder that smaller regional retailers can face difficulties scaling consistently as their customer base grows.

Micro Center

Micro Center has taken a path that runs counter to the broader shift towards online-only retail, continuing to invest in physical stores and in-person expertise. The company currently operates 30 locations across the United States, with recent openings in Charlotte, Miami and Santa Clara adding to its footprint. Each store carries over 25,000 products and is staffed by associates who are recruited specifically for their technical knowledge, rather than general retail experience.

A notable feature of every Micro Center location is the Knowledge Bar, a dedicated in-store support desk offering diagnostics, repairs, authorised servicing for brands including Apple and Dell, and consultations for customers building their own PCs. The concept was introduced in 2007 and has since become central to the company's identity. Micro Center was ranked the number one tech retailer in the United States by PC Magazine in 2024, a recognition that reflects the premium its customers place on accessible, knowledgeable in-store service.

Closing Remarks

Each of these five retailers demonstrates a different answer to the same underlying question: what do technology buyers actually value? Tech-America and Newegg lean on the convenience and inventory breadth that online retail makes possible, while Best Buy and Micro Center make the case that physical presence and expert service remain compelling. PC-Canada illustrates the particular pressures facing regional players operating in a market where large international competitors set the expectations for pricing and delivery speed. As consumer habits continue to evolve, the retailers that balance adaptability with a clearly defined offering are likely to be the ones that endure.

The Fediverse: A decentralised alternative to centralised social media

27th February 2026

The Fediverse is not a single platform but a network of interconnected services, each operating independently yet communicating through shared open standards. Rather than centralising power in one company or product, it distributes control across thousands of independently run servers, known as instances, that nonetheless talk to one another through a common language. That language has a longer history than most users realise.

Those with long memories of the federated web may recall Identica, one of the earliest federated microblogging services, which ran on the OStatus protocol. In December 2012, Identica transitioned to new underlying software called pump.io, which took a different architectural approach: rather than relying on OStatus, it used JSON-LD and a REST-based inbox system designed to handle general activity streams rather than simple status updates. Pump.io itself was eventually discontinued, but it was not a dead end. Its data model and design decisions fed directly into the development of what became ActivityPub, the protocol that now underpins the modern Fediverse.

ActivityPub became a W3C Recommendation in January 2018, formalising an approach to federated social networking that Identica and pump.io had helped to pioneer. Through this standard, users on different platforms can follow, reply to and interact with one another across server and software boundaries, in much the same way that email allows a Gmail user to correspond with someone on Outlook.
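The activities that federated servers exchange are JSON-LD documents; a minimal Follow activity, of the kind one server might deliver to another's inbox, can be sketched as below (the actor and object URLs are illustrative).

```python
import json

# A minimal ActivityPub "Follow" activity. The receiving server would
# typically respond with an "Accept" activity addressed back to the actor.
follow = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "id": "https://social.example/activities/1",
    "type": "Follow",
    "actor": "https://social.example/users/alice",
    "object": "https://video.example/accounts/bob",
}

print(json.dumps(follow, indent=2))
```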

Microblogging at the Core

At the heart of the Fediverse is a cluster of microblogging platforms, each with its own character and community. Mastodon, the most widely used, mirrors much of what Twitter once offered but with a firm emphasis on community governance and decentralised ownership. Its 500-character limit and the absence of algorithmic ranking set it apart from the mainstream.

Misskey, which enjoys particular popularity in Japan, introduces custom emoji reactions and extensive rich-text formatting, appealing to users who want greater expressiveness than Mastodon provides. Pleroma offers a lightweight alternative with a default character limit of 5,000, making it more suitable for longer posts, while Akkoma (a fork of Pleroma) adds features such as a bubble timeline, local-only posting and improved moderation tooling. Both are well regarded among technically minded administrators who want to run their own servers without the resource demands that Mastodon can place on smaller machines.

Beyond Microblogging

The Fediverse extends well beyond short-form text. PeerTube provides a decentralised video-hosting platform comparable in purpose to YouTube, using peer-to-peer technology so that popular videos gain additional bandwidth as viewership grows. Pixelfed fulfils a similar role for photo sharing, operating as an open and federated counterpart to Instagram, with a focus on privacy and user control.

For forum-style discussion, Lemmy takes the role of a decentralised Reddit, built around threaded community posts, voting and link aggregation. Event coordination is handled by Mobilizon, which provides a federated alternative to Facebook Events and allows communities to publish, share and manage gatherings without relying on any proprietary platform.

Audio is covered by Funkwhale, a federated platform for uploading and sharing music, podcasts and other audio content. It operates through ActivityPub and functions as a community-driven alternative to services such as Spotify, Bandcamp and SoundCloud, allowing instance operators to share their libraries with one another across the network.

Each of these services runs independently on its own set of instances but remains interconnected across the wider Fediverse through ActivityPub, meaning a Mastodon user can, for instance, follow a PeerTube channel and see new video posts appear directly in their timeline.

Social Networking and Multi-Protocol Platforms

Some Fediverse platforms aim less at replicating a single mainstream service and more at providing a broad social networking experience. Friendica is perhaps the most ambitious of these, supporting not only ActivityPub but also the diaspora* and OStatus protocols, as well as RSS feed ingestion and two-way email contacts. The result is a platform that can serve as a hub for a user's entire federated social life, pulling in posts from Mastodon, Pixelfed, Lemmy and other networks into a single, unified timeline. Its Facebook-like interface, with threaded comments and no character limit, makes it a natural fit for users who found Twitter-style microblogging too constraining.

Hubzilla takes a similarly expansive approach, but pushes further still, incorporating file hosting, photo sharing, a calendar and website publishing alongside its social networking features. Its distinguishing characteristic is nomadic identity, a system by which a user's account can exist simultaneously across multiple servers and be migrated or cloned without loss of data or followers. Hubzilla federates over ActivityPub, the diaspora* protocol, OStatus and its own native Zot protocol, giving it an unusually wide reach across the federated web.

Having launched in 2010, diaspora* is one of the earliest decentralised social networks. It operates through its own diaspora* protocol rather than ActivityPub, making it technically distinct from much of the rest of the Fediverse, though it can still communicate with platforms such as Friendica and Hubzilla that support both standards. Its central design principle is user ownership of data: posts are stored on the user's chosen server (called a pod) and the platform uses an Aspects system to let users control precisely which groups of contacts see any given post, offering fine-grained privacy controls that most other Fediverse platforms do not match.

Infrastructure and Discovery

Navigating the Fediverse is made easier by a range of supporting tools and directories. Fedi.Directory catalogues interesting and active accounts across the network, helping newcomers find communities aligned with their interests. Fediverse.Party offers an overview of the many software projects that make up the ecosystem, acting as a starting point for those deciding which platform or instance to join.

For bloggers who already maintain an RSS feed, tools such as Mastofeed can automatically publish new posts to a Mastodon account, bringing older publishing workflows into the federated network. Those who prefer more control over what gets posted and how it is worded may find a better fit in toot, a command-line and terminal user interface client for Mastodon written in Python. Because toot accepts piped input, it can be combined with a script or an AI model to generate a short, readable announcement for each new article, complete with a link, and post it directly to Mastodon without any manual intervention. This kind of bridging reflects the Fediverse's broader philosophy: existing content and communities should be able to participate without requiring users to abandon what already works for them.
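A minimal version of that pipeline might compose the announcement in a script and hand it to toot on standard input; the title, URL and script name here are placeholders, and the actual posting line is left commented out.

```python
def compose_announcement(title: str, url: str, limit: int = 500) -> str:
    """Build a short, Mastodon-ready announcement within a character limit."""
    message = f"New on the blog: {title}\n{url}"
    return message if len(message) <= limit else message[: limit - 1] + "…"


print(compose_announcement(
    "Sending email reliably from Linux web servers",
    "https://example.com/sending-email-reliably",
))

# To post, pipe the output into toot once it is configured, e.g.:
#   python announce.py | toot post
```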

Community Governance and Its Challenges

The challenge of moderating online communities is not new. Website forums, which dominated community discussion through the late 1990s and 2000s, often became ungovernable at scale, with administrators struggling to maintain civility against a tide of bad-faith participation that no small volunteer team could reliably contain. Centralised platforms such as Twitter and Facebook presented themselves as a solution, with algorithmic moderation and corporate policy appearing to offer consistency at scale. That promise has not aged well. Discourse on those platforms has deteriorated markedly, and the tools that were supposed to manage it have proved either ineffective or applied so inconsistently as to erode trust in the platforms themselves.

The Fediverse's instance-based model sits in an instructive position relative to both of those histories. Like the old forum model, each instance is self-governing, with administrators setting their own rules and moderating their own communities. Unlike a standalone forum, however, an instance has a tool that forum administrators never possessed: the ability to defederate, cutting off contact with a badly behaved community entirely rather than having to manage it directly. The European Commission operates its own official Mastodon instance, as does the European Data Protection Supervisor, reflecting a growing interest among public institutions in this kind of platform independence and controlled self-governance.

The model is not without its own difficulties. With no central authority, ensuring consistent moderation across the network is impossible by design. Harmful content that might be removed swiftly on a centralised platform can persist on instances that choose not to act, and defederation, while effective, is a blunt instrument that severs all contact rather than addressing specific behaviour. User experience also varies considerably from one instance to the next, which can make the Fediverse feel fragmented to those accustomed to the uniformity of mainstream social media. Whether that fragmentation is a flaw or a feature depends largely on what one values more: consistency or autonomy.

A Democratic Model for the Open Web

What unifies these varied platforms, tools and governance approaches is a shared commitment to an internet where users are participants rather than products. The Fediverse offers no advertising and no algorithmic manipulation of feeds, and the open-source nature of most of its software means that anyone with the technical means can inspect, fork or improve the code. The network's future will depend on continued developer investment, user education and the willingness of new arrivals to engage with an ecosystem that is deliberately more complex than a single sign-up page.

For now, the Fediverse stands as a working demonstration that a more democratic and user-directed model of online social life is achievable. Whether through microblogging on Mastodon, sharing videos on PeerTube, discovering music on Funkwhale, coordinating events through Mobilizon or managing a rich personal social hub on Friendica, it offers something that centralised platforms structurally cannot: the ability for communities to own their own corner of the internet.

Generating commit messages and summarising text locally with Ollama running on Linux

26th February 2026

For generating GitHub commit messages, I use aicommit, which I have installed using Homebrew on macOS and on Linux. By default, this needs access to the OpenAI API using a token for identification. However, I noticed that API usage is heavier than when I summarise articles using Python scripting. In the interest of cutting the load and the associated cost, I began to look at locally run LLM options. Here, I discuss things mainly from a Linux point of view, particularly since I use Linux Mint for daily work.

Hardware Considerations

That led me to Ollama, which also has a local API in the mould of what you get from OpenAI, along with a Python interface that has plenty of uses. This experimentation began on an iMac, where macOS can make all the available memory accessible to a model, offering flexibility when it comes to model selection. On a desktop PC or workstation, the memory architecture is different: a model needs to fit within the graphics card's VRAM for speedy processing. Should the load fall on the CPU, the lag in performance cannot be missed. How a loaded model is split between GPU and CPU can be checked with this command:

ollama ps

That discovery was made at the end of 2024, prompting me to do a system upgrade that only partially addressed the need, even if a quieter, cooler case was part of the new machine. Before that, I had tried a new Nvidia GeForce RTX 4060 graphics card with 8 GB of VRAM. That continued in use, though the limited onboard memory meant that larger models overflowed into system memory, bringing the CPU into play and substantially slowing processing. Though there are some reasonable models like llama3.1:8b that fit within 8 GB of VRAM, limitations became apparent with use; hallucinations were among them, and they afflicted the alternative options that I tried as well.

That led me to upgrade to a GeForce RTX 5060 Ti with 16 GB of VRAM, which meant that larger models could be used. Two of these have become my choices for different tasks: gpt-oss for GitHub commit messages and qwen3:14b for summarising blocks of text (falling back to Anthropic's API when the output is not to my expectations, not that that happens often). Both of these fit within the available memory, allowing for GPU processing without any CPU involvement.

Generating Commit Messages

To use aicommit with Ollama, the command needs to be changed to use the Ollama API, and it is better to define a function like this:

run_aicommit() { env OPENAI_BASE_URL="http://localhost:11434/v1" OPENAI_API_KEY="ollama" AICOMMIT_MODEL="gpt-oss" /home/linuxbrew/.linuxbrew/bin/aicommit "$@"; }

This avoids having to alter the values of any global variables: the env command sets up an ephemeral environment within which these are available. Using env here may not be strictly essential, but it makes the intent clearer and avoids clashing with any globally set variables of the same names. The environment variable names should be self-explanatory. Since aicommit was added using Homebrew, the full path is given to avoid any ambiguity for the shell. At the end, "$@" passes along any parameters or additions like 2>/dev/null, which redirects stderr so that it does not appear when the function is called. While you need to watch the volume of what is passed to it, this approach works well and mostly produces sensible commit messages.

Text Summarisation

For text generation with a Python script, using streaming helps to keep everything in hand. Here is the core code:

import re

import ollama

chunks = []
for part in ollama.chat(
    model=model,
    messages=[{'role': 'user', 'content': prompt}],
    options={'num_ctx': context, 'temperature': 0.2, 'top_p': 0.9},
    stream=True,
):
    chunks.append(part['message']['content'])

summary = re.sub(r'\s+', ' ', ''.join(chunks)).strip()

Above, a for loop iterates over each streamed chunk as it arrives, extracting the text content from part['message']['content'] and appending it to the chunks list. Once streaming is finished, ''.join(chunks) reassembles all the pieces into a single string. The re.sub(r'\s+', ' ', ...) call then collapses any intermediate sequences of whitespace characters (newlines, tabs, multiple spaces) down to a single space, and .strip() removes any leading or trailing whitespace, storing the cleaned result in summary.

Within the loop itself, the ollama.chat() call initiates an interaction with the specified model (defined as qwen3:14b earlier in the code), passing the user's prompt as a message. A few parameters control this: num_ctx sets the context window size, with 4096 a sensible limit here to ensure that everything remains on the GPU. A temperature of 0.2 keeps the output focused and largely deterministic, while a top_p value of 0.9 applies nucleus sampling to filter the token pool. Setting stream=True means the model returns its response incrementally as a series of chunks, rather than waiting until generation is complete.

A Beneficial Outcome

Most of the time, local LLM usage suffices for my needs, reserving remote models from the likes of OpenAI or Anthropic for when they add real value. The hardware outlay remains a sizeable investment, though, even if it adds significantly to one's personal privacy. For a long time, graphics cards did not interest me beyond basic functions like desktop display, making this a change from how I viewed such devices before the advent of generative AI.

A survey of commenting systems for static websites

25th February 2026

This piece grew out of a practical problem. When building a Hugo website, I went looking for a way to add reader comments. The remotely hosted options I found were either subscription-based or visually intrusive in ways that clashed with the site design. Moving to the self-hosted alternatives brought a different set of difficulties: setup proved neither straightforward nor reliably successful, and after some time I concluded that going without comments was the more sensible outcome.

That experience is, it turns out, a common one. The commenting problem for static sites has no clean solution, and the landscape of available tools is wide enough to be disorienting. What follows is a survey of what is currently out there, covering federated, hosted and self-hosted approaches, so that others facing the same decision can at least make an informed choice about where to invest their time.

Federated Options

At one end of the spectrum sit the federated solutions, which take the most principled approach to data ownership. Federated systems such as Cactus Comments stand out by building on the Matrix open standard, a decentralised protocol for real-time communication governed by the Matrix.org Foundation. Because comments exist as rooms on the Matrix network, they are not siloed within any single server, and users can engage with discussions using an existing Matrix account on any compatible home server, or follow threads using any Matrix client of their choosing. Site owners, meanwhile, retain the flexibility to rely on the public Cactus Comments service or to run their own Matrix home server, avoiding third-party tracking and centralised control alike. The web client is LGPLv3 licensed and the backend service is AGPLv3 licensed, making the entire stack free and open source.

Solutions for Publishers and Media Outlets

For publishers and media organisations, Coral by Vox Media offers a well-established and feature-rich alternative. Originally founded in 2014 as a collaboration between the Mozilla Foundation, The New York Times and The Washington Post, with funding from the Knight Foundation, it moved to Vox Media in 2019 and was released as open-source software. It provides advanced moderation tools supported by AI technology, real-time comment alerts and in-depth customisation through its GraphQL API. Its capacity to integrate with existing user authentication systems makes it a compelling choice for organisations that wish to maintain editorial control without sacrificing community engagement. Coral is currently deployed across 30 countries and in 23 languages, a breadth of adoption that reflects its standing among publishers of all sizes. The team has recently expanded the product to include a live Q&A tool alongside the core commenting experience, and the open-source codebase means that organisations with the technical resources can self-host the entire platform.

A strong alternative for publishers who handle large discussion volumes is GraphComment, a hosted platform developed by the French company Semiologic. It takes a social-network-inspired approach, offering threaded discussions with real-time updates, relevance-based sorting, a reputation-based voting system that enables the community to assist with moderation, and a proprietary Bubble Flow interface that makes individual threads indexable by search engines. All data are stored on servers based in France, which will appeal to publishers with European data-residency requirements. Its client list includes Le Monde, France Info and Les Echos, giving it considerable credibility in the media sector.

Hosted Solutions: Ease of Setup and Performance

Hosted solutions cater to those who prioritise simplicity and page performance above all else. ReplyBox exemplifies this approach, describing itself as 15 times lighter than Disqus, with a design focused on clean aesthetics and fast page loads. It supports Markdown formatting, nested replies, comment upvotes, email notifications and social login via Google, and it comes with spam filtering through Akismet. A 14-day free trial is available with no payment required, and a WordPress plugin is offered for those already on that platform.

Remarkbox takes a similarly restrained approach. Founded in 2014 by Russell Ballestrini after he moved his own blog to a static site and found existing solutions too slow or ad-laden, it is open source, carries no advertising and performs no user tracking. Readers can leave comments without creating an account, using email verification to confirm their identity, and the platform operates on a pay-what-you-can basis that keeps it accessible to smaller sites. It supports Markdown with real-time comment previews and deeply nested replies, and its developer notes that comments that are served through the platform contribute to SEO by making user-generated content indexable by search engines.

The choice between hosted and self-hosted systems often hinges on the trade-off between convenience and control. Staticman was a notable option in this space, acting as a Node.js bridge that committed comment submissions as data files directly to a GitHub or GitLab repository. However, its website is no longer accessible, and the project has been effectively abandoned since around 2020, with its maintainers publicly confirming in early 2024 that neither they nor the original author have been active on it for some time and that no volunteer has stepped forward to take it over. Those with a need for similar functionality are directed by the project's own contributors towards Cloudflare Workers-based alternatives. Utterances remains a viable option in this category, using GitHub Issues as its backend so that all comment data stays within a repository the site owner already controls. It requires some technical setup, but rewards that effort with complete data ownership and no external dependencies.
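To give a flavour of how light that setup is, the Utterances embed amounts to a single script tag added to a page template; the repo value below is a placeholder for the site owner's own public repository:

```html
<script src="https://utteranc.es/client.js"
        repo="owner/repository"
        issue-term="pathname"
        theme="github-light"
        crossorigin="anonymous"
        async>
</script>
```

The issue-term setting controls how comments map onto GitHub Issues, here using the page's path as the key.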

Open-Source, Self-Hosted Options

For developers who value privacy and data sovereignty above the convenience of a hosted service, open-source and self-hosted options present a natural fit. Remark42 is an actively maintained project that supports threaded comments, social login, moderation tools and Telegram or email notifications. Written in Python and backed by a SQLite database, Isso has been available since 2013 and offers a straightforward deployment with a small resource footprint, together with anonymous commenting that requires no third-party authentication. Both projects reflect a broader preference among privacy-conscious developers for keeping comment data entirely under their own roof.
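As an indication of how small Isso's footprint is, a minimal server configuration is a short INI file; the paths and domains below are placeholders to be replaced with your own:

```ini
[general]
; SQLite database in which all comments are stored
dbpath = /var/lib/isso/comments.db
; the website that Isso serves comments for
host = https://example.com/

[server]
listen = http://localhost:8080/
```

A reverse proxy in front of the listen address and a matching script tag in the site template complete the deployment.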

The Case of Disqus

Valued for its ease of integration and its social features, Disqus remains one of the most widely recognised hosted commenting platforms. However, it comes with well-documented drawbacks. Disqus operates as both a commenting service and a marketing and data company, collecting browsing data via tracking scripts and sharing it with third-party advertising partners. In 2021, the Norwegian Data Protection Authority notified Disqus of its intention to issue an administrative fine of approximately 2.5 million euros for processing user data without valid consent under the General Data Protection Regulation. However, following Disqus's response, the authority's final decision in 2024 was to issue a formal reprimand rather than impose the financial penalty. The proceedings nonetheless drew renewed attention to the privacy implications of relying on the platform. Site owners who prefer the convenience of a hosted service without those trade-offs may find more suitable alternatives in Hyvor Talk or CommentBox, both of which are designed around privacy-first principles and minimal setup.

Bridging the Gap: Talkyard and Discourse

Functioning as both a commenting system and a full community forum, Talkyard occupies an interesting position in the landscape. It can be embedded on a blog in the same manner as a traditional commenting widget, yet it also supports standalone discussion boards, making it a viable option for content creators who anticipate their audience outgrowing a simple comment section.

Discourse operates on a similar principle but at greater scale, providing a fully featured forum platform that can be embedded as a comment section on external pages. Co-founded by Jeff Atwood (also a co-founder of Stack Overflow), Robin Ward and Sam Saffron, it is an open-source project whose server side is built on Ruby on Rails with a PostgreSQL database and Redis cache, while the client side uses Ember.js. Both Talkyard and Discourse are available as hosted services or as self-hosted installations, and both carry open-source codebases for those who wish to inspect or extend them.

Self-Hosting Discourse With Cloudflare CDN

For those who wish to take the self-hosted route, Discourse distributes an official Docker image that considerably simplifies deployment. The process begins by cloning the official repository into /var/discourse and running the bundled setup tool, which prompts for a hostname, administrator email address and SMTP credentials. A Linux server with at least 2 GB of memory is required, and a SWAP partition should be enabled on machines with only 1 GB.

Pairing a self-hosted instance with Cloudflare as a global CDN is a practical choice, as Cloudflare provides CDN acceleration, DNS management and DDoS mitigation, with a free tier that suits most community deployments. When configuring SSL, the recommended approach is to select Full mode in the Cloudflare SSL/TLS dashboard and generate an origin certificate using the RSA key type for maximum compatibility. That certificate is then placed in /var/discourse/shared/standalone/ssl/, and the relevant Cloudflare and SSL templates are introduced into Discourse's app.yml configuration file.

One important point during initial DNS setup is to leave the Cloudflare proxy status set to DNS only until the Discourse configuration is complete and verified, switching it to Proxied only afterwards to avoid redirect errors during first deployment. Email setup is among the more demanding aspects of running Discourse, as the platform depends on it for user authentication and notifications. The notification_email setting and the disable_emails option both require attention after a fresh install or a migration restore. Once configuration is finalised, running ./launcher rebuild app from the /var/discourse directory completes the build, typically within ten minutes.

Plugins can be added at any time by specifying their Git repository URLs in the hooks section of app.yml and triggering a rebuild. Discourse creates weekly backups automatically, storing them locally under /var/discourse/shared/standalone/backups, and these can be synchronised offsite via rsync or uploaded automatically to Amazon S3 if credentials are configured in the admin panel.
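By way of illustration, declaring a plugin means listing its repository under the hooks section of app.yml, along these lines (the plugin shown is docker_manager, which ships with the official image; other plugins are added as further git clone lines):

```yaml
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
```

After editing the file, ./launcher rebuild app applies the change.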

At a Glance

Solution | Type | Best For
Cactus Comments | Federated, open source | Privacy-centric sites
Coral | Open source, hosted or self-hosted | Publishers and newsrooms
GraphComment | Hosted | Enhanced engagement and SEO
ReplyBox | Hosted | Simple static sites
Remarkbox | Hosted, optional self-host | Speed and simplicity
Utterances | Repository-backed | Developer-owned data
Remark42 | Self-hosted, open source | Privacy and control
Isso | Self-hosted, open source | Minimal footprint
Hyvor Talk | Hosted | Privacy-focused ease of use
CommentBox | Hosted | Clean design, minimal setup
Talkyard | Hosted or self-hosted | Comments and forums combined
Discourse | Hosted or self-hosted | Rich discussion communities
Disqus | Hosted | Ease of integration (privacy caveats apply)

Closing Thoughts

None of the options surveyed here is without compromise. The hosted services ask you to accept some degree of cost, design constraint or data trade-off. The self-hosted and repository-backed tools demand technical time that can outweigh the benefit for a small or personal site. The federated approach is principled but asks readers to have, or create, a Matrix account before they can participate. It is entirely reasonable to weigh all of that and, as I did, conclude that going without comments is the right call for now. The landscape does shift, and a solution that is cumbersome today may become more accessible as these projects mature. In the meantime, knowing what exists and where the friction lies is a reasonable place to start.

The Open Worldwide Application Security Project: A cornerstone of digital safety in an age of evolving cybersecurity threats

24th February 2026

When Mark Curphey registered the owasp.org domain and announced the project on a security mailing list on the 9th of September 2001, there was no particular reason to expect that it would become one of the defining frameworks in the world of application security. Yet, OWASP, originally the Open Web Application Security Project, has done exactly that, growing from an informal community into a globally recognised nonprofit foundation that shapes how developers, security professionals and businesses think about the security of software. In February 2023, the board voted to update the name to the Open Worldwide Application Security Project, a change that better reflects its modern scope, which now extends beyond web applications to cover IoT, APIs and software security more broadly.

At its heart, OWASP operates on a straightforward principle: knowledge about software security should be free and openly accessible to everyone. The foundation became incorporated as a United States 501(c)(3) nonprofit charity on the 21st of April 2004, when Jeff Williams and Dave Wichers formalised the legal structure in Delaware. What began as an informal mailing list community grew into one of the most trusted independent voices in application security, underpinned by a community-driven model in which volunteers and corporate supporters alike contribute to a shared vision.

The OWASP Top 10

Of all OWASP's contributions, the OWASP Top 10 remains its most widely cited publication. First released in 2003, it is a standard awareness document representing broad consensus among security experts about the most critical risks facing web applications. The list is updated periodically, with a 2025 edition now published, following the 2021 edition.

The 2021 edition reorganised a number of longstanding categories to reflect how the threat landscape has shifted. Broken access control rose to the top position, reflecting its presence in 94 per cent of tested applications, while injection (which encompasses SQL injection and cross-site scripting, among others) fell to third place. Cryptographic failures, previously listed as sensitive data exposure, took second place. By organising risks into categories rather than exhaustive lists of individual vulnerabilities, the Top 10 provides a practical starting point for prioritising security efforts, and it is widely referenced in compliance frameworks and security policies as a baseline. It is, however, designed to be the beginning of a conversation about security rather than the final word.

Projects and Tools

Beyond the Top 10, OWASP maintains a substantial portfolio of open-source projects spanning tools, documentation and standards. Among the most widely used is OWASP ZAP (Zed Attack Proxy), a dynamic application security testing tool that helps developers and security professionals identify vulnerabilities in web applications. Originally created in 2010 by Simon Bennetts, ZAP operates as a proxy between a tester's browser and the target application, allowing it to intercept, inspect and manipulate HTTP traffic. It supports both passive scanning, which observes traffic without modifying it, and active scanning, which simulates real attacks against targets for which the tester has explicit authorisation.

The OWASP Testing Guide is another widely consulted resource, offering a comprehensive methodology for penetration testing web applications. The OWASP API Security Project addresses the distinct risks that face APIs, which have become an increasingly prominent attack surface, and OWASP also maintains a curated directory of API security tools for those working in this area. For teams managing web application firewalls, the OWASP ModSecurity Core Rule Set provides guidance on handling false positives, which is one of the more practically demanding aspects of deploying rule-based defences. OWASP SEDATED, a more specialised project, focuses on preventing sensitive data from being committed to source code repositories, addressing a problem that continues to affect development teams of all sizes. Projects are categorised by their maturity and quality, allowing users to distinguish between stable, production-ready tools and those that are still in active development, and this tiered approach helps organisations make informed decisions about which tools are appropriate for their needs.

Influence on Industry Practice

The reach of OWASP's guidance is considerable. Security teams use its materials to structure risk assessments and threat modelling exercises, while developers integrate its recommendations into code reviews and secure coding training. Auditors and regulators frequently reference OWASP standards during compliance checks, creating a shared vocabulary that helps bridge the gap between technical staff and leadership. This alignment has done much to normalise application security as a core part of the software development lifecycle, rather than a task bolted on after the fact.

OWASP's influence also extends into regulatory and standards environments. Frameworks such as PCI DSS reference the Top 10 as part of their requirements for web application security, lending it a degree of formal weight that few community-produced documents achieve. That said, OWASP is not a regulatory body and has no enforcement powers of its own.

Education and Community

Education remains a central part of OWASP's mission. The foundation runs hundreds of local chapters across the globe, providing forums for knowledge exchange at a local level, as well as global conferences such as Global AppSec that bring together practitioners from across the industry. All of OWASP's projects, tools, documentation and chapter activities are free and open to anyone with an interest in improving application security. This open model lowers barriers for those starting out in the field and fosters collaboration across academia, industry and open-source communities, creating an environment where expertise circulates freely and innovation is encouraged.

Limitations and Appropriate Use

OWASP is not without its limitations, and it is worth acknowledging these clearly. Because it is not a regulatory body, it cannot enforce compliance, and the quality of individual projects can vary considerably. The Top 10, in particular, is sometimes misread as a comprehensive checklist that, once ticked off, certifies an application as secure. It is not. It is an awareness document designed to highlight the most prevalent categories of risk, not to enumerate every possible vulnerability. Treating it as a complete audit framework rather than a starting point for more in-depth analysis is one of the most common mistakes organisations make when engaging with OWASP materials.

The OWASP Top 10 for Large Language Model Applications

As artificial intelligence has moved from research curiosity to production deployment at scale, OWASP has responded with a dedicated framework for the security risks unique to large language models. The OWASP Top 10 for Large Language Model Applications, maintained under the broader OWASP GenAI Security Project, was first published in 2023 as a community-driven effort to document vulnerabilities specific to LLM-powered applications. A 2025 edition has since been released, reflecting how quickly both the technology and the associated threat landscape have evolved.

The list shares the same philosophy as the web application Top 10, using categories to frame risk rather than enumerating every individual attack variant. Its 2025 edition identifies prompt injection as the leading concern, a class of vulnerability in which crafted inputs cause a model to behave in unintended ways, whether by ignoring instructions, leaking sensitive information or performing unauthorised actions. Other entries cover sensitive information disclosure, supply chain risks (including vulnerable or malicious components sourced from model repositories), data and model poisoning, improper output handling, excessive agency (where an LLM is granted more autonomy or permissions than its task requires) and unbounded consumption, which addresses the risk of uncontrolled resource usage leading to service disruption or unexpected cost. Two categories introduced in the 2025 edition, system prompt leakage and vector and embedding weaknesses, reflect lessons learned from real-world RAG deployments, where retrieval-augmented pipelines have introduced new attack surfaces that did not exist in earlier LLM architectures.

The LLM Top 10 is distinct from the web application Top 10 in an important respect: because the threat landscape for AI applications is evolving considerably faster than that of traditional web software, the list is updated more frequently and carries a higher degree of uncertainty about what constitutes best practice. It is best treated as a living reference rather than a settled standard, and organisations deploying LLM-powered applications would do well to monitor the GenAI Security Project's ongoing work on agentic AI security, which addresses the additional risks that arise when models are given the ability to take real-world actions autonomously.

An Ongoing Work

In an era defined by rapid technological change and an ever-expanding threat landscape, OWASP continues to occupy a distinctive and valuable position in the world of application security. Its freely available standards, practical tools and community-driven approach have made it an indispensable reference point for organisations and individuals working to build safer software. The foundation's work is a practical demonstration that security need not be a competitive advantage hoarded by a few, but a collective responsibility shared across the entire industry.

For developers, security engineers and organisations navigating the challenges of modern software development, OWASP represents both a toolkit and a philosophy: that improving the security of software is work best done together, openly and without barriers.

Blocking thin scrollbar styles in Thunderbird on Linux Mint

23rd February 2026

When you get a long email, you need to see your reading progress as you work your way through it. The last thing that you need, then, is to have someone specifying narrow scrollbars in the message HTML like this:

<html style="scrollbar-width: thin;">

This is what I found with an email newsletter on AI governance sent to me via Substack. Thankfully, that behaviour can be disabled in Thunderbird. While my experience was on Linux Mint, the same fix may work elsewhere. The first step is to navigate the menus to where you can alter the settings: "Hamburger Menu" > Settings > Scroll to the bottom > Click on the Config Editor button.

In the screen that opens, enter layout.css.scrollbar-width-thin.disabled in the search and press the return key. Should you get an entry (and I did), click on the arrows button to the right to change the default value of False to True. Should your search be fruitless, right-click anywhere to get a context menu where you can click on New and then Boolean to create an entry for layout.css.scrollbar-width-thin.disabled, which you then set to True. Whichever way you have accomplished the task, restarting Thunderbird ensures that the setting applies.

If the default scrollbar thickness in Thunderbird is not to your liking, returning to the Config Editor will address that. Here, you need to search for or create widget.non-native-theme.scrollbar.size.override. Since this takes a numeric value, pick the appropriate type if you are creating a new entry. Creating one was not needed in my case, so I pressed the edit button, chose a larger number and clicked on the tick mark button to confirm it. The effect was seen straight away, and all was how I wanted it.
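For anyone who prefers editing files to clicking through the Config Editor, Thunderbird, like Firefox, reads a user.js file from the profile directory at startup and applies any user_pref lines it finds there. What follows is a sketch using the two preference names above; the profile directory name varies per installation, and 12 is only an illustrative pixel size, not a recommendation:

```js
// user.js in the Thunderbird profile directory
// (on Linux, somewhere under ~/.thunderbird/ — the exact folder name
// is generated per profile)

// Ignore scrollbar-width: thin declared in message HTML
user_pref("layout.css.scrollbar-width-thin.disabled", true);

// Override the scrollbar thickness (value in pixels; pick to taste)
user_pref("widget.non-native-theme.scrollbar.size.override", 12);
```

As with the Config Editor route, a restart of Thunderbird is needed before the preferences take effect.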

On the off chance that the above does not work for you, there is one more thing that you can try, and this is specific to Linux. It sends you to the command line, where you issue this command:

gsettings get org.gnome.desktop.interface overlay-scrolling

Should that return a value of true, follow that with this command to change the setting to false:

gsettings set org.gnome.desktop.interface overlay-scrolling false

After that, you need to log off and back on again for the update to take effect. Since I had no need of that step myself, it may be the same for you too.
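Putting the two commands together, here is a small shell sketch that only changes the setting when it is actually enabled. It assumes a GNOME-based desktop where the gsettings schema is available; on anything else, the get call returns nothing and the script leaves things alone:

```shell
#!/bin/sh
# Turn off GNOME's overlay scrollbars only when they are currently enabled.
current=$(gsettings get org.gnome.desktop.interface overlay-scrolling 2>/dev/null)
if [ "$current" = "true" ]; then
    gsettings set org.gnome.desktop.interface overlay-scrolling false
    msg="Overlay scrolling disabled; log off and on again for it to take effect."
else
    msg="Overlay scrolling is not enabled (current value: '$current'), nothing to do."
fi
echo "$msg"
```

Wrapping the change in a check like this makes the script safe to re-run, which is handy if it ends up in a post-install routine.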
