Technology Tales

Notes drawn from experiences in consumer and enterprise technology

Welcome

  • We live in a world where computer software penetrates every life and is embedded in nearly every piece of hardware that we have. Everything is digital now; it goes well beyond the computers and cameras that were the main points of encounter when this website was started.

  • As if that were not enough, we have AI inexorably encroaching on our daily lives. The possibilities are becoming so endless that one needs to be careful not to get lost in it all. Novelty beguiles us now as much as it did when personal computing became widely available decades ago, and again when the internet arrived. Excitement has returned, at least for a while.

  • All of this percolates through what is here, so just dive in to see what you can find!

Configuring RSS Feeds in Grav CMS using Collections

16th February 2026

While the Feed plugin for Grav CMS provides RSS, Atom and JSON feeds for your site content, setting up a feed in Grav differs from other content management systems because it requires explicit configuration rather than automatic discovery. The key to making feeds work lies in understanding how Grav uses collections to organise and present content.

Understanding Collections

In Grav, content lives in a file tree structure. The feed system works through collections, which tell Grav where to find pages and how to organise them. Collections are essential for feeds because they define exactly which content gets syndicated. Without a properly configured collection, your feed will be empty.

Collections use a specific syntax to target pages in different ways:

Collection Type | Syntax | What It Does
All descendants | '@page.descendants': '/blog' | Gathers a folder and all nested subdirectories recursively
Direct children only | '@page.children': '/blog' | Limits collection to immediate children, excluding deeper nesting
By tag | '@taxonomy.tag': 'featured' | Pulls all pages with a specific tag, regardless of location
By category | '@taxonomy.category': 'news' | Pulls all pages in a specific category, regardless of location
Multiple sources | Array of any of the above | Merges content from multiple folders or taxonomies

The @page.descendants syntax works well for blog structures where you might have posts organised by year, month or category within subdirectories. The @page.children syntax suits flatter structures where all posts sit directly under a single parent folder.

Taxonomy-based collections using @taxonomy.tag or @taxonomy.category pull pages based on their metadata rather than folder location. This allows you to create feeds for specific topics regardless of where those pages live in your file structure.
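
To make that concrete, here is a minimal sketch of a tag-driven feed collection in a feed page's front matter; the 'featured' tag is only an example, and the surrounding keys mirror the configuration shown later in this post:

content:
  items:
    '@taxonomy.tag': 'featured'
  order:
    by: date
    dir: desc
  limit: 20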

The real power emerges when combining multiple collection sources. You can specify an array of different folders or taxonomies, and Grav merges them into a single unified collection. This allows you to aggregate content across your entire site structure, pulling from blogs, articles, news sections or any combination that suits your needs.

Understanding Routes

Before configuring your feed, you need to understand how Grav translates folder paths into routes. This matters because you will reference routes in your collection configuration, not physical folder paths. Getting this wrong means your feed will be empty even when your folders contain content.

Grav stores pages in folders under /user/pages/, but these physical paths differ from the routes you use in configuration. Routes are based on folder names without numeric prefixes:

Physical Path | Route in Configuration
/user/pages/03.deliberations/ | /deliberations
/user/pages/writings/ | /writings

The numeric prefix (such as 03.) determines ordering in navigation but does not appear in the route. This means when you create a folder named 03.deliberations, Grav automatically makes it accessible at the /deliberations route. Similarly, a folder without a numeric prefix like writings becomes the /writings route.

When you specify '@page.descendants': '/deliberations' in your collection configuration, Grav knows to look in the 03.deliberations folder and all its subdirectories. The route system abstracts away the physical folder structure, allowing you to reorganise content by changing numeric prefixes without breaking your feed configuration.

Creating Your Feed Configuration

Creating a feed requires a page that defines which content to include. The simplest approach is to create a file at feed/default.md with a structure like this:

---
title: Site Feed
visible: false

content:
  items:
    '@page.descendants': '/blog'
  order:
    by: date
    dir: desc
  limit: 20

feed:
  title: "My Site Feed"
  description: "Latest posts and updates"
---

Configuration Settings Explained

The visible: false setting ensures the feed page does not appear in navigation.

The content section defines your collection. Here, @page.descendants: '/blog' recursively includes all pages under the /blog folder, including any nested within subdirectories. If you only want immediate children, use @page.children: '/blog' instead.

The order section determines how posts appear in your feed. The example uses by: date with dir: desc to show the newest posts first. Ensure all pages in your collection have a date: field in their front matter:

---
title: My Post
date: 16-02-2026 14:30
---

Alternatively, you can order by folder name using by: folder. This sorts pages alphabetically and proves reliable when pages lack dates or when folder structure determines chronology:

order:
  by: folder
  dir: asc

The limit: 20 setting controls how many items appear in the feed.

The feed section provides metadata for the RSS feed itself, including the title and description that feed readers will display.

Combining Multiple Folders

You can pull content from multiple locations into a single feed:

content:
  items:
    - '@page.descendants': '/blog'
    - '@page.descendants': '/articles'
    - '@page.descendants': '/news'

This merges posts from all three sections into one unified feed. Each section is gathered separately and then combined.

Putting It All Together

Here is a complete working configuration that demonstrates these principles in practice. This example pulls all content from a single folder using folder-based ordering rather than dates:

---
title: Surroundings Feed
visible: false

content:
  items:
    '@page.descendants': '/deliberations'
  order:
    by: folder
    dir: asc
  limit: 20

feed:
  title: "Varied Surroundings, Deliberations"
  description: "Essays and reflections from Deliberations on Assorted Explorations"
  limit: 20
---

This configuration, saved as feed/default.md, pulls all content from the /deliberations folder and its subdirectories. The folder-based ordering uses Grav's numeric prefix system for sorting. Pages in folders named 01.first-post, 02.second-post, 03.third-post will appear in that sequence. The ascending direction means lower numbers appear first in the feed. The feed metadata provides a descriptive title and description that will appear in feed readers.

What the Output Looks Like

Once Grav processes your configuration, it generates an RSS XML file. Here is what the header section looks like using the real-world example above:

<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Varied Surroundings, Deliberations</title>
        <link>https://www.assortedexplorations.com/surroundings/feed</link>
        <description>Essays and reflections from Deliberations on Assorted Explorations</description>
        <language>en</language>
        <lastBuildDate>Sun, 25 Jan 2026 18:16:13 +0000</lastBuildDate>

Each post appears as an item with its title, link, publication date and content excerpt:

        <item>
            <title>Nation or Nations?</title>
            <link>https://www.assortedexplorations.com/surroundings/deliberations/nation-or-nations</link>
            <guid>https://www.assortedexplorations.com/surroundings/deliberations/nation-or-nations</guid>
            <pubDate>Sun, 25 Jan 2026 17:59:23 +0000</pubDate>
            <description>
                <![CDATA[
                <h2 class="centre mb-5">Nation or Nations?</h2>
                <p style="text-align:center; margin-bottom: 2.5rem;"><img src="../images/51.jpg"></p>
                <p>The nature of the United Kingdom seems to be a source not only of some confusion...</p>
                ]]>
            </description>
        </item>

The Feed plugin handles all XML generation automatically. You simply configure which content to include and how to order it, and Grav produces properly formatted RSS, Atom or JSON feeds.

Accessing Your Feeds

Once configured, your feeds become accessible at /feed.rss, /feed.atom or /feed.json. While this guide focuses on RSS feeds, the collection principles apply much more broadly. Collections power archive pages, tag pages, search results, related posts and sitemaps. Understanding how they work is essential for any feature that lists content on your site. The official documentation provides additional detail on these concepts and advanced configuration options.

Related Reading

Grav Collections Documentation

Grav Feed Plugin

Creating modular pages in Grav CMS

15th February 2026

Here is a walkthrough that demonstrates how to create modular pages in Grav CMS. Modular pages allow you to build complex, single-page layouts by stacking multiple content sections together. This approach works particularly well for modern home pages where different content types need to be presented in a specific sequence. The example here stems from building a theme from scratch to ensure sitewide consistency across multiple subsites.

What Are Modular Pages

A modular page is a collection of content sections (called modules) that render together as a unified page. Unlike regular pages that have child pages accessible via separate URLs, modular pages display all their modules at a single URL. Each module is a self-contained content block with its own folder, markdown file and template. The parent page assembles these modules in a specified order to create the final page.

Understanding the Folder Structure

Modular pages use a specific folder structure. The parent folder contains a modular.md file that defines which modules to include and in what order. Module folders sit beneath the parent folder and are identified by an underscore at the start of their names. Numeric prefixes before the underscore control the display order.

Here is an actual folder structure from a travel subsite home page:

/user/pages/01.home/
    01._title/
    02._intro/
    03._call-to-action/
    04._ireland/
    05._england/
    06._scotland/
    07._scandinavia/
    08._wales-isle-of-man/
    09._alps-pyrenees/
    10._american-possibilities/
    11._canada/
    12._dreams-experiences/
    13._practicalities-inspiration/
    14._feature_1/
    15._feature_2/
    16._search/
    modular.md

The numeric prefixes (01, 02, 03) ensure modules appear in the intended sequence. The descriptive names after the prefix (_title, _ireland, _search) make the page structure immediately clear. Each module folder contains its own markdown file and any associated media. The underscore prefix tells Grav these folders contain modules rather than regular pages. Modules are non-routable, meaning visitors cannot access them directly via URLs. They exist only to provide content sections for their parent page.

The Workflow

Step One: Create the Parent Page Folder

Start by creating a folder for your modular page in /user/pages/. For a home page, you might create 01.home. The numeric prefix 01 ensures this page appears first in navigation and sets the display order. If you are using the Grav Admin interface, navigate to Pages and click the Add button, then select "Modular" as the page type.

Create a file named modular.md inside this folder. The filename is important because it tells Grav to use the modular.html.twig template from your active theme. This template handles the assembly and rendering of your modules.

Step Two: Configure the Parent Page

Open modular.md and add YAML front matter to define your modular page settings. Here is an example configuration:

---
title: Home
menu: Home
body_classes: "modular"

content:
    items: '@self.modular'
    order:
        by: default
        dir: asc
        custom:
            - _hero
            - _features
            - _callout
---

The content section is crucial. The items: '@self.modular' instruction tells Grav to collect all modules from the current page. The custom list under order specifies exactly which modules to include and their sequence. This gives you precise control over how sections appear on your page.

Step Three: Create Module Folders

Create folders for each content section you need. Each folder name must begin with an underscore. Add numeric prefixes before the underscore for ordering, such as 01._title, 02._intro, 03._call-to-action. The numeric prefixes control the display sequence. Module names are entirely up to you based on what makes sense for your content.

For multi-word module names, use hyphens to separate words. Examples include 08._wales-isle-of-man, 10._american-possibilities or 13._practicalities-inspiration. This creates readable folder names that clearly indicate each module's purpose. The official documentation shows examples using names like _features and _showcase. It does not matter what names you choose, as long as you create matching templates. Descriptive names help you understand the page structure when viewing the file system.

Step Four: Add Content to Modules

Inside each module folder, create a markdown file. The filename determines which template Grav uses to render that module. For example, a module using text.md will use the text.html.twig template from your theme's /templates/modular/ folder. You decide on these names when planning your page structure.

Here is an actual example from the 04._ireland module:

---
title: 'Irish Encounters'
content_source: ireland
grid: true
sitemap:
    lastmod: '30-01-2026 00:26'
---
### Irish Encounters {.mb-3}
<p style="text-align:center" class="mt-3"><img class="w-100 rounded" src="https://www.assortedexplorations.com/photo_gallery_images/eire_kerry/small_size/eireCiarraigh_tomiesMountain.jpg" /></p>
From your first arrival to hidden ferry crossings and legendary names that shaped the world, these articles unlock Ireland's layers beyond the obvious tourist trail. Practical wisdom combines with unexpected perspectives to transform a visit into an immersive journey through landscapes, culture and connections you will not find in standard guidebooks.

The front matter contains settings specific to this module. The title appears in the rendered output. The content_source: ireland tells the template which page to link to (as shown in the text.html.twig template example). The grid: true flag signals the modular template to render this module within the grid layout. The sitemap settings control how search engines index the content.

The content below the front matter appears when the module renders. You can use standard Markdown syntax for formatting. You can also include HTML directly in markdown files for precise control over layout and styling. The example above uses HTML for image positioning and Bootstrap classes for responsive sizing.

Module templates can access front matter variables through page.header and use them to generate dynamic content. This provides flexibility in how modules appear and behave without requiring separate module types for every variation.

Step Five: Create Module Templates

Your theme needs templates in /templates/modular/ that correspond to your module markdown files. When building a theme from scratch, you create these templates yourself. A module using hero.md requires a hero.html.twig template. A module using features.md requires a features.html.twig template. The template name must match the markdown filename.

Each template defines how that module's content renders. Templates use standard Twig syntax to structure the HTML output. Here is an example text.html.twig template that demonstrates accessing page context and generating dynamic links:

{{ content|raw }}
{% set raw = page.header.content_source ?? '' %}
{% set route = raw ? '/' ~ raw|trim('/') : page.parent.route %}
{% set context = grav.pages.find(route) %}
{% if context %}
    <p>
        <a href="{{ context.url }}" class="btn btn-secondary mt-3 mb-3 shadow-none stretch">
            {% set count = context.children.visible|length + 1 %}
            Go and Have a Look: {{ count }} {{ count == 1 ? 'Article' : 'Articles' }} to Savour
        </a>
    </p>
{% endif %}

First, this template outputs the module's content. It then checks for a content_source variable in the module's front matter, which specifies a route to another page. If no source is specified, it defaults to the parent page's route. The template finds that page using grav.pages.find() and generates a button linking to it. The button text includes a count of visible child pages, with proper singular/plural handling.

The Grav themes documentation provides guidance on template creation. You have complete control over the design and functionality of each module through its template. Module templates can access the page object, front matter variables, site configuration and any other Grav functionality.

Step Six: Clear Cache and View Your Page

After creating your modular page and modules, clear the Grav cache. Use the command bin/grav clear-cache from your Grav installation directory, or click the Clear Cache button in the Admin interface. The CLI documentation details other cache management options.

Navigate to your modular page in a browser. You should see all modules rendered in sequence as a single page. If modules do not appear, verify the underscore prefix on module folders, check that template files exist for your markdown filenames and confirm your parent modular.md lists the correct module names.

Working with the Admin Interface

The Grav Admin plugin streamlines modular page creation. Navigate to the Pages section in the admin interface and click Add. Select "Add Modular" from the options. Fill in the title and select your parent page. Choose the module template from the dropdown.

The admin interface automatically creates the module folder with the correct underscore prefix. You can then add content, configure settings and upload media through the visual editor. This approach reduces the chance of naming errors and provides a more intuitive workflow for content editors.

Practical Tips for Modular Pages

Use descriptive folder names that indicate the module's purpose. Names like 01._title, 02._intro, 03._call-to-action make the page structure immediately clear. Hyphens work well for multi-word module names such as 08._wales-isle-of-man or 13._practicalities-inspiration. This naming clarity helps when you return to edit the page later or when collaborating with others. The specific names you choose are entirely up to you as the theme developer.

Keep module-specific settings in each module's markdown front matter. Place page-wide settings like taxonomy and routing in the parent modular.md file. This separation maintains clean organisation and prevents configuration conflicts.

Use front matter variables to control module rendering behaviour. For example, adding grid: true to a module's front matter can signal your modular template to render that module within a grid layout rather than full-width. Similarly, flags like search: true or fullwidth: true allow your template to apply different rendering logic to different modules. This keeps layout control flexible without requiring separate module types for every layout variation.
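
As a minimal sketch of that convention (the flag names here are whatever your own templates check for, not anything built into Grav), a search module's front matter could be as short as this:

---
title: 'Site Search'
search: true
---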

Test your modular page after adding each new module. This incremental approach helps identify issues quickly. If a module fails to appear, check the folder name starts with an underscore, verify the template exists and confirm the parent configuration includes the module in its custom order list.

For development work, make cache clearing a regular habit. Grav caches page collections and compiled templates, which can mask recent changes. Running bin/grav clear-cache after modifications ensures you see current content rather than cached versions.

Understanding Module Rendering

The parent page's modular.html.twig template controls how modules assemble. Whilst many themes use a simple loop structure, you can implement more sophisticated rendering logic when building a theme from scratch. Here is an example that demonstrates conditional rendering based on module front matter:

{% extends 'partials/home.html.twig' %}
{% block content %}
  {% set modules = page.collection({'items':'@self.modular'}) %}
  {# Full-width modules first (search, main/outro), then grid #}
  {% for m in modules %}
    {% if not (m.header.grid ?? false) %}
      {% if not (m.header.search ?? false) %}
            {{ m.content|raw }}
      {% endif %}
    {% endif %}
  {% endfor %}
  <div class="row g-4 mt-3">
    {% for m in modules %}
      {% if (m.header.grid ?? false) %}
        <div class="col-12 col-md-6 col-xl-4">
          <div class="h-100">
            {{ m.content|raw }}
          </div>
        </div>
      {% endif %}
    {% endfor %}
  </div>
  {# Move search box to end of page  #}
  {% for m in modules %}
    {% if (m.header.search ?? false) %}
          {{ m.content|raw }}
    {% endif %}
  {% endfor %}
{% endblock %}

This template demonstrates several useful techniques. First, it retrieves the module collection using page.collection({'items':'@self.modular'}). The template then processes modules in three separate passes. The first pass renders full-width modules that are neither grid items nor search boxes. The second pass renders grid modules within a Bootstrap grid layout, wrapping each in responsive column classes. The third pass renders the search module at the end of the page.

Modules specify their rendering behaviour through front matter variables. A module with grid: true in its front matter renders within the grid layout. A module with search: true renders at the page bottom. Modules without these flags render full-width at the top. This approach provides precise control over the layout whilst keeping content organised in separate module files.

Each module still uses its own template (like text.html.twig or feature.html.twig) to define its specific HTML structure. The modular template simply determines where on the page that content appears and whether it gets wrapped in grid columns. This separation between positioning logic and content structure keeps the system flexible and maintainable.

When to Use Modular Pages

Modular pages work well for landing pages, home pages and single-page sites where content sections need to appear in a specific order. They excel at creating modern, scrollable pages with distinct content blocks like hero sections, feature grids, testimonials and call-to-action areas.

For traditional multipage sites with hierarchical navigation, regular pages prove more suitable because modular pages do not support child pages in the conventional sense. Instead, choose modular pages when you need a unified, single-URL presentation of multiple content sections.

Related Reading

For further information, consult the Grav Modular Pages Documentation, Grav Themes and Templates guide and the Grav Admin Plugin documentation.

Grouping directories first in output from ls commands executed in terminal sessions on macOS and Linux

14th February 2026

This enquiry began with my seeing directories and files sorted in alphabetical order, without regard for type, in macOS Finder. In Windows and Linux file managers, I am accustomed to directories and files being listed in distinct blocks within the same listing, with the former preceding the latter. What I had missed was that the ls command and its aliases behave just as macOS Finder does, which perhaps explains why the operating system and its default apps work like that.

Over to Linux

With the BSD implementation of ls that macOS ships, there is no way to order the output so that directories are listed before files. However, the situation is different on Linux because of the use of GNU tooling. Here, the --group-directories-first switch is available, and I have started to use it on my own Linux systems, web servers as well as workstations. This can be set up in .bashrc or .bash_aliases like the following:

alias ls='ls --color=auto --group-directories-first'
alias ll='ls -lh --color=auto --group-directories-first'

Above, the --color=auto switch adds colour to the output too. Issuing the following command makes the updates available in a terminal session (~ is the shorthand for the home directory below):

source ~/.bashrc

Back to macOS

While that works well on Linux, additional tweaks are needed to implement the same on macOS. Firstly, you have to install Homebrew using this command (you may be asked for your system password to let the process proceed):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Next, you need to install coreutils for GNU commands like gls (a different name is used to distinguish it from what comes with macOS); adding dircolors gives you coloured text output as well:

brew install coreutils
brew install dircolors

To put the GNU tools on your path, this line should be added to the .zshrc file in your home folder:

export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"

Once those were in place, I found that adding these lines to the .zshrc file was all that was needed (note those extra g's):

alias ls='gls --color=auto --group-directories-first'
alias ll='gls -lh --color=auto --group-directories-first'

If your experience differs, they may need to be preceded with this line in the same configuration file:

eval "$(dircolors -b)"

The final additions then look like this:

export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"
eval "$(dircolors -b)"
alias ls='gls --color=auto --group-directories-first'
alias ll='gls -lh --color=auto --group-directories-first'

Following those, issuing this command will make the settings available in your terminal session:

source ~/.zshrc

Closing Remarks

In summary, you have learned how to list directories before files, rather than having everything intermingled as is the default. For me, this discovery was educational and adds some extra user-friendliness that was not there before the tweaks. While we may be dealing with two operating systems and two different shells (bash and zsh), there is enough crossover to make terminal directory and file listing operations work consistently regardless of where you are working.

Safe file copying methods for cross platform SAS programming

9th February 2026

Not so long ago, I needed to copy a file within a SAS program, only to find that an X command would not accomplish the operation. That cast my mind back to performing file operations within SAS itself in order to remain platform-independent. Thus, I fell upon the FCOPY function used within a data step.

Before going that far, you need to define filename statements for the source and target locations like this:

filename src "<location of source file or files>" lrecl=32767 encoding="utf-8";
filename dst "<target location for copies>" lrecl=32767 encoding="utf-8";

Above, I also specified logical record length (LRECL) and UTF-8 encoding. The latter is safer in these globalised times, when the ASCII character set does not suffice for everyday needs.

Once you have defined filename statements, you can move to the data step itself, which does not output any data set because of the data _null_ statement:

data _null_;
    length rc 8 msg $ 300;
    rc = fcopy('src','dst');
    if rc ne 0 then do;
        msg = sysmsg();
        putlog "ERROR: FCOPY failed rc=" rc " " msg;
    end;
    else putlog "NOTE: Copied OK.";
run;

The main engine here is the fcopy function, which outputs a return code (rc). Other statements like putlog are there to communicate outcomes and error messages when the file copying operation fails. The text of the error message (msg) comes from the sysmsg function. After file copying has completed, it is time to clear the filename assignments as follows:

filename src clear;
filename dst clear;

One thing that needs to be pointed out here is that this approach is best reserved for text files like the ones I was copying at the time. When I attempted the same kind of operation with an XLSX file, the copy would not open in Excel afterwards. Along the way, it had got scrambled. Once you realise that an XLSX file is essentially a zip archive of XML files, you might see how that could go awry.
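
If binary files do need to be copied this way, the usual suggestion is to assign the filerefs with RECFM=N so that the data is treated as a byte stream rather than as records. The sketch below follows that pattern; I have not tested it against every file type, so treat it as a starting point rather than a guarantee:

filename srcb "<location of source XLSX file>" recfm=n;
filename dstb "<target location for the copy>" recfm=n;

data _null_;
    length rc 8 msg $ 300;
    rc = fcopy('srcb','dstb');    /* byte-stream copy of a binary file */
    if rc ne 0 then do;
        msg = sysmsg();
        putlog "ERROR: FCOPY failed rc=" rc " " msg;
    end;
    else putlog "NOTE: Binary copy completed.";
run;

filename srcb clear;
filename dstb clear;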

Understanding Bootstrap's twelve column grid for clean layouts

8th February 2026

When it comes to designing your page structure using Bootstrap, you need to work within a twelve-column grid. With uneven column widths, you need to make everything add up to twelve, whereas matters are perhaps simpler when every column has the same width.

For instance, encountering a case of the latter when porting a website landing page as part of a migration from Textpattern to Grav meant having evenly sized columns, with one block for each section. To bring the total number of blocks up to twelve, two featured article blocks were added. What follows is a little about why that choice was made.

How the Twelve-Column System Works

Bootstrap's grid divides into twelve columns. The number twelve works well because it has more divisors than any smaller number, making it exceptionally flexible for layout work.

<!-- Two across -->
<div class="col-12 col-md-6">...</div>

<!-- Three across -->
<div class="col-12 col-md-6 col-xl-4">...</div>

<!-- Classic blog layout: sidebar and main content -->
<div class="col-md-3">Sidebar</div>
<div class="col-md-9">Main content</div>

The column classes work by specifying how many of the twelve columns each element should span. For example, a col-6 class spans six columns (half the width), whilst col-4 spans four columns (one third). In the classic blog layout, this means using col-3 for a sidebar (one quarter width) and col-9 for main content (three quarters width). Other common combinations include col-8 and col-4 (two thirds and one third), or col-2 and col-10 (one sixth and five sixths).

For consistent column widths, certain numbers divide cleanly: col-12 (full width, one across), col-6 (half width, two across), col-4 (one third width, three across), col-3 (one quarter width, four across), col-2 (one sixth width, six across) and col-1 (one twelfth width, twelve across). When using 2, 3, 4, 6 or 12 blocks with these classes, the grid divides evenly. However, with other numbers, challenges emerge. For instance, eleven blocks in three columns leaves two orphans, whilst seven blocks in two columns creates uneven rows.

Bootstrap Breakpoints Explained

Bootstrap 5 defines six responsive breakpoints for different device categories:

Breakpoint | Screen Width | Class Modifier
Extra small | Below 576px | None (just col-*)
Small | 576px and above | col-sm-*
Medium | 768px and above | col-md-*
Large | 992px and above | col-lg-*
Extra large | 1200px and above | col-xl-*
Extra extra large | 1400px and above | col-xxl-*

These breakpoints cascade upwards. A class like col-md-6 applies from 768 pixels and continues to apply at all larger breakpoints unless overridden by a more specific class like col-xl-4. This cascading behaviour allows responsive layouts to be built with minimal markup, where each breakpoint only needs to specify what changes, rather than repeating the entire layout definition.
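
To make the cascade concrete, here is a single block annotated with how it behaves at each of the breakpoints in the table above; this simply spells out the pattern already used elsewhere on this page:

<!-- col-12  : full width below 768px (extra small and small) -->
<!-- col-md-6: half width from 768px upwards (medium and large) -->
<!-- col-xl-4: one third width from 1200px upwards (extra large and beyond) -->
<div class="col-12 col-md-6 col-xl-4">Block content</div>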

Putting It Into Practice

When column widths are equal, the implementation uses Bootstrap grid classes with a three-tier responsive system so that each block receives consistent treatment, with padding, borders and hover effects. Here is some boilerplate code showing how this can be accomplished:

<div class="row g-4 mt-3">
  <div class="col-12 col-md-6 col-xl-4">
    <div class="h-100">
      <h3 class="mb-3">Irish Encounters</h3>
      <p style="text-align:center" class="mt-3">
        <img class="w-100 rounded" src="...">
      </p>
      <p>From your first arrival to hidden ferry crossings...</p>
      <p>
        <a href="/travel/ireland" class="btn btn-secondary mt-3 mb-3 shadow-none stretch">
          Go and Have a Look: 12 Articles to Savour
        </a>
      </p>
    </div>
  </div>
  <!-- Repeat for 9 more destination blocks -->
  <!-- Then 2 featured article blocks -->
</div>

Two Bootstrap utility classes proved particularly useful here. Firstly, the h-100 class sets the height to 100% of the parent container, ensuring all blocks in a row have equal height regardless of content length. Meanwhile, the w-100 class sets the width to 100% of the parent container, making images fill their containers whilst maintaining aspect ratio when combined with responsive image techniques. Together, these help create visual consistency across the grid.

The responsive behaviour works as follows for twelve blocks:

Screen Width | Class Used | Number of Columns | Number of Rows
Below 768px | col-12 | 1 | 12
768px and above | col-md-6 | 2 | 6
1200px and above | col-xl-4 | 3 | 4

The g-4 class adds consistent guttering between blocks across all breakpoints and is part of Bootstrap's spacing utilities, where the number (4) corresponds to a spacing value from Bootstrap's spacer scale. To accomplish this, the class applies gap spacing both horizontally and vertically between grid items, creating visual separation without needing to add margins to individual elements. This ensures blocks do not sit flush against each other whilst maintaining consistent spacing throughout the layout.

Taking Stock

Bootstrap's twelve-column grid works cleanly for certain block counts (1, 2, 3, 4, 6 and 12). In contrast, other numbers create visual imbalance in multi-column layouts. For this reason, the grid system should inform content decisions early in the planning process. Ultimately, planning block counts around the grid creates more harmonious layouts than forcing arbitrary numbers into place.

In this case, twelve blocks divided cleanly into the three-column grid, where other numbers would have created orphans. Beyond solving the layout challenge, featured articles provided value by drawing attention to important content whilst resolving the constraints of the grid system. The key takeaway is that content planning and grid design work together rather than in opposition.

Related Reading

For further exploration of these concepts, the Bootstrap Grid System documentation provides comprehensive coverage of the twelve-column system and its responsive capabilities. The Flexbox utilities documentation covers alignment and spacing options that complement the grid system.

Running local Large Language Models on desktop computers and workstations with 8GB VRAM

7th February 2026

Running large language models locally has shifted from being experimental to practical, but expectations need to match reality. A graphics card with 8 GB of VRAM can support local workflows for certain text tasks, though the results vary considerably depending on what you ask the models to do.

Understanding the Hardware Foundation

The Critical Role of VRAM

The central lesson is that VRAM is the engine of local performance on desktop systems. Whilst abundant system RAM helps avoid crashes and allows larger contexts, it cannot replace VRAM for throughput.

Models that fit in VRAM and keep most of their computation on the GPU respond promptly and maintain a steady pace. Those that overflow to system RAM or the CPU see noticeable slowdowns.

Hardware Limitations and Thresholds

On a desktop GPU, 8 GB of VRAM sets a practical ceiling. Models in the 7 billion to 14 billion parameter range fit comfortably enough to exploit GPU acceleration for typical contexts.

Much larger models tend to offload a significant portion of the work to the CPU. This shows up as pauses, lower token rates and lag when prompts become longer.

Monitoring GPU Utilisation

GPU utilisation is a reliable way to gauge whether a setup is efficient. When the GPU is consistently busy, generation is snappy and interactive use feels smooth.

A model like llama3.1:8b can run almost entirely on the GPU at a context length of 4,096 tokens. This translates into sustained responsiveness even with multi-paragraph prompts.

Model Selection and Performance

Choosing the Right Model Size

A frequent instinct is to reach for the largest model available, but size does not equal usefulness when running locally on a desktop or workstation. In practice, models between 7B and 14B parameters are what you can run on this class of hardware, though what they do well is more limited than benchmark scores might suggest.

What These Models Actually Do Well

Models in this range handle certain tasks competently. They can compress and reorganise information, expand brief notes into fuller text, and maintain a reasonably consistent voice across a draft. For straightforward summarisation of documents, reformatting content or generating variations on existing text, they perform adequately.

Where things become less reliable is with tasks that demand precision or structured output. Coding tasks illustrate this gap between benchmarks and practical use. Whilst llama3.1:8b scores 72.6% on the HumanEval benchmark (which tests basic algorithm problems), real-world coding tasks can expose deeper limitations.

Commit message generation, code documentation and anything requiring consistent formatting produce variable results. One attempt might give you exactly what you need, whilst the next produces verbose or poorly structured output.

The gap between solving algorithmic problems and producing well-formatted, professional code output is significant. This inconsistency is why larger local models like gpt-oss-20b (which requires around 16GB of memory) remain worth the wait despite being slower, even when the 8GB models respond more quickly.

Recommended Models for Different Tasks

Llama3.1:8b handles general drafting reasonably well and produces flowing output, though it can be verbose. Benchmark scores place it above average for its size, but real-world use reveals it is better suited to free-form writing than structured tasks.

Phi3:medium is positioned as stronger on reasoning and structured output. In practice, it can maintain logical order better than some alternatives, though the official documentation acknowledges quality of service limitations, particularly for anything beyond standard American English. User reports also indicate significant knowledge gaps and over-censorship that can affect practical use.

Gemma3 at 12B parameters produces polished prose and smooths rough drafts effectively when properly quantised. The Gemma 3 family offers models from 1B to 27B parameters with 128K context windows and multimodal capabilities in the larger sizes, though for 8GB VRAM systems you are limited to the 12B variant with quantisation. Google also offers Gemma 3n, which uses an efficient MatFormer architecture and Per-Layer Embedding to run on even more constrained hardware. These are primarily optimised for mobile and edge devices rather than desktop use.

Very large models remain less efficient on desktop hardware with 8 GB VRAM. Attempting to run them results in heavy CPU offloading, and the performance penalty can outweigh any quality improvement.

Memory Management and Configuration

Managing Context Length

Context length sits alongside model size as a decisive lever. Every extra token of context demands memory, so doubling the window is not a neutral choice.

At around 4,096 tokens, most of the well-matched models stay predominantly on the GPU and hold their speed. Push to 8,192 or beyond, and the memory footprint swells to the point where more of the computation ends up taking place on the CPU and in system RAM.

Ollama's Keep-Alive Feature

Ollama keeps already loaded models resident in VRAM for a short period after use so that the next call does not pay the penalty of a full reload. This is expected behaviour and is governed by a keep_alive parameter that can be adjusted to hold the model for longer if a burst of work is coming, or to release it sooner when conserving VRAM matters.
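
As an illustration, the keep-alive period can be set either server-wide through an environment variable or per request through the REST API; the duration values below are arbitrary examples:

# Hold models in memory for 30 minutes after each call (server-wide default)
export OLLAMA_KEEP_ALIVE=30m

# Or set it for a single request via the API
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Summarise the following notes...",
  "keep_alive": "10m"
}'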

Practical Memory Strategies

Breaking long jobs into a series of smaller, well-scoped steps helps both speed and stability without constraining the quality of the end result. When writing an article, for instance, it can be more effective to work section by section rather than asking for the entire piece in one pass.

Optimising the Workflow

The Benefits of Streaming

Streaming changes the way output is experienced, rather than the content that is ultimately produced. Instead of waiting for a block of text to arrive all at once, words appear progressively in the terminal or application. This makes longer pieces easier to manage and revise on the fly.

Task-Specific Model Selection

Because each model has distinct strengths and weaknesses, matching the tool to the task matters. A fast, GPU-friendly model like llama3.1:8b works for general writing and quick drafting where perfect accuracy is not critical. Phi3:medium may handle structured content better, though it is worth testing against your specific use case rather than assuming it will deliver.

Understanding Limitations

It is important to be clear about what local models in this size range struggle with. They are weak at verifying facts, maintaining strict factual accuracy over extended passages, and providing up-to-date knowledge from external sources.

They also perform inconsistently on tasks requiring precise structure. Whilst they may pass coding benchmarks that test algorithmic problem-solving, practical coding tasks such as writing commit messages, generating consistent documentation or maintaining formatting standards can reveal deeper limitations. For these tasks, you may find yourself returning to larger local models despite preferring the speed of smaller ones.

Integration and Automation

Using Ollama's Python Interface

Ollama integrates into automated workflows on desktop systems. Its Python package allows calls from scripts to automate summarisation, article generation and polishing runs, with streaming enabled so that logs or interfaces can display progress as it happens. Parameters can be set to control context size, temperature and other behavioural settings, which helps maintain consistency across batches.
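
Here is a minimal sketch of that kind of call using the ollama Python package with streaming enabled; the prompt is a placeholder and the option values follow the settings discussed above, so check them against your installed version of the library:

import ollama

# Placeholder prompt for a summarisation step in a content workflow
messages = [{'role': 'user', 'content': 'Summarise these notes into three short paragraphs: ...'}]

stream = ollama.chat(
    model='llama3.1:8b',                              # fits comfortably in 8 GB of VRAM
    messages=messages,
    stream=True,                                      # words appear as they are generated
    options={'num_ctx': 4096, 'temperature': 0.7},    # GPU-friendly context length
    keep_alive='10m',                                 # keep the model resident between steps
)

for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)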

Building Production Pipelines

The same interface can be linked into website pipelines or content management tooling, making it straightforward to build a system that takes notes or outlines, expands them, revises the results and hands them off for publication, all locally on your workstation. The same keep_alive behaviour that aids interactive use also benefits automation, since frequently used models can remain in VRAM between steps to reduce start-up delays.

Recommended Configuration

Optimal Settings for 8 GB VRAM

For a desktop GPU with 8 GB of VRAM, an optimal configuration builds around models that remain GPU-efficient whilst delivering acceptable results for your specific tasks. Llama3.1:8b, phi3:medium and gemma3:12b are the models that fit this constraint when properly quantised, though you should test them against your actual workflows rather than relying on general recommendations.

Performance Monitoring

Keeping context windows around 4,096 tokens helps sustain GPU-heavy operation and consistent speeds, whilst streaming smooths the experience during longer outputs. Monitoring GPU utilisation provides an early warning if a job is drifting into a configuration that will trigger CPU fallbacks.

Planning for Resource Constraints

If a task does require more memory, it is worth planning for the associated slowdown rather than assuming that increasing system RAM or accepting a bigger model will compensate for the VRAM limit. Tuning keep_alive to the rhythm of work reduces the frequency of reloads during sessions and helps maintain responsiveness when running sequences of related prompts.

A Practical Content Creation Workflow

Multi-Stage Processing

This configuration supports a division of labour in content creation on desktop systems. You start with a compact model for rapid drafting, switch to a reasoning-focused option for structured expansions if needed, then finish with a model known for adding polish to refine tone and fluency. Insert verification steps between stages to confirm facts, dates and citations before moving on.

Because each stage is local, revisions maintain privacy, with minimal friction between idea and execution. When integrated with automation via Ollama's Python tools, the same pattern can run unattended for batches of articles or summaries, with human review focused on accuracy and editorial style.
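
A compact sketch of that staged pattern appears below; the model names are the ones discussed earlier in this post, while the helper function and prompts are placeholders rather than a finished pipeline:

import ollama

def run(model, prompt):
    # One local pass; returns the complete response text
    result = ollama.generate(model=model, prompt=prompt, options={'num_ctx': 4096})
    return result['response']

notes = "Rough notes for an article about local LLM workflows..."
draft = run('llama3.1:8b', "Expand these notes into a first draft:\n" + notes)
# Verify facts, dates and citations here before the polishing pass
polished = run('gemma3:12b', "Improve the tone and fluency of this draft:\n" + draft)
print(polished)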

In Summary

Desktop PCs and workstations with 8 GB of VRAM can support local LLM workflows for specific tasks, though you need realistic expectations about what these models can and cannot do reliably. They handle basic text generation and reformatting, though they are prone to hallucinations and misunderstandings. They struggle with precision tasks, structured output and anything requiring consistent formatting. Whilst they may score well on coding benchmarks that test algorithmic problem-solving, practical coding tasks can reveal deeper limitations.

The key is to select models that fit the VRAM envelope, keep context lengths within GPU-friendly bounds, and test them against your actual use cases. For tasks where local models prove inconsistent, such as generating commit messages or producing reliably structured output, larger local models like gpt-oss-20b (which requires around 16GB of memory) may still be worth the wait despite being slower. Local LLMs work best when you understand their limitations and use them for what they genuinely do well, rather than expecting them to replace more capable models across all tasks.

Preventing authentication credentials from entering Git repositories

6th February 2026

Keeping credentials out of version control is a fundamental security practice that prevents numerous problems before they occur. Once secrets enter Git history, remediation becomes complex and cannot guarantee complete removal. Understanding how to structure projects, configure tooling, and establish workflows that prevent credential commits is essential for maintaining secure development practices.

Understanding What Needs Protection

Credentials come in many forms across different technology stacks. Database passwords, API keys, authentication tokens, encryption keys, and service account credentials all represent sensitive data that should never be committed. Configuration files containing these secrets vary by platform but share common characteristics: they hold information that would allow unauthorised access to systems, data, or services.

# This should never be in Git
database:
  password: secretpassword123
api:
  key: sk_live_abc123def456

# Neither should this
DB_PASSWORD=secretpassword123
API_KEY=sk_live_abc123def456
JWT_SECRET=mysecretkey

Even hashed passwords require careful consideration. Whilst bcrypt hashes with appropriate cost factors (10 or higher, requiring approximately 65 milliseconds per computation) provide protection against immediate exploitation, they still represent sensitive data. The hash format includes a version identifier, cost factor, salt, and hash components that could be targeted by offline attacks if exposed. Plaintext credentials offer no protection whatsoever and represent immediate critical exposure.

Establishing Git Ignore Rules from the Start

The foundation of credential protection is proper .gitignore configuration established before any sensitive files are created. This proactive approach prevents problems rather than requiring remediation after discovery. Begin every project by identifying which files will contain secrets and excluding them immediately.

# Credentials and secrets
.env
.env.*
!.env.example
config/secrets.yml
config/database.yml
config/credentials/

# Application-specific sensitive files
wp-config.php
settings.php
configuration.php
appsettings.Production.json
application-production.properties

# User data and session storage
/storage/credentials/
/var/sessions/
/data/users/

# Keys and certificates
*.key
*.pem
*.p12
*.pfx
!public.key

# Cache and logs that might leak data
/cache/
/logs/
/tmp/
*.log

The negation pattern !.env.example demonstrates an important technique: excluding all .env files whilst explicitly including example files that show structure without containing secrets. This pattern ensures that developers understand what configuration is needed without exposing actual credentials.

Notice the broad exclusions for entire categories rather than specific files. Excluding *.key prevents any private key files from being committed, whilst !public.key allows the explicit inclusion of public keys that are safe to share. This defence-in-depth approach catches variations and edge cases that specific file exclusions might miss.

Separating Examples from Actual Configuration

Version control should contain example configurations that demonstrate structure without exposing secrets. Create .example or .sample files that show developers what configuration is required, whilst keeping actual credentials out of Git entirely.

# config/secrets.example.yml
database:
  host: localhost
  username: app_user
  password: REPLACE_WITH_DATABASE_PASSWORD

api:
  endpoint: https://api.example.com
  key: REPLACE_WITH_API_KEY
  secret: REPLACE_WITH_API_SECRET

encryption:
  key: REPLACE_WITH_32_BYTE_ENCRYPTION_KEY

Documentation should explain where developers obtain the actual values. For local development, this might involve running setup scripts that generate credentials. For production, it involves deployment processes that inject secrets from secure storage. The example file serves as a template and checklist, ensuring nothing is forgotten whilst preventing accidental commits of real secrets.

Using Environment Variables

Environment variables provide a standard mechanism for separating configuration from code. Applications read credentials from the environment rather than from files tracked in Git. This pattern works across virtually all platforms and languages.

// PHP: instead of hardcoding
$db_password = 'secretpassword123';

// PHP: read from environment
$db_password = getenv('DB_PASSWORD');

// Node.js: instead of requiring a config file with secrets
const apiKey = 'sk_live_abc123def456';

// Node.js: read from environment
const apiKey = process.env.API_KEY;

Environment files (.env) provide convenience for local development, but must be excluded from Git. The pattern of .env for actual secrets and .env.example for structure becomes standard across many frameworks. Developers copy the example to create their local configuration, filling in actual values that never leave their machine.
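
For instance, a committed .env.example might contain nothing but placeholders mirroring the variables the application expects; the variable names below follow the earlier examples:

# .env.example - committed to Git as a template; copy to .env and fill in real values
DB_PASSWORD=REPLACE_WITH_DATABASE_PASSWORD
API_KEY=REPLACE_WITH_API_KEY
JWT_SECRET=REPLACE_WITH_JWT_SECRET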

Implementing Pre-Commit Hooks

Pre-commit hooks provide automated checking before changes enter the repository. These hooks scan staged files for patterns that match secrets and block commits when suspicious content is detected. This automated enforcement prevents mistakes that manual review might miss.

The pre-commit framework manages hooks across multiple repositories and languages. Installation is straightforward, and configuration defines which checks run before each commit.

pip install pre-commit

Create a configuration file defining which hooks to run:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-added-large-files
      - id: check-json
      - id: check-yaml
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: trailing-whitespace

  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']

Install the hooks in your repository:

pre-commit install

Now every commit triggers these checks automatically. The detect-private-key hook catches SSH keys and other private key formats. The detect-secrets hook uses entropy analysis and pattern matching to identify potential credentials. When suspicious content is detected, the commit is blocked and the developer is alerted to review the flagged content.

Configuring Git-Secrets

The git-secrets tool from AWS specifically targets secret detection. It scans commits, commit messages, and merges to prevent credentials from entering repositories. Installation and configuration establish patterns that identify secrets.

# Install via Homebrew
brew install git-secrets

# Install hooks in a repository
cd /path/to/repo
git secrets --install

# Register AWS patterns
git secrets --register-aws

# Add custom patterns
git secrets --add 'password\s*=\s*["\047][^\s]+'
git secrets --add 'api[_-]?key\s*=\s*["\047][^\s]+'

The tool maintains a list of prohibited patterns and scans all content before allowing commits. Custom patterns can be added to match organisation-specific secret formats. The --register-aws command adds patterns for AWS access keys and secret keys, whilst custom patterns catch application-specific credential formats.

For teams, establishing git-secrets across all repositories ensures consistent protection. Template directories provide a mechanism for automatic installation in new repositories:

# Create a template with git-secrets installed
git secrets --install ~/.git-templates/git-secrets

# Use the template for all new repositories
git config --global init.templateDir ~/.git-templates/git-secrets

Now, every git init automatically includes secret scanning hooks.

Enabling GitHub Secret Scanning

GitHub Secret Scanning provides server-side protection that cannot be bypassed by local configuration. GitHub automatically scans repositories for known secret patterns and alerts repository administrators when matches are detected. This works for both new commits and historical content.

For public repositories, secret scanning is enabled by default. For private repositories, it requires GitHub Advanced Security. Enable it through repository settings under Security & Analysis. GitHub maintains partnerships with service providers to detect their specific secret formats, and when a partner pattern is found, both you and the service provider are notified.

The scanning covers not just code but also issues, pull requests, discussions, and wiki content. This comprehensive approach catches secrets that might be accidentally pasted into comments or documentation. The detection happens continuously, so even old content gets scanned when new patterns are added.

Custom patterns extend detection to organisation-specific secret formats. Define regular expressions that match your internal API key formats, authentication tokens, or other proprietary credentials. These custom patterns apply across all repositories in your organisation, providing consistent protection.

Structuring Projects for Credential Isolation

Project structure itself can prevent credentials from accidentally entering Git. Establish clear separation between code that belongs in version control and configuration that remains environment-specific. Create dedicated directories for credentials and ensure they are excluded from tracking.

project/
├── src/                    # Code - belongs in Git
├── tests/                  # Tests - belongs in Git
├── config/
│   ├── app.yml            # General config - belongs in Git
│   ├── secrets.example.yml # Example - belongs in Git
│   └── secrets.yml        # Actual secrets - excluded from Git
├── credentials/           # Entire directory excluded
│   ├── database.yml
│   └── api-keys.json
├── .env.example           # Example - belongs in Git
├── .env                   # Actual secrets - excluded from Git
└── .gitignore             # Defines exclusions

This structure makes it obvious which files contain secrets. The credentials/ directory is clearly separated from source code, and its exclusion from Git is explicit. Developers can see at a glance that this directory requires different handling.

Documentation should explain the structure and the reasoning behind it. New team members need to understand why certain directories are empty in their fresh clones and where to obtain the configuration files that populate them. Clear documentation prevents confusion and ensures everyone follows the same patterns.

Managing Development Credentials

Development environments require credentials but should never use production secrets. Generate separate development credentials that provide access to development resources only. These credentials can be less stringently protected whilst still not being committed to Git.

Development credential management varies by organisation size and infrastructure. For small teams, shared development credentials stored in a team password manager might suffice. For larger organisations, each developer receives individual credentials for development resources, with access controlled through identity management systems.

Some teams commit development credentials intentionally, arguing that development databases contain no sensitive data and convenience outweighs risk. This approach is controversial and depends on your security model. If development credentials can access any production resources or if development data has any sensitivity, they must be protected. Even purely synthetic development data might reveal business logic or system architecture worth protecting.

The safer approach maintains the same credential handling patterns across all environments. This ensures that developers build habits that prevent production credential exposure. When development and production follow identical patterns, muscle memory built during development prevents mistakes in production.

Provisioning Production Credentials

Production credentials should never touch developer machines or version control. Deployment processes inject credentials at runtime through environment variables, secret management services, or deployment-time configuration.

Continuous deployment pipelines read credentials from secret stores and make them available to applications without exposing them to humans. GitHub Actions, GitLab CI, Jenkins, and other CI/CD systems provide secure variable storage that is injected during builds and deployments.

# .github/workflows/deploy.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to production
        env:
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          API_KEY: ${{ secrets.API_KEY }}
        run: |
          ./deploy.sh

The secrets.DB_PASSWORD syntax references encrypted secrets stored in GitHub's secure storage. These values are never exposed in logs or visible to anyone except during the deployment process. The deployment script receives them as environment variables and can configure the application appropriately.

Secret management services like HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or Google Cloud Secret Manager provide centralised credential storage with access controls, audit logging, and automatic rotation. Applications authenticate to these services and retrieve credentials at runtime, ensuring that secrets are never stored on disk or in environment files.
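
As one illustration, a deployment script could pull a secret from AWS Secrets Manager at runtime using the AWS CLI; the secret name below is invented for the example:

# Retrieve a secret value at deploy time (secret name is illustrative)
DB_PASSWORD=$(aws secretsmanager get-secret-value \
  --secret-id prod/myapp/db-password \
  --query SecretString \
  --output text)
export DB_PASSWORD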

Rotating Credentials Regularly

Regular credential rotation limits exposure duration if secrets are compromised. Establish rotation schedules based on credential sensitivity and access patterns. Database passwords might rotate quarterly, API keys monthly, and authentication tokens weekly or daily. Automated rotation reduces operational burden and ensures consistency.

Rotation requires coordination between secret generation, distribution, and application updates. Secret management services can automate much of this process, generating new credentials, updating secure storage, and triggering application reloads. Manual rotation involves generating new credentials, updating all systems that use them, and verifying functionality before disabling old credentials.
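
With AWS Secrets Manager, for instance, scheduled rotation can be switched on from the command line; the secret name and Lambda ARN below are placeholders for whatever your environment uses:

# Enable automatic rotation every 30 days (identifiers are placeholders)
aws secretsmanager rotate-secret \
  --secret-id prod/myapp/db-password \
  --rotation-lambda-arn arn:aws:lambda:eu-west-1:123456789012:function:rotate-db-password \
  --rotation-rules AutomaticallyAfterDays=30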

The rotation schedule balances security against operational complexity. More frequent rotation provides better security but increases the risk of service disruption if processes fail. Less frequent rotation simplifies operations but extends exposure windows. Find the balance that matches your risk tolerance and operational capabilities.

Training and Culture

Technical controls provide necessary guardrails, but security ultimately depends on people understanding why credentials matter and how to protect them. Training should cover the business impact of credential exposure, the techniques for keeping secrets out of Git, and the procedures for responding if mistakes occur.

New developer onboarding should include credential management as a core topic. Before developers commit their first code, they should understand what constitutes a secret, why it must stay out of Git, and how to configure their local environment properly. This prevents problems rather than correcting them after they occur.

Regular security reminders reinforce good practices. When new secret types are introduced or new tools are adopted, communicate the changes and update documentation. Security reviews should check credential handling practices, not just looking for exposed secrets, but also verifying that proper patterns are followed.

Creating a culture where admitting mistakes is safe encourages early reporting. If a developer accidentally commits a credential, they should feel comfortable immediately alerting the security team, rather than hoping no one notices. Early detection enables faster response and reduces damage.

Responding to Detection

Despite best efforts, secrets sometimes enter repositories. Rapid response limits damage. Immediate credential rotation assumes compromise and prevents exploitation. Removing the file from future commits whilst leaving it in history provides no security benefit, as the exposure has already occurred.

Tools like BFG Repo-Cleaner can remove secrets from Git history, but this is complex and cannot guarantee complete removal. Anyone who cloned the repository before clean-up retains the compromised credentials in their local copy. Forks, clones on other systems, and backup copies may all contain the secrets.
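
If history clean-up is attempted, a typical BFG run looks something like this sketch, working on a fresh mirror clone; the repository URL and filename are illustrative, and rotation still comes first:

# Clean a fresh mirror clone (repository URL and filename are illustrative)
git clone --mirror git@example.com:org/project.git
java -jar bfg.jar --delete-files secrets.yml project.git

# Expire old references, repack, then push the rewritten history
cd project.git
git reflog expire --expire=now --all && git gc --prune=now --aggressive
git push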

The most reliable response is assuming the credential is compromised and rotating it immediately. History clean-up can follow as a secondary measure to reduce ongoing exposure, but it should never be the primary response. Treat any secret that entered Git as if it were publicly posted because once in Git history, it effectively was.

Continuous Improvement

Credential management practices should evolve with your infrastructure and team. Regular reviews identify gaps and opportunities for improvement. When new credential types are introduced, update .gitignore patterns, secret scanning rules, and documentation. When new developers join, gather feedback on clarity and completeness of onboarding materials.

Metrics help track effectiveness. Monitor secret scanning alerts, track rotation compliance, and measure time-to-rotation when credentials are exposed. These metrics identify areas needing improvement and demonstrate progress over time.

Summary

Preventing credentials from entering Git repositories requires multiple complementary approaches. Establish comprehensive .gitignore configurations before creating any credential files. Separate example configurations from actual secrets, keeping only examples in version control. Use environment variables to inject credentials at runtime rather than storing them in configuration files. Implement pre-commit hooks and server-side scanning to catch mistakes before they enter history. Structure projects to clearly separate code from credentials, making it obvious what belongs in Git and what does not.

Train developers on credential management and create a culture where security is everyone's responsibility. Provision production credentials through deployment processes and secret management services, ensuring they never touch developer machines or version control. Rotate credentials regularly to limit exposure windows. Respond rapidly when secrets are detected, assuming compromise and rotating immediately.

Security is not a one-time configuration but an ongoing practice. Regular reviews, continuous improvement, and adaptation to new threats and technologies keep credential management effective. The investment in prevention is far less than the cost of responding to exposed credentials, making it essential to get right from the beginning.

When Operations and Machine Learning meet

5th February 2026

Here's a scenario you'll recognise: your SRE team drowns in 1,000 alerts daily. 95% are false positives. Meanwhile, your data scientists built five ML models last quarter. None have reached production.

These problems are colliding, and in the process they are solving each other. Machine learning is moving out of research labs and into the operations that keep your systems running. At the same time, DevOps practices are being adapted to get ML models into production reliably.

This convergence has created three new disciplines (AIOps, MLOps and LLM observability), and according to recent analysis, it represents a $100 billion opportunity. Here's what you need to know.

Why Traditional Operations Can't Keep Up

Modern systems generate unprecedented volumes of operational data. Logs, metrics, traces, events and user interaction signals create a continuous stream that's too large and too fast for manual analysis.

Your monitoring system might send thousands of alerts per day, but most are noise. A CPU spike in one microservice cascades into downstream latency warnings, database connection errors and end-user timeouts, generating dozens or hundreds of alerts from a single root cause. Without intelligent correlation, engineers waste hours manually connecting the dots.

Meanwhile, machine learning models that could solve real business problems sit in notebooks, never making it to production. The gap between data science and operations is costly. Data scientists lack the infrastructure to deploy models reliably. Operations teams lack the tooling to monitor models that do make it live.

The complexity of cloud-native architectures, microservices and distributed systems has outpaced traditional approaches. Manual processes that worked for simpler systems simply cannot scale.

Three Emerging Practices Changing the Game

Three distinct but related practices have emerged to address these challenges. Each solves a specific problem whilst contributing to a broader transformation in how organisations build and run digital services.

AIOps: Intelligence for Your Operations

AIOps (Artificial Intelligence for IT Operations) applies machine learning to the work of IT operations. Originally coined by Gartner, AIOps platforms collect data from across your environment, analyse it in real-time and surface patterns, anomalies or likely incidents.

The key capability is event correlation. Instead of presenting 1,000 raw alerts, AIOps systems analyse metadata, timing, topological dependencies and historical patterns to collapse related events into a single coherent incident. What was 1,000 alerts becomes one actionable event with a causal chain attached.

Beyond detection, AIOps platforms can trigger automated responses to common problems, reducing time to remediation. Because they learn from historical data, they can offer predictive insights that shift operations away from constant firefighting.

Teams implementing AIOps report measurable improvements: 60-80% reduction in alert volume, 50-70% faster incident response and significant reductions in operational toil. The technology is maturing rapidly, with Gartner predicting that 60% of large enterprises will have adopted AIOps platforms by 2026.

MLOps: Getting Models into Production

Whilst AIOps uses ML to improve operations, MLOps (Machine Learning Operations) is about operationalising machine learning itself. Building a model is only a small part of making it useful. Models change, data changes, and performance degrades over time if the system isn't maintained.

MLOps is an engineering culture and practice that unifies ML development and ML operations. It extends DevOps by treating machine learning models and data assets as first-class citizens within the delivery lifecycle.

In practice, this means continuous integration and continuous delivery for machine learning. Changes to models and pipelines are tested and deployed in a controlled way. Model versioning tracks not just the model artefact, but also the datasets and hyperparameters that produced it. Monitoring in production watches for performance drift and decides when to retrain or roll back.

The MLOps market was valued at $2.2 billion in 2024 and is projected to reach $16.6 billion by 2030, reflecting rapid adoption across industries. Organisations that successfully implement MLOps report that up to 88% of ML initiatives that previously failed to reach production are now being deployed successfully.

A typical MLOps implementation looks like this: data scientists work in their preferred tools, but when they're ready to deploy, the model goes through automated testing, gets versioned alongside its training data and deploys with built-in monitoring for performance drift. If the model degrades, it can automatically retrain or roll back.

The SRE Automation Opportunity

Site Reliability Engineering, originally created at Google, applies software engineering principles to operations problems. It encompasses availability, latency, performance, efficiency, change management, monitoring, emergency response and capacity planning.

Recent analysis suggests that automating SRE work could offer substantially greater returns than automating traditional operations tasks. The reasoning centres on scope and impact. SRE automation touches almost every aspect of service delivery, rather than focusing mainly on operational analytics. It has direct and immediate impact on reliability and performance, which affects customer satisfaction and business outcomes.

The economic case is compelling. Whilst Gartner's predictions for the AIOps market are modest ($3.1 billion by 2025), analysis from Foundation Capital suggests that automating SRE work is worth 50 times that figure, a $100 billion opportunity. The value comes from reduced downtime, improved resource utilisation, increased innovation by freeing skilled staff from routine tasks and enhanced customer satisfaction through improved service quality.

Rather than replacing AIOps, the likely outcome is convergence. Analytics, automation and reliability engineering become mutually reinforcing, with organisations adopting integrated approaches that combine intelligent monitoring, automated operations and proactive reliability practices.

What This Looks Like in the Real World

The difference between traditional operations and ML-powered operations shows up in everyday scenarios.

Before: An application starts responding slowly. Monitoring systems fire hundreds of alerts across different tools. An engineer spends two hours correlating logs, metrics and traces to identify that a database connection pool is exhausted. They manually scale the service, update documentation and hope to remember the fix next time.

After: The same slowdown triggers anomaly detection. The AIOps platform correlates signals across the stack, identifies the connection pool issue and surfaces it as a single incident with context. Either an automated remediation kicks in (scaling the pool based on learned patterns) or the engineer receives a notification with diagnosis complete and remediation steps suggested. Resolution time drops from hours to minutes.

Before: A data science team builds a pricing optimisation model. After three months of development, they hand a trained model to engineering. Engineering spends another month building deployment infrastructure, writing monitoring code and figuring out how to version the model. By the time it reaches production, the model is stale and performs poorly.

After: The same team works within an MLOps platform. Development happens in standard environments with experiment tracking. When ready, the data scientist triggers deployment through a single interface. The platform handles testing, versioning, deployment and monitoring. The model reaches production in days instead of months, and automatic retraining keeps it current.

These patterns extend across industries. Financial services firms use MLOps for fraud detection models that need continuous updating. E-commerce platforms use AIOps to manage complex microservices architectures. Healthcare organisations use both to ensure critical systems remain available whilst deploying diagnostic models safely.

The Tech Behind the Transformation (Optional Deep Dive)

If you want to understand why this convergence is happening now, it helps to know about transformers and vector embeddings. If you're more interested in implementation, skip to the next section.

The breakthrough that enabled modern AI came in 2017 with a paper titled "Attention Is All You Need". Ashish Vaswani and colleagues at Google introduced the transformer architecture, a neural network design that processes sequential data (like sentences) by computing relationships across the entire sequence at once, rather than step by step.

The key innovation is self-attention. Earlier models struggled with long sequences because they processed data sequentially and lost context. Self-attention allows a model to examine all parts of an input simultaneously, computing relationships between each token and every other token. This parallel processing is a major reason transformers scale well and perform strongly on large datasets.

Transformers underpin models like GPT and BERT. They enable applications from chatbots to content generation, code assistance to semantic search. For operations teams, transformer-based models power the natural language interfaces that let engineers query complex systems in plain English and the embedding models that enable semantic search across logs and documentation.

Vector embeddings represent concepts as dense vectors in high-dimensional space. Similar concepts have embeddings that are close together, whilst unrelated concepts are far apart. This lets models quantify meaning in a way that supports both understanding and generation.

In operations contexts, embeddings enable semantic search. Instead of searching logs for exact keyword matches, you can search for concepts. Query "authentication failures" and retrieve related events like "login rejected", "invalid credentials" or "session timeout", even if they don't contain your exact search terms.

Retrieval-Augmented Generation (RAG) combines these capabilities to make AI systems more accurate and current. A RAG system pairs a language model with a retrieval mechanism that fetches external information at query time. The model generates responses using both its internal knowledge and retrieved context.

This approach is particularly valuable for operations. A RAG-powered assistant can pull current runbook procedures, recent incident reports and configuration documentation to answer questions like "how do we handle database failover in the production environment?" with accurate, up-to-date information.

The technical stack supporting RAG implementations typically includes vector databases for similarity search. As of 2025, commonly deployed options include Pinecone, Milvus, Chroma, Faiss, Qdrant, Weaviate and several others, reflecting a fast-moving landscape that's becoming standard infrastructure for many AI implementations.

Where to Begin

Starting with ML-powered operations doesn't require a complete transformation. Begin with targeted improvements that address your most pressing problems.

If you're struggling with alert fatigue...

Start with event correlation. Many AIOps platforms offer this as an entry point without requiring full platform adoption. Look for solutions that integrate with your existing monitoring tools and can demonstrate noise reduction in a proof of concept.

Focus on one high-volume service or team first. Success here provides both immediate relief and a template for broader rollout. Track metrics like alerts per day, time to acknowledge and time to resolution to demonstrate impact.

Tools worth considering include established platforms like Datadog, Dynatrace and ServiceNow, alongside newer entrants like PagerDuty AIOps and specialised incident response platforms like incident.io.

If you have ML models stuck in development...

Begin with MLOps fundamentals before investing in comprehensive platforms. Focus on model versioning first (track which code, data and hyperparameters produced each model). This single practice dramatically improves reproducibility and makes collaboration easier.

Next, automate deployment for one model. Choose a model that's already proven valuable but requires manual intervention to update. Build a pipeline that handles testing, deployment and basic monitoring. Use this as a template for other models.

Popular MLOps platforms include MLflow (open source), cloud provider offerings like AWS SageMaker, Google Vertex AI and Azure Machine Learning, and specialised platforms like Databricks and Weights & Biases.
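
As a minimal sketch of the "automate deployment for one model" step using MLflow's command-line tooling (the project path, parameter, model name and version number are all illustrative):

# Run a training job as a tracked MLflow project; parameters, metrics and
# the resulting model artefact are recorded against the run
mlflow run . -P learning_rate=0.01

# Serve a specific registered model version locally to smoke-test it
mlflow models serve -m "models:/pricing-model/3" --port 5001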

If you're building with LLMs...

Implement observability from day one. LLM applications are different from traditional software. They're probabilistic, can be expensive to run, and their behaviour varies with prompts and context. You need to monitor performance (response times, throughput), quality (output consistency, appropriateness), bias, cost (token usage) and explainability.

Common pitfalls include underestimating costs, failing to implement proper prompt versioning, neglecting to monitor for model drift and not planning for the debugging challenges that come with non-deterministic systems.

The LLM observability space is evolving rapidly, with platforms like LangSmith, Arize AI, Honeycomb and others offering specialised tooling for monitoring generative AI applications in production.

Why This Matters Beyond the Tech

The convergence of ML and operations isn't just a technical shift. It requires cultural change, new skills and rethinking of traditional roles.

Teams need to understand not only deployment automation and infrastructure as code, but also concepts like attention mechanisms, vector embeddings and retrieval systems because these directly influence how AI-enabled services behave in production. They also need operational practices that can handle both deterministic systems and probabilistic ones, whilst maintaining reliability, compliance and cost control.

Data scientists are increasingly expected to understand production concerns like latency budgets, deployment strategies and operational monitoring. Operations engineers are expected to understand model behaviour, data drift and the basics of ML pipelines. The gap between these roles is narrowing.

Security and governance cannot be afterthoughts. As AI becomes embedded in tooling and operations become more automated, organisations need to integrate security testing throughout the development cycle, implement proper access controls and audit trails, and ensure models and automated systems operate within appropriate guardrails.

The organisations succeeding with these practices treat them as both a technical programme and an organisational transformation. They invest in training, establish cross-functional teams, create clear ownership and accountability, and build platforms that reduce cognitive load whilst enabling self-service.

Moving Forward

The convergence of machine learning and operations isn't a future trend; it's happening now. AIOps platforms are reducing alert noise and accelerating incident response. MLOps practices are getting models into production faster and keeping them performing well. The economic case for SRE automation is driving investment and innovation.

The organisations treating this as transformation rather than tooling adoption are seeing results: fewer outages, faster deployments, models that actually deliver value. They're not waiting for perfect solutions. They're starting with focused improvements, learning from what works and scaling gradually.

The question isn't whether to adopt these practices. It's whether you'll shape the change or scramble to catch up. Start with the problem that hurts most (alert fatigue, models stuck in development, reliability concerns) and build from there. The convergence of ML and operations offers practical solutions to real problems. The hard part is committing to the cultural and organisational changes that make the technology work.

Adding a dropdown calendar to the macOS desktop with Itsycal

26th January 2026

In Linux Mint, there is a dropdown calendar that can be used for some advance planning. On Windows, there is a pop-up one on the taskbar that is just as useful. Neither exists on a default macOS desktop, and I missed the functionality. Thus, a search began.

That ended with my finding Itsycal, which does exactly what I need. Handily, it also integrates with the macOS Calendar app, though I use other places for my appointments. In some ways, that is more than I need. The dropdown pane with the ability to go back and forth through time suffices for me.

While it would be ideal if I could go year by year as well as month by month, which is the case on Linux Mint, I can manage with just the latter. Anything is better than having nothing at all. Sometimes, using more than one operating system broadens a mind.

Switching from uBlock Origin to AdGuard and Stylus

15th January 2026

A while back, uBlock Origin broke this website when I visited it. There was a long AI conversation that left me with the impression that the mix of macOS, Firefox and WordPress presented an edge case that could not be resolved. Thus, I went looking for alternatives because I may not be able to convince anyone else to look into it, especially when the issue could be so niche.

One thing that uBlock Origin makes very easy is custom blocking of web page elements, so that was one thing that I needed to replace. A partial solution comes in the form of the Stylus extension. Though the CSS rules may need to be defined manually after interrogating a web page's structure, the same effects can be achieved. In truth, it is not as slick as using a GUI element selector, but I have learned to get past that.

For automatic ad blocking, I have turned to AdGuard AdBlocker. Thus far, it is doing what I need it to do. One thing to note is that it does nothing to stop your visits from registering in website visitor analytics, not that this bothers me at all. That was something uBlock Origin did out of the box, while my new ad blocker sticks more narrowly to its chosen task, and that suffices for now.

In summary, I have altered my tooling for controlling what websites show me. It is all too easy for otherwise solid tools to register false positives and cause other obstructions. That is why I find myself swapping between them every so often; after all, website code can be far too variable.

Maybe it highlights how challenging it is to make ad blocking and other similar software when your test cases cannot be as extensive as they need to be. Add in something of an arms race between advertisers and ad blockers, and the ante gets upped even more. It does not help that we want these things free of charge too.
