Building a modular Hugo website home page using block-driven front matter
Inspired by building a modular landing page on a Grav-powered subsite, I wondered about doing the same for a Hugo-powered public transport website that I have. It was part of an overall refresh that I was giving the site, with AI consultation riding shotgun throughout the effort. The home page design changed from a two-column arrangement, much like what was once typical of a blog, to a single-column layout with two-column sections.
The now-vertical structure consists of numerous layers. First, there is an introduction with a hero image, followed by blocks briefly explaining what the individual sections are about. Below them, two further panels describe motivations and scope expansions. After those, two blocks display pithy details of recent public transport service developments before two final panels provide links to the latest articles and to other utility pages, respectively.
This was a conscious mix of different content types, with some nesting in the structure. Much of the content was described in page front matter, instead of where it usually goes. Without that flexibility, such a layout would not have been possible. All in all, this illustrates just how powerful Hugo is when it comes to constructing website layouts. The limits essentially are those of user experience and your imagination, and necessarily in that order.
On Hugo Home Pages
Building a home page in Hugo starts with understanding what content/_index.md actually represents. Unlike a regular article file, _index.md denotes a list page, which at the root of the content directory becomes the site's home page. This special role means Hugo treats it differently from a standard single page because the home is always a list page even when the design feels like a one-off.
Front matter in content/_index.md can steer how the page is rendered, though it remains entirely optional. If no front matter is present at all, Hugo still creates the home page at .Site.Home, draws the title from the site configuration, leaves the description empty unless it has been set globally, and renders any Markdown below the front matter via .Content. That minimal behaviour suits sites where the home layout is driven entirely by templates, and it is a common starting point for new projects.
How the Underlying Markdown File Looks
While this piece opens with a description of what was required and built, it is better to look at the real _index.md file. Illustrating the block-driven pattern in practical use, here is a portion of the file:
---
title: "Maximising the Possibilities of Public Transport"
layout: "home"
blocks:
  - type: callout
    text1: "Here, you will find practical, thoughtful insight..."
    text2: "You can explore detailed route listings..."
    image: "images/sam-Up56AzRX3uM-unsplash.jpg"
    image_alt: "Transpennine Express train leaving Manchester Piccadilly train station"
  - type: cards
    heading: "Explore"
    cols_lg: 6
    items:
      - title: "News & Musings"
        text: "Read the latest articles on rail networks..."
        url: "https://ontrainsandbuses.com/news-and-musings/"
      - title: "News Snippets"
        ...
  - type: callout
    heading: "Motivation"
    text2: "Since 2010, British public transport has endured severe challenges..."
    image: "images/joseph-mama-aaQ_tJNBK4c-unsplash.jpg"
    image_alt: "Buses in Leeds, England, U.K."
  - type: callout
    heading: "An Expanding Scope"
    text2: "You will find content here drawn from Ireland..."
    image: "images/snap-wander-RlQ0MK2InMw-unsplash.jpg"
    image_alt: "TGV speeding through French countryside"
---
There are several things worth noting here. The title and layout: "home" fields appear at the top, with all structural content expressed as a blocks list beneath them. There is no Markdown body because the blocks supply all the visible content, and the file contains no layout logic of its own, only a description of what should appear and in what order. However, the lack of a Markdown body does pose a challenge for spelling and grammar checking with the LanguageTool extension in VSCode, which means that proofreading needs to happen in a different way, such as in the editor that comes with the LanguageTool browser extension.
Template Selection and Lookup Order
Template selection is where Hugo's home page diverges most noticeably from regular sections. In Hugo v0.146.0, the template system was completely overhauled, and the lookup order for the home page kind now follows a straightforward sequence: layouts/home.html, then layouts/list.html, then layouts/all.html. Before that release, the conventional path was layouts/index.html first, falling back to layouts/_default/list.html, and the older form remains supported through backward-compatibility mapping. In every case, baseof.html is a wrapper rather than a page template in its own right, so it surrounds whichever content template is selected without substituting for one.
The choice of template can be guided further through front matter. Setting layout: "home" in content/_index.md, as in the example above, encourages Hugo to pick a template named home.html, while setting type: "home" enables more specific template resolution by namespace. These are useful options when the home page deserves its own template path without disturbing other list pages.
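To make that concrete, here is how the two fields might sit together in the front matter; the namespaced path in the comment reflects my reading of the lookup rules and is worth verifying against the documentation for your Hugo version:
# content/_index.md: steering template selection
layout: "home" # prefer a template named home.html
type: "home" # additionally searches a type namespace, e.g. layouts/home/home.html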
The Home Template in Practice
With the front matter established, the template that renders it is worth examining in its own right. It happens that the home.html for this site reads as follows:
<!DOCTYPE html>
{{- partial "head.html" . -}}
<body>
  {{- partial "header.html" . -}}
  <div class="container main" id="content">
    <div class="row">
      <h2 class="centre">{{ .Title }}</h2>
      {{- partial "blocks/render.html" . -}}
    </div>
    {{- partial "recent-snippets-cards.html" . -}}
    {{- partial "home-teasers.html" . -}}
    {{ .Content }}
  </div>
  {{- partial "footer.html" . -}}
  {{- partial "cc.html" . -}}
  {{- partial "matomo.html" . -}}
</body>
</html>
This template is self-contained rather than wrapping a base template. It opens the full HTML document directly, calls head.html for everything inside the <head> element and header.html for site navigation, then establishes the main content container. Inside that container, .Title is output as an h2 heading, drawing from the title field in content/_index.md. The block dispatcher partial, blocks/render.html, immediately follows and is responsible for looping through .Params.blocks and rendering each entry in sequence, handling all the callout and cards blocks described in the front matter.
Below the blocks, two further partials render dynamic content independently of the front matter. recent-snippets-cards.html displays the two most recent news snippets as full-content cards, while home-teasers.html presents a compact linked list of recent musings alongside a weighted list of utility pages. After those, {{ .Content }} outputs any Markdown written below the front matter in content/_index.md, though in this case, the file has no body content, so nothing is rendered at that point. The template closes with footer.html, a cookie notice via cc.html and a Matomo analytics snippet.
Notice that this template does not use {{ define "main" }} and therefore does not rely on baseof.html at all. It owns the full document structure itself, which is a legitimate approach when the home page has a sufficiently distinct shape that sharing a base template would add complexity rather than reduce it.
The Block Dispatcher
The blocks/render.html partial is the engine that connects the front matter to the individual block templates. Its full content is brief but does considerable work:
{{ with .Params.blocks }}
  {{ range . }}
    {{ $type := .type | default "text" }}
    {{ partial (printf "blocks/%s.html" $type) (dict "page" $ "block" .) }}
  {{ end }}
{{ end }}
The with .Params.blocks guard means the entire loop is skipped cleanly if no blocks key is present in the front matter, so pages that do not use the system are unaffected. For each block in the list, the type field is read and passed through printf to build the partial path, so type: callout resolves to blocks/callout.html and type: cards resolves to blocks/cards.html. If a block has no type, the fallback is text, so a blocks/text.html partial would handle it. The dict call constructs a fresh context map passing both the current page (as page) and the raw block data (as block) into the partial, keeping the two concerns cleanly separated.
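The fallback blocks/text.html partial is not part of the site as shown; a minimal sketch, assuming the same dict context and a single text field in the block, could read:
{{ $b := .block }}
<section class="mt-4">
  {{ with $b.text }}<p>{{ . }}</p>{{ end }}
</section>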
The Callout Blocks
The callout.html partial renders bordered, padded sections that can carry a heading, an image and up to five paragraphs of text. Used for the website introduction, motivation and expanded scope sections, its template is as follows:
{{ $b := .block }}
<section class="mt-4">
  <div class="p-4 border rounded">
    {{ with $b.heading }}<h3>{{ . }}</h3>{{ end }}
    {{ with $b.image }}
      <img
        src="{{ . }}"
        class="img-fluid w-100 rounded"
        alt="{{ $b.image_alt | default "" }}">
    {{ end }}
    <div class="text-columns mt-4">
      {{ with $b.text1 }}<p>{{ . }}</p>{{ end }}
      {{ with $b.text2 }}<p>{{ . }}</p>{{ end }}
      {{ with $b.text3 }}<p>{{ . }}</p>{{ end }}
      {{ with $b.text4 }}<p>{{ . }}</p>{{ end }}
      {{ with $b.text5 }}<p>{{ . }}</p>{{ end }}
    </div>
  </div>
</section>
The pattern here is consistent and deliberate. Every field is wrapped in a {{ with }} block, so fields absent from the front matter produce no output and no empty elements. The heading renders as an h3, sitting one level below the page's h2 title and maintaining a coherent document outline. The image uses img-fluid and w-100 alongside rounded, making it fully responsive and visually consistent with the bordered container. According to the Bootstrap documentation, img-fluid applies max-width: 100% and height: auto so the image scales with its parent, while w-100 ensures it fills the container width regardless of its intrinsic size. The image_alt field falls back to an empty string via | default "" rather than omitting the attribute entirely, which keeps the rendered HTML valid.
Text content sits inside a text-columns wrapper, which allows a stylesheet to apply a CSS multi-column layout to longer passages without altering the template. The numbered paragraph fields text1 through text5 reflect the varying depth of the callout blocks in the front matter: the introductory callout uses two paragraphs, while the Motivation callout uses four. Adding another paragraph field to a block requires only a new {{ with $b.text6 }} line in the partial and a matching text6 key in the front matter entry.
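Concretely, that one-line addition to the partial would be:
{{ with $b.text6 }}<p>{{ . }}</p>{{ end }}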
The Section Introduction Blocks
The cards.html partial renders a headed grid of linked blocks, with the column width at large viewports driven by a front matter parameter. This is used for the website section introductions and its template is as follows:
{{ $b := .block }}
{{ $colsLg := $b.cols_lg | default 4 }}
<section class="mt-4">
  {{ with $b.heading }}<h3 class="h4 mb-3">{{ . }}</h3>{{ end }}
  <div class="row">
    {{ range $b.items }}
      <div class="col-12 col-md-6 col-lg-{{ $colsLg }} mb-3">
        <div class="card h-100 ps-2 pe-2 pt-2 pb-2">
          <div class="card-body">
            <h4 class="h5 card-title mt-1 mb-2">
              <a href="{{ .url }}">{{ .title }}</a>
            </h4>
            {{ with .text }}<p class="card-text mb-0">{{ . }}</p>{{ end }}
          </div>
        </div>
      </div>
    {{ end }}
  </div>
</section>
The cols_lg value defaults to 4 if not specified, which produces a three-column grid at large viewports using Bootstrap's twelve-column grid. The transport site's cards block sets cols_lg: 6, giving two columns at large viewports and making better use of the wider reading space for six substantial card descriptions. At medium viewports, the col-md-6 class produces two columns regardless of cols_lg, and col-12 ensures single-column stacking on small screens.
The heading uses the h4 utility class on an h3 element, pulling the visual size down one step while keeping the document outline correct, since the page already has an h2 title and h3 headings in the callout blocks. Each card title then uses h5 on an h4 for the same reason. The h-100 class on the card sets its height to one hundred percent of the column, so all cards in a row grow to match the tallest one and baselines align even when descriptions vary in length. The padding classes ps-2 pe-2 pt-2 pb-2 add a small inset without relying on custom CSS.
Brief Snippets of Recent Public Transport Developments
The recent-snippets-cards.html partial sits outside the blocks system and renders the most recent pair of short transport news posts as full-content cards. Here is its template:
<h3 class="h4 mt-4 mb-3">Recent Snippets</h3>
<div class="row">
  {{ range ( first 2 ( where .Site.Pages "Type" "news-snippets" ) ) }}
    <div class="col-12 col-md-6 mb-3">
      <div class="card h-100">
        <div class="card-body">
          <h4 class="h6 card-title mt-1 mb-2">
            {{ .Date.Format "15:04, January 2" }}<sup>{{ if eq (.Date.Format "2") "2" }}nd{{ else if eq (.Date.Format "2") "22" }}nd{{ else if eq (.Date.Format "2") "1" }}st{{ else if eq (.Date.Format "2") "21" }}st{{ else if eq (.Date.Format "2") "31" }}st{{ else if eq (.Date.Format "2") "3" }}rd{{ else if eq (.Date.Format "2") "23" }}rd{{ else }}th{{ end }}</sup>, {{ .Date.Format "2006" }}
          </h4>
          <div class="snippet-content">
            {{ .Content }}
          </div>
        </div>
      </div>
    </div>
  {{ end }}
</div>
The where function filters .Site.Pages to the news-snippets content type, and first 2 takes only the two most recently created entries. Notably, this collection does not call .ByDate.Reverse before first, which means it relies on Hugo's default page ordering. Where precise newest-first ordering matters, chaining ByDate.Reverse before first makes the intent explicit and avoids surprises if the default ordering changes.
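Making that explicit is a one-line change; a sketch of the adjusted opening of the range, mirroring the approach already used in home-teasers.html, would be:
{{ range first 2 ((where .Site.Pages "Type" "news-snippets").ByDate.Reverse) }}
  {{/* card markup as before */}}
{{ end }}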
The date heading warrants attention. It formats the time as 15:04 for a 24-hour clock display, followed by the month name and day number, then appends an ordinal suffix using a chain of if and else if comparisons against the raw day string. The logic handles the irregular days (1st, 21st, 31st, 2nd, 22nd, 3rd and 23rd) before falling back to th for all other days. The suffix is wrapped in a <sup> element so it renders as a superscript. The year follows as a separate .Date.Format "2006" call, separated from the day by a comma. Each card renders the full .Content of the snippet rather than a summary, which suits short-form posts where the entire entry is worth showing on the home page.
Latest Musings and Utility Pages Blocks
The home-teasers.html partial renders a two-column row of linked lists, one for recent long-form articles and one for utility pages. Its template is as follows:
<div class="row mt-4">
  <div class="col-12 col-md-6 mb-3">
    <div class="card h-100">
      <div class="card-body">
        <h3 class="h5 card-title mb-3">Recent Musings</h3>
        {{ range first 5 ((where .Site.RegularPages "Type" "news-and-musings").ByDate.Reverse) }}
          <p class="mb-2">
            <a href="{{ .Permalink }}">{{ .Title }}</a>
          </p>
        {{ end }}
      </div>
    </div>
  </div>
  <div class="col-12 col-md-6 mb-3">
    <div class="card h-100">
      <div class="card-body">
        <h3 class="h5 card-title mb-3">Extras & Utilities</h3>
        {{ $extras := where .Site.RegularPages "Type" "extras" }}
        {{ $extras = where $extras "Title" "ne" "Thank You for Your Message!" }}
        {{ $extras = where $extras "Title" "ne" "Whoops!" }}
        {{ range $extras.ByWeight }}
          <p class="mb-2">
            <a href="{{ .Permalink }}">{{ .Title }}</a>
          </p>
        {{ end }}
      </div>
    </div>
  </div>
</div>
The left column uses .Site.RegularPages rather than .Site.Pages to exclude list pages, taxonomy pages and other non-content pages from the results. The news-and-musings type is filtered, sorted with .ByDate.Reverse and then limited to five entries with first 5, producing a compact, current list of article titles. The heading uses h5 on an h3 for the same visual-scale reason seen in the cards blocks, and h-100 on each card ensures the two columns match in height at medium viewports and above.
The right column builds the extras list through three chained where calls. The first narrows to the extras content type, and the subsequent two filter out utility pages that should never appear in public navigation, specifically the form confirmation and error pages. The remaining pages are then sorted by ByWeight, which respects the weight value set in each page's front matter. Pages without a weight default to zero, so assigning small positive integers to the pages that should appear first gives stable, editorially controlled ordering without touching the template.
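For instance, a utility page that should lead the list might carry front matter like this; the title and value are illustrative:
---
title: "About This Website"
weight: 1
---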
Diagnosing Template Choices
Diagnosing which template Hugo has chosen is more reliable with tooling than with guesswork. Running the development server with debug output reveals the selected templates in the terminal logs. Another quick technique is to place a visible marker in a candidate file and inspect the page source.
HTML comments are often stripped during minified builds, and Go template comments never reach the output, so an innocuous meta tag makes a better marker because a minifier will not remove it. If the marker does not appear after a rebuild, either the template being edited is not in use because another file higher in the lookup order is taking precedence, or a theme is providing a matching file without it being obvious.
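As a sketch of both techniques, recent Hugo versions accept a log-level flag for the first:
hugo server --logLevel debug
For the second, a line such as the following inside a candidate template survives minification; the marker name is made up for illustration:
<meta name="template-marker" content="home-template-in-use">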
Front Matter Beyond Layout
Front matter on the home page earns its place when it supplies values that make their way into head tags and structured sections, rather than when it tries to replicate layout logic. A brief description is valuable for metadata and social previews because many base templates output it as a meta description tag. Where a site uses social cards, parameters for images and titles can be added and consumed consistently.
Menu participation also remains available to the home page, with entries in front matter allowing the home to appear in navigation with a given weight. Less common but still useful fields include outputs, which can disable or configure output formats, and cascade, which can provide defaults to child pages when site-wide consistency matters. Build controls can influence whether a page is rendered or indexed, though these are rarely changed on a home page once the structure has settled.
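Pulling these threads together, a home page front matter using such fields might look like the following; the values are illustrative rather than taken from the site above:
---
title: "Maximising the Possibilities of Public Transport"
layout: "home"
description: "Practical insight into rail and bus travel."
menus:
  main:
    weight: 1
outputs:
  - html
  - rss
---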
Template Hygiene
Template hygiene pays off throughout this process. Whether the home page uses a self-contained template or wraps baseof.html, the principle is the same: each file should own a clearly bounded responsibility. The home template in the example above does this well, with head.html, header.html and footer.html each handling their own concerns, and the main content area occupied by the blocks dispatcher and the two dynamic partials. Column wrappers are easiest to manage when each partial opens and closes its own structure, rather than relying on a sibling to provide closures elsewhere.
That self-containment prevents subtle layout breakage and means that adding a new block type requires only a small partial in layouts/partials/blocks/ and a new entry in the front matter blocks list, with no changes to any existing template. Once the home page adopts this pattern, the need for CSS overrides recedes because the HTML shape finally expresses intent instead of fighting it.
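To make that concrete, a hypothetical quote block would need only a new entry in the blocks list with type: quote, plus a partial along these lines at layouts/partials/blocks/quote.html; the field names are invented for this sketch:
{{ $b := .block }}
<section class="mt-4">
  <blockquote class="blockquote">
    {{ with $b.text }}<p>{{ . }}</p>{{ end }}
    {{ with $b.cite }}<footer class="blockquote-footer">{{ . }}</footer>{{ end }}
  </blockquote>
</section>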
Bootstrap Utility Classes in Summary
Understanding Bootstrap's utility classes rounds off the technique because these classes anchor the modular blocks without the need for custom CSS. h-100 sets height to one hundred percent and works well on cards inside a flex row so that their bottoms align across a grid, as seen in both the cards block and the home teasers. The h4, h5 and h6 utilities apply a different typographic scale to any element without changing the document outline, which is useful for keeping headings visually restrained while preserving accessibility. img-fluid provides responsive behaviour by constraining an image to its container width and maintaining aspect ratio, and w-100 makes an image or any element fill the container width even if its intrinsic size would let it stop short. Together, these classes produce predictable and adaptable blocks that feel consistent across all viewports.
Closing Remarks
The result of combining Hugo's list-page model for the home, a block-driven front matter design and Bootstrap's light-touch utilities is a home page that reads cleanly and remains easy to extend. New block types become a matter of adding a small partial and a new blocks entry, with the dispatcher handling the rest automatically. Dynamic sections such as recent snippets sit in dedicated partials called directly from the template, updating without any intervention in content/_index.md. Existing sections can be reordered without editing templates, shared structure remains in one place, and the need for brittle CSS customisation fades because the templates do the heavy lifting.
A final point returns to content/_index.md. Keeping front matter purposeful makes it valuable. A title, a layout directive and a blocks list that models the editorially controlled page structure are often enough, as we have seen in this example from my public transport website. More seldom-used fields such as outputs, cascade and build remain available should a site require them, but their restraint reflects the wider approach: let content describe structure, let templates handle layout and avoid unnecessary complexity.
Lessons learned during migrations to Grav CMS from Textpattern
After the best part of four years since the arrival of Textpattern 4.8.8, Textpattern 4.9.0 was released. Since I was running two subsites using the software, I tried one of them with the new version. That broke an elderly navigation plugin that no longer appears in any repository, prompting a rollback, which was successful. Even so, it stirred some curiosity about alternatives to this veteran of the content management system world, which is pushing on for twenty-two years of age. That might have been just as well, given the subsequent release of Textpattern 4.9.1 because of two reported security issues, one of which affected all preceding versions of the software.
Well before that came to light, there had been a chat session in a Gemini app on a mobile while travelling on a bus. This started with a simple question about alternatives to Textpattern. The ensuing interaction led me to choose Grav CMS after one other option turned out to involve a subscription charge; a personal website side hustle generating no revenue was not going to become a more expensive sideline than it already was, the same reasoning that stops me paying for WordPress plugins.
Exploring Known Options
Without any recourse to AI capability, I already had options. While WordPress was one of those well known to me, the organisation of the website was such that it would be challenging to get everything placed under one instance, and I never got around to exploring the multisite capability in much depth. Either way, it would have involved quite an amount of restructuring. Even having multiple instances would mean added maintenance, though I do automate things heavily. The added attack surface that comes with database dependence is another matter.
In the past, I have been a user of Drupal, though its complexity and the steepness of the associated learning curve meant that I never exploited it fully. Since those were pre-AI days, I wonder how things would differ now. Nevertheless, the need to make parts of a website fit in with each other was another challenge that I failed to overcome in those days, so this content framework approach was not one that I wished to use again. In short, it is an enterprise-grade tool that may be above the level of personal web tinkering.
The move away from Drupal brought me to Hugo around four years ago. That too presents a learning curve, though its inherent flexibility meant that I could do anything that I wanted with it once I had navigated its documentation and ironed out oversights using web searches. This static website generator is what rebuilds a public transport website, a history website comprising writings by my late father, and a website for my freelancing company. There is no database involved, and you can avoid having any dynamic content creation machinery on the web servers too. Using Git, it is possible to facilitate content publishing from anywhere as well.
Why Grav?
Of the candidates, Hugo had a lot going for it. Its inherent flexibility would not obstruct a site-wide consistency of appearance, and there is nothing to stop one structuring things however one wants. However, Grav has one key selling point in comparison: straightforward remote production of content, without local builds being uploaded to a web server. That decouples publishing from the need to propagate build machinery across different locations.
Like Hugo, Grav had an active project behind it, a decent supply of plugins and an architecture that bested Textpattern and its usual languid release cycle. The similarity also extended as far as not having to buy into someone else's theme: any theming can be done from scratch for consistency of appearance across different parts of a website. In Grav's case, that means using the Twig PHP templating engine, another thing to learn, reminiscent of Textpattern's Textile as much as of Hugo's own templating.
The centrality of Markdown files was another area of commonality, albeit with remote editing. If you are conversant from Hugo with page files combining YAML front matter and subsequent page content, Grav will not seem so alien, even though it has a web interface for editing that front matter. This familiarity could help if you need to edit the files directly for any reason.
That is not to say that there was nothing to learn, for there was plenty. For example, Grav has its own way of setting up modular pages, an idea that I was to retrofit back into a Hugo website afterwards. This means care with module naming as well as caching, editor choice and content collections, each with their own uniqueness that rewards some prior reading. A learning journey was in the offing, not the most attractive part of the experience in any event.
Considerations
A number of other articles have been published here regarding the major lessons learned during the transitions from Textpattern to Grav. Unlike previous experiences with Hugo, this learning involved the use of AI as part of any debugging. At times, there was a need to take things step by step, interacting with the AI instead of trying out a script that it had put my way. There are times when one's own context window gets overwhelmed by the flow of text, meaning that such behaviour needs to be taken in hand.
Another thing to watch is that human consultation of the official documentation is not neglected in a quest for speed that lets the AI do it for you; after all, this machinery is fallible, and nothing we ever bring into being is without its flaws. Grav itself also comes from a human enterprise that usefully includes its own Discord community. The GitHub repository was not something to which I had recourse, even if the Admin plugin interface has prompts for reporting issues there. Here, I provide a synopsis of the points to watch that may add to the help provided elsewhere.
Choosing an Editor
By default, Grav Admin uses CodeMirror as its content editor. While CodeMirror is well suited to editing code, offering syntax highlighting, bracket matching and multiple cursors, it renders its editing surface in a way that standard browser extension APIs cannot reach. Grammar checkers and spell-check extensions such as LanguageTool rely on native editable elements to detect text, and CodeMirror does not use these. The result is that browser-based writing tools produce no output in Grav Admin at all, a confirmed architectural incompatibility documented in the LanguageTool issue tracker rather than a configuration issue.
This can be addressed by replacing CodeMirror using the TinyMCE Editor Integration plugin, installable directly from the Admin plugin interface, which brings a familiar style of editor that browser extensions can access normally. Thus, LanguageTool functionality is restored, the writing workflow stays inside Grav Admin and the change requires only a small amount of configuration to prevent TinyMCE from interfering with Markdown files in undesirable ways. Before coming across the TinyMCE Editor plugin, I was seriously toying with the local editing option centred around a Git-based workflow. Here, using VS Code with the LanguageTool extension like I do for Hugo websites remained a strong possibility. The plugin means that the need to do this is not as pressing as it otherwise might be.
None of this covers editing Twig templates and other configuration files unless one makes use of the Editor plugin. My brief dalliance with that revealed a clunky interface and interference with the appearance of the website, something that I never appreciated when I saw it with Drupal either. Thus, the plugin was quickly removed, and I do not miss it. As it happened, editing and creating files over an SSH connection with a lightweight terminal editor worked well enough for me during the setup phase anyway. If I wanted a nicer editing experience, a Git-based approach would allow local editing in VSCode before pushing the files back onto the server.
Grav Caching
Unlike WordPress, which requires plugins to do so, Grav maintains its own internal cache for compiled pages and assets. Learning to work with it is part of understanding the platform: changes to CSS, JavaScript and other static assets are served from this cache until it is refreshed. That can be accomplished using the admin panel or by removing the contents of the cache directory directly. Once this becomes second nature, it adds very little overhead to the development process.
Modular Pages
On one of the Textpattern subsites, I had set up the landing page in a modular fashion. This carried over to Grav, which has its own way of handling modular pages. There, the modular page system assembles a single page from the files found within a collection of child folders, each presenting a self-contained content block with its own folder, Markdown file and template.
All modules render together under a single URL; they are non-routable, meaning visitors cannot access them directly. When the parent folder contains a modular.md file, the name tells Grav to use the modular.html.twig template, and the file's front matter defines which modules to include and in what order.
Module folders are identified by an underscore at the start of their name, and numeric prefixes control the display sequence. The prefix comes before the underscore: 01._main is the correct form. For a home page with many sections, this structure scales naturally, with folder names such as 01._title, 04._ireland or 13._practicalities-inspiration making the page architecture immediately readable from the file system alone.
Each module's Markdown filename determines which template renders it: a file named text.md looks for text.html.twig in the theme's modular templates folder. The parent modular.md assembles the modules using @self.modular to collect them, with a custom order list giving precise control over sequence. Once the folder naming convention and the template matching relationship are clear, the system is very workable.
Setting Up RSS Feeds
Grav does not support the generation of RSS feeds out of the box; it needs a plugin and some extra configuration. The latter means getting your head around the Grav concept of a collection because, without one, you will not see anything in your feed. In contrast, database-driven platforms like WordPress or Drupal push out the content by default, which may surprise you when you first discover that Grav needs the collections specified explicitly.
There are two details that make configuring a feed straightforward once understood. The first is that Grav routes do not match physical folder paths: a folder named 03.deliberations on disk is referenced in configuration as /deliberations, since the numeric prefix controls navigation ordering but does not appear in the route, that is, the actual web address. The second is the choice between @page.children, which collects only the immediate children of a folder, and @page.descendants, which collects recursively through all subdirectories. The collection definition belongs in the feed page's front matter, specifying where the content lives, how it should be ordered and in which direction.
Where All This Led
Once I got everything set in place, the end results were pleasing, with much learned along the way. Web page responsiveness was excellent, an experience enhanced by the caching of files. In the above discussion, I hardly mentioned the transition of existing content. For one subsite, this was manual because the scale was smaller, and the Admin plugin's interface made everything straightforward, such that all was in place after a few hours of work. In the case of the other, the task was bigger, so I fell back on an option used for a WordPress to Hugo migration: Python scripting. That greatly reduced the required effort, allowing me to focus on other things like setting up a modular landing page. The whole migration took around two weeks, all during time outside my client work. There are other places where I can use Grav, which surely will add to what I have learned already. For me, Textpattern is feeling a little like history now.
Enhancing grammar checking for proofing written content in Grav
For text proofing, I have used LanguageTool in my browser for a while now. It has always performed flawlessly in WordPress and Textpattern, catching errors as I type. When I began to use Grav as a CMS, I expected the same experience in its content editor. However, the project chose CodeMirror, with which the LanguageTool extension does not work at all, prompting a search for a better alternative.
Why CodeMirror Needed Replacing
Browser extensions such as LanguageTool and Grammarly rely on standard editable elements: <textarea> or elements with contenteditable="true". Both expose text directly in the Document Object Model (DOM), where extensions can access and analyse it.
In contrast, CodeMirror takes a different approach. Built for code editing rather than the writing of prose, it renders text through a JavaScript-managed DOM structure whilst hiding the actual textarea. While I can see how Markdown editing might fit this model for some, and it claims to facilitate collaborative editing, which also has its appeal, the match with content production is uneasy when you lose the functionality of browser spell-check and grammar extensions.
Returning to the Familiar with TinyMCE
Thankfully, there is a way to replace CodeMirror with something that works better for content writing. Moving to the TinyMCE Editor Integration plugin brings a traditional WYSIWYG editor that browser extensions can access. That restores LanguageTool functionality whilst remaining within the Admin interface.
It helps that installation is simple via the Admin plugin interface. For command line installation, make your way to the Grav folder on your web server and issue the following command:
bin/gpm install tinymce-editor
To make TinyMCE treat your Markdown content as plain text, add these parameters in the plugin settings. You will find them by going to Admin → Plugins → TinyMCE Editor Integration → Parameters. Once there, proceed to the Parameters section of the screen, where the Add Item button creates places for each name and value to go:
| Name | Value |
|---|---|
| forced_root_block | false |
| verify_html | false |
| cleanup | false |
| entity_encoding | raw |
These settings should prevent forced paragraph tags and automatic HTML clean-up that can change your Markdown files in ways that are not desirable. If this still remains a concern, there is another option.
Using VSCode for Editing
The great thing about having access to files is that they can be edited directly, something that is not possible with a database-focussed system like WordPress. Thus, you can use VSCode to create and update any content. This approach may seem unconventional for a code editor, but the availability of the LanguageTool extension makes it viable for this kind of task. In a nutshell, it offers distraction-free writing and real-time grammar checking, with Git integration that eliminates the need for separate SFTP or rsync uploads, which suits authors who prefer working directly with source files rather than relying on visual editors.
Rounding Things Off
From my experience, it appears that the incompatibility between CodeMirror and browser extensions stems from a fundamental mismatch between code editing and content writing. When CodeMirror abstracts text into a JavaScript model to enable features like syntax highlighting and multiple cursors, browser extensions lose direct DOM access to text fields. These approaches cannot coexist.
For configuration or theme files involving Twig logic or complex modular structures, using the nano editor in an SSH session on a web server remains sufficient. It is difficult to see how CodeMirror would help with this activity, while nano retains direct control with little overhead.
Usefully, we can replace CodeMirror with TinyMCE using the TinyMCE Editor Integration plugin. This restores browser extension compatibility, enables real-time grammar checking and provides a familiar editing interface. The advantages are gained by a quick installation, a little configuration and no workflow changes. If more control is needed, mixing VSCode and Git will facilitate that way of working. It is not as if we do not have options.
Ensuring that website updates make it through every cache layer and onto the web
How Things Used to Be Simple
There was a time when life used to be much simpler when it came to developing, delivering and maintaining websites. Essentially, seeing your efforts online was a matter of storing or updating your files on a web server, and a hard refresh in the browser would render the updates for you. Now we have added caches here, there and everywhere in the name of making everything load faster at the cost of added complexity.
Today, these caches are found in the application layer, at the server level and in added content delivery network (CDN) systems too. When trying to see a change that you made, you need to flush the lot, especially when you have been a good citizen and added long persistence times for files that should not change so often. For example, a typical Apache configuration might look like this:
<IfModule mod_expires.c>
  # Enable expiries
  ExpiresActive On
  # Default directive
  ExpiresDefault "access plus 1 month"
  # My favicon
  ExpiresByType image/x-icon "access plus 1 year"
  # Images
  ExpiresByType image/gif "access plus 1 month"
  ExpiresByType image/png "access plus 1 month"
  ExpiresByType image/jpg "access plus 1 month"
  ExpiresByType image/jpeg "access plus 1 month"
  # CSS
  ExpiresByType text/css "access plus 1 month"
  # JavaScript
  ExpiresByType application/javascript "access plus 1 year"
</IfModule>
These settings tell browsers to keep CSS files for a month and JavaScript for a year. This is excellent for performance, but when you update one of these files, you need to override these instructions at every layer. Note that this configuration only controls what your web server tells browsers. The application layer and CDN have their own separate caching rules.
All this is a recipe for confusion when you want to see how everything looks after making a change somewhere. Then, you need a process to make things appear new again. To understand why you need to flush multiple layers, you do need to see where these caches actually sit in your setup.
Understanding the Pipeline
In practice, your files travel through several systems before anyone sees them, with each system storing a copy to avoid repeatedly fetching the same file. The pipeline often looks like this:
| Layer | Examples | Actions |
|---|---|---|
| Your application | Hugo, Grav or WordPress | Reads the static files and generates HTML pages |
| Your web server | Nginx or Apache | Delivers these pages and files |
| Your CDN | Cloudflare | Distributes copies globally |
| Browsers | Chrome, Firefox, Safari | Receive and display the content |
Anything generated dynamically, for example by PHP, can flow through this pipeline freshly on every request. Someone asks for a page, your application generates it, the web server sends it, the CDN passes it along, and the browser displays it. The information flow is more immediate.
Static components like CSS, JavaScript and images work differently. They flow through the pipeline once, then each layer stores a copy. The next time someone requests that file, each layer serves its stored version instead of asking the previous layer. This is faster, but it means your updates do not flow through automatically. HTML itself can be held back this way too, though not so much as other components, in my experience.
When you change a static file, you need to tell each layer to fetch the new version. You work through the pipeline in sequence, starting where the information originates.
Step 1: Update the Application Layer
After uploading your new static files to the server, the first system that needs updating is your application. This is where information enters the pipeline, and we consider Hugo, Grav and WordPress in turn, moving from simplest to most complex. Though these are examples, some of the considerations should be useful elsewhere as well.
Hugo
Hugo is a static site generator, which makes cache management simpler than dynamic CMS systems. When you build your site, Hugo generates all the HTML files in the public/ directory. There is no application-level cache to clear because Hugo does not run on the server. After modifying your templates or content, rebuild your site:
hugo
Then, upload the new files from public/ to your web server. Since Hugo generates static HTML, the complexity is reduced to just the web server, CDN and browser layers. The application layer refresh is simply the build step on your local machine.
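As a sketch of that build-and-upload step, assuming an rsync-based deployment and a made-up destination path:
hugo
rsync -avz --delete public/ user@example.com:/var/www/site/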
Grav CMS
Grav adds more complexity as it runs on the server and manages its own caching. When Grav reads your static files and combines them, it also compiles Twig templates that reference these files. Once you are in the Grav folder on your web server in an SSH session, issue this command to make it read everything fresh:
bin/grav clear-cache
Or manually:
rm -rf cache/* tmp/*
When someone next requests a page, Grav generates HTML that references your new static files. If you are actively developing a Grav theme, you can disable Twig and asset caching to avoid constantly clearing cache. Edit user/config/system.yaml:
cache:
  enabled: true
  check:
    method: file
twig:
  cache: false # Disable Twig caching
  debug: true
  auto_reload: true
assets:
  css_pipeline: false # Disable CSS combination
  js_pipeline: false # Disable JS combination
While this keeps page caching on but disables the caches that interfere with development, do not forget to turn them back on before deploying to production. Not doing so may impact website responsiveness.
WordPress
WordPress introduces the most complexity with its plugin-based architecture. Since WordPress uses plugins to build and store pages, you have to tell these plugins to rebuild so they reference your new static files. Here are some common examples that generate HTML and store it, along with how to make them refresh their cached files:
Page Cache Plugins
| Plugin | How to Clear Cache |
|---|---|
| WP Rocket | Settings > WP Rocket > Clear Cache (or use the admin bar button) |
| W3 Total Cache | Performance > Dashboard > Empty All Caches |
| WP Super Cache | Settings > WP Super Cache > Delete Cache |
| LiteSpeed Cache | LiteSpeed Cache > Dashboard > Purge All |
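Where shell access is available, WP-CLI's core command flushes the object cache; most page cache plugins expose their own purge commands too, though the dashboard routes above are the safe common denominator:
wp cache flush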
Redis Object Cache
Redis stores database query results, which are separate from page content. If your static file changes affect database-stored information (like theme options), tell Redis to fetch fresh data.
From the WordPress dashboard, the Redis Object Cache plugin provides Settings > Redis > Flush Cache. Alternatively, the same can be done from the command line:
redis-cli FLUSHALL
Note this clears everything in Redis. If you are sharing a Redis instance with other applications, use the WordPress plugin interface instead.
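If the WordPress keys live in a dedicated Redis database, flushing only that database is a narrower alternative; the database number here is an assumption to adjust for your setup:
redis-cli -n 1 FLUSHDB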
Step 2: Refresh the Web Server Cache
Once your application is reading the new static files, the next system in the pipeline is your web server. Because it has stored copies of files it previously delivered, it has to be told to fetch fresh copies from your application.
Nginx
The first step is to find where Nginx stores files:
grep -r "cache_path" /etc/nginx/
This shows you lines like fastcgi_cache_path /var/cache/nginx/fastcgi which tell you the location. Using this information, you can remove the stored copies:
# Clear FastCGI cache
sudo rm -rf /var/cache/nginx/fastcgi/*
# Reload Nginx
sudo nginx -s reload
When Nginx receives a request, it fetches the current version from your application instead of serving its stored copy. If you are using a reverse proxy setup, you might also have a proxy cache at /var/cache/nginx/proxy/* which you can clear the same way.
Apache
Apache uses mod_cache for storing files, and the location depends on your configuration. Even so, common locations are /var/cache/apache2/ or /var/cache/httpd/. Using the following commands, you can find your cache directory:
# Ubuntu/Debian
grep -r "CacheRoot" /etc/apache2/
# CentOS/RHEL
grep -r "CacheRoot" /etc/httpd/
Armed with the paths that you have found, you can remove the stored copies:
# Ubuntu/Debian
sudo rm -rf /var/cache/apache2/mod_cache_disk/*
sudo systemctl reload apache2
# CentOS/RHEL
sudo rm -rf /var/cache/httpd/mod_cache_disk/*
sudo systemctl reload httpd
Alternatively, if you have mod_cache_disk configured with htcacheclean, you can use:
sudo htcacheclean -t -p /var/cache/apache2/mod_cache_disk/
When Apache receives a request, it fetches the current version from your application.
Step 3: Update the CDN Layer Cache Contents
After your web server is now delivering the new static files, the next system in the pipeline is your CDN. This has stored copies at edge servers worldwide, which need to be told to fetch fresh copies from your web server. Here, Cloudflare is given as an example.
Cloudflare
First, log into the Cloudflare dashboard and navigate to Caching. Then, click "Purge Everything" and wait a few seconds. Now, when someone requests your files, Cloudflare fetches the current version from your web server instead of serving its stored copy.
If you are actively working on a file and deploying repeatedly, enable Development Mode in the Cloudflare dashboard. This tells Cloudflare to always fetch from your server rather than serving stored copies. Helpfully, it automatically turns itself off after three hours.
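For scripted deployments, the same purge can be performed through Cloudflare's API; the zone identifier and token below are placeholders for your own credentials:
curl -X POST "https://api.cloudflare.com/client/v4/zones/ZONE_ID/purge_cache" \
  -H "Authorization: Bearer API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"purge_everything":true}'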
Step 4: Refresh What Is Loaded in Your Browser
Having worked through everything along the pipeline so far, we finally come to the browser. This is where we perform a hard refresh of the loaded content, using the appropriate keyboard shortcut for whatever system you are using. The shortcuts vary, though holding down the Shift key and clicking the Refresh button works a lot of the time. Naturally, there are other options, and here are some suggestions:
| Operating System | Keyboard Shortcut |
|---|---|
| macOS | Command + Shift + R |
| Linux | Control + Shift + R |
| Windows | Control + F5 |
Making use of these operations ensures that your static files come through the whole way so you can see them along with anyone else who visits the website.
Recap
Because a lot of detail has been covered on this journey, let us remind ourselves where we have been with this final run-through. Everything follows sequentially:
- Upload your changed files to the server
- Verify the files uploaded correctly, checking that the modification time matches when you uploaded:

  ls -l /path/to/your/css/file.css

- Refresh your application layer:

  # Grav
  bin/grav clear-cache
  # WordPress - via WP-CLI
  wp cache flush
  # Or use your caching plugin's interface

- Refresh Redis (if you use it for object caching):

  redis-cli FLUSHALL
  # Or via WordPress plugin interface

- Refresh your web server layer (if using Nginx or Apache):

  # Nginx
  sudo rm -rf /var/cache/nginx/fastcgi/*
  sudo nginx -s reload
  # Apache (Ubuntu/Debian)
  sudo rm -rf /var/cache/apache2/mod_cache_disk/*
  sudo systemctl reload apache2

- Refresh your CDN layer: Cloudflare dashboard > Purge Everything
- Perform a forced refresh in your browser using the appropriate keyboard shortcut for your system
- Test the page to confirm the update has come through fully
Any changes become visible because the new files have travelled from your application through each system in the pipeline. Sometimes, this may happen seamlessly without intervention, but it is best to know what to do when that is not how things proceed.
Configuring RSS Feeds in Grav CMS using Collections
While the Feed plugin for Grav CMS provides RSS, Atom and JSON feeds for your site content, setting up a feed in Grav differs from other content management systems because it requires explicit configuration rather than automatic discovery. The key to making feeds work lies in understanding how Grav uses collections to organise and present content.
Understanding Collections
In Grav, content lives in a file tree structure. The feed system works through collections, which tell Grav where to find pages and how to organise them. Collections are essential for feeds because they define exactly which content gets syndicated. Without a properly configured collection, your feed will be empty.
Collections use a specific syntax to target pages in different ways:
| Collection Type | Syntax | What It Does |
|---|---|---|
| All descendants | '@page.descendants': '/blog' | Gathers a folder and all nested subdirectories recursively |
| Direct children only | '@page.children': '/blog' | Limits collection to immediate children, excluding deeper nesting |
| By tag | '@taxonomy.tag': 'featured' | Pulls all pages with a specific tag, regardless of location |
| By category | '@taxonomy.category': 'news' | Pulls all pages in a specific category, regardless of location |
| Multiple sources | Array of any above | Merges content from multiple folders or taxonomies |
The @page.descendants syntax works well for blog structures where you might have posts organised by year, month or category within subdirectories. The @page.children syntax suits flatter structures where all posts sit directly under a single parent folder.
Taxonomy-based collections using @taxonomy.tag or @taxonomy.category pull pages based on their metadata rather than folder location. This allows you to create feeds for specific topics regardless of where those pages live in your file structure.
The real power emerges when combining multiple collection sources. You can specify an array of different folders or taxonomies, and Grav merges them into a single unified collection. This allows you to aggregate content across your entire site structure, pulling from blogs, articles, news sections or any combination that suits your needs.
Understanding Routes
Before configuring your feed, you need to understand how Grav translates folder paths into routes. This matters because you will reference routes in your collection configuration, not physical folder paths. Getting this wrong means your feed will be empty even when your folders contain content.
Grav stores pages in folders under /user/pages/, but these physical paths differ from the routes you use in configuration. Routes are based on folder names without numeric prefixes:
| Physical Path | Route in Configuration |
|---|---|
| /user/pages/03.deliberations/ | /deliberations |
| /user/pages/writings/ | /writings |
The numeric prefix (such as 03.) determines ordering in navigation but does not appear in the route. This means when you create a folder named 03.deliberations, Grav automatically makes it accessible at the /deliberations route. Similarly, a folder without a numeric prefix like writings becomes the /writings route.
When you specify '@page.descendants': '/deliberations' in your collection configuration, Grav knows to look in the 03.deliberations folder and all its subdirectories. The route system abstracts away the physical folder structure, allowing you to reorganise content by changing numeric prefixes without breaking your feed configuration.
Creating Your Feed Configuration
Creating a feed requires a page that defines which content to include. The simplest approach is to create a file at feed/default.md with a structure like this:
---
title: Site Feed
visible: false
content:
  items:
    '@page.descendants': '/blog'
  order:
    by: date
    dir: desc
  limit: 20
feed:
  title: "My Site Feed"
  description: "Latest posts and updates"
---
Configuration Settings Explained
The visible: false setting ensures the feed page does not appear in navigation.
The content section defines your collection. Here, @page.descendants: '/blog' recursively includes all pages under the /blog folder, including any nested within subdirectories. If you only want immediate children, use @page.children: '/blog' instead.
The order section determines how posts appear in your feed. The example uses by: date with dir: desc to show the newest posts first. Ensure all pages in your collection have a date: field in their front matter:
---
title: My Post
date: 16-02-2026 14:30
---
Alternatively, you can order by folder name using by: folder. This sorts pages alphabetically and proves reliable when pages lack dates or when folder structure determines chronology:
order:
  by: folder
  dir: asc
The limit: 20 setting controls how many items appear in the feed.
The feed section provides metadata for the RSS feed itself, including the title and description that feed readers will display.
Combining Multiple Folders
You can pull content from multiple locations into a single feed:
content:
  items:
    - '@page.descendants': '/blog'
    - '@page.descendants': '/articles'
    - '@page.descendants': '/news'
This merges posts from all three sections into one unified feed. Each section is gathered separately and then combined.
Putting It All Together
Here is a complete working configuration that demonstrates these principles in practice. This example pulls all content from a single folder using folder-based ordering rather than dates:
---
title: Surroundings Feed
visible: false
content:
  items:
    '@page.descendants': '/deliberations'
  order:
    by: folder
    dir: asc
  limit: 20
feed:
  title: "Varied Surroundings, Deliberations"
  description: "Essays and reflections from Deliberations on Assorted Explorations"
  limit: 20
---
This configuration, saved as feed/default.md, pulls all content from the /deliberations folder and its subdirectories. The folder-based ordering uses Grav's numeric prefix system for sorting. Pages in folders named 01.first-post, 02.second-post, 03.third-post will appear in that sequence. The ascending direction means lower numbers appear first in the feed. The feed metadata provides a descriptive title and description that will appear in feed readers.
What the Output Looks Like
Once Grav processes your configuration, it generates an RSS XML file. Here is what the header section looks like using the real-world example above:
<?xml version="1.0" encoding="utf-8"?>
<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
  <channel>
    <title>Varied Surroundings, Deliberations</title>
    <link>https://www.assortedexplorations.com/surroundings/feed</link>
    <description>Essays and reflections from Deliberations on Assorted Explorations</description>
    <language>en</language>
    <lastBuildDate>Sun, 25 Jan 2026 18:16:13 +0000</lastBuildDate>
Each post appears as an item with its title, link, publication date and content excerpt:
<item>
  <title>Nation or Nations?</title>
  <link>https://www.assortedexplorations.com/surroundings/deliberations/nation-or-nations</link>
  <guid>https://www.assortedexplorations.com/surroundings/deliberations/nation-or-nations</guid>
  <pubDate>Sun, 25 Jan 2026 17:59:23 +0000</pubDate>
  <description>
    <![CDATA[
      <h2 class="centre mb-5">Nation or Nations?</h2>
      <p style="text-align:center; margin-bottom: 2.5rem;"><img src="../images/51.jpg"></p>
      <p>The nature of the United Kingdom seems to be a source not only of some confusion...</p>
    ]]>
  </description>
</item>
The Feed plugin handles all XML generation automatically. You simply configure which content to include and how to order it, and Grav produces properly formatted RSS, Atom or JSON feeds.
Accessing Your Feeds
Once configured, your feeds become accessible at /feed.rss, /feed.atom or /feed.json. While this guide focuses on RSS feeds, the collection principles apply much more broadly. Collections power archive pages, tag pages, search results, related posts and sitemaps. Understanding how they work is essential for any feature that lists content on your site. The official documentation provides additional detail on these concepts and advanced configuration options.
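One finishing touch worth mentioning: advertising the feed in your theme's head section lets browsers and feed readers discover it automatically. If the Feed plugin or your theme does not already add such a link, a minimal sketch matching the example feed above would be:
<link rel="alternate" type="application/rss+xml" title="Varied Surroundings, Deliberations" href="/surroundings/feed.rss">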
Related Reading
Creating modular pages in Grav CMS
Here is a walkthrough that demonstrates how to create modular pages in Grav CMS. Modular pages allow you to build complex, single-page layouts by stacking multiple content sections together. This approach works particularly well for modern home pages where different content types need to be presented in a specific sequence. The example here stems from building a theme from scratch to ensure sitewide consistency across multiple subsites.
What Are Modular Pages
A modular page is a collection of content sections (called modules) that render together as a unified page. Unlike regular pages, which have child pages accessible via separate URLs, modular pages display all their modules on a single URL. Each module is a self-contained content block with its own folder, markdown file and template. The parent page assembles these modules in a specified order to create the final page.
Understanding the Folder Structure
Modular pages use a specific folder structure. The parent folder contains a modular.md file that defines which modules to include and in what order. Module folders sit beneath the parent folder and are identified by an underscore at the start of their names. Numeric prefixes before the underscore control the display order.
Here is an actual folder structure from a travel subsite home page:
/user/pages/01.home/
    01._title/
    02._intro/
    03._call-to-action/
    04._ireland/
    05._england/
    06._scotland/
    07._scandinavia/
    08._wales-isle-of-man/
    09._alps-pyrenees/
    10._american-possibilities/
    11._canada/
    12._dreams-experiences/
    13._practicalities-inspiration/
    14._feature_1/
    15._feature_2/
    16._search/
    modular.md
The numeric prefixes (01, 02, 03) ensure modules appear in the intended sequence. The descriptive names after the prefix (_title, _ireland, _search) make the page structure immediately clear. Each module folder contains its own markdown file and any associated media. The underscore prefix tells Grav these folders contain modules rather than regular pages. Modules are non-routable, meaning visitors cannot access them directly via URLs. They exist only to provide content sections for their parent page.
The Workflow
Step One: Create the Parent Page Folder
Start by creating a folder for your modular page in /user/pages/. For a home page, you might create 01.home. The numeric prefix 01 ensures this page appears first in navigation and sets the display order. If you are using the Grav Admin interface, navigate to Pages and click the Add button, then select "Modular" as the page type.
Create a file named modular.md inside this folder. The filename is important because it tells Grav to use the modular.html.twig template from your active theme. This template handles the assembly and rendering of your modules.
Step Two: Configure the Parent Page
Open modular.md and add YAML front matter to define your modular page settings. Here is an example configuration:
---
title: Home
menu: Home
body_classes: "modular"
content:
    items: '@self.modular'
    order:
        by: default
        dir: asc
        custom:
            - _hero
            - _features
            - _callout
---
The content section is crucial. The items: '@self.modular' instruction tells Grav to collect all modules from the current page. The custom list under order specifies exactly which modules to include and their sequence. This gives you precise control over how sections appear on your page.
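Before any custom logic is added, the assembly itself can stay very simple. A minimal modular.html.twig, along the lines of Grav's stock themes, just renders that collection in order; this sketch assumes your theme provides a partials/base.html.twig to extend:
{% extends 'partials/base.html.twig' %}
{% block content %}
    {# page.collection() uses the content definition from the front matter above #}
    {% for module in page.collection() %}
        {{ module.content|raw }}
    {% endfor %}
{% endblock %}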
Step Three: Create Module Folders
Create folders for each content section you need. Each folder name must begin with an underscore. Add numeric prefixes before the underscore for ordering, such as 01._title, 02._intro, 03._call-to-action. The numeric prefixes control the display sequence. Module names are entirely up to you based on what makes sense for your content.
For multi-word module names, use hyphens to separate words. Examples include 08._wales-isle-of-man, 10._american-possibilities or 13._practicalities-inspiration. This creates readable folder names that clearly indicate each module's purpose. The official documentation shows examples using names like _features and _showcase. It does not matter which names you choose, as long as you create matching templates. Descriptive names help you understand the page structure when viewing the file system.
Step Four: Add Content to Modules
Inside each module folder, create a markdown file. The filename determines which template Grav uses to render that module. For example, a module using text.md will use the text.html.twig template from your theme's /templates/modular/ folder. You decide on these names when planning your page structure.
Here is an actual example from the 04._ireland module:
---
title: 'Irish Encounters'
content_source: ireland
grid: true
sitemap:
    lastmod: '30-01-2026 00:26'
---
### Irish Encounters {.mb-3}
<p style="text-align:center" class="mt-3"><img class="w-100 rounded" src="https://www.assortedexplorations.com/photo_gallery_images/eire_kerry/small_size/eireCiarraigh_tomiesMountain.jpg" /></p>
From your first arrival to hidden ferry crossings and legendary names that shaped the world, these articles unlock Ireland's layers beyond the obvious tourist trail. Practical wisdom combines with unexpected perspectives to transform a visit into an immersive journey through landscapes, culture and connections you will not find in standard guidebooks.
The front matter contains settings specific to this module. The title appears in the rendered output. The content_source: ireland tells the template which page to link to (as shown in the text.html.twig template example). The grid: true flag signals the modular template to render this module within the grid layout. The sitemap settings control how search engines index the content.
The content below the front matter appears when the module renders. You can use standard Markdown syntax for formatting. You can also include HTML directly in markdown files for precise control over layout and styling. The example above uses HTML for image positioning and Bootstrap classes for responsive sizing.
Module templates can access front matter variables through page.header and use them to generate dynamic content. This provides flexibility in how modules appear and behave without requiring separate module types for every variation.
Step Five: Create Module Templates
Your theme needs templates in /templates/modular/ that correspond to your module markdown files. When building a theme from scratch, you create these templates yourself. A module using hero.md requires a hero.html.twig template. A module using features.md requires a features.html.twig template. The template name must match the markdown filename.
Each template defines how that module's content renders. Templates use standard Twig syntax to structure the HTML output. Here is an example text.html.twig template that demonstrates accessing page context and generating dynamic links:
{{ content|raw }}
{% set raw = page.header.content_source ?? '' %}
{% set route = raw ? '/' ~ raw|trim('/') : page.parent.route %}
{% set context = grav.pages.find(route) %}
{% if context %}
    <p>
        <a href="{{ context.url }}" class="btn btn-secondary mt-3 mb-3 shadow-none stretch">
            {% set count = context.children.visible|length + 1 %}
            Go and Have a Look: {{ count }} {{ count == 1 ? 'Article' : 'Articles' }} to Savour
        </a>
    </p>
{% endif %}
First, this template outputs the module's content. It then checks for a content_source variable in the module's front matter, which specifies a route to another page. If no source is specified, it defaults to the parent page's route. The template finds that page using grav.pages.find() and generates a button linking to it. The button text includes a count of visible child pages, with proper singular/plural handling.
The Grav themes documentation provides guidance on template creation. You have complete control over the design and functionality of each module through its template. Module templates can access the page object, front matter variables, site configuration and any other Grav functionality.
Step Six: Clear Cache and View Your Page
After creating your modular page and modules, clear the Grav cache. Use the command bin/grav clear-cache from your Grav installation directory, or click the Clear Cache button in the Admin interface. The CLI documentation details other cache management options.
Navigate to your modular page in a browser. You should see all modules rendered in sequence as a single page. If modules do not appear, verify the underscore prefix on module folders, check that template files exist for your markdown filenames and confirm your parent modular.md lists the correct module names.
Working with the Admin Interface
The Grav Admin plugin streamlines modular page creation. Navigate to the Pages section in the admin interface and click Add. Select "Add Modular" from the options. Fill in the title and select your parent page. Choose the module template from the dropdown.
The admin interface automatically creates the module folder with the correct underscore prefix. You can then add content, configure settings and upload media through the visual editor. This approach reduces the chance of naming errors and provides a more intuitive workflow for content editors.
Practical Tips for Modular Pages
Use descriptive folder names that indicate the module's purpose. Names like 01._title, 02._intro, 03._call-to-action make the page structure immediately clear. Hyphens work well for multi-word module names such as 08._wales-isle-of-man or 13._practicalities-inspiration. This naming clarity helps when you return to edit the page later or when collaborating with others. The specific names you choose are entirely up to you as the theme developer.
Keep module-specific settings in each module's markdown front matter. Place page-wide settings like taxonomy and routing in the parent modular.md file. This separation maintains clean organisation and prevents configuration conflicts.
Use front matter variables to control module rendering behaviour. For example, adding grid: true to a module's front matter can signal your modular template to render that module within a grid layout rather than full-width. Similarly, flags like search: true or fullwidth: true allow your template to apply different rendering logic to different modules. This keeps layout control flexible without requiring separate module types for every layout variation.
Test your modular page after adding each new module. This incremental approach helps identify issues quickly. If a module fails to appear, check the folder name starts with an underscore, verify the template exists and confirm the parent configuration includes the module in its custom order list.
For development work, make cache clearing a regular habit. Grav caches page collections and compiled templates, which can mask recent changes. Running bin/grav clear-cache after modifications ensures you see current content rather than cached versions.
Understanding Module Rendering
The parent page's modular.html.twig template controls how modules assemble. Whilst many themes use a simple loop structure, you can implement more sophisticated rendering logic when building a theme from scratch. Here is an example that demonstrates conditional rendering based on module front matter:
{% extends 'partials/home.html.twig' %}
{% block content %}
{% set modules = page.collection({'items':'@self.modular'}) %}
{# Full-width modules first (search, main/outro), then grid #}
{% for m in modules %}
{% if not (m.header.grid ?? false) %}
{% if not (m.header.search ?? false) %}
{{ m.content|raw }}
{% endif %}
{% endif %}
{% endfor %}
<div class="row g-4 mt-3">
{% for m in modules %}
{% if (m.header.grid ?? false) %}
<div class="col-12 col-md-6 col-xl-4">
<div class="h-100">
{{ m.content|raw }}
</div>
</div>
{% endif %}
{% endfor %}
</div>
{# Move search box to end of page #}
{% for m in modules %}
{% if (m.header.search ?? false) %}
{{ m.content|raw }}
{% endif %}
{% endfor %}
{% endblock %}
This template demonstrates several useful techniques. First, it retrieves the module collection using page.collection({'items':'@self.modular'}). The template then processes modules in three separate passes. The first pass renders full-width modules that are neither grid items nor search boxes. The second pass renders grid modules within a Bootstrap grid layout, wrapping each in responsive column classes. The third pass renders the search module at the end of the page.
Modules specify their rendering behaviour through front matter variables. A module with grid: true in its front matter renders within the grid layout. A module with search: true renders at the page bottom. Modules without these flags render full-width at the top. This approach provides precise control over the layout whilst keeping content organised in separate module files.
Each module still uses its own template (like text.html.twig or feature.html.twig) to define its specific HTML structure. The modular template simply determines where on the page that content appears and whether it gets wrapped in grid columns. This separation between positioning logic and content structure keeps the system flexible and maintainable.
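To illustrate that separation, a module template only needs to define its own HTML. A hypothetical feature.html.twig might look like this, with the class names being placeholders rather than anything prescribed:
{# Hypothetical feature.html.twig: front matter supplies the heading, the module body supplies the rest #}
<section class="feature">
    {% if page.header.title %}
        <h3 class="mb-3">{{ page.header.title }}</h3>
    {% endif %}
    {{ content|raw }}
</section>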
When to Use Modular Pages
Modular pages work well for landing pages, home pages and single-page sites where content sections need to appear in a specific order. They excel at creating modern, scrollable pages with distinct content blocks like hero sections, feature grids, testimonials and call-to-action areas.
For traditional multipage sites with hierarchical navigation, regular pages prove more suitable because modular pages do not support child pages in the conventional sense. Instead, choose modular pages when you need a unified, single-URL presentation of multiple content sections.
Related Reading
For further information, consult the Grav Modular Pages Documentation, Grav Themes and Templates guide and the Grav Admin Plugin documentation.
Grouping directories first in output from ls commands executed in terminal sessions on macOS and Linux
This enquiry began with my seeing directories and files sorted in alphabetical order, without regard for type, in macOS Finder. In Windows and Linux file managers, I am accustomed to directories and files being listed in distinct blocks within the same listing, with the former preceding the latter. What I had missed was that the ls command and its aliases behave the same way as macOS Finder, which perhaps explains why the operating system and its default apps work like that.
Over to Linux
With the BSD ls implementation that macOS ships, there is no way to order the output so that directories are listed before files. However, the situation is different on Linux because of the use of GNU tooling. Here, the --group-directories-first switch is available, and I have started to use this on my own Linux systems, web servers as well as workstations. This can be set up in .bashrc or .bash_aliases like the following:
alias ls='ls --color=auto --group-directories-first'
alias ll='ls -lh --color=auto --group-directories-first'
Above, the --color=auto switch adds colour to the output too. Issuing the following command makes the updates available in a terminal session (~ is the shorthand for the home directory below):
source ~/.bashrc
Back to macOS
While that works well on Linux, additional tweaks are needed to implement the same on macOS. Firstly, you have to install Homebrew using this command (you may be asked for your system password to let the process proceed):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
To ensure the GNU tools are found ahead of the built-in ones, this should be added to the .zshrc file in your home folder:
export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"
Then, you need to install coreutils for GNU commands like gls (a different name is used to distinguish them from what comes with macOS). The same package supplies dircolors, which gives you coloured text output as well:
brew install coreutils
Once those were in place, I found that adding these lines to the .zshrc file was all that was needed (note those extra g's):
alias ls='gls --color=auto --group-directories-first'
alias ll='gls -lh --color=auto --group-directories-first'
If your experience differs, they may need to be preceded by this line in the same configuration file:
eval "$(dircolors -b)"
The final additions then look like this:
export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"
eval "$(dircolors -b)"
alias ls='gls --color=auto --group-directories-first'
alias ll='gls -lh --color=auto --group-directories-first'
Following those, issuing this command will make the settings available in your terminal session:
source ~/.zshrc
Closing Remarks
In summary, you have learned how to list directories before files rather than intermingled with them, as is the default. For me, this discovery was educational and adds some extra user-friendliness that was not there before the tweaks. While we may be dealing with two operating systems and two different shells (bash and zsh), there is enough crossover to make terminal directory and file listing operations behave consistently regardless of where you are working.
Safe file copying methods for cross platform SAS programming
Not so long ago, I needed to copy a file within a SAS program, only to find that an X command would not accomplish the operation. That cast my mind back to performing file operations within SAS itself in order to remain platform-independent. Thus, I fell upon using functions within a data step.
Before going that far, you need to define filename statements for the source and target locations like this:
filename src "<location of source file or files>" lrecl=32767 encoding="utf-8";
filename dst "<target location for copies>" lrecl=32767 encoding="utf-8";
Above, I also specified logical record length (LRECL) and UTF-8 encoding. The latter is safer in these globalised times, when the ASCII character set does not suffice for everyday needs.
Once you have defined filename statements, you can move to the data step itself, which does not output any data set because of the data _null_ statement:
data _null_;
    length msg $ 300;
    rc = fcopy('src','dst');
    if rc ne 0 then do;
        msg = sysmsg();
        putlog "ERROR: FCOPY failed rc=" rc " " msg;
    end;
    else putlog "NOTE: Copied OK.";
run;
The main engine here is the fcopy function, which returns a code (rc) indicating success or failure. Other statements like putlog are there to communicate outcomes and error messages when the file copying operation fails. The text of the error message (msg) comes from the sysmsg function. After file copying has completed, it is time to clear the filename assignments as follows:
filename src clear;
filename dst clear;
One thing that needs to be pointed out here is that this approach is best reserved for text files like the ones I was copying at the time. When I attempted the same kind of operation with an XLSX file, the copy would not open in Excel afterwards; along the way, it had got scrambled. Once you realise that an XLSX file is essentially a zip archive of XML files, you might see how that could go awry.
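If a binary copy is genuinely needed, the usual remedy, which I have not tested against every file type, is to declare both filerefs with recfm=n and omit the record length and encoding options, so that fcopy streams raw bytes instead of processing records. The paths below are placeholders:
filename src "/path/to/workbook.xlsx" recfm=n;
filename dst "/backup/workbook.xlsx" recfm=n;

data _null_;
    length msg $ 300;
    /* recfm=n makes this a byte-for-byte copy, keeping binary formats intact */
    rc = fcopy('src','dst');
    if rc ne 0 then do;
        msg = sysmsg();
        putlog "ERROR: FCOPY failed rc=" rc " " msg;
    end;
run;

filename src clear;
filename dst clear;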
Understanding Bootstrap's twelve column grid for clean layouts
When it comes to designing your page structure using Bootstrap, you need to work within a twelve column grid. For uneven column widths, you need to make everything add up, while it is perhaps simpler when every column width is the same.
For instance, I encountered a case of the latter when porting a website landing page as part of a migration from Textpattern to Grav: evenly sized columns, with one block for each section. To bring the block count up to twelve, two featured article blocks were added. What follows is a little about why that choice was made.
How the Twelve-Column System Works
Bootstrap's grid divides into twelve columns. The number twelve was chosen because it has more divisors than any smaller number, making it exceptionally flexible for layout work.
<!-- Two across -->
<div class="col-12 col-md-6">...</div>
<!-- Three across -->
<div class="col-12 col-md-6 col-xl-4">...</div>
<!-- Classic blog layout: sidebar and main content -->
<div class="col-md-3">Sidebar</div>
<div class="col-md-9">Main content</div>
The column classes work by specifying how many of the twelve columns each element should span. For example, a col-6 class spans six columns (half the width), whilst col-4 spans four columns (one third). In the classic blog layout, this means using col-3 for a sidebar (one quarter width) and col-9 for main content (three quarters width). Other common combinations include col-8 and col-4 (two thirds and one third), or col-2 and col-10 (one sixth and five sixths).
For consistent column widths, certain numbers divide cleanly: col-12 (full width, one across), col-6 (half width, two across), col-4 (one third width, three across), col-3 (one quarter width, four across), col-2 (one sixth width, six across) and col-1 (one twelfth width, twelve across). When using 2, 3, 4, 6 or 12 blocks with these classes, the grid divides evenly. However, with other numbers, challenges emerge. For instance, eleven blocks in three columns leaves two orphans, whilst seven blocks in two columns creates uneven rows.
Bootstrap Breakpoints Explained
Bootstrap 5 defines six responsive breakpoints for different device categories:
| Breakpoint | Screen Width | Class Modifier |
|---|---|---|
| Extra small | Below 576px | None (just col-*) |
| Small | 576px and above | col-sm-* |
| Medium | 768px and above | col-md-* |
| Large | 992px and above | col-lg-* |
| Extra large | 1200px and above | col-xl-* |
| Extra extra large | 1400px and above | col-xxl-* |
These breakpoints cascade upwards. A class like col-md-6 applies from 768 pixels and continues to apply at all larger breakpoints unless overridden by a more specific class like col-xl-4. This cascading behaviour allows responsive layouts to be built with minimal markup, where each breakpoint only needs to specify what changes, rather than repeating the entire layout definition.
Putting It Into Practice
When column widths are equal, the implementation uses Bootstrap grid classes with a three-tier responsive system so that each block receives consistent treatment, with padding, borders and hover effects. Here is some boilerplate code showing how this can be accomplished:
<div class="row g-4 mt-3">
<div class="col-12 col-md-6 col-xl-4">
<div class="h-100">
<h3 class="mb-3">Irish Encounters</h3>
<p style="text-align:center" class="mt-3">
<img class="w-100 rounded" src="...">
</p>
<p>From your first arrival to hidden ferry crossings...</p>
<p>
<a href="/travel/ireland" class="btn btn-secondary mt-3 mb-3 shadow-none stretch">
Go and Have a Look: 12 Articles to Savour
</a>
</p>
</div>
</div>
<!-- Repeat for 9 more destination blocks -->
<!-- Then 2 featured article blocks -->
</div>
Two Bootstrap utility classes proved particularly useful here. Firstly, the h-100 class sets the height to 100% of the parent container, ensuring all blocks in a row have equal height regardless of content length. Meanwhile, the w-100 class sets the width to 100% of the parent container, making images fill their containers whilst maintaining aspect ratio when combined with responsive image techniques. Together, these help create visual consistency across the grid.
The responsive behaviour works as follows for twelve blocks:
| Screen Width | Class Used | Number of Columns | Number of Rows |
|---|---|---|---|
| Below 768px | col-12 | 1 | 12 |
| 768px and above | col-md-6 | 2 | 6 |
| 1200px and above | col-xl-4 | 3 | 4 |
The g-4 class adds consistent guttering between blocks across all breakpoints and is part of Bootstrap's spacing utilities, where the number (4) corresponds to a spacing value from Bootstrap's spacer scale. To accomplish this, the class applies gap spacing both horizontally and vertically between grid items, creating visual separation without needing to add margins to individual elements. This ensures blocks do not sit flush against each other whilst maintaining consistent spacing throughout the layout.
Taking Stock
Bootstrap's twelve-column grid works cleanly for certain block counts (1, 2, 3, 4, 6 and 12). In contrast, other numbers create visual imbalance in multi-column layouts. For this reason, the grid system should inform content decisions early in the planning process. Ultimately, planning block counts around the grid creates more harmonious layouts than forcing arbitrary numbers into place.
In this case, twelve blocks divided cleanly into the three-column grid, where other numbers would have created orphans. Beyond solving the layout challenge, featured articles provided value by drawing attention to important content whilst resolving the constraints of the grid system. The key takeaway is that content planning and grid design work together rather than in opposition.
Related Reading
For further exploration of these concepts, the Bootstrap Grid System documentation provides comprehensive coverage of the twelve-column system and its responsive capabilities. The Flexbox utilities documentation covers alignment and spacing options that complement the grid system.
Running local Large Language Models on desktop computers and workstations with 8GB VRAM
Running large language models locally has shifted from being experimental to practical, but expectations need to match reality. A graphics card with 8 GB of VRAM can support local workflows for certain text tasks, though the results vary considerably depending on what you ask the models to do.
Understanding the Hardware Foundation
The Critical Role of VRAM
The central lesson is that VRAM is the engine of local performance on desktop systems. Whilst abundant system RAM helps avoid crashes and allows larger contexts, it cannot replace VRAM for throughput.
Models that fit in VRAM and keep most of their computation on the GPU respond promptly and maintain a steady pace. Those that overflow to system RAM or the CPU see noticeable slowdowns.
Hardware Limitations and Thresholds
A desktop GPU with 8 GB of VRAM sets a practical ceiling. Models in the 7 billion to 14 billion parameter range fit comfortably enough to exploit GPU acceleration for typical contexts.
Much larger models tend to offload a significant portion of the work to the CPU. This shows up as pauses, lower token rates and lag when prompts become longer.
Monitoring GPU Utilisation
GPU utilisation is a reliable way to gauge whether a setup is efficient. When the GPU is consistently busy, generation is snappy and interactive use feels smooth.
A model like llama3.1:8b can run almost entirely on the GPU at a context length of 4,096 tokens. This translates into sustained responsiveness even with multi-paragraph prompts.
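Checking this on a Linux system with an NVIDIA card takes two commands; ollama ps in particular reports how much of each loaded model is resident on the GPU:
# Watch overall GPU load and VRAM usage while a model generates
watch -n 1 nvidia-smi
# List loaded models and their GPU/CPU split (e.g. "100% GPU")
ollama ps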
Model Selection and Performance
Choosing the Right Model Size
A frequent instinct is to reach for the largest model available, but size does not equal usefulness when running locally on a desktop or workstation. In practice, models between 7B and 14B parameters are what you can run on this class of hardware, though what they do well is more limited than benchmark scores might suggest.
What These Models Actually Do Well
Models in this range handle certain tasks competently. They can compress and reorganise information, expand brief notes into fuller text, and maintain a reasonably consistent voice across a draft. For straightforward summarisation of documents, reformatting content or generating variations on existing text, they perform adequately.
Where things become less reliable is with tasks that demand precision or structured output. Coding tasks illustrate this gap between benchmarks and practical use. Whilst llama3.1:8b scores 72.6% on the HumanEval benchmark (which tests basic algorithm problems), real-world coding tasks can expose deeper limitations.
Commit message generation, code documentation and anything requiring consistent formatting produce variable results. One attempt might give you exactly what you need, whilst the next produces verbose or poorly structured output.
The gap between solving algorithmic problems and producing well-formatted, professional code output is significant. This inconsistency is why larger local models like gpt-oss-20b (which requires around 16GB of memory) remain worth the wait, even though the 8GB-friendly models respond more quickly.
Recommended Models for Different Tasks
Llama3.1:8b handles general drafting reasonably well and produces flowing output, though it can be verbose. Benchmark scores place it above average for its size, but real-world use reveals it is better suited to free-form writing than structured tasks.
Phi3:medium is positioned as stronger on reasoning and structured output. In practice, it can maintain logical order better than some alternatives, though the official documentation acknowledges quality of service limitations, particularly for anything beyond standard American English. User reports also indicate significant knowledge gaps and over-censorship that can affect practical use.
Gemma3 at 12B parameters produces polished prose and smooths rough drafts effectively when properly quantised. The Gemma 3 family offers models from 1B to 27B parameters with 128K context windows and multimodal capabilities in the larger sizes, though for 8GB VRAM systems you are limited to the 12B variant with quantisation. Google also offers Gemma 3n, which uses an efficient MatFormer architecture and Per-Layer Embedding to run on even more constrained hardware. These are primarily optimised for mobile and edge devices rather than desktop use.
Very large models remain less efficient on desktop hardware with 8 GB VRAM. Attempting to run them results in heavy CPU offloading, and the performance penalty can outweigh any quality improvement.
Memory Management and Configuration
Managing Context Length
Context length sits alongside model size as a decisive lever. Every extra token of context demands memory, so doubling the window is not a neutral choice.
At around 4,096 tokens, most of the well-matched models stay predominantly on the GPU and hold their speed. Push to 8,192 or beyond, and the memory footprint swells to the point where more of the computation ends up taking place on the CPU and in system RAM.
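With Ollama, the window is set through the num_ctx parameter. One way to pin it is to derive a model variant from a short Modelfile; a minimal sketch, where the variant is built with ollama create llama31-4k -f Modelfile and the name is only an example:
# Modelfile: a 4K-context variant of the base model
FROM llama3.1:8b
PARAMETER num_ctx 4096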
Ollama's Keep-Alive Feature
Ollama keeps already loaded models resident in VRAM for a short period after use so that the next call does not pay the penalty of a full reload. This is expected behaviour and is governed by a keep_alive parameter that can be adjusted to hold the model for longer if a burst of work is coming, or to release it sooner when conserving VRAM matters.
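The parameter can also be supplied per request through the REST API; the duration below is only an example:
# Hold the model in VRAM for 30 minutes after this call;
# "keep_alive": 0 would release it immediately instead
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "Summarise the notes that follow...",
  "keep_alive": "30m"
}'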
Practical Memory Strategies
Breaking long jobs into a series of smaller, well-scoped steps helps both speed and stability without constraining the quality of the end result. When writing an article, for instance, it can be more effective to work section by section rather than asking for the entire piece in one pass.
Optimising the Workflow
The Benefits of Streaming
Streaming changes the way output is experienced, rather than the content that is ultimately produced. Instead of waiting for a block of text to arrive all at once, words appear progressively in the terminal or application. This makes longer pieces easier to manage and revise on the fly.
Task-Specific Model Selection
Because each model has distinct strengths and weaknesses, matching the tool to the task matters. A fast, GPU-friendly model like llama3.1:8b works for general writing and quick drafting where perfect accuracy is not critical. Phi3:medium may handle structured content better, though it is worth testing against your specific use case rather than assuming it will deliver.
Understanding Limitations
It is important to be clear about what local models in this size range struggle with. They are weak at verifying facts, maintaining strict factual accuracy over extended passages, and providing up-to-date knowledge from external sources.
They also perform inconsistently on tasks requiring precise structure. Whilst they may pass coding benchmarks that test algorithmic problem-solving, practical coding tasks such as writing commit messages, generating consistent documentation or maintaining formatting standards can reveal deeper limitations. For these tasks, you may find yourself returning to larger local models despite preferring the speed of smaller ones.
Integration and Automation
Using Ollama's Python Interface
Ollama integrates into automated workflows on desktop systems. Its Python package allows calls from scripts to automate summarisation, article generation and polishing runs, with streaming enabled so that logs or interfaces can display progress as it happens. Parameters can be set to control context size, temperature and other behavioural settings, which helps maintain consistency across batches.
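As a minimal sketch of that pattern, with illustrative prompt and option values (the package installs with pip install ollama):
import ollama

# Request a streamed reply with an explicit context size and temperature
stream = ollama.chat(
    model='llama3.1:8b',
    messages=[{'role': 'user', 'content': 'Summarise the following notes: ...'}],
    options={'num_ctx': 4096, 'temperature': 0.7},
    stream=True,
)

# Print tokens as they arrive rather than waiting for the full reply
for chunk in stream:
    print(chunk['message']['content'], end='', flush=True)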
Building Production Pipelines
The same interface can be linked into website pipelines or content management tooling, making it straightforward to build a system that takes notes or outlines, expands them, revises the results and hands them off for publication, all locally on your workstation. The same keep_alive behaviour that aids interactive use also benefits automation, since frequently used models can remain in VRAM between steps to reduce start-up delays.
Recommended Configuration
Optimal Settings for 8 GB VRAM
For a desktop GPU with 8 GB of VRAM, an optimal configuration builds around models that remain GPU-efficient whilst delivering acceptable results for your specific tasks. Llama3.1:8b, phi3:medium and gemma3:12b are the models that fit this constraint when properly quantised, though you should test them against your actual workflows rather than relying on general recommendations.
Performance Monitoring
Keeping context windows around 4,096 tokens helps sustain GPU-heavy operation and consistent speeds, whilst streaming smooths the experience during longer outputs. Monitoring GPU utilisation provides an early warning if a job is drifting into a configuration that will trigger CPU fallbacks.
Planning for Resource Constraints
If a task does require more memory, it is worth planning for the associated slowdown rather than assuming that increasing system RAM or accepting a bigger model will compensate for the VRAM limit. Tuning keep_alive to the rhythm of work reduces the frequency of reloads during sessions and helps maintain responsiveness when running sequences of related prompts.
A Practical Content Creation Workflow
Multi-Stage Processing
This configuration supports a division of labour in content creation on desktop systems. You start with a compact model for rapid drafting, switch to a reasoning-focused option for structured expansions if needed, then finish with a model known for adding polish to refine tone and fluency. Insert verification steps between stages to confirm facts, dates and citations before moving on.
Because each stage is local, revisions maintain privacy, with minimal friction between idea and execution. When integrated with automation via Ollama's Python tools, the same pattern can run unattended for batches of articles or summaries, with human review focused on accuracy and editorial style.
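A compressed sketch of such a pipeline, with placeholder prompts and the models discussed above standing in for whatever suits your own tasks:
import ollama

# Stage one: rapid drafting with the compact, GPU-friendly model
draft = ollama.generate(model='llama3.1:8b',
                        prompt='Expand these notes into a draft: ...',
                        keep_alive='10m')['response']

# Stage two: a polishing pass with the larger prose-focused model
polished = ollama.generate(model='gemma3:12b',
                           prompt='Revise this draft for tone and fluency:\n\n' + draft)['response']

print(polished)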
In Summary
Desktop PCs and workstations with 8 GB of VRAM can support local LLM workflows for specific tasks, though you need realistic expectations about what these models can and cannot do reliably. They handle basic text generation and reformatting, though they remain prone to hallucinations and misunderstandings. They struggle with precision tasks, structured output and anything requiring consistent formatting. Whilst they may score well on coding benchmarks that test algorithmic problem-solving, practical coding tasks can reveal deeper limitations.
The key is to select models that fit the VRAM envelope, keep context lengths within GPU-friendly bounds, and test them against your actual use cases. For tasks where local models prove inconsistent, such as generating commit messages or producing reliably structured output, larger local models like gpt-oss-20b (which requires around 16GB of memory) may still be worth the wait despite being slower. Local LLMs work best when you understand their limitations and use them for what they genuinely do well, rather than expecting them to replace more capable models across all tasks.
Additional Resources
- Ollama Official Documentation
- Ollama GitHub Repository
- Hugging Face model hub
- LM Studio for local model management
- Jan AI for local AI deployment
- Meta Llama 3.1 Model Card
- Microsoft Phi-3 Documentation
- Google Gemma 3 Overview
- Google Gemma 3n Overview
- Artificial Analysis for model benchmarks