Getting to know Jira, its workflows, test management capabilities and the need for governance
Developed by Atlassian and first released in 2002 as a straightforward bug and issue tracker aimed at software developers, Jira has since grown into a platform used for project management across a wide range of industries and disciplines. The name itself is a truncation of Gojira, the Japanese word for Godzilla, originating as an internal nickname used by Atlassian developers for Bugzilla, the bug-tracking tool they had previously relied upon.
A Family of Products, Each With a Purpose
The Jira ecosystem has expanded well beyond its original single offering, and it is worth understanding what each product is designed to do. Jira (formerly marketed as Jira Software, now unified with Jira Work Management) remains the flagship, built around agile project management with Scrum and Kanban boards at its core. Jira Service Management serves IT operations and service desk teams, handling ticketing and customer support workflows; it originated as Jira Service Desk in 2013, following Atlassian's discovery that nearly 40 per cent of their customers had already adapted the base product for service requests, and it was rebranded in 2020. At the enterprise level, Jira Align connects team delivery to strategic business goals, while Jira Product Discovery helps product teams capture feedback, prioritise ideas and build roadmaps. Together, these products span the full organisational hierarchy, from individual contributors up to executive portfolio management.
Core Features
Agile Boards and Backlog Management
Jira supports a range of agile methodologies, with two primary project templates available to teams. The Scrum template is designed for teams that deliver work in time-boxed sprints, providing backlog management, sprint planning and capacity tracking in a single view. The Kanban template, by contrast, is built around a continuous flow of work, helping teams visualise tasks as they move through each stage of a process without the constraint of fixed iterations. Both templates support custom configurations for teams whose ways of working do not map neatly to either model.
Reporting and Analytics
Jira's reporting suite provides visibility into project progress through various charts and metrics. The Burndown chart tracks remaining story points against the time left in a sprint, offering an indication of whether the team is on course to complete its committed work. The Burnup chart takes a complementary view, tracking how much work has been completed over time and making it straightforward to compare planned scope against actual delivery. These tools are useful for identifying patterns in team performance, though they are most informative when used consistently over several sprints rather than in isolation.
Custom Workflows
Teams can design workflows that reflect their own processes, defining the states an issue passes through and the transitions between them. Automation rules can be applied to handle repetitive steps without manual intervention, reducing administrative overhead on routine tasks. This flexibility is one of the more frequently cited reasons for adopting Jira, though it does require ongoing governance to prevent workflows from becoming inconsistent or unwieldy as teams and processes evolve.
Jira Query Language
Jira Query Language (JQL) provides a structured way to search and filter issues across projects, enabling teams to construct precise queries based on any combination of fields, statuses, assignees, dates and custom attributes. For organisations that invest time in learning it, JQL is a practical tool for building custom reports and dashboards. It is also the underlying mechanism for many of Jira's more advanced filtering and automation features.
Integration Options
Jira connects with a range of tools both within and outside the Atlassian ecosystem. Confluence handles documentation, Bitbucket manages code repositories and links commits directly to Jira issues, and Loom, acquired by Atlassian in 2023, adds asynchronous video communication. Third-party integrations, including Zoom and a broad catalogue of tools available through the Atlassian Marketplace, extend this further for teams with specific requirements.
Test Management With Xray
Jira does not include dedicated test management functionality by default, and teams that need to manage structured test cases alongside their development work typically turn to the Xray plugin, one of the most widely used additions in the Atlassian Marketplace. Xray operates as a native Jira application, meaning it adds new issue types directly to the Jira instance rather than sitting as a separate external tool. The issue types it introduces include Test, Test Set, Test Plan and Test Execution, all of which behave like standard Jira issues and can be searched, filtered and reported on using JQL.
A key capability is requirements traceability: Xray links test cases directly to the user stories and requirements they cover, and connects those in turn to any defects raised during execution. This gives teams a clear picture of test coverage and release readiness without having to leave Jira or reconcile data from separate systems. Test executions can be manual or automated, and Xray integrates with CI/CD toolchains (including Jenkins and Robot Framework) via a REST API, allowing automated test results to be published back into Jira and associated with the relevant requirements.
Xray also supports Behaviour-Driven Development (BDD), enabling teams to write tests in Gherkin syntax and manage them alongside their other Jira work. For organisations already using Jira as their central project management tool, Xray offers a practical route to bringing QA activities into the same workflow rather than maintaining a separate test management system.
Who is Jira Best Suited For?
Jira is generally considered most suitable for larger teams that require detailed control over workflows, reporting and resource allocation, and that have the capacity to dedicate administrative effort to the platform. Smaller teams or those without a dedicated Jira administrator may find the learning curve significant, particularly when configuring custom workflows or working with more advanced reporting features. Pricing is subscription-based, with tiers determined by user count and deployment model (cloud-hosted or self-managed), which means costs can increase substantially as an organisation grows.
Project Types: Tailoring Access to Needs
Jira divides its project spaces into two categories that serve different audiences. Team-managed projects offer simplified configuration for smaller, autonomous teams that want to get started without involving a Jira administrator. Company-managed projects grant administrators full control over customisation, permissions and settings, making them more appropriate for enterprises with complex requirements and multiple teams operating within the same instance. The two types can coexist within the same deployment, giving organisations the option to apply different governance models to different teams as their needs dictate.
Strengths and Limitations
Jira's scalability is one of its more consistent strengths, in terms of both the size of the user base it can support and the complexity of workflows it can accommodate. Its query functions give teams a precise way to interrogate project data, and its breadth of integrations means it can be connected to most standard development and collaboration toolchains.
A significant consideration for any Jira deployment is the degree of upfront decision-making it requires. Because the platform places few constraints on how it is configured, teams must establish their own conventions around workflow design, issue hierarchy, naming and permissions before adoption begins in earnest. Without that groundwork, it is straightforward for individual teams to configure Jira in incompatible ways, making cross-team reporting difficult and creating inconsistencies that become harder to unpick over time. Organisations that treat Jira as something to be governed, rather than simply installed, tend to get considerably more out of it.
The principal technical limitation is its dependence on the wider Atlassian ecosystem. Advanced portfolio planning, capacity forecasting and cross-programme dependency management typically require either a higher-tier plan or additional tooling. Advanced Roadmaps (now called Plans) are available natively within Jira Premium and Enterprise, providing cross-team timeline planning and scenario modelling. For capacity planning, budget tracking and timesheet management, many organisations turn to third-party Marketplace tools such as Tempo. Teams evaluating Jira should factor in both the cost of the appropriate licence tier and any supplementary tooling they are likely to need.
Where to Go From Here
Jira has grown considerably from the issue tracker it was when first released in 2002, and is now used by over 300,000 organisations worldwide. Its capabilities are broad, and its configurability makes it adaptable to a wide range of team structures and workflows. That same configurability, however, means the platform rewards investment in setup and ongoing administration, and organisations should assess whether they have the resources to realise that potential before committing. For those looking to explore further, Atlassian's official guides, its wider documentation, the support portal, the Atlassian Community and the developer documentation are useful starting points, and there are courses from an independent provider too.
Technology retail in North America: Five retailers worth knowing
The technology retail landscape in North America is shaped by a tension between convenience, expertise and competitive pricing. From sprawling big-box chains to specialist online stores, the sector contains a varied mix of established names and niche operators, each competing for customers who expect rapid delivery, accurate product information and reliable after-sales support. Five retailers stand out for the distinctly different approaches they take to serving that audience: Tech-America, Best Buy, Newegg, PC-Canada and Micro Center.
Tech-America
Tech-America presents itself as a direct-to-consumer online retailer covering a broad range of electronics and computer components. Its selling points include a large inventory and an emphasis on prompt shipping, with the site targeting a mix of hobbyists and small businesses. Questions have been raised about the company's legitimacy, with multiple consumer forums and review aggregators reflecting mixed opinions on its reliability and operational structure. Prospective customers are advised to research the retailer carefully before committing to a purchase, as third-party assessments remain inconclusive.
Best Buy
Best Buy is one of the most recognisable names in North American consumer electronics retail, and its history stretches back further than many of its customers might expect. The company was founded by Richard M. Schulze and James Wheeler in 1966 as an audio speciality store called Sound of Music, operating its first location in St. Paul, Minnesota. It was rebranded as Best Buy in 1983, at which point it had seven stores and around $10 million in annual sales, and it subsequently expanded its product range well beyond audio equipment to become a broad-based electronics retailer.
Today, Best Buy operates over 1,000 stores across the United States and Canada, combining physical retail with online sales in a model that the company describes as omnichannel. A key differentiator is its Geek Squad service division, which provides technical support, repairs and installation services across all store locations, and which has become a recognisable brand in its own right since being acquired by Best Buy in 2002. That combination of an extensive retail footprint and in-house technical services has allowed the company to retain a large and varied customer base that includes households, businesses and educational institutions.
Newegg
Newegg occupies a distinct position as a specialist online retailer focused primarily on computer hardware and components. Founded in 2001 by Fred Chang, a Taiwanese-American entrepreneur who had previously run ABS Computer Technologies, the company was established in California and initially targeted PC builders and enthusiasts who wanted detailed product information and user reviews alongside their purchases. The name itself was chosen to suggest new hope for e-commerce at a time when many dot-com businesses were struggling to survive.
Newegg operates a hybrid model that combines first-party sales with a marketplace for third-party sellers, expanding available inventory without the company needing to hold all stock itself. This approach has attracted a loyal community of technically minded buyers who value the depth of product listings on the platform. However, the marketplace model also introduces variability in seller quality, and some customers have noted inconsistencies in their experiences depending on which seller fulfilled their order. Newegg has been publicly listed on the Nasdaq under the ticker NEGG since May 2021, following a reverse merger with a Chinese special-purpose acquisition company.
PC-Canada
PC-Canada is a Waterloo, Ontario-based retailer that has served both individual consumers and business customers since its founding in 2003, making it one of Canada's longer-standing e-commerce technology retailers. The company offers a broad catalogue of IT products and components, and it holds an A+ rating from the Better Business Bureau, having been accredited since December 2015. Customer reviews present a more mixed picture, with some praising competitive pricing and fast shipping, while others have reported issues around order fulfilment and pricing changes after purchase. That gap between institutional accreditation and individual customer experience is a useful reminder that smaller regional retailers can face difficulties scaling consistently as their customer base grows.
Micro Center
Micro Center has taken a path that runs counter to the broader shift towards online-only retail, continuing to invest in physical stores and in-person expertise. The company currently operates 30 locations across the United States, with recent openings in Charlotte, Miami and Santa Clara adding to its footprint. Each store carries over 25,000 products and is staffed by associates who are recruited specifically for their technical knowledge, rather than general retail experience.
A notable feature of every Micro Center location is the Knowledge Bar, a dedicated in-store support desk offering diagnostics, repairs, authorised servicing for brands including Apple and Dell, and consultations for customers building their own PCs. The concept was introduced in 2007 and has since become central to the company's identity. Micro Center was ranked the number one tech retailer in the United States by PC Magazine in 2024, a recognition that reflects the premium its customers place on accessible, knowledgeable in-store service.
Closing Remarks
Each of these five retailers demonstrates a different answer to the same underlying question: what do technology buyers actually value? Tech-America and Newegg lean on the convenience and inventory breadth that online retail makes possible, while Best Buy and Micro Center make the case that physical presence and expert service remain compelling. PC-Canada illustrates the particular pressures facing regional players operating in a market where large international competitors set the expectations for pricing and delivery speed. As consumer habits continue to evolve, the retailers that balance adaptability with a clearly defined offering are likely to be the ones that endure.
Generating commit messages and summarising text locally with Ollama running on Linux
For generating GitHub commit messages, I use aicommit, which I have installed using Homebrew on macOS and on Linux. By default, this needs access to the OpenAI API using a token for identification. However, I noticed that API usage is heavier than when I summarise articles using Python scripting. In the interest of cutting the load and the associated cost, I began to look at locally run LLM options. Here, I discuss things mainly from a Linux point of view, particularly since I use Linux Mint for daily work.
Hardware Considerations
That led me to Ollama, which also has a local API in the mould of what you get from OpenAI. It also offers a Python interface, which has plenty of uses. This experimentation began on an iMac, where macOS can access all the available memory, offering flexibility when it comes to model selection. On a desktop PC or workstation, the architecture is different, which means that you are dependent on GPU processing for added speed. Should the load fall on the CPU, the lag in performance cannot be missed. The situation can be seen from this command while an LLM is loaded:
ollama ps
That discovery was made at the end of 2024, prompting me to do a system upgrade that only partially addressed the need, even if a quieter cooler case was part of the new machine. Before that, I had tried a new Nvidia GeForce RTX 4060 graphics card with 8 GB of VRAM. That continued in use, though the amount of onboard memory meant that larger models overflowed into system memory, bringing the CPU in use, still substantially slowing processing. Though there are some reasonable models like llama3.1:8b that will fit within 8 GB of VRAM, that has limitations that became apparent with use. Hallucinations were among those, and that also afflicted alternative options.
That led me to upgrade to a GeForce RTX 5060 Ti with 16 GB of VRAM, which meant that larger models could be used. Two of these have become my choices for different tasks: gpt-oss for GitHub commit messages and qwen3:14b for summarising blocks of text (albeit with Anthropic's API for when the output is not to my expectations, not that it happens often). Both fit of these within the available memory, allowing for GPU processing without any CPU involvement.
Generating Commit Messages
To use aicommit with Ollama, the command needs to be changed to use the Ollama API, and it is better to define a function like this:
run_aicommit() { env OPENAI_BASE_URL="http://localhost:11434/v1" OPENAI_API_KEY="ollama" AICOMMIT_MODEL="gpt-oss" /home/linuxbrew/.linuxbrew/bin/aicommit "$@"; }
This avoids having to alter the values of any global variables, with the env command setting up an ephemeral environment within which these are available. Here, using env may not be essential, even if it makes things clearer. The shell variable names should be self-explanatory given the names, and this way of doing things does not clash with any global variables that are set. Since aicommit was added using Homebrew, the full path is defined to avoid any ambiguity for the shell. At the end, "$@" passes any parameters or modifiers like 2>/dev/null, which redirects stderr output so that it does not appear when the function is being called. While you need to watch the volume of what is being passed to it, this approach works well and mostly produces sensible commit messages.
Text Summarisation
For text generation with a Python script, using streaming helps to keep everything in hand. Here is the core code:
chunks = []
for part in ollama.chat(
model=model,
messages=[{'role': 'user', 'content': prompt}],
options={'num_ctx': context, 'temperature': 0.2, 'top_p': 0.9},
stream=True,
):
chunks.append(part['message']['content'])
summary = re.sub(r'\s+', ' ', ''.join(chunks)).strip()
Above, a for loop iterates over each streamed chunk as it arrives, extracting the text content from part['message']['content'] and appending it to the chunks list. Once streaming is finished, ''.join(chunks) reassembles all the pieces into a single string. The re.sub(r'\s+', ' ', ...) call then collapses any intermediate sequences of whitespace characters (newlines, tabs, multiple spaces) down to a single space, and .strip() removes any leading or trailing whitespace, storing the cleaned result in summary.
Within the loop itself, an ollama.chat() call initiates an interaction with the specified model (defined as qwen3:14b earlier in the code), passing the user's prompt as a message. This is controlled by a few parameters, with num_ctx controlling the context window size and 4096 as the recommended limit to ensure that everything remains on the GPU. Defining a model temperature of 0.2 grounds the model to keep the output focussed and deterministic, while a top_p value of 0.9 applies nucleus sampling to filter the token pool. Setting stream=True means the model returns its response incrementally as a series of chunks, rather than waiting until generation is complete.
A Beneficial Outcome
Most of the time, local LLM usage suffices for my needs and reserves the use of remote models from the likes of OpenAI or Anthropic for when they add real value. The hardware outlay remains a sizeable investment, though, even if it adds significantly to one's personal privacy. For a long time, graphics cards have not interested me aside from basic functions like desktop display, making this a change from how I used to view such devices before the advent of generative AI.
A survey of commenting systems for static websites
This piece grew out of a practical problem. When building a Hugo website, I went looking for a way to add reader comments. The remotely hosted options I found were either subscription-based or visually intrusive in ways that clashed with the site design. Moving to the self-hosted alternatives brought a different set of difficulties: setup proved neither straightforward nor reliably successful, and after some time I concluded that going without comments was the more sensible outcome.
That experience is, it turns out, a common one. The commenting problem for static sites has no clean solution, and the landscape of available tools is wide enough to be disorienting. What follows is a survey of what is currently out there, covering federated, hosted and self-hosted approaches, so that others facing the same decision can at least make an informed choice about where to invest their time.
Federated Options
At one end of the spectrum sit the federated solutions, which take the most principled approach to data ownership. Federated systems such as Cactus Comments stand out by building on the Matrix open standard, a decentralised protocol for real-time communication governed by the Matrix.org Foundation. Because comments exist as rooms on the Matrix network, they are not siloed within any single server, and users can engage with discussions using an existing Matrix account on any compatible home server, or follow threads using any Matrix client of their choosing. Site owners, meanwhile, retain the flexibility to rely on the public Cactus Comments service or to run their own Matrix home server, avoiding third-party tracking and centralised control alike. The web client is LGPLv3 licensed and the backend service is AGPLv3 licensed, making the entire stack free and open source.
Solutions for Publishers and Media Outlets
For publishers and media organisations, Coral by Vox Media offers a well-established and feature-rich alternative. Originally founded in 2014 as a collaboration between the Mozilla Foundation, The New York Times and The Washington Post, with funding from the Knight Foundation, it moved to Vox Media in 2019 and was released as open-source software. It provides advanced moderation tools supported by AI technology, real-time comment alerts and in-depth customisation through its GraphQL API. Its capacity to integrate with existing user authentication systems makes it a compelling choice for organisations that wish to maintain editorial control without sacrificing community engagement. Coral is currently deployed across 30 countries and in 23 languages, a breadth of adoption that reflects its standing among publishers of all sizes. The team has recently expanded the product to include a live Q&A tool alongside the core commenting experience, and the open-source codebase means that organisations with the technical resources can self-host the entire platform.
A strong alternative for publishers who handle large discussion volumes is GraphComment, a hosted platform developed by the French company Semiologic. It takes a social-network-inspired approach, offering threaded discussions with real-time updates, relevance-based sorting, a reputation-based voting system that enables the community to assist with moderation, and a proprietary Bubble Flow interface that makes individual threads indexable by search engines. All data are stored on servers based in France, which will appeal to publishers with European data-residency requirements. Its client list includes Le Monde, France Info and Les Echos, giving it considerable credibility in the media sector.
Hosted Solutions: Ease of Setup and Performance
Hosted solutions cater to those who prioritise simplicity and page performance above all else. ReplyBox exemplifies this approach, describing itself as 15 times lighter than Disqus, with a design focused on clean aesthetics and fast page loads. It supports Markdown formatting, nested replies, comment upvotes, email notifications and social login via Google, and it comes with spam filtering through Akismet. A 14-day free trial is available with no payment required, and a WordPress plugin is offered for those already on that platform.
Remarkbox takes a similarly restrained approach. Founded in 2014 by Russell Ballestrini after he moved his own blog to a static site and found existing solutions too slow or ad-laden, it is open source, carries no advertising and performs no user tracking. Readers can leave comments without creating an account, using email verification to confirm their identity, and the platform operates on a pay-what-you-can basis that keeps it accessible to smaller sites. It supports Markdown with real-time comment previews and deeply nested replies, and its developer notes that comments that are served through the platform contribute to SEO by making user-generated content indexable by search engines.
The choice between hosted and self-hosted systems often hinges on the trade-off between convenience and control. Staticman was a notable option in this space, acting as a Node.js bridge that committed comment submissions as data files directly to a GitHub or GitLab repository. However, its website is no longer accessible, and the project has been effectively abandoned since around 2020, with its maintainers publicly confirming in early 2024 that neither they nor the original author have been active on it for some time and that no volunteer has stepped forward to take it over. Those with a need for similar functionality are directed by the project's own contributors towards Cloudflare Workers-based alternatives. Utterances remains a viable option in this category, using GitHub Issues as its backend so that all comment data stays within a repository the site owner already controls. It requires some technical setup, but rewards that effort with complete data ownership and no external dependencies.
Open-Source, Self-Hosted Options
For developers who value privacy and data sovereignty above the convenience of a hosted service, open-source and self-hosted options present a natural fit. Remark42 is an actively maintained project that supports threaded comments, social login, moderation tools and Telegram or email notifications. Written in Python and backed by a SQLite database, Isso has been available since 2013 and offers a straightforward deployment with a small resource footprint, together with anonymous commenting that requires no third-party authentication. Both projects reflect a broader preference among privacy-conscious developers for keeping comment data entirely under their own roof.
The Case of Disqus
Valued for its ease of integration and its social features, Disqus remains one of the most widely recognised hosted commenting platform. However, it comes with well-documented drawbacks. Disqus operates as both a commenting service and a marketing and data company, collecting browsing data via tracking scripts and sharing it with third-party advertising partners. In 2021, the Norwegian Data Protection Authority notified Disqus of its intention to issue an administrative fine of approximately 2.5 million euros for processing user data without valid consent under the General Data Protection Regulation. However, following Disqus's response, the authority's final decision in 2024 was to issue a formal reprimand rather than impose the financial penalty. The proceedings nonetheless drew renewed attention to the privacy implications of relying on the platform. Site owners who prefer the convenience of a hosted service without those trade-offs may find more suitable alternatives in Hyvor Talk or CommentBox, both of which are designed around privacy-first principles and minimal setup.
Bridging the Gap: Talkyard and Discourse
Functioning as both a commenting system and a full community forum, Talkyard occupies an interesting position in the landscape. It can be embedded on a blog in the same manner as a traditional commenting widget, yet it also supports standalone discussion boards, making it a viable option for content creators who anticipate their audience outgrowing a simple comment section.
It also happens that Discourse operates on a similar principle but at greater scale, providing a fully featured forum platform that can be embedded as a comment section on external pages. Co-founded by Jeff Atwood (also a co-founder of Stack Overflow), Robin Ward and Sam Saffron, it is an open-source project whose server side is built on Ruby on Rails with a PostgreSQL database and Redis cache, while the client side uses Ember.js. Both Talkyard and Discourse are available as hosted services or as self-hosted installations, and both carry open-source codebases for those who wish to inspect or extend them.
Self-Hosting Discourse With Cloudflare CDN
For those who wish to take the self-hosted route, Discourse distributes an official Docker image that considerably simplifies deployment. The process begins by cloning the official repository into /var/discourse and running the bundled setup tool, which prompts for a hostname, administrator email address and SMTP credentials. A Linux server with at least 2 GB of memory is required, and a SWAP partition should be enabled on machines with only 1 GB.
Pairing a self-hosted instance with Cloudflare as a global CDN is a practical choice, as Cloudflare provides CDN acceleration, DNS management and DDoS mitigation, with a free tier that suits most community deployments. When configuring SSL, the recommended approach is to select Full mode in the Cloudflare SSL/TLS dashboard and generate an origin certificate using the RSA key type for maximum compatibility. That certificate is then placed in /var/discourse/shared/standalone/ssl/, and the relevant Cloudflare and SSL templates are introduced into Discourse's app.yml configuration file.
One important point during initial DNS setup is to leave the Cloudflare proxy status set to DNS only until the Discourse configuration is complete and verified, switching it to Proxied only afterwards to avoid redirect errors during first deployment. Email setup is among the more demanding aspects of running Discourse, as the platform depends on it for user authentication and notifications. The notification_email setting and the disable_emails option both require attention after a fresh install or a migration restore. Once configuration is finalised, running ./launcher rebuild app from the /var/discourse directory completes the build, typically within ten minutes.
Plugins can be added at any time by specifying their Git repository URLs in the hooks section of app.yml and triggering a rebuild. Discourse creates weekly backups automatically, storing them locally under /var/discourse/shared/standalone/backups, and these can be synchronised offsite via rsync or uploaded automatically to Amazon S3 if credentials are configured in the admin panel.
At a Glance
| Solution | Type | Best For |
|---|---|---|
| Cactus Comments | Federated, open source | Privacy-centric sites |
| Coral | Open source, hosted or self-hosted | Publishers and newsrooms |
| GraphComment | Hosted | Enhanced engagement and SEO |
| ReplyBox | Hosted | Simple static sites |
| Remarkbox | Hosted, optional self-host | Speed and simplicity |
| Utterances | Repository-backed | Developer-owned data |
| Remark42 | Self-hosted, open source | Privacy and control |
| Isso | Self-hosted, open source | Minimal footprint |
| Hyvor Talk | Hosted | Privacy-focused ease of use |
| CommentBox | Hosted | Clean design, minimal setup |
| Talkyard | Hosted or self-hosted | Comments and forums combined |
| Discourse | Hosted or self-hosted | Rich discussion communities |
| Disqus | Hosted | Ease of integration (privacy caveats apply) |
Closing Thoughts
None of the options surveyed here is without compromise. The hosted services ask you to accept some degree of cost, design constraint or data trade-off. The self-hosted and repository-backed tools demand technical time that can outweigh the benefit for a small or personal site. The federated approach is principled but asks readers to have, or create, a Matrix account before they can participate. It is entirely reasonable to weigh all of that and, as I did, conclude that going without comments is the right call for now. The landscape does shift, and a solution that is cumbersome today may become more accessible as these projects mature. In the meantime, knowing what exists and where the friction lies is a reasonable place to start.
The Open Worldwide Application Security Project: A cornerstone of digital safety in an age of evolving cybersecurity threats
When Mark Curphey registered the owasp.org domain and announced the project on a security mailing list on the 9th of September 2001, there was no particular reason to expect that it would become one of the defining frameworks in the world of application security. Yet, OWASP, originally the Open Web Application Security Project, has done exactly that, growing from an informal community into a globally recognised nonprofit foundation that shapes how developers, security professionals and businesses think about the security of software. In February 2023, the board voted to update the name to the Open Worldwide Application Security Project, a change that better reflects its modern scope, which now extends beyond web applications to cover IoT, APIs and software security more broadly.
At its heart, OWASP operates on a straightforward principle: knowledge about software security should be free and openly accessible to everyone. The foundation became incorporated as a United States 501(c)(3) nonprofit charity on the 21st of April 2004, when Jeff Williams and Dave Wichers formalised the legal structure in Delaware. What began as an informal mailing list community grew into one of the most trusted independent voices in application security, underpinned by a community-driven model in which volunteers and corporate supporters alike contribute to a shared vision.
The OWASP Top 10
Of all OWASP's contributions, the OWASP Top 10 remains its most widely cited publication. First released in 2003, it is a standard awareness document representing broad consensus among security experts about the most critical risks facing web applications. The list is updated periodically, with a 2025 edition now published, following the 2021 edition.
The 2021 edition reorganised a number of longstanding categories to reflect how the threat landscape has shifted. Broken access control rose to the top position, reflecting its presence in 94 per cent of tested applications, while injection (which encompasses SQL injection and cross-site scripting, among others) fell to third place. Cryptographic failures, previously listed as sensitive data exposure, took second place. By organising risks into categories rather than exhaustive lists of individual vulnerabilities, the Top 10 provides a practical starting point for prioritising security efforts, and it is widely referenced in compliance frameworks and security policies as a baseline. It is, however, designed to be the beginning of a conversation about security rather than the final word.
Projects and Tools
Beyond the Top 10, OWASP maintains a substantial portfolio of open-source projects spanning tools, documentation and standards. Among the most widely used is OWASP ZAP (Zed Attack Proxy), a dynamic application security testing tool that helps developers and security professionals identify vulnerabilities in web applications. Originally created in 2010 by Simon Bennetts, ZAP operates as a proxy between a tester's browser and the target application, allowing it to intercept, inspect and manipulate HTTP traffic. It supports both passive scanning, which observes traffic without modifying it, and active scanning, which simulates real attacks against targets for which the tester has explicit authorisation.
The OWASP Testing Guide is another widely consulted resource, offering a comprehensive methodology for penetration testing web applications. The OWASP API Security Project addresses the distinct risks that face APIs, which have become an increasingly prominent attack surface, and OWASP also maintains a curated directory of API security tools for those working in this area. For teams managing web application firewalls, the OWASP ModSecurity Core Rule Set provides guidance on handling false positives, which is one of the more practically demanding aspects of deploying rule-based defences. OWASP SEDATED, a more specialised project, focuses on preventing sensitive data from being committed to source code repositories, addressing a problem that continues to affect development teams of all sizes. Projects are categorised by their maturity and quality, allowing users to distinguish between stable, production-ready tools and those that are still in active development, and this tiered approach helps organisations make informed decisions about which tools are appropriate for their needs.
Influence on Industry Practice
The reach of OWASP's guidance is considerable. Security teams use its materials to structure risk assessments and threat modelling exercises, while developers integrate its recommendations into code reviews and secure coding training. Auditors and regulators frequently reference OWASP standards during compliance checks, creating a shared vocabulary that helps bridge the gap between technical staff and leadership. This alignment has done much to normalise application security as a core part of the software development lifecycle, rather than a task bolted on after the fact.
OWASP's influence also extends into regulatory and standards environments. Frameworks such as PCI DSS reference the Top 10 as part of their requirements for web application security, lending it a degree of formal weight that few community-produced documents achieve. That said, OWASP is not a regulatory body and has no enforcement powers of its own.
Education and Community
Education remains a central part of OWASP's mission. The foundation runs hundreds of local chapters across the globe, providing forums for knowledge exchange at a local level, as well as global conferences such as Global AppSec that bring together practitioners from across the industry. All of OWASP's projects, tools, documentation and chapter activities are free and open to anyone with an interest in improving application security. This open model lowers barriers for those starting out in the field and fosters collaboration across academia, industry and open-source communities, creating an environment where expertise circulates freely and innovation is encouraged.
Limitations and Appropriate Use
OWASP is not without its limitations, and it is worth acknowledging these clearly. Because it is not a regulatory body, it cannot enforce compliance, and the quality of individual projects can vary considerably. The Top 10, in particular, is sometimes misread as a comprehensive checklist that, once ticked off, certifies an application as secure. It is not. It is an awareness document designed to highlight the most prevalent categories of risk, not to enumerate every possible vulnerability. Treating it as a complete audit framework rather than a starting point for more in-depth analysis is one of the most common mistakes organisations make when engaging with OWASP materials.
The OWASP Top 10 for Large Language Model Applications
As artificial intelligence has moved from research curiosity to production deployment at scale, OWASP has responded with a dedicated framework for the security risks unique to large language models. The OWASP Top 10 for Large Language Model Applications, maintained under the broader OWASP GenAI Security Project, was first published in 2023 as a community-driven effort to document vulnerabilities specific to LLM-powered applications. A 2025 edition has since been released, reflecting how quickly both the technology and the associated threat landscape have evolved.
The list shares the same philosophy as the web application Top 10, using categories to frame risk rather than enumerating every individual attack variant. Its 2025 edition identifies prompt injection as the leading concern, a class of vulnerability in which crafted inputs cause a model to behave in unintended ways, whether by ignoring instructions, leaking sensitive information or performing unauthorised actions. Other entries cover sensitive information disclosure, supply chain risks (including vulnerable or malicious components sourced from model repositories), data and model poisoning, improper output handling, excessive agency (where an LLM is granted more autonomy or permissions than its task requires) and unbounded consumption, which addresses the risk of uncontrolled resource usage leading to service disruption or unexpected cost. Two categories introduced in the 2025 edition, system prompt leakage and vector and embedding weaknesses, reflect lessons learned from real-world RAG deployments, where retrieval-augmented pipelines have introduced new attack surfaces that did not exist in earlier LLM architectures.
The LLM Top 10 is distinct from the web application Top 10 in an important respect: because the threat landscape for AI applications is evolving considerably faster than that of traditional web software, the list is updated more frequently and carries a higher degree of uncertainty about what constitutes best practice. It is best treated as a living reference rather than a settled standard, and organisations deploying LLM-powered applications would do well to monitor the GenAI Security Project's ongoing work on agentic AI security, which addresses the additional risks that arise when models are given the ability to take real-world actions autonomously.
An Ongoing Work
In an era defined by rapid technological change and an ever-expanding threat landscape, OWASP continues to occupy a distinctive and valuable position in the world of application security. Its freely available standards, practical tools and community-driven approach have made it an indispensable reference point for organisations and individuals working to build safer software. The foundation's work is a practical demonstration that security need not be a competitive advantage hoarded by a few, but a collective responsibility shared across the entire industry.
For developers, security engineers and organisations navigating the challenges of modern software development, OWASP represents both a toolkit and a philosophy: that improving the security of software is work best done together, openly and without barriers.
Blocking thin scrollbar styles in Thunderbird on Linux Mint
When you get a long email, you need to see your reading progress as you work your way through it. Then, the last thing that you need is to have someone specifying narrow scrollbars in the message HTML like this:
<html style="scrollbar-width: thin;">
This is what I with an email newsletter on AI Governance sent to me via Substack. Thankfully, that behaviour can be disabled in Thunderbird. While my experience was on Linux Mint, the same fix may work elsewhere. The first step is to navigate the menus to where you can alter the settings: "Hamburger Menu" > Settings > Scroll to the bottom > Click on the Config Editor button.
In the screen that opens, enter layout.css.scrollbar-width-thin.disabled in the search and press the return key. Should you get an entry (and I did), click on the arrows button to the right to change the default value of False to True. Should your search be fruitless, right click anywhere to get a context menu where you can click on New and then Boolean to create an entry for layout.css.scrollbar-width-thin.disabled, which you then set to True. Whichever way you have accomplished the task, restarting Thunderbird ensures that the setting applies.
If the default scrollbar thickness in Thunderbird is not to your liking, returning to the Config Editor will address that. Here, you need to search for or create widget.non-native-theme.scrollbar.size.override. Since this takes a numeric value, pick the appropriate type if you are creating a new entry. Since that was not needed in my case, I pressed the edit button, chose a larger number and clicked on the tick mark button to confirm it. The effect was seen straight and all was how I wanted it.
In the off chance that the above does not work for you, there is one more thing that you can try, and this is specific to Linux. It sends you to the command line, where you issue this command:
gsettings get org.gnome.desktop.interface overlay-scrolling
Should that return a value of true, follow the with this command to change the setting to false:
gsettings set org.gnome.desktop.interface overlay-scrolling false
After that, you need to log off and back on again for the update to take effect. Since I had no recourse to that, it may be the same for you too.
Building a modular Hugo website home page using block-driven front matter
Inspired by building a modular landing page on a Grav-powered subsite, I wondered about doing the same for a Hugo-powered public transport website that I have. It was part of an overall that I was giving it, with AI consultation running shotgun with the whole effort. The home page design was changed from a two-column design much like what was once typical of a blog, to a single column layout with two-column sections.
The now vertical structure consisted of numerous layers. First, there is an introduction with a hero image, which is followed by blocks briefly explaining what the individual sections are about. Below them, two further panels describe motivations and scope expansions. After those, there are two blocks displaying pithy details of recent public transport service developments before two final panels provide links to latest articles and links to other utility pages, respectively.
This was a conscious mix of different content types, with some nesting in the structure. Much of the content was described in page front matter, instead of where it usually goes. Without that flexibility, such a layout would not have been possible. All in all, this illustrates just how powerful Hugo is when it comes to constructing website layouts. The limits essentially are those of user experience and your imagination, and necessarily in that order.
On Hugo Home Pages
Building a home page in Hugo starts with understanding what content/_index.md actually represents. Unlike a regular article file, _index.md denotes a list page, which at the root of the content directory becomes the site's home page. This special role means Hugo treats it differently from a standard single page because the home is always a list page even when the design feels like a one-off.
Front matter in content/_index.md can steer how the page is rendered, though it remains entirely optional. If no front matter is present at all, Hugo still creates the home page at .Site.Home, draws the title from the site configuration, leaves the description empty unless it has been set globally, and renders any Markdown below the front matter via .Content. That minimal behaviour suits sites where the home layout is driven entirely by templates, and it is a common starting point for new projects.
How the Underlying Markdown File Looks
While this piece opens with a description of what was required and built, it is better to look at the real _index.md file. Illustrating the block-driven pattern in practical use, here is a portion of the file:
---
title: "Maximising the Possibilities of Public Transport"
layout: "home"
blocks:
- type: callout
text1: "Here, you will find practical, thoughtful insight..."
text2: "You can explore detailed route listings..."
image: "images/sam-Up56AzRX3uM-unsplash.jpg"
image_alt: "Transpennine Express train leaving Manchester Piccadilly train station"
- type: cards
heading: "Explore"
cols_lg: 6
items:
- title: "News & Musings"
text: "Read the latest articles on rail networks..."
url: "https://ontrainsandbuses.com/news-and-musings/"
- title: "News Snippets"
...
- type: callout
heading: "Motivation"
text2: "Since 2010, British public transport has endured severe challenges..."
image: "images/joseph-mama-aaQ_tJNBK4c-unsplash.jpg"
image_alt: "Buses in Leeds, England, U.K."
- type: callout
heading: "An Expanding Scope"
text2: "You will find content here drawn from Ireland..."
image: "images/snap-wander-RlQ0MK2InMw-unsplash.jpg"
image_alt: "TGV speeding through French countryside"
---
There are several things that are worth noting here. The title and layout: "home" fields appear at the top, with all structural content expressed as a blocks list beneath them. There is no Markdown body because the blocks supply all the visible content, and the file contains no layout logic of its own, only a description of what should appear and in what order. However, the lack of a Markdown body does pose a challenge for spelling and grammar checking using the LanguageTool extension in VSCode, which means that you need to ensure that proofreading needs to happen in a different way, such as using the editor that comes with the LanguageTool browser extension.
Template Selection and Lookup Order
Template selection is where Hugo's home page diverges most noticeably from regular sections. In Hugo v0.146.0, the template system was completely overhauled, and the lookup order for the home page kind now follows a straightforward sequence: layouts/home.html, then layouts/list.html, then layouts/all.html. Before that release, the conventional path was layouts/index.html first, falling back to layouts/_default/list.html, and the older form remains supported through backward-compatibility mapping. In every case, baseof.html is a wrapper rather than a page template in its own right, so it surrounds whichever content template is selected without substituting for one.
The choice of template can be guided further through front matter. Setting layout: "home" in content/_index.md, as in the example above, encourages Hugo to pick a template named home.html, while setting type: "home" enables more specific template resolution by namespace. These are useful options when the home page deserves its own template path without disturbing other list pages.
The Home Template in Practice
With the front matter established, the template that renders it is worth examining in its own right. It happens that the home.html for this site reads as follows:
<!DOCTYPE html>
{{- partial "head.html" . -}}
<body>
{{- partial "header.html" . -}}
<div class="container main" id="content">
<div class="row">
<h2 class="centre">{{ .Title }}</h2>
{{- partial "blocks/render.html" . -}}
</div>
{{- partial "recent-snippets-cards.html" . -}}
{{- partial "home-teasers.html" . -}}
{{ .Content }}
</div>
{{- partial "footer.html" . -}}
{{- partial "cc.html" . -}}
{{- partial "matomo.html" . -}}
</body>
</html>
This template is self-contained rather than wrapping a base template. It opens the full HTML document directly, calls head.html for everything inside the <head> element and header.html for site navigation, then establishes the main content container. Inside that container, .Title is output as an h2 heading, drawing from the title field in content/_index.md. The block dispatcher partial, blocks/render.html, immediately follows and is responsible for looping through .Params.blocks and rendering each entry in sequence, handling all the callout and cards blocks described in the front matter.
Below the blocks, two further partials render dynamic content independently of the front matter. recent-snippets-cards.html displays the two most recent news snippets as full-content cards, while home-teasers.html presents a compact linked list of recent musings alongside a weighted list of utility pages. After those, {{ .Content }} outputs any Markdown written below the front matter in content/_index.md, though in this case, the file has no body content, so nothing is rendered at that point. The template closes with footer.html, a cookie notice via cc.html and a Matomo analytics snippet.
Notice that this template does not use {{ define "main" }} and therefore does not rely on baseof.html at all. It owns the full document structure itself, which is a legitimate approach when the home page has a sufficiently distinct shape that sharing a base template would add complexity rather than reduce it.
The Block Dispatcher
The blocks/render.html partial is the engine that connects the front matter to the individual block templates. Its full content is brief but does considerable work:
{{ with .Params.blocks }}
{{ range . }}
{{ $type := .type | default "text" }}
{{ partial (printf "blocks/%s.html" $type) (dict "page" $ "block" .) }}
{{ end }}
{{ end }}
The with .Params.blocks guard means the entire loop is skipped cleanly if no blocks key is present in the front matter, so pages that do not use the system are unaffected. For each block in the list, the type field is read and passed through printf to build the partial path, so type: callout resolves to blocks/callout.html and type: cards resolves to blocks/cards.html. If a block has no type, the fallback is text, so a blocks/text.html partial would handle it. The dict call constructs a fresh context map passing both the current page (as page) and the raw block data (as block) into the partial, keeping the two concerns cleanly separated.
The Callout Blocks
The callout.html partial renders bordered, padded sections that can carry a heading, an image and up to five paragraphs of text. Used for the website introduction, motivation and expanded scope sections, its template is as follows:
{{ $b := .block }}
<section class="mt-4">
<div class="p-4 border rounded">
{{ with $b.heading }}<h3>{{ . }}</h3>{{ end }}
{{ with $b.image }}
<img
src="{{ . }}"
class="img-fluid w-100 rounded"
alt="{{ $b.image_alt | default "" }}">
{{ end }}
<div class="text-columns mt-4">
{{ with $b.text1 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text2 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text3 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text4 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text5 }}<p>{{ . }}</p>{{ end }}
</div>
</div>
</section>
The pattern here is consistent and deliberate. Every field is wrapped in a {{ with }} block, so fields absent from the front matter produce no output and no empty elements. The heading renders as an h3, sitting one level below the page's h2 title and maintaining a coherent document outline. The image uses img-fluid and w-100 alongside rounded, making it fully responsive and visually consistent with the bordered container. According to the Bootstrap documentation, img-fluid applies max-width: 100% and height: auto so the image scales with its parent, while w-100 ensures it fills the container width regardless of its intrinsic size. The image_alt field falls back to an empty string via | default "" rather than omitting the attribute entirely, which keeps the rendered HTML valid.
Text content sits inside a text-columns wrapper, which allows a stylesheet to apply a CSS multi-column layout to longer passages without altering the template. The numbered paragraph fields text1 through text5 reflect the varying depth of the callout blocks in the front matter: the introductory callout uses two paragraphs, while the Motivation callout uses four. Adding another paragraph field to a block requires only a new {{ with $b.text6 }} line in the partial and a matching text6 key in the front matter entry.
The Section Introduction Blocks
The cards.html partial renders a headed grid of linked blocks, with the column width at large viewports driven by a front matter parameter. This is used for the website section introductions and its template is as follows:
{{ $b := .block }}
{{ $colsLg := $b.cols_lg | default 4 }}
<section class="mt-4">
{{ with $b.heading }}<h3 class="h4 mb-3">{{ . }}</h3>{{ end }}
<div class="row">
{{ range $b.items }}
<div class="col-12 col-md-6 col-lg-{{ $colsLg }} mb-3">
<div class="card h-100 ps-2 pe-2 pt-2 pb-2">
<div class="card-body">
<h4 class="h5 card-title mt-1 mb-2">
<a href="{{ .url }}">{{ .title }}</a>
</h4>
{{ with .text }}<p class="card-text mb-0">{{ . }}</p>{{ end }}
</div>
</div>
</div>
{{ end }}
</div>
</section>
The cols_lg value defaults to 4 if not specified, which produces a three-column grid at large viewports using Bootstrap's twelve-column grid. The transport site's cards block sets cols_lg: 6, giving two columns at large viewports and making better use of the wider reading space for six substantial card descriptions. At medium viewports, the col-md-6 class produces two columns regardless of cols_lg, and col-12 ensures single-column stacking on small screens.
The heading uses the h4 utility class on an h3 element, pulling the visual size down one step while keeping the document outline correct, since the page already has an h2 title and h3 headings in the callout blocks. Each card title then uses h5 on an h4 for the same reason. The h-100 class on the card sets its height to one hundred percent of the column, so all cards in a row grow to match the tallest one and baselines align even when descriptions vary in length. The padding classes ps-2 pe-2 pt-2 pb-2 add a small inset without relying on custom CSS.
Brief Snippets of Recent Public Transport Developments
The recent-snippets-cards.html partial sits outside the blocks system and renders the most recent pair of short transport news posts as full-content cards. Here is its template:
<h3 class="h4 mt-4 mb-3">Recent Snippets</h3>
<div class="row">
{{ range ( first 2 ( where .Site.Pages "Type" "news-snippets" ) ) }}
<div class="col-12 col-md-6 mb-3">
<div class="card h-100">
<div class="card-body">
<h4 class="h6 card-title mt-1 mb-2">
{{ .Date.Format "15:04, January 2" }}<sup>{{ if eq (.Date.Format "2") "2" }}nd{{ else if eq (.Date.Format "2") "22" }}nd{{ else if eq (.Date.Format "2") "1" }}st{{ else if eq (.Date.Format "2") "21" }}st{{ else if eq (.Date.Format "2") "3" }}rd{{ else if eq (.Date.Format "2") "23" }}rd{{ else }}th{{ end }}</sup>, {{ .Date.Format "2006" }}
</h4>
<div class="snippet-content">
{{ .Content }}
</div>
</div>
</div>
</div>
{{ end }}
</div>
The where function filters .Site.Pages to the news-snippets content type, and first 2 takes only the two most recently created entries. Notably, this collection does not call .ByDate.Reverse before first, which means it relies on Hugo's default page ordering. Where precise newest-first ordering matters, chaining ByDate.Reverse before first makes the intent explicit and avoids surprises if the default ordering changes.
The date heading warrants attention. It formats the time as 15:04 for a 24-hour clock display, followed by the month name and day number, then appends an ordinal suffix using a chain of if and else if comparisons against the raw day string. The logic handles the four irregular cases (1st, 21st, 2nd, 22nd, 3rd and 23rd) before falling back to th for all other days. The suffix is wrapped in a <sup> element so it renders as a superscript. The year follows as a separate .Date.Format "2006" call, separated from the day by a comma. Each card renders the full .Content of the snippet rather than a summary, which suits short-form posts where the entire entry is worth showing on the home page.
Latest Musings and Utility Pages Blocks
The home-teasers.html partial renders a two-column row of linked lists, one for recent long-form articles and one for utility pages. Its template is as follows:
<div class="row mt-4">
<div class="col-12 col-md-6 mb-3">
<div class="card h-100">
<div class="card-body">
<h3 class="h5 card-title mb-3">Recent Musings</h3>
{{ range first 5 ((where .Site.RegularPages "Type" "news-and-musings").ByDate.Reverse) }}
<p class="mb-2">
<a href="{{ .Permalink }}">{{ .Title }}</a>
</p>
{{ end }}
</div>
</div>
</div>
<div class="col-12 col-md-6 mb-3">
<div class="card h-100">
<div class="card-body">
<h3 class="h5 card-title mb-3">Extras & Utilities</h3>
{{ $extras := where .Site.RegularPages "Type" "extras" }}
{{ $extras = where $extras "Title" "ne" "Thank You for Your Message!" }}
{{ $extras = where $extras "Title" "ne" "Whoops!" }}
{{ range $extras.ByWeight }}
<p class="mb-2">
<a href="{{ .Permalink }}">{{ .Title }}</a>
</p>
{{ end }}
</div>
</div>
</div>
</div>
The left column uses .Site.RegularPages rather than .Site.Pages to exclude list pages, taxonomy pages and other non-content pages from the results. The news-and-musings type is filtered, sorted with .ByDate.Reverse and then limited to five entries with first 5, producing a compact, current list of article titles. The heading uses h5 on an h3 for the same visual-scale reason seen in the cards blocks, and h-100 on each card ensures the two columns match in height at medium viewports and above.
The right column builds the extras list through three chained where calls. The first narrows to the extras content type, and the subsequent two filter out utility pages that should never appear in public navigation, specifically the form confirmation and error pages. The remaining pages are then sorted by ByWeight, which respects the weight value set in each page's front matter. Pages without a weight default to zero, so assigning small positive integers to the pages that should appear first gives stable, editorially controlled ordering without touching the template.
Diagnosing Template Choices
Diagnosing which template Hugo has chosen is more reliable with tooling than with guesswork. Running the development server with debug output reveals the selected templates in the terminal logs. Another quick technique is to place a visible marker in a candidate file and inspect the page source.
HTML comments are often stripped during minified builds, and Go template comments never reach the output, so an innocuous meta tag makes a better marker because a minifier will not remove it. If the marker does not appear after a rebuild, either the template being edited is not in use because another file higher in the lookup order is taking precedence, or a theme is providing a matching file without it being obvious.
Front Matter Beyond Layout
Front matter on the home page earns its place when it supplies values that make their way into head tags and structured sections, rather than when it tries to replicate layout logic. A brief description is valuable for metadata and social previews because many base templates output it as a meta description tag. Where a site uses social cards, parameters for images and titles can be added and consumed consistently.
Menu participation also remains available to the home page, with entries in front matter allowing the home to appear in navigation with a given weight. Less common but still useful fields include outputs, which can disable or configure output formats, and cascade, which can provide defaults to child pages when site-wide consistency matters. Build controls can influence whether a page is rendered or indexed, though these are rarely changed on a home page once the structure has settled.
Template Hygiene
Template hygiene pays off throughout this process. Whether the home page uses a self-contained template or wraps baseof.html, the principle is the same: each file should own a clearly bounded responsibility. The home template in the example above does this well, with head.html, header.html and footer.html each handling their own concerns, and the main content area occupied by the blocks dispatcher and the two dynamic partials. Column wrappers are easiest to manage when each partial opens and closes its own structure, rather than relying on a sibling to provide closures elsewhere.
That self-containment prevents subtle layout breakage and means that adding a new block type requires only a small partial in layouts/partials/blocks/ and a new entry in the front matter blocks list, with no changes to any existing template. Once the home page adopts this pattern, the need for CSS overrides recedes because the HTML shape finally expresses intent instead of fighting it.
Bootstrap Utility Classes in Summary
Understanding Bootstrap's utility classes rounds off the technique because these classes anchor the modular blocks without the need for custom CSS. h-100 sets height to one hundred percent and works well on cards inside a flex row so that their bottoms align across a grid, as seen in both the cards block and the home teasers. The h4, h5 and h6 utilities apply a different typographic scale to any element without changing the document outline, which is useful for keeping headings visually restrained while preserving accessibility. img-fluid provides responsive behaviour by constraining an image to its container width and maintaining aspect ratio, and w-100 makes an image or any element fill the container width even if its intrinsic size would let it stop short. Together, these classes produce predictable and adaptable blocks that feel consistent across all viewports.
Closing Remarks
The result of combining Hugo's list-page model for the home, a block-driven front matter design and Bootstrap's light-touch utilities is a home page that reads cleanly and remains easy to extend. New block types become a matter of adding a small partial and a new blocks entry, with the dispatcher handling the rest automatically. Dynamic sections such as recent snippets sit in dedicated partials called directly from the template, updating without any intervention in content/_index.md. Existing sections can be reordered without editing templates, shared structure remains in one place, and the need for brittle CSS customisation fades because the templates do the heavy lifting.
A final point returns to content/_index.md. Keeping front matter purposeful makes it valuable. A title, a layout directive and a blocks list that models the editorially controlled page structure are often enough, as we have seen in this example from my public transport website. More seldom-used fields such as outputs, cascade and build remain available should a site require them, but their restraint reflects the wider approach: let content describe structure, let templates handle layout and avoid unnecessary complexity.
Lessons learned during migrations to Grav CMS from Textpattern
After the most of four years since the arrival of Textpattern 4.8.8, Textpattern 4.9.0 was released. Since, I was running two subsites using the software, I tried one of them out with the new version. That broke an elderly navigation plugin that no longer appears in any repository, prompting a rollback, which was successful. Even so, it stirred some curiosity about alternatives to this veteran of the content management system world, which is pushing on for twenty-two years of age. That might have been just as well, given the subsequent release of Textpattern 4.9.1 because of two reported security issues, one of which affecting all preceding versions of the software.
Well before that came to light, there had been a chat session in a Gemini app on a mobile which travelling on a bus. This started with a simple question about alternatives to Textpattern. The ensuing interaction led me to choose Grav CMS after one other option turned out to involve a subscription charge; A personal website side hustle generating no revenue was not going to become a more expensive sideline than it already was, the same reasoning that stops me paying for WordPress plugins.
Exploring Known Options
Without any recourse to AI capability, I already had options. While WordPress was one of those that was well known to me, the organisation of the website was such that it would be challenging to get everything placed under one instance and I never got around to exploring the multisite capability in much depth. Either way, it would prove to involve quite an amount of restructuring, Even having multiple instances would mean an added amount of maintenance, though I do automate things heavily. The number of attack surfaces because of database dependence is another matter.
In the past, I have been a user of Drupal, though its complexity and the steepness of the associated learning curve meant that I never exploited it fully. Since those were pre-AI days, I wonder how things would differ now. Nevertheless, the need to make parts of a website fit in with each other was another challenge that I failed to overcome in those days. Thus, this content framework approach was not one that I wished to use again. In short, this is an enterprise-grade tool that may be above the level of personal web tinkering, and I never did use its capabilities to their full extent.
The move away from Drupal brought me to Hugo around four years ago. That too presents a learning curve, though its inherent flexibility meant that I could do anything that I want with it once I navigated its documentation and ironed out oversights using web engine searches. This static website generator is what rebuilds a public transport website, a history website comprised of writings by my late father together with a website for my freelancing company. There is no database involved, and you can avoid having any dynamic content creation machinery on the web servers too. Using Git, it is possible to facilitate content publishing from anywhere as well.
Why Grav?
Of the lot, Hugo had a lot going for it. The inherent flexibility would not obstruct getting things to fit with a website wide consistency of appearance, and there is nothing to stop one structuring things how they wanted. However, Grav has one key selling point in comparison: straightforward remote production of content without recourse to local builds being uploaded to a web server. That decouples things from needing one to propagate the build machinery across different locations.
Like Hugo, Grav had an active project behind it and a decent supply of plugins and an architecture that bested Textpattern and its usual languid release cycle. The similarity also extended as far as not having to buy into someone else's theme: any theming can be done from scratch for consistency of appearance across different parts of a website. In Grav's case, that means using the Twig PHP templating engine, another thing to learn and reminiscent of Textpattern's Textile as much as what Hugo itself has.
The centricity of Markdown files was another area of commonality, albeit with remote editing. If you are conversant with page files having a combination of YAML front matter and subsequent page content from Hugo, Grav will not seem so alien to you, even if it has a web interface for editing that front matter. This could help if you need to edit the files directly for any reason.
That is never to say that there were no things to learn, for there was plenty of that. For example, it has its own way of setting up modular pages, an idea that I was to retrofit back into a Hugo website afterwards. This means care with module naming as well as caching, editor choice and content collections, each with their own uniqueness that rewards some prior reading. A learning journey was in the offing, a not attractive part of the experience in any event.
Considerations
There have been a number of other articles published here regarding the major lessons learned during the transitions from Textpattern to Grav. Unlike previous experiences with Hugo, another part of this learning was the use of AI as part of any debugging. At times, there was a need to take things step by step, interacting with the AI instead of trying out a script that it had put my way. There are times when one's own context window gets overwhelmed by the flow of text, meaning that such behaviour needs to be taken in hand.
Another thing to watch is that human consultation of the official documentation is not neglected in a quest for speed that lets the AI do that for you; after all, this machinery is fallible; nothing we ever bring into being is without its flaws. Grav itself also comes from a human enterprise that usefully includes its own Discord community. The GitHub repository was not something to which I had recourse, even if the Admin plugin interface has prompts for reporting issues on there. Here, I provide a synopsis of the points to watch that may add to the help provided elsewhere.
Choosing an Editor
By default, Grav Admin uses CodeMirror as its content editor. While CodeMirror is well-suited to editing code, offering syntax highlighting, bracket matching and multiple cursors, it renders its editing surface in a way that standard browser extension APIs cannot reach. Grammar checkers and spell-check extensions such as LanguageTool rely on native editable elements to detect text, and CodeMirror does not use these. The result is that browser-based writing tools produce no output in Grav Admin at all, which is a confirmed architectural incompatibility rather than a configuration issue documented in the LanguageTool issue tracker.
This can be addressed by replacing CodeMirror using the TinyMCE Editor Integration plugin, installable directly from the Admin plugin interface, which brings a familiar style of editor that browser extensions can access normally. Thus, LanguageTool functionality is restored, the writing workflow stays inside Grav Admin and the change requires only a small amount of configuration to prevent TinyMCE from interfering with Markdown files in undesirable ways. Before coming across the TinyMCE Editor plugin, I was seriously toying with the local editing option centred around a Git-based workflow. Here, using VS Code with the LanguageTool extension like I do for Hugo websites remained a strong possibility. The plugin means that the need to do this is not as pressing as it otherwise might be.
None of this appears to edit Twig templates and other configuration files unless one makes use of the Editor plugin. My brief dalliance with this revealed a clunky interface and interference with the appearance of the website, something that I never appreciated when I saw it with Drupal. Thus, the plugin was quickly removed, and I do not miss it. As it happened, editing and creating files over an SSH connection with a lightweight terminal editor worked well enough for me during the setup phase anyway. If I wanted a nicer editing experience, then a Git-based approach would allow local editing in VSCode before pushing the files back onto the server.
Grav Caching
Unlike WordPress, which requires plugins to do so, Grav maintains its own internal cache for compiled pages and assets. Learning to work with it is part of understanding the platform: changes to CSS, JavaScript and other static assets are served from this cache until it is refreshed. That can be accomplished using the admin panel or by removing the contents of the cache directory directly. Once this becomes second nature, it adds very little overhead to the development process.
Modular Pages
On one of the Textpattern subsites, I had set up the landing page in a modular fashion. This carried over to Grav, which has its own way of handling modular pages. There, the modular page system assembles a single page from the files found within a collection of child folders, each presenting a self-contained content block with its own folder, Markdown file and template.
All modules render together under a single URL; they are non-routable, meaning visitors cannot access them directly. When the parent folder contains a modular.md file, the name tells Grav to use the modular.html.twig template and whose front matter defines which modules to include and in what order.
Module folders are identified by an underscore at the start of their name, and numeric prefixes control the display sequence. The prefix must come before the underscore: _01.main is the correct form. For a home page with many sections this structure scales naturally, with folder names such as 01._title, 04._ireland or 13._practicalities-inspiration making the page architecture immediately readable from the file system alone.
Each module's Markdown filename determines which template renders it: a file named text.md looks for text.html.twig in the theme's modular templates folder. The parent modular.md assembles the modules using @self.modular to collect them, with a custom order list giving precise control over sequence. Once the folder naming convention and the template matching relationship are clear, the system is very workable.
Building Navigation
Given that the original impetus for leaving Textpattern was a broken navigation plugin, ensuring that Grav could replicate the required menu behaviour was a matter of some importance. Grav takes a different approach to navigation from database-driven systems, deriving its menu structure directly from the content directory tree using folder naming conventions and front matter flags rather than a dedicated menu editor.
Top-level navigation is driven by numerically prefixed subfolders within the content directory (pages), so a structure such as 01.home, 02.about and 03.blog yields an ordered working menu automatically. Visibility can be fine-tuned without renaming folders by setting visible: true or visible: false in a page's YAML front matter, and menu labels can be shortened for navigation purposes using the menu: field while retaining a fuller title for the page itself.
The primary navigation loop iterates over the visible children of the pages tree and uses the active and activeChild flags on each page object to highlight the current location, whether the visitor is on a given top-level page directly or somewhere within its subtree. A secondary menu for the current section is assembled by first identifying the active top-level page and then rendering its visible children as a list. Testing for activeChild as well as active in the secondary menu is important, as omitting it means that visitors to grandchild pages see no item highlighted at all. The approach differs from what was possible with Textpattern, where a single composite menu could drill down through the full hierarchy, but displaying one menu for pages within a given section alongside another showing the other sections proves to be a workable and context-sensitive alternative.
Setting Up RSS Feeds
Because Grav does not support the generation of RSS feeds out of the box, it needs a plugin and some extra configuration. The latter means that you need to get your head around the Grav concept of a collection because without it, you will not see anything in your feed. In contrast, database-driven platforms like WordPress or Drupal push out the content by default, which may mean that you are surprised when you first come across how Grav needs you to specify the collections explicitly.
There are two details that make performing configuration of a feed straightforward once understood. The first is that Grav routes do not match physical folder paths: a folder named 03.deliberations on disk is referenced in configuration as /deliberations, since the numeric prefix controls navigation ordering but does not appear in the route, that is the actual web page address. The second is the choice between @page.children, which collects only the immediate children of a folder, and @page.descendants, which collects recursively through all subdirectories. The collection definition belongs in the feed page's front matter, specifying where the content lives, how it should be ordered and in which direction.
Where All This Led
Once I got everything set in place, the end results were pleasing, with much learned along the way. Web page responsiveness was excellent, an experience enhanced by the caching of files. In the above discussion, I hardly mentioned the transition of existing content. For one subsite, this was manual because the scale was smaller, and the Admin plugin's interface made everything straightforward such that all was in place after a few hours of work. In the case of the other, the task was bigger, so I fell on an option used for a WordPress to Hugo migration: Python scripting. That greatly reduced the required effort, allowing me to focus on other things like setting up a modular landing page. The whole migration took around two weeks, all during time outside my client work. There are other places where I can use Grav, which surely will add to what I already have learned. My dalliance with Textpattern is feeling a little like history now.
Resolving a Linux Mint and Windows keyboard shortcut conflict encountered when using SAS Enterprise Guide in a remote Citrix session
Here is a gotcha, slight though it is, that caught me when working in SAS Enterprise Guide on a Windows system to which I was connecting from Linux via Citrix. What I wanted to do was use the keyboard shortcut CTRL + SHIFT + U to convert text to upper case in the program editor, only for it to produce a black square and nothing else.
What I was encountering was a clash in keyboard shortcut assignments. On Linux Mint, CTRL + SHIFT + U activates Unicode character input mode. The black square was there for me to enter a hexadecimal code to add a character that my keyboard would not facilitate in normal circumstances. While the facility clearly has its uses, it was getting in my way and a solution had to be found.
Taking the simple route, I changed the keyboard shortcut to avoid the clash. Though others may want to go further than this, that was enough for me. At the command line, I issued the following command so that I could accomplish this:
ibus-setup
In the application screen that appeared, I navigated to the Emoji tab. To the right of the Unicode code point box, I clicked on the button with four dots. That led me to another dialogue box where I could change the modifier keys. Thus, I unchecked the box for SHIFT and ticked the one for SUPER (the Windows key on many keyboards these days) instead, before clicking on the OK button to confirm the setting. With that completed, I closed the IBus Preferences screen.

Now, I had CTRL + SUPER + U instead of CTRL + SHIFT + U. This meant that the CTRL + SHIFT + U in Enterprise Guide worked exactly as I expected it to do. A baffling situation had been resolved to leave me working without intrusion.
Building context-sensitive navigation in Grav CMS with Twig
If you are migrating a web portal because a new CMS release broke a site navigation menu plugin, then you are going to make sure that something similar is there in the new version. The initial upset was caused by Textpattern 4.9.0, the move to which resulted in a hasty rollback. A subsequent AI interaction brought Grav CMS into the fray, where menus can be built using the Twig templating language.
Along the way, there was one noticeable difference: a composite menu with hierarchy that drilled down to the pages in a selected section was not quite as possible in Grav. Nevertheless, displaying one menu for pages in a given section along with another showing the other sections is hardly a dealbreaker as far as I am concerned, especially when things are context-sensitive anyway.
This may because Grav derives its navigation directly from the content directory tree using folder naming conventions and front matter flags, all unlike database-driven systems that rely on a dedicated menu editor. After all, you are working with files that expose page state, and not queries of database tables.
The Pages Folder
At the heart of the system is the pages folder. Grav looks at the top level of this directory to determine the primary navigation, and any subfolder that begins with a numeric prefix is treated as a visible page whose position in the menu is set by that number. A structure such as pages/01.home, pages/02.about, pages/03.blog and pages/04.contact immediately yields a working menu in the order you expect. Because this approach is driven by the file system, reordering pages is as simple as renaming the folders, with no additional configuration required.
Visibility can also be controlled without renaming if you prefer to keep folders unnumbered. Each page has a Markdown file containing YAML front matter (commonly named default.md), and adding visible: true to that front matter ensures the page appears in navigation. Setting visible: false hides it. Both approaches work across a site, though the numeric prefix convention remains the most straightforward way to manage ordering and visibility together.
Customising Menu Labels
Menu text defaults to the page title, which suits most cases well. There are times when you want a shorter label in the navigation while keeping a fuller title for the page itself, and the front matter field menu: makes that possible. Writing menu: Blog on the blog page means the menu displays "Blog" even if the page title reads "Company Blog and News". This keeps navigation crisp without sacrificing descriptive titles for search engines and content clarity.
The Main Menu Loop
The primary navigation iterates over pages.children.visible and prints a link for each top-level page. The active and activeChild flags on each page object let you mark the current location: active matches the page the visitor is on, while activeChild is true on any parent whose descendant is currently being viewed. Testing both together means a top-level item is highlighted, whether the visitor is on that page directly or anywhere beneath it:
<ul id="mainmenu" class="section_list">
{% for p in pages.children.visible %}
<li class="{{ (p.active or p.activeChild) ? 'active_class' : '' }}">
<a href="{{ p.url }}">{{ p.menu }}</a>
</li>
{% endfor %}
</ul>
This loop picks up any changes to the page tree automatically, with no further configuration required.
Context-Sensitive Sidebar Headings
Before the navigation blocks, the sidebar can show a contextual heading depending on where the visitor is. On the home page, a page.home check provides one heading, and a route comparison handles a specific page such as /search:
{% if page.home %}
<h4 class="mt-4 mb-4">Fancy Some Exploration?</h4>
{% endif %}
{% if page.route == '/search' %}
<h4 class="mt-4 mb-4">Fancy Some More Exploration?</h4>
{% endif %}
These headings appear independently of the secondary navigation block, so they display even when there is no active section with children to list below them.
The Secondary Menu
When a visitor is inside a section that has visible child pages, a secondary menu listing those children is more useful than a dropdown. The approach is to find the active top-level page, referred to here as the owner, by looping through pages.children.visible and checking the same active and activeChild flags:
{% set owner = null %}
{% for top in pages.children.visible %}
{% if top.active or top.activeChild %}
{% set owner = top %}
{% endif %}
{% endfor %}
Once owner is found, its menu label can be used as a section heading and its visible children rendered as a list. Importantly, each child item should test child.active or child.activeChild rather than child.active alone. Without activeChild, a visitor to a grandchild page would see no item highlighted in the secondary nav at all:
{% if owner and owner.children.visible.count > 0 %}
<h4 class="mt-4 mb-4">{{ owner.menu }}</h4>
<ul id="secondary-nav" class="section_list">
{% for child in owner.children.visible %}
<li class="{{ (child.active or child.activeChild) ? 'active_class' : '' }}">
<a href="{{ child.url }}">{{ child.menu }}</a>
</li>
{% endfor %}
</ul>
<h4 class="mt-4 mb-4">Looking Elsewhere?</h4>
{% endif %}
The entire block is conditional on owner existing and having visible children, so it does not render at all on the home page, the search page or any other top-level page without subsections.
Common Troubleshooting Points
There are a few subtleties worth bearing in mind. The most frequent cause of trouble is looping over the wrong collection: using page.children.visible instead of pages.children.visible in the owner-detection loop places you inside the current page's subtree, so nothing will flag as active or activeChild correctly. A second issue affects secondary nav items specifically: using only child.active means a visitor to a grandchild page sees no item highlighted because none of the listed children is the current page. Adding or child.activeChild to the condition resolves this. Clearing the Grav cache is also a worthwhile step during development because stale output can make correct template changes appear to have no effect.
Closing Remarks
In summary, you have learned how Grav assembles navigation from its page tree, how to detect the active section using active and activeChild flags, how to display a secondary menu only when a section has visible children, and how to show context-sensitive headings for specific pages. The result is a sidebar that maintains itself automatically as pages are added or reorganised, with no manual menu configuration required.