TOPIC: OPEN FORMATS
Altering table and hyperlink tags for single Grav articles using HTML post-processing
This year, there have been a few entries on here regarding Grav because of my moving parts of my website estate to that content management system, first from Textpattern and latterly from WordPress. Once the second activity was completed, I then added an article on German public holidays elsewhere. That brought me to the topic of this piece: ensuring that some Markdown was rendered as required.
There were two parts to this: the styling of tables and the behaviour of hyperlinks. Each change needs to be made in a page template once all the HTML has been rendered initially; further processing then makes the required alterations. Since this is a page template and not a partial template, you need to extend a master template like this:
{% extends 'partials/base.html.twig' %}
Then, you go to the next stage, defining the content block within {% block content %}...{% endblock %} Twig tags:
{% set content = page.content
|replace({'<table>': '<table class="table mt-5 mb-5">'})
|replace({'<a href="http': '<a target="_blank" rel="noopener noreferrer" href="http'})
%}
The above reads in the page content (page.content) and does some text replacement operations. The first of these changes <table> to <table class="table mt-5 mb-5">, while the second replaces <a href="http with <a target="_blank" rel="noopener noreferrer" href="http. While my content was a mix of Markdown and HTML, depending on the article, the latter operation appeared to standardise every link.
Once the text replacement has been completed, the next step is to output the processed HTML like this:
{{ content|raw }}
This last line comes after the {% set %} assignment, within the content block. To send the processed output to the generated web page, you need to ensure that you are referring to the right variable, the local one called content and not page.content. The raw filter is also essential here: without it, Twig would escape the markup instead of emitting the HTML as it stands.
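For orientation, here is a minimal sketch of the whole template; the base template name matches the example above, while the block name and CSS classes will depend on your theme:
{% extends 'partials/base.html.twig' %}
{% block content %}
    {# Post-process the rendered page HTML before it is output #}
    {% set content = page.content
        |replace({'<table>': '<table class="table mt-5 mb-5">'})
        |replace({'<a href="http': '<a target="_blank" rel="noopener noreferrer" href="http'})
    %}
    {{ content|raw }}
{% endblock %}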
All of this effort ensures that straightforward Markdown can be used in content, while Grav does some extra work in the background so that everything is rendered without extra intervention. While a certain level of standardisation may be needed to make this all work well, I find that it does what is needed, albeit in a different manner from the shortcode approach that you find in Hugo.
Customising the nano editor using a personal configuration file stored in your home directory
For a long time, I had not realised that the nano editor could be customised, and a look at /etc/nanorc on a Linux system will show what is possible. However, editing that file will not yield permanent alterations, given the vagaries of system and package updates. Thus, the option of having a .nanorc file in your home directory has its uses. Here then are some settings that you can specify in this file to make the user-friendly editor even more useful:
set softwrap
By default, nano does not wrap long lines. For a time I overlooked this, until using it as a website content editor changed that. Adding this setting wraps long lines on screen, saving some scrolling and giving a fuller picture of their contents. There is breaklonglines too, though that adds hard breaks, meaning that actual line breaks get written into your file, not always a desirable outcome.
set atblanks
To get the line wrapping to break at spaces rather than in the middle of words, define this setting. Nobody wants to see words split across lines.
set linenumbers
Many editors have line numbers which help to navigate files. Although nano has a shortcut for going to a particular line, line numbers are not set by default. This setting sets that to rights.
set indicator
Following on from the above, adding a bar on the right-hand side with the appearance of a scroll bar seen in other applications has its uses for seeing where you are in a file. That can help with orientation.
set nonewlines
By default, nano adds an extra blank line at the bottom of any file that it edits. While this has its uses (when displaying a file with cat, the extra line keeps the command prompt from running into the output, and it gives a ready place to add content at the end of a file), it has always looked odd to me. This setting turns off that behaviour so that things work as they do elsewhere.
set tabstospaces
In many editors, there is an option to turn tabs into spaces (SAS Enterprise Guide and entimICE are two examples that come to my mind as I write these words), and this will do the same within nano. That could be useful when making everything consistent within a file, especially after copying in code from elsewhere.
set tabsize 4
A recent discussion with colleagues at work revealed that we all indent code a little differently. The number of spaces had become the major differentiator, and the client had no standard for this. While four would be my choice, others use two, which is where this setting is helpful when used with the tabstospaces one described above.
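Pulling the settings above together, a complete ~/.nanorc might look like this; comments start with a # and the ordering does not matter:
# ~/.nanorc: personal nano settings
set softwrap
set atblanks
set linenumbers
set indicator
set nonewlines
set tabstospaces
set tabsize 4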
This list is but a subset of what is on offer, and that is why the file mentioned at the start is well worth perusing. For all too long, I had not realised what was possible until editing of Markdown files caused me to wonder if nano could be made even better than it was when the default settings were active.
Some R packages to explore as you find your feet with the language
Here are some R packages and other tools that are in widespread use, along with others that I have encountered while getting started with the language, itself becoming pervasive in my line of business. The collection grew organically as my explorations proceeded and reflects what I was trying out during my acclimatisation.
General
Here are two general packages to get things started, with one of them being unavoidable in the R world. The other is more advanced, possibly offering more to package developers.
You cannot use R without knowing about this collection of packages. In many ways, they form a mini-language of their own, drawing some criticism from those who reckon that base R functionality covers a sufficient gamut anyway. Nevertheless, there is so much here that will get you going with data wrangling and visualisation that it is worth knowing what is possible. The complaints may stem from the fact that, once you adopt it, you scarcely need anything else for these purposes.
This R package enables developers to convert existing R functions into web API endpoints by adding roxygen2-like comment annotations to their code. Once annotated, functions can handle HTTP GET and POST requests, accept query string or JSON parameters and return outputs such as plain values or rendered plots. The package is available on CRAN as a stable release, with a development version hosted on GitHub. For deployment, it integrates with DigitalOcean through a companion package called {plumberDeploy}, and also supports Posit Connect, PM2 and Docker as hosting options. Related projects in the same space include OpenCPU, which is designed for hosting R APIs in scientific research contexts, and the now-discontinued jug package, which took a more programmatic approach to API construction.
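Assuming the package in question is {plumber} (its {plumberDeploy} companion is named above), a minimal sketch of the annotation approach might look like this, with the file name, endpoint and parameter entirely invented for illustration:
# plumber.R: expose an ordinary R function as a web API endpoint
#* Return the mean of a comma-separated list of numbers
#* @param values A comma-separated list of numbers, e.g. "1,2,3"
#* @get /mean
function(values = "") {
  x <- as.numeric(strsplit(values, ",")[[1]])
  list(mean = mean(x, na.rm = TRUE))
}

# Then, from another script or the console:
# library(plumber)
# pr("plumber.R") |> pr_run(port = 8000)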
Data Preparation
You simply cannot avoid working with data during any analysis or reporting work. While there is a learning curve if you are used to other languages, there is little doubt that R is well-endowed when it comes to performing these tasks. Here are some packages that extend base R capabilities and might even add some extra user-friendliness along the way.
The {forcats} package in R provides functions to manage categorical variables by reordering factor levels, collapsing infrequent values and adjusting their sequence based on frequency or other variables. It includes tools such as reordering by another variable, grouping rare categories into 'other' and modifying level order manually, which are useful for data analysis and visualisation workflows. Designed as part of the tidyverse, it integrates with other packages to streamline tasks like counting and plotting categorical data, enhancing clarity and efficiency in handling factors within R.
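As a short sketch of the reordering and lumping described above, using an invented vector of pet types:
library(forcats)
pets <- factor(c("cat", "dog", "dog", "fish", "cat", "dog", "gerbil"))
fct_infreq(pets)         # reorder levels by how often each occurs
fct_lump_n(pets, n = 2)  # keep the two most common levels, lump the rest into "Other"
fct_relevel(pets, "dog") # move "dog" to the front of the level order manually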
Around this time last year, I remember completing a LinkedIn course on a set of good practices known as tidy data, where each variable occupies a column, each observation a row and each value a single cell. This package is designed to help users restructure data so it follows those rules. It provides tools for reshaping data between long and wide formats, handling nested lists, splitting or combining columns, managing missing values and layering or flattening grouped data.
Installation options include the {tidyverse} collection, standalone installation, or the development version from GitHub. The package succeeds earlier reshaping tools like {reshape2} and {reshape}, offering a focused approach to tidying data rather than general reshaping or aggregation.
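Here is a minimal sketch of the long/wide reshaping mentioned above, with invented data:
library(tidyr)
wide <- data.frame(country = c("DE", "IE"), y2023 = c(10, 4), y2024 = c(12, 5))

# Wide to long: one row per country and year
long <- pivot_longer(wide, cols = starts_with("y"), names_to = "year",
                     names_prefix = "y", values_to = "value")

# And back to wide again
pivot_wider(long, names_from = "year", values_from = "value", names_prefix = "y")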
Having a long track record of working with SAS, I find that {haven}, with its abilities to read and write data files from statistical software such as SAS, SPSS and Stata by leveraging the ReadStat library, arouses my interest. Handily, it supports a range of file formats, including SAS transport and data files, SPSS system and older portable files and Stata data files up to version 15, converting these into tibbles with enhanced printing capabilities. Value labels are preserved as a labelled class, allowing conversion to factors, while dates and times are transformed into standard R classes.
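A brief sketch of the sort of round trip involved; the file and variable names are placeholders:
library(haven)
ae <- read_sas("ae.sas7bdat")   # read a SAS dataset into a tibble
ae$aesev <- as_factor(ae$aesev) # convert a labelled column to a factor
write_xpt(ae, "ae.xpt")         # write a SAS transport file back out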
While there are other approaches to working with databases using R, {RMariaDB} provides a database interface and driver for MariaDB, designed to fully comply with the DBI specification and serve as a replacement for the older {RMySQL} package. It supports connecting to databases using configuration files, executing queries, reading and writing data tables and managing results in chunks. Installation options include binary packages from CRAN or development versions from GitHub, with additional dependencies such as MariaDB Connector/C or libmysqlclient required for Linux and macOS systems. Configuration is typically handled through a MariaDB-specific file, and the package includes acknowledgments for contributions from various developers and organisations.
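A hedged sketch of a typical round trip using the DBI interface; the connection details and table names are placeholders:
library(DBI)
con <- dbConnect(RMariaDB::MariaDB(), host = "localhost", user = "analyst",
                 password = Sys.getenv("DB_PASS"), dbname = "sales")
orders <- dbGetQuery(con, "SELECT * FROM orders WHERE order_year = 2024")
dbWriteTable(con, "orders_summary", orders, overwrite = TRUE)
dbDisconnect(con)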
For many people, the pandemic may be a fading memory, yet it offered its chances for learning R, not least because there was a use case with more than a hint of personal interest about it. Here is a library making it easier to get hold of the data, with some added pre-processing too. Memories of how I needed to wrangle what was published by various sources make me appreciate just how vital it is to have harmonised data for analysis work.
Table Production
While many appear to prefer graphical presentation of results over their tabular display, R has its options here too. In recent times, the options have improved, particularly because of the pharmaverse initiative. Here is a selection of what I found during my explorations.
Part of the {officeverse} along with {officedown}, {flextable}, {rvg} and {mschart}, the {officer} R package enables users to create and modify Word and PowerPoint documents directly from R, allowing the insertion of images, tables and formatted content, as well as the import of document content into data frames. It supports the generation of RTF files and integrates with other packages for advanced features such as vector graphics and native office charts. Installation options include CRAN and GitHub, with community resources available for assistance and contributions. The package facilitates the manipulation of document elements like paragraphs, tables and section breaks and provides tools for exporting and importing content between R and office formats, alongside functions for managing slide layouts and embedded objects in presentations.
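As a small sketch of the Word side of things, assuming the default document template; the content and output file name are invented:
library(officer)
doc <- read_docx() |>
  body_add_par("Monthly report", style = "heading 1") |>
  body_add_par("The figures below are illustrative.", style = "Normal") |>
  body_add_table(head(mtcars), style = "table_template")
print(doc, target = "report.docx")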
If you work in clinical research like I do, the need to produce data tabulations is a non-negotiable requirement. That is how this package came to be developed and the pharmaverse of which it is part has numerous other options, should you need to look at using one of those. The flavour of RTF produced here is the Microsoft Word variety, which did not look as well in LibreOffice Writer when I last looked at the results with that open-source alternative. Otherwise, the results look well to many eyes.
Here is a package that enhances data presentation by applying customisable formatting to vectors and data frames, supporting formats such as percentages, currency and accounting. Available on GitHub and CRAN, it integrates with dynamic document tools like {knitr} and {rmarkdown} to produce visually distinct tables, with features including gradient colour scales, conditional styling and icon-based representations. It automatically converts to {htmlwidgets} in interactive environments and is licensed under MIT, enabling flexible use in both static and interactive data displays.
The {reactable} package for R provides interactive data tables built on the React Table library, offering features such as sorting, filtering, pagination, grouping with aggregation, virtual scrolling for large datasets and support for custom rendering through R or JavaScript. It integrates seamlessly into R Markdown documents and Shiny applications, enabling the use of HTML widgets and conditional styling. Installation options include CRAN and GitHub, with examples demonstrating its application across various datasets and scenarios. The package supports major web browsers and is licensed under MIT, designed for developers seeking dynamic data presentation tools within the R ecosystem.
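A one-line flavour of what it offers, grouping a built-in dataset by species:
library(reactable)
reactable(iris, searchable = TRUE, filterable = TRUE,
          groupBy = "Species", defaultPageSize = 10)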
Particularly useful in dynamic web applications like Shiny, the {DT} package in R provides a means of rendering interactive HTML tables by building on the DataTables JavaScript library. It supports features including sorting, searching, pagination and advanced filtering, with numeric, date and time columns using range-based sliders whilst factor and character columns rely on search boxes or dropdowns. Filtering operates on the client side by default, though server-side processing is also available. JavaScript callbacks can be injected after initialisation to manipulate table behaviour, such as enabling automatic page navigation or adding child rows to display additional detail. HTML content is escaped by default as a safeguard against cross-site scripting attacks, with the option to adjust this on a per-column basis. Whilst the package integrates with Shiny applications, attention is needed around scrolling and slider positioning to prevent layout problems. Overall, the package is well suited to exploratory data analysis and the building of interactive dashboards.
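A comparably brief sketch for this package, with column filters along the top:
library(DT)
datatable(iris, filter = "top", options = list(pageLength = 10))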
The {gt} package in R enables users to create well-structured tables with a variety of formatting options, starting from data frames or tibbles and incorporating elements such as headers, footers and customised column labels. It supports output in HTML, LaTeX and RTF formats and includes example datasets for experimentation. The package prioritises simplicity for common tasks while offering advanced functions for detailed customisation, with installation available via CRAN or GitHub. Users can access resources like documentation, community forums and example projects to explore its capabilities, and it is supported by a range of related packages that extend its functionality.
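A short hedged sketch of building up a table from a data frame:
library(gt)
head(mtcars) |>
  gt(rownames_to_stub = TRUE) |>
  tab_header(title = "Motor Trend cars", subtitle = "First six rows") |>
  fmt_number(columns = c(wt, qsec), decimals = 2)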
Enabling users to produce publication-ready outputs with minimal code, the {gtsummary} package offers a streamlined approach to generating analytical and summary tables in R. It automates the summarisation of data frames, regression models and other datasets, identifying variable types and calculating relevant statistics, including measures of data incompleteness. Customisation options allow for formatting, merging and styling tables to suit specific needs, while integration with packages such as {broom} and {gt} facilitates seamless incorporation into R Markdown workflows. The package supports the creation of side-by-side regression tables and provides tools for exporting results as images, HTML, Word, or LaTeX files, enhancing flexibility for reporting and sharing findings.
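A small sketch using the trial dataset bundled with the package:
library(gtsummary)
trial |>
  tbl_summary(by = trt, include = c(age, grade, response)) |>
  add_overall() |>
  add_p()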
Here is an R package designed to generate LaTeX and HTML tables with a modern, user-friendly interface, offering extensive control over styling, formatting, alignment and layout. It supports features such as custom borders, padding, background colours and cell spanning across rows or columns, with tables modifiable using standard R subsetting or dplyr functions. Examples demonstrate its use for creating simple tables, applying conditional formatting and producing regression output with statistical details. The package also facilitates quick export to formats like PDF, DOCX, HTML and XLSX. Installation options include CRAN, R-Universe and GitHub, while the name reflects its origins as an enhanced version of the {xtable} package. The logo was generated using the package itself, and the background design draws inspiration from Piet Mondrian’s artwork.
Figure Generation
R has such a reputation for graphical presentations that it is cited as a strong reason to explore what the ecosystem has to offer. While base R itself is not shabby when it comes to creating graphs and charts, these packages will extend things by quite a way. In fact, the first on this list is near enough pervasive.
Though its default formatting does not appeal to me, the myriad of options makes this a very flexible tool, albeit at the expense of some code verbosity. Multi-panel plots are not among its strengths, which may send you elsewhere for that need.
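Assuming the package in question is {ggplot2}, here is a minimal sketch of moving away from the defaults:
library(ggplot2)
ggplot(mtcars, aes(x = wt, y = mpg, colour = factor(cyl))) +
  geom_point(size = 2) +
  labs(x = "Weight (1000 lbs)", y = "Miles per gallon", colour = "Cylinders") +
  theme_minimal()   # one of many ways to change the default look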
Focusing on features not included in the core library, the {ggforce} package extends {ggplot2} by offering additional tools to enhance data visualisation. Designed to complement the primary role of {ggplot2} in exploratory data analysis, it provides a range of geoms, stats and other components that are well-documented and implemented, aiming to support more complex and custom plot compositions. Available for installation via CRAN or GitHub, the package includes a variety of functionalities described in detail on its associated website, though specific examples are not included here.
Developed by Claus O. Wilke for internal use in his lab, {cowplot} is an R package designed to help with the creation of publication-quality figures built on top of {ggplot2}. It provides a set of themes, tools for aligning and arranging plots into compound figures and functions for annotating plots or combining them with images. The package can be installed directly from CRAN or as a development version via GitHub, and it has seen widespread use in the book Fundamentals of Data Visualisation.
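A compact sketch of arranging two panels into a labelled compound figure:
library(ggplot2)
library(cowplot)
p1 <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
p2 <- ggplot(mtcars, aes(factor(cyl), mpg)) + geom_boxplot()
plot_grid(p1, p2, labels = c("A", "B"), ncol = 2)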
The {sjPlot} package provides a range of tools for visualising data and statistical results commonly used in social science research, including frequency tables, histograms, box plots, regression models, mixed effects models, PCA, correlation matrices and cluster analyses. It supports installation via CRAN for stable releases or through GitHub for development versions, with documentation and examples available online. The package is licensed under GPL-3 and developed by Daniel Lüdecke, offering functions to create visualisations such as scatter plots, Likert scales and interaction effect plots, along with tools for constructing index variables and presenting statistical outputs in tabular formats.
By offering a centralised approach to theming and enabling automatic adaptation of plot styles within Shiny applications, the {thematic} package simplifies the styling of R graphics, including {ggplot2}, {lattice} and base R plots, R Markdown documents and RStudio. It allows users to apply consistent visual themes across different plotting systems, with auto-theming in Shiny and R Markdown relying on CSS and {bslib} themes, respectively. Installation requires specific versions of dependent packages such as {shiny} and {rmarkdown}, while custom fonts benefit from {showtext} or {ragg}. Users can set global defaults for background, foreground and accent colours, as well as fonts, which can be overridden with plot-specific theme adjustments. The package also defines default colour scales for qualitative and sequential data and integrates with tools like bslib to import Google Fonts, enhancing visual consistency across different environments and user interfaces.
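As a brief sketch, the colours here being arbitrary choices of mine:
library(thematic)
library(ggplot2)
thematic_on(bg = "#222222", fg = "white", accent = "#0CE3AC")
ggplot(mtcars, aes(wt, mpg)) + geom_point()  # picks up the dark theme automatically
thematic_off()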
Publishing Tools
The R ecosystem goes beyond mere graphical and tabular display production to offer means for taking things much further, often offering platforms for publishing your work. These can be used locally too, so there is no need to entrust everything to a third-party provider. The uses are endless for what is available, and it appears that Posit has used this to help with building documentation and training too.
What you have here is one of those distinguishing facilities of the R ecosystem, particularly for those wanting to share their analysis work with more than a hint of reproducibility. The tool combines narrative text and code to generate various outputs, supporting multiple programming languages and formats such as HTML, PDF and dashboards. It enables users to produce reports, presentations and interactive applications, with options for publishing and scheduling through platforms like RStudio Connect, facilitating collaboration and distribution of results in professional settings.
Distill for R Markdown is a tool designed to streamline the creation of technical documents, offering features such as code folding, syntax highlighting and theming. It builds on existing frameworks like Pandoc, MathJax and D3, enabling the production of dynamic, interactive content. Users can customise the appearance with CSS and incorporate appendices for supplementary information. The tool acknowledges the contributions of developers who created foundational libraries, ensuring accessibility and functionality for a wide audience. Its design prioritises clarity, allowing authors to focus on presenting results rather than underlying code, while maintaining flexibility for those who wish to include detailed explanations.
For a while, this was one of R's unique selling points, and it remains a compelling reason to use the language, even now that Python has its own version of the package. By enabling the creation of interactive web applications for data analysis without requiring web development expertise, it allows users to build interfaces that let others explore data through dynamic visualisations and filters. Here is a simple example: an app that generates scatter plots with adjustable variables, species filters and marginal plots, hosted either on personal servers or through a dedicated hosting service.
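A much-reduced sketch of that sort of app, with the dataset and controls invented for illustration and the marginal plots omitted:
library(shiny)
library(ggplot2)

ui <- fluidPage(
  selectInput("xvar", "X variable", names(iris)[1:4]),
  selectInput("yvar", "Y variable", names(iris)[1:4], selected = "Sepal.Width"),
  checkboxGroupInput("species", "Species", levels(iris$Species),
                     selected = levels(iris$Species)),
  plotOutput("scatter")
)

server <- function(input, output, session) {
  output$scatter <- renderPlot({
    dat <- subset(iris, Species %in% input$species)
    ggplot(dat, aes(.data[[input$xvar]], .data[[input$yvar]], colour = Species)) +
      geom_point()
  })
}

shinyApp(ui, server)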
The {bslib} R package offers a modern user interface toolkit for Shiny and R Markdown applications, leveraging Bootstrap to enable the creation of customisable dashboards and interactive theming. It supports the use of updated Bootstrap and Bootswatch versions while maintaining compatibility with existing defaults, and provides tools for real-time visual adjustments. Installation is available through CRAN, with example previews demonstrating its capabilities.
Enabling users to manipulate and validate data within a spreadsheet-like interface, the {rhandsontable} package introduces an interactive data grid for R. It supports features such as custom cell rendering, validation rules and integration with Shiny applications. When used in Shiny, the widget requires explicit conversion of data using the hot_to_r function, as updates may not be immediately reflected in reactive contexts. Examples demonstrate its application in various scenarios, including date editing, financial calculations and dynamic visualisations linked to charts. The package also accommodates bookmarks in Shiny apps with specific handling. Users are encouraged to report issues or contribute improvements, with guidance provided for those seeking to expand its functionality. The development team welcomes feedback to refine the tool further, ensuring it aligns with evolving user needs.
{xaringanExtra} offers a range of enhancements and extensions for creating and presenting slides with xaringan, enabling features such as adding an overview tile view, making slides editable, broadcasting in real time, incorporating animations, embedding live video feeds and applying custom styles. It allows users to selectively activate individual tools or load multiple features simultaneously through a single function call, supporting tasks like adding banners, enabling code copying, fitting slides to screen dimensions and integrating utility toolkits. The package is available for installation via CRAN or GitHub, providing flexibility for developers and presenters seeking to expand the functionality of their slides.
Online R programming books that are worth bookmarking
As part of making content more useful following its reorganisation, numerous articles on the R statistical computing language have appeared on here. All of those have taken a more narrative form. With this collation of online books on the R language, I take a different approach. What you find below is a collection of links with associated descriptions. While narrative accounts can be very useful, there is something handy about running one's eye down a compilation as well. Many entries have a corresponding print edition, some of which are not cheap to buy, which makes me wonder about the economics of posting the content online as well, though it can help with getting feedback during book preparation.
We start with this comprehensive collection of over 400 free and affordable resources related to the R programming language, organised into categories such as data science, statistics, machine learning and specific fields like economics and life sciences. In many ways, it is a superset of what you find below and complements this collection with many other finds. The fact that it is a living collection makes it even more useful.
R Programming for Data Science
Here is an introduction to the R programming language, focusing on its application in data science. It covers foundational topics such as installation, data manipulation, function writing, debugging and code optimisation, alongside advanced concepts like parallel computation and data analysis case studies. The text includes practical guidance on handling data structures, using packages such as {dplyr} and {readr} as well as working with dates, times and regular expressions. Additional sections address control structures, scoping rules and profiling techniques, while the author also discusses resources for staying updated through a podcast and accessing e-book versions for ongoing revisions.
Designed for individuals with no prior coding experience, the book provides an introduction to programming in R while using practical examples to teach fundamental concepts such as data manipulation, function creation and the use of R's environment system. It is structured around hands-on projects, including simulations of weighted dice, playing cards and a slot machine, alongside explanations of core programming principles like objects, notation, loops and performance optimisation. Additional sections cover installation, package management, data handling and debugging techniques. While the book is written using RMarkdown and published under a Creative Commons licence, a physical edition is available through O’Reilly.
What you have here is one of several books written by Hadley Wickham. This one is published in its second edition as part of Chapman and Hall's R Series and is aimed primarily at R users who want to deepen their programming skills and understanding of the language, though it is also useful for programmers migrating from other languages. The book covers a broad range of topics organised into sections on foundations, functional programming, object-oriented programming, metaprogramming and techniques, with the latter including debugging, performance measurement and rewriting R code in C++.
Unlike Paul Teetor's separately published R Cookbook, the Cookbook for R was created by Winston Chang. It offers solutions to common tasks and problems in data analysis, covering topics such as basic operations, numbers, strings, formulas, data input and output, data manipulation, statistical analysis, graphs, scripts and functions, and tools for experiments.
The second edition of R for Data Science by Hadley Wickham, Mine Çetinkaya-Rundel and Garrett Grolemund offers a structured approach to learning data science with R, covering essential skills such as data visualisation, transformation, import, programming and communication. Organised into chapters that explore workflows, data manipulation techniques and tools like Quarto for reproducible research, the book emphasises practical applications and best practices for handling data effectively.
The R Graphics Cookbook, 2nd edition, offers a comprehensive guide to creating visualisations in R, structured into chapters that cover foundational skills such as installing and using packages, loading data from various formats and exploring datasets through basic plots. It progresses to detailed techniques for constructing bar graphs, line graphs, scatter plots and histograms, alongside methods for customising axes, annotations, themes and legends.
The book also addresses advanced topics like colour application, faceting data into subplots, generating specialised graphs such as network diagrams and heat maps and preparing data for visualisation through reshaping and summarising. Additional sections focus on refining graphical outputs for presentation, including exporting to different file formats and adjusting visual elements for clarity and aesthetics, while an appendix provides an overview of the {ggplot2} system.
R Markdown: The Definitive Guide
Published by Chapman & Hall/CRC, R Markdown: The Definitive Guide by Yihui Xie, J.J. Allaire and Garrett Grolemund covers the R Markdown document format, which has been in use since 2012 and is built on the knitr and Pandoc tools. The format allows users to embed code within Markdown documents and compile the results into a range of output formats including PDF, HTML and Word. The guide covers a broad scope of practical applications, from creating presentations, dashboards, journal articles and books to building interactive applications and generating blogs, reflecting how the ecosystem has matured since the {rmarkdown} package was first released in 2014.
A key principle running throughout is that Markdown's deliberately limited feature set is a strength rather than a drawback, encouraging authors to focus on content rather than complex typesetting. Despite this simplicity, the format remains highly customisable through tools such as Pandoc templates, LaTeX and CSS. Documents produced in R Markdown are also notably portable, as their straightforward syntax makes conversion between output formats more reliable, and because results are generated dynamically from code rather than entered manually, they are far more reproducible than those produced through conventional copy-and-paste methods.
The R Markdown Cookbook is a practical guide designed to help users enhance their ability to create dynamic documents by combining analysis and reporting. It covers essential topics such as installation, document structure, formatting options and output formats like LaTeX, HTML and Word, while also addressing advanced features such as customisations, chunk options and integration with other programming languages. The book provides step-by-step solutions to common tasks, drawing on examples from online resources and community discussions to offer clear, actionable advice for both new and experienced users seeking to improve their workflow and explore the full potential of R Markdown.
This book provides a practical guide to using R Markdown for scientists, developed from a three-hour workshop and designed to evolve as a living resource. It covers essential topics such as setting up R Markdown documents, integrating with RStudio for efficient workflows, exporting outputs to formats like PDF, HTML and Word, managing figures and tables with dynamic references and captions, incorporating mathematical equations, handling bibliographies with citations and style adjustments, troubleshooting common issues and exploring advanced R Markdown extensions.
bookdown: Authoring Books and Technical Documents with R Markdown
Here is a guide to using the {bookdown} package, which extends R Markdown to facilitate the creation of books and technical documents. It covers Markdown syntax, integration of R code, formatting options for HTML, LaTeX and e-book outputs and features such as cross-referencing, custom blocks and theming. The package supports both multipage and single-document outputs, and its applications extend beyond traditional books to include course materials, manuals and other structured content. The work includes practical examples, publishing workflows and details on customisation, alongside information about licensing and the availability of a printed version.
{blogdown}: Creating Websites with R Markdown
Though the authors note that some information may be outdated due to recent updates to Hugo and the {blogdown} package, and they direct readers to additional resources for the latest features and changes, this book still provides a guide to building static websites using R Markdown and the Hugo static site generator, emphasising the advantages of this approach for creating reproducible, portable content. It covers installation, configuration, deployment options such as Netlify and GitHub Pages, migration from platforms like WordPress and advanced topics including custom layouts and version control as well as practical examples, workflow recommendations and discussions on themes, content management and technical aspects of website development.
{pagedown}: Create Paged HTML Documents for Printing from R Markdown
The R package {pagedown} enables users to create paged HTML documents suitable for printing to PDF, using R Markdown combined with a JavaScript library called paged.js, the latter of which implements W3C specifications for paged media. While tools like LaTeX and Microsoft Word have traditionally dominated PDF production, pagedown offers an alternative approach through HTML and CSS, supporting a range of document types including resumes, posters, business cards, letters, theses and journal articles.
Documents can be converted to PDF via Google Chrome, Microsoft Edge or Chromium, either manually or through the chrome_print() function, with additional support for server-based, CI/CD pipeline and Docker-based workflows. The package provides customisable CSS stylesheets, a CSS overriding mechanism for adjusting fonts and page properties, and various formatting features such as lists of tables and figures, abbreviations, footnotes, line numbering, page references, cover images, running headers, chapter prefixes and page breaks. Previewing paged documents requires a local or remote web server, and the layout is sensitive to browser zoom levels, with 100% zoom recommended for the most accurate output.
Dynamic Documents with R and knitr
Developed by Yihui Xie and inspired by the earlier {Sweave} package, {knitr} is an R package designed for dynamic report generation that consolidates the functionality of numerous other add-on packages into a single, cohesive tool. It supports multiple input languages, including R, Python and shell scripts, as well as multiple output markup languages such as LaTeX, HTML, Markdown, AsciiDoc and reStructuredText. The package operates on a principle of transparency, giving users full control over how input and output are handled, and runs R code in a manner consistent with how it would behave in a standard R terminal.
Among its notable features are built-in caching, automatic code formatting via the {formatR} package, support for more than 20 graphics devices and flexible options for managing plots within documents. It also allows advanced users to define custom hooks and regular expressions to extend and tailor its behaviour further. The package is affiliated with the Foundation for Open Access Statistics, a nonprofit organisation promoting free software, open access publishing and reproducible research in statistics.
Mastering Shiny is a comprehensive guide to developing web applications using R, focusing on the Shiny framework designed for data scientists. It introduces core concepts such as user interface design, reactive programming and dynamic content generation, while also exploring advanced topics like performance optimisation, security and modular app development. The book covers practical applications across industries, from academic teaching tools to real-time analytics dashboards, and aims to equip readers with the skills to build scalable, maintainable applications. It includes detailed chapters on workflow, layout, visualisation and user interaction, alongside case studies and technical best practices.
Engineering Production-Grade Shiny Apps
This is aimed at developers and team managers who already possess a working knowledge of the Shiny framework for R and wish to advance beyond the basics toward building robust, production-ready applications. Rather than covering introductory Shiny concepts or post-deployment concerns, the book focuses on the intermediate ground between those two stages, addressing project management, workflow, code structure and optimisation.
It introduces the {golem} package as a central framework and guides readers through a five-step workflow covering design, prototyping, building, strengthening and deployment, with additional chapters on optimisation techniques including R code performance, JavaScript integration and CSS. The book is structured to serve both those with project management responsibilities and those focused on technical development, acknowledging that in many small teams these roles are carried out by the same individual.
Outstanding User Interfaces with Shiny
Written by David Granjon and published in 2022, Outstanding User Interfaces with Shiny is a book aimed at filling the gap between beginner and advanced Shiny developers, covering how to deeply customise and enhance Shiny applications to the point where they become indistinguishable from classic web applications. The book spans a wide range of topics, including working with HTML and CSS, integrating JavaScript, building Bootstrap dashboard templates, mobile development and the use of React, providing a comprehensive resource that consolidates knowledge and experience previously scattered across the Shiny developer community.
Now in its second edition, R Packages by Hadley Wickham and Jennifer Bryan is a freely available online guide that teaches readers how to develop packages in R. A package is the core unit of shareable and reproducible R code, typically comprising reusable functions, documentation explaining how to use them and sample data. The book guides readers through the entire process of package development, covering areas such as package structure, metadata, dependencies, testing, documentation and distribution, including how to release a package to CRAN. The authors encourage a gradual approach, noting that an imperfect first version is perfectly acceptable provided each subsequent version improves on the last.
Written by Javier Luraschi, Kevin Kuo and Edgar Ruiz, Mastering Spark with R is a comprehensive guide designed to take readers from little or no familiarity with Apache Spark or R through to proficiency in large-scale data science. The book covers a broad range of topics, including data analysis, modelling, pipelines, cluster management, connections, data handling, performance tuning, extensions, distributed computing, streaming and contributing to the Spark ecosystem.
Happy Git and GitHub for the useR
Here is a practical guide written by Jenny Bryan and contributors, aimed primarily at R users involved in data analysis or package development. It covers the installation and configuration of Git alongside GitHub, the development of key workflows for common tasks and the integration of these tools into day-to-day work with R and R Markdown. The guide is structured to take readers from initial setup through to more advanced daily workflows, with particular attention paid to how Git and GitHub serve the needs of data science rather than pure software development.
Written by John Coene and intended for release as part of the CRC Press R series, JavaScript for R explores how the R programming language and JavaScript can be used together to enhance data science workflows. Rather than teaching JavaScript as a standalone language, the book demonstrates how a limited working knowledge of it can meaningfully extend what R developers can achieve, particularly through the integration of external JavaScript libraries.
The book covers a broad range of topics, progressing from foundational concepts through to data visualisation using the {htmlwidgets} package, bidirectional communication with Shiny, JavaScript-powered computations via the V8 engine and Node.js and the use of modern JavaScript tools such as Vue, React and webpack alongside R. Practical examples are woven throughout, including the building of interactive visualisations, custom Shiny inputs and outputs, image classification and machine learning operations, with all accompanying code made publicly available on GitHub.
This guide addresses challenges faced by developers of R packages that interact with web resources, offering strategies to create reliable unit tests despite dependencies on internet connectivity, authentication and external service availability. It explores tools such as {vcr}, {webmockr}, {httptest} and {webfakes}, which enable mocking and recording HTTP requests to ensure consistent testing environments, reduce reliance on live data and improve test reliability. The text also covers advanced topics like handling errors, securing tests and ensuring compatibility with CRAN and Bioconductor, while emphasising best practices for maintaining test robustness and contributor-friendly workflows. Funded by rOpenSci and the R Consortium, the resource aims to support developers in building more resilient and maintainable R packages through structured testing approaches.
The Shiny AWS Book is an online resource designed to teach data scientists how to deploy, host and maintain Shiny web applications using cloud infrastructure. Addressing a common gap in data science education, it guides readers through a range of DevOps technologies including AWS, Docker, Git, NGINX and open-source Shiny Server, covering everything from server setup and cost management to networking, security and custom configuration.
{ggplot2}: Elegant Graphics for Data Analysis
The third edition of {ggplot2}: Elegant Graphics for Data Analysis provides an in-depth exploration of the Grammar of Graphics framework, focusing on the theoretical foundations and detailed implementation of the ggplot2 package rather than offering step-by-step instructions for specific visualisations. Written by Hadley Wickham, Danielle Navarro and Thomas Lin Pedersen, the book is presented as an online work-in-progress, with content structured across sections such as layers, scales, coordinate systems and advanced programming topics. It aims to equip readers with the knowledge to customise plots according to their needs, rather than serving as a direct guide for creating predefined graphics.
YaRrr! The Pirate’s Guide to R
Written by Nathaniel D. Phillips, this is a beginner-oriented guide to learning the R programming language from the ground up, covering everything from installation and basic navigation of the RStudio environment through to more advanced topics such as data manipulation, statistical analysis and custom function writing. The guide progresses logically through foundational concepts including scalars, vectors, matrices and dataframes before moving into practical areas such as hypothesis testing, regression, ANOVA and Bayesian statistics. Visualisation is given considerable attention across dedicated chapters on plotting, while later sections address loops, debugging and managing data from a variety of file formats. Each chapter includes practical exercises to reinforce learning, and the book concludes with a solutions section for reference.
Data Visualisation: A Practical Introduction
Data Visualisation: A Practical Introduction is a forthcoming second edition from Princeton University Press, written by Kieran Healy and due for release in March 2026, which teaches readers how to explore, understand and present data using the R programming language and the {ggplot2} library. The book aims to bridge the gap between works that discuss visualisation principles without teaching the underlying tools and those that provide code recipes without explaining the reasoning behind them, instead combining both practical instruction and conceptual grounding.
Revised and updated throughout to reflect developments in R and {ggplot2}, the second edition places greater emphasis on data wrangling, introduces updated and new datasets, and substantially rewrites several chapters, particularly those covering statistical models and map-drawing. Readers are guided through building plots progressively, from basic scatter plots to complex layered graphics, with the expectation that by the end they will be able to reproduce nearly every figure in the book and understand the principles that inform each choice.
The book also addresses the growing role of large language models in coding workflows, arguing that genuine understanding of what one is doing remains essential regardless of the tools available. It is suitable for complete beginners, those with some prior R experience, and instructors looking for a course companion, and requires the installation of R, RStudio and a number of supporting packages before work can begin.
Shaping SAS output using ODS Style Definitions as well as SAS Formats
Working with SAS output involves two related but distinct concerns: how results look, and how values are displayed. The material here covers both sides of that equation. On one hand, the DEFINE STYLE statement in PROC TEMPLATE provides a way to create and customise ODS styles for destinations that support the STYLE= option. On the other, SAS formats determine how character, numeric, date and time values are written in output. Taken together, these features shape both presentation and readability, which is why it is useful to understand them in the same discussion.
The DEFINE STYLE Statement
The DEFINE STYLE statement is the foundation for creating a stand-alone style. Its syntax allows a style to be stored in a template store and to include inherited behaviour, notes, imported CSS and individual style element definitions. A style definition begins with DEFINE STYLE followed by a style path (or, in the special case of Base.Template.Style, that name itself), and it must finish with an END statement; that closing END is a hard requirement. Within the body of the style, statements such as PARENT=, NOTES, CLASS, IMPORT and STYLE determine how the style behaves and what it contains.
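As a hedged sketch of the overall shape, with the style name, font and colour choices invented for illustration:
proc template;
   define style styles.myreport;
      parent = styles.printer;   /* inherit everything from a shipped style */
      notes "House style for monthly reporting output";
      class fonts /
         'docFont' = ("Arial, Helvetica", 10pt);   /* adjust an existing element */
      style SystemTitle from SystemTitle /
         color = navy;   /* override a single attribute */
   end;
run;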
Style Paths and the STORE= Option
The style path identifies where a style is stored. It consists of one or more names separated by periods, with each name representing a directory in a template store. PROC TEMPLATE writes the style to the first writeable template store in the current path unless a STORE= option directs it elsewhere. The STORE=libref.template-store option specifies a particular template store, and if that template store does not already exist, SAS creates it automatically. One important point is that the syntax of the STORE= option does not become part of the compiled template, so it affects where the style is saved rather than the internal definition itself.
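A small sketch of directing the compiled style to a particular store; the libref, path and store name are placeholders:
libname mytpl "/path/to/templates";

proc template;
   define style styles.corporate / store = mytpl.corpstyles;
      parent = styles.printer;
   end;
run;

/* Make the new store visible when output is produced */
ods path (prepend) mytpl.corpstyles(read);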
Base.Template.Style
A notable special case is Base.Template.Style. This creates a style that becomes the parent of all styles that do not explicitly specify a parent, and once created it is automatically applied to output until it is specifically removed from the item store. That convenience comes with a clear caution: the SAS-supplied Base.Template.Style contains inheritance information relied upon by many styles, and if that inheritance structure is not preserved, some style elements might not appear in output. The safer route is therefore to start from the existing Base.Template.Style, write it to an external file and edit its contents rather than constructing a replacement from scratch. There is also a restriction: if PARENT= is specified, it must refer to a style other than Base.Template.Style.
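One way of following that advice is the SOURCE statement, which writes an existing compiled style out as editable code; this is a tentative sketch and the output file name is a placeholder:
proc template;
   /* Write the supplied style's source to a file for editing and recompiling */
   source base.template.style / file = "basestyle.sas";
run;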
Inheritance and the PARENT= Statement
Inheritance is central to how ODS styles work. The PARENT= statement specifies the style from which the current style inherits its style elements, style attributes and statements. The style path named in PARENT= is looked up in the first readable template store in the current path, and unless the current style overrides something, everything in the parent style carries through. SAS ships with several styles that can be used as a base, including styles.default, styles.beige, styles.brick, styles.brown, styles.d3d, styles.minimal, styles.printer and styles.statdoc. This inheritance model makes style creation more manageable because most new styles are refinements of existing ones rather than fully independent definitions.
The NOTES Statement
For documentation inside the style itself, the NOTES statement provides a place to store descriptive text. This differs from a SAS comment because the text becomes part of the compiled style template and can be viewed with the SOURCE statement. That makes NOTES useful for recording what a style is for, what it changes, or any implementation detail worth preserving alongside the template. In a shared environment, that sort of embedded documentation can be more durable than comments kept in a separate program file.
The CLASS Statement
The CLASS statement creates a style element from a like-named style element. In practical terms, it duplicates an existing element of the same name and applies modifications. The three statements class fonts;, style fonts from fonts; and style fonts from _self_; are equivalent, making CLASS a convenience form for a common pattern. It takes one or more style element names, optional descriptive text and optional attribute specifications. If the same attribute is specified more than once, the last value given is the one SAS uses, and that rule is worth keeping in mind when reading or maintaining larger templates.
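In sketch form, duplicating a like-named element from the parent and changing a couple of attributes might look like this; the style name and colours are invented:
proc template;
   define style styles.greyheads;
      parent = styles.default;
      /* Copy the parent's Header element under the same name and modify it */
      class header /
         backgroundcolor = cxDDDDDD
         color = black;
   end;
run;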
The STYLE Statement
The STYLE statement is more general and is the main mechanism for creating or modifying one or more style elements. It can define new elements, override inherited ones, or absorb attributes from an existing element by using the FROM option. When a new style element overrides one that is a parent of other elements, all of its descendants (including those inherited from parent styles) also inherit the new attributes, which is one of the reasons why small changes can have broad visual effects in output. Style elements within a single STYLE statement must be separated by commas.
The distinction between using FROM and not using it is particularly important. If a like-named style element already exists in the child style, and it is not created with FROM, the child version overrides the parent version entirely. If it is created with FROM, the attributes from the parent style element are absorbed into the child style element. Without FROM, an attribute defined in a like-named style element in the parent is not inherited unless it is explicitly specified again. With FROM, inherited attributes remain in play and can then be modified selectively, and this is the practical difference between replacement and extension.
The _SELF_ keyword is a shorthand within the STYLE statement, specifying that each named style element should inherit from an existing style element of the same name. It is most useful when specifying multiple style elements in one statement. For example, the single statement style data, data1, dataempty from _self_ / color = red backgroundcolor = black; is exactly equivalent to writing separate STYLE statements for data, data1 and dataempty individually. Where the same attribute appears more than once among multiple identical style element names, the last value specified is used. PROC TEMPLATE looks first in the current style for the named style element when resolving a FROM reference, and only looks in the parent style if the element is not found there.
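A hedged sketch of the replacement-versus-extension distinction described above, with an invented style name:
proc template;
   define style styles.demo;
      parent = styles.default;
      /* Without FROM: replaces the parent's Data element outright, so any
         parent attribute not repeated here is lost */
      style data /
         backgroundcolor = white;
      /* With FROM: absorbs the parent's Header element and changes only
         the attribute listed */
      style header from header /
         color = navy;
   end;
run;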
Style Attributes
Style attributes follow the general form style-attribute-name=<|>style-attribute-value, where the angle brackets indicate that the vertical bar is optional. Standard attribute names from the documented list are written without quotation marks, while user-defined attribute names must be enclosed in quotation marks. The vertical bar (|) symbol prevents the style attribute from being inherited by any child style elements, allowing a template author to control precisely how far a change spreads through the inheritance tree. Text associated with a STYLE statement also becomes part of the compiled template (much like NOTES), which can help explain why a specific element is defined in a particular way.
The IMPORT Statement and CSS
The IMPORT statement bridges CSS and ODS styles by importing Cascading Style Sheet information from a file into the style. The file specification can be an external file path, a fileref or a URL, and once imported, SAS converts the CSS code into style attributes and style elements that can be used by PROC TEMPLATE. There are requirements of which you need to be aware: the CSS file must be written in the same type of CSS that the ODS HTML statement produces, and only class names that match ODS style element names are supported, with no IDs and no context-based selectors permitted. If needed, the CSS that ODS creates can be examined with the STYLESHEET= option, or by viewing the HTML source and inspecting the code at the top of the file.
Media types add another layer to the IMPORT statement. The syntax allows up to ten media types to be specified, separated by commas, corresponding to how output will be rendered on screen, paper, with a speech synthesiser or with a braille device, for example. CSS code outside any media block is always included, and the media type option additionally imports the section of a CSS file intended only for a specific media type. If no media type is specified in the ODS statement, but media types exist in the CSS file, ODS uses the Screen media type by default. If multiple media types are specified, all of their style information is applied, though if duplicate style information appears in different media blocks, the styles from the last media block are used.
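In sketch form, with the CSS file name and media types as placeholders:
proc template;
   define style styles.fromcss;
      parent = styles.default;
      /* Only class names matching ODS style element names are honoured */
      import "corporate.css" screen, print;
   end;
run;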
The REPLACE Statement
One statement that no longer belongs in current practice is REPLACE. The SAS documentation states plainly that it is no longer supported and that STYLE or CLASS should be used instead to create and modify style elements. That is a useful reminder when reading older code, as REPLACE appears in legacy templates and conference papers that predate its deprecation.
The ODS Style Element Catalogue
To make sense of style customisation, it helps to understand the wider catalogue of ODS style elements. These elements are organised by function, and many are abstract, meaning they exist for inheritance purposes rather than direct rendering. Abstract elements are not explicitly used in ODS output and will not appear in destinations that generate a style sheet.
Miscellaneous and Document Elements
A broad abstract element, Container, controls all container-oriented elements and sits near the top of several inheritance chains. Document-related elements such as Document, Body, Frame, Contents and Pages control the overall presentation of output files, including page background and margins, with Body, Frame, Contents and Pages all inheriting from Document. Several further miscellaneous elements handle specific rendering concerns: Continued controls the continued flag when a table breaks across a page (paginated destinations only), ExtendedPage handles the message displayed when a page will not fit (Printer destination only), PageNo controls page numbers for paginated destinations and Parskip controls the space between tables. UserText controls the ODS TEXT= style and inherits from Note. The StartUpFunction and ShutDownFunction elements add JavaScript functions to HTML output that execute on page load and page exit, respectively, and PrePage controls the ODS RTF/MEASURED PREPAGE= style.
Date Elements
Date-related elements include Date (an abstract element controlling how date fields look), BodyDate (which controls the date field in the Contents file and inherits from ContentsDate) and PagesDate (which controls the date field in the Pages file and inherits from Date).
Contents and Pages Elements
Contents and pages files are influenced by a substantial group of elements. IndexItem is an abstract element controlling list items and folders for both files. ContentFolder controls folders in the Contents file, and ByContentFolder controls byline folders there, inheriting from ContentFolder. ContentItem controls items in the Contents file and PagesItem controls items in the Pages file, both inheriting from IndexItem. The abstract element Index covers miscellaneous Contents and Pages components, and from it inherit IndexProcName, ContentProcName, ContentProcLabel, PagesProcName and PagesProcLabel, which handle procedure names and labels in each file. IndexTitle and ContentTitle control the titles of the Contents and Pages files; in styles.default, ContentTitle contains a PRETEXT= attribute that prints the text "Table of Contents". IndexAction and FolderAction determine what happens on mouse-over events for folders and items (HTML only). SysTitleAndFooterContainer controls the container for system page titles and footers, and is generally used to add borders around a title.
Titles, Footers and Related Elements
Titles and footers are handled by the abstract element TitlesAndFooters, which controls system page title and footer text. SystemTitle inherits from it and chains through SystemTitle2 up to SystemTitle10, with each inheriting from the one before. The footer series follows the same pattern from SystemFooter through SystemFooter2 to SystemFooter10. TitleAndNoteContainer controls the container for procedure-defined titles and notes, inheriting from Container. ProcTitle controls procedure title text and inherits from TitlesAndFooters, with ProcTitleFixed handling procedure title text that requests a fixed font.
Bylines
BylineContainer controls the container for the byline (generally used to add borders) and inherits from Container. Byline controls byline text and inherits from TitlesAndFooters.
Notes, Warnings and Errors
Notes, warnings and errors each consist of two pieces: a banner area and a content area. The abstract element Note controls the container for note banners and note contents, and inherits from Container. The banner elements (NoteBanner, WarnBanner, ErrorBanner and FatalBanner) generally use the PRETEXT= attribute to print the banner label. Each has a corresponding content element (NoteContent, WarnContent, ErrorContent and FatalContent), and fixed-font variants exist for note, warning and error content (NoteContentFixed, WarnContentFixed and ErrorContentFixed). All of these elements inherit from Note.
Table Elements
Elements governing table output form a substantial hierarchy. Output is an abstract element that controls basic output forms, including borders (via FRAME=, RULES= and individual border control attributes), cell spacing, cell padding and background colour, inheriting from Container. Table controls overall table style and inherits from Output, as does Batch (which controls batch mode output). Three further abstract elements are specific to RTF output: TableHeaderContainer (which places and controls the box around all column headings), TableFooterContainer (which does the same for column footers) and ColumnGroup (which controls the box around groups of columns).
Data Cell Elements
Cell is an abstract element that controls data, header and footer cells, inheriting from Container. Data cells are controlled by Data (the default style for data cells), DataFixed (for data cells requesting a fixed font), DataEmpty (for empty data cells), DataEmphasis (for emphasised data cells), DataEmphasisFixed (for emphasised data cells requesting a fixed font), DataStrong (for strong, more emphasised data cells) and DataStrongFixed. All inherit from Cell or from one another in a chain.
Header and Footer Cell Elements
Header and footer cells are governed by HeadersAndFooters, an abstract element inheriting from Cell. Headers include Header, HeaderFixed, HeaderEmpty, HeaderEmphasis, HeaderEmphasisFixed, HeaderStrong and HeaderStrongFixed. Row headers follow a parallel set: RowHeader, RowHeaderFixed, RowHeaderEmpty, RowHeaderEmphasis, RowHeaderEmphasisFixed, RowHeaderStrong and RowHeaderStrongFixed. Footers mirror the same pattern through Footer, FooterFixed, FooterEmpty, FooterEmphasis, FooterEmphasisFixed, FooterStrong and FooterStrongFixed, with row footers following suit via RowFooter and its variants. PROC TABULATE captions are separately covered by the abstract element Caption (which inherits from HeadersAndFooters), BeforeCaption and AfterCaption.
SAS Formats
While styles affect appearance, formats affect representation. SAS organises formats into four categories: Character, Date and Time, ISO 8601 and Numeric. Formats that support national languages are documented separately in the SAS National Language Support reference, and storing user-defined formats is an important consideration when those formats are associated with variables in permanent SAS data sets shared with others.
Character Formats
Character formats cover both simple display and conversion tasks. $CHARw. and $w. write standard character data, while $QUOTEw. encloses values in double quotation marks. $UPCASEw. converts character data to uppercase, and $MSGCASEw. writes uppercase output when the MSGCASE system option is in effect. Several formats transform character data into alternative encodings or representations: $ASCIIw. converts to ASCII, $EBCDICw. converts to EBCDIC, $HEXw. converts to hexadecimal, $BINARYw. converts to binary and $OCTALw. converts to octal. Others alter ordering or length handling: $REVERJw. writes character data in reverse order and preserves blanks, $REVERSw. writes it in reverse and left-aligns it, and $VARYINGw. writes character data of varying length. $BASE64Xw. converts character data into ASCII text using Base 64 encoding.
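For example, a few of these character formats can be exercised with the PUT statement in a short DATA step; the value and widths are arbitrary, and the expected output appears in the comments:
data _null_;
   text = "markdown";
   put text $upcase9.;   /* MARKDOWN            */
   put text $quote11.;   /* "markdown"          */
   put text $revers9.;   /* nwodkram (reversed) */
   put text $hex16.;     /* 6D61726B646F776E    */
run;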
Date and Time Formats
Date and time formats are especially broad. Traditional date formats include DATEw. (writing values as ddmmmyy or ddmmmyyyy), DDMMYYw. and DDMMYYxw. (day-month-year with various separators), MMDDYYw. and MMDDYYxw. (month-day-year), YYMMDDw. and YYMMDDxw. (year-month-day), MONYYw. (month and year), MONNAMEw. (month name), DOWNAMEw. (day of week name), WEEKDATEw. and WEEKDATXw. (day of week and date in different orderings) and WORDDATEw. and WORDDATXw. (month name with day and year in different orderings). Quarter and year formats include QTRw., QTRRw. (Roman numerals), YEARw., YYQw., YYQxw., YYQRw. and YYQRxw.. Week number formats include WEEKUw., WEEKVw. and WEEKWw., each using a different numbering algorithm.
Year-month combination formats include YYMMw., YYMMxw., YYMONw., MMYYw. and MMYYxw.. DAYw. writes the day of the month and WEEKDAYw. writes the day of the week as a number. Time and date time formats include TIMEw.d, TIMEAMPMw.d, TODw.d, HHMMw.d, HOURw.d, MMSSw.d, DATETIMEw.d and DATEAMPMw.d. Formats that take a date time value and write only part of it include DTDATEw., DTMONYYw., DTWKDATXw., DTYEARw. and DTYYQCw.. Julian date formats include JULDAYw. (Julian day of the year), JULIANw. (Julian date in yyddd or yyyyddd), PDJULGw. (packed Julian in hexadecimal yyyydddF for IBM) and PDJULIw. (packed Julian in hexadecimal ccyydddF for IBM).
The $N8601 character formats also appear within the Date and Time category. $N8601Bw.d and $N8601BAw.d both write ISO 8601 duration, date time and interval forms using basic notations. $N8601Ew.d and $N8601EAw.d use extended notations. $N8601EHw.d uses extended notation with a hyphen for omitted components, $N8601EXw.d uses an x in place of each digit of an omitted component, $N8601Hw.d drops omitted components in duration values and uses a hyphen for omitted date time components, and $N8601Xw.d drops omitted duration components and uses an x for each digit of an omitted date time component.
ISO 8601 Formats
The ISO 8601 category covers the same $N8601 character formats listed above, together with the B8601 (basic notation) and E8601 (extended notation) families of numeric formats. Basic formats include B8601DAw. (date as yyyymmdd), B8601DNw. (date from a date time value as yyyymmdd), B8601DTw.d (date time as yyyymmddThhmmssffffff), B8601DZw. (date time in UTC with time zone offset as yyyymmddThhmmss+|-hhmm), B8601LZw. (local time with UTC offset as hhmmss+|-hhmm), B8601TMw.d (time as hhmmssffff) and B8601TZw. (time adjusted to UTC as hhmmss+|-hhmm). Extended formats follow the same structure: E8601DAw. (date as yyyy-mm-dd), E8601DNw., E8601DTw.d, E8601DZw., E8601LZw., E8601TMw.d and E8601TZw.d, each using hyphen and colon delimiters to separate date and time components. These formats are important where standards compliance, machine readability or time zone clarity matter.
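Using the same arbitrary date and datetime values as before, the basic and extended notations compare like this:
data _null_;
   d  = '25DEC2024'd;
   dt = '25DEC2024:14:30:00'dt;
   put d  b8601da.;     /* 20241225            */
   put d  e8601da.;     /* 2024-12-25          */
   put dt e8601dt.;     /* 2024-12-25T14:30:00 */
run;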
Numeric Formats
Numeric formats address general presentation, technical encoding and domain-specific output. BESTw. lets SAS choose the best notation, w.d writes standard numeric data one digit per byte and Zw.d adds leading zeroes. BESTDw.p lines up decimal places for values of similar magnitude and prints integers without decimals. Dw.p does the same over a potentially wider range of values, and Ew. writes values in scientific notation.
Financial and punctuation-sensitive displays are handled by COMMAw.d (comma every three digits, period for decimal), COMMAXw.d (period every three digits, comma for decimal), NUMXw.d (comma in place of the decimal point), DOLLARw.d, DOLLARXw.d, PERCENTw.d, PERCENTNw.d (using a minus sign for negative values) and NEGPARENw.d (negative values in parentheses). Integer and binary formats include IBw.d (native integer binary including negative values), IBRw.d (integer binary in Intel and DEC formats), PIBw.d (positive integer binary), PIBRw.d (positive integer binary in Intel and DEC formats) and RBw.d (real binary floating-point). Floating-point formats include FLOATw.d (native single-precision) and IEEEw.d. FRACTw. converts values to fractions.
Encoding formats include HEXw. (hexadecimal), BINARYw. (binary), OCTALw. (octal), PDw.d (packed decimal), PKw.d (unsigned packed decimal) and ZDw.d (zoned decimal). IBM mainframe formats form their own group: S370FFw.d (standard numeric), S370FIBw.d (integer binary including negative values), S370FIBUw.d (unsigned integer binary), S370FPDw.d (packed decimal), S370FPDUw.d (unsigned packed decimal), S370FPIBw.d (positive integer binary), S370FRBw.d (real binary floating-point), S370FZDw.d (zoned decimal), S370FZDLw.d (zoned decimal leading sign), S370FZDSw.d (zoned decimal separate leading sign), S370FZDTw.d (zoned decimal separate trailing sign) and S370FZDUw.d (unsigned zoned decimal). VAXRBw.d writes real binary data in VMS format and VMSZNw.d generates VMS and OpenText COBOL zoned numeric data.
Readable formats include ROMANw. (Roman numerals), WORDSw. (values as words) and WORDFw. (values as words with fractions shown numerically). The SSNw. format writes Social Security numbers and PVALUEw.d writes p-values.
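Again as a brief sketch with arbitrary values, several of the numeric formats behave as shown in the comments:
data _null_;
   x = 1234567.891;
   put x comma15.2;     /* 1,234,567.89  */
   put x commax15.2;    /* 1.234.567,89  */
   put x dollar15.2;    /* $1,234,567.89 */
   p = 0.057;
   put p percent8.1;    /* 5.7%          */
   n = -250;
   put n negparen8.;    /* (250)         */
   z = 42;
   put z z6.;           /* 000042        */
run;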
Combining ODS Styles and Formats for Cleaner SAS Output
The connection between style definitions and formats is straightforward, even if the details are substantial. Styles determine the visual structure of ODS output through inheritance, element definitions and optional CSS imports, while formats determine how the values inside that output are written. A report can therefore be shaped at two levels at once: the appearance of titles, tables, notes and cells through DEFINE STYLE, and the textual form of dates, times, percentages, identifiers and other values through the SAS format system. Understanding both gives a clearer picture of how SAS turns data into output that is both functional and legible.
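As a closing sketch of the two levels working together, the following uses the illustrative style defined earlier in this piece alongside a FORMAT statement; sashelp.prdsale is a sample data set shipped with SAS:
ods html file="report.html" style=styles.myreport;
proc print data=sashelp.prdsale noobs;
   var country month actual predict;
   format actual predict dollar12.2 month monyy7.;
run;
ods html close;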
Building a modular Hugo website home page using block-driven front matter
Inspired by building a modular landing page on a Grav-powered subsite, I wondered about doing the same for a Hugo-powered public transport website that I have. It was part of an overall refresh that I was giving the site, with AI consultation riding shotgun throughout the effort. The home page was changed from a two-column design, much like what was once typical of a blog, to a single-column layout with two-column sections.
The now-vertical structure consists of numerous layers. First, there is an introduction with a hero image, followed by blocks briefly explaining what the individual sections are about. Below them, two further panels describe motivations and scope expansions. After those, two blocks display pithy details of recent public transport service developments, before two final panels provide links to the latest articles and to other utility pages, respectively.
This was a conscious mix of different content types, with some nesting in the structure. Much of the content was described in page front matter, instead of where it usually goes. Without that flexibility, such a layout would not have been possible. All in all, this illustrates just how powerful Hugo is when it comes to constructing website layouts. The limits essentially are those of user experience and your imagination, and necessarily in that order.
On Hugo Home Pages
Building a home page in Hugo starts with understanding what content/_index.md actually represents. Unlike a regular article file, _index.md denotes a list page, which at the root of the content directory becomes the site's home page. This special role means Hugo treats it differently from a standard single page because the home is always a list page even when the design feels like a one-off.
Front matter in content/_index.md can steer how the page is rendered, though it remains entirely optional. If no front matter is present at all, Hugo still creates the home page at .Site.Home, draws the title from the site configuration, leaves the description empty unless it has been set globally, and renders any Markdown below the front matter via .Content. That minimal behaviour suits sites where the home layout is driven entirely by templates, and it is a common starting point for new projects.
How the Underlying Markdown File Looks
While this piece opens with a description of what was required and built, it is better to look at the real _index.md file itself. To illustrate the block-driven pattern in practical use, here is a portion of the file:
---
title: "Maximising the Possibilities of Public Transport"
layout: "home"
blocks:
  - type: callout
    text1: "Here, you will find practical, thoughtful insight..."
    text2: "You can explore detailed route listings..."
    image: "images/sam-Up56AzRX3uM-unsplash.jpg"
    image_alt: "Transpennine Express train leaving Manchester Piccadilly train station"
  - type: cards
    heading: "Explore"
    cols_lg: 6
    items:
      - title: "News & Musings"
        text: "Read the latest articles on rail networks..."
        url: "https://ontrainsandbuses.com/news-and-musings/"
      - title: "News Snippets"
        ...
  - type: callout
    heading: "Motivation"
    text2: "Since 2010, British public transport has endured severe challenges..."
    image: "images/joseph-mama-aaQ_tJNBK4c-unsplash.jpg"
    image_alt: "Buses in Leeds, England, U.K."
  - type: callout
    heading: "An Expanding Scope"
    text2: "You will find content here drawn from Ireland..."
    image: "images/snap-wander-RlQ0MK2InMw-unsplash.jpg"
    image_alt: "TGV speeding through French countryside"
---
There are several things that are worth noting here. The title and layout: "home" fields appear at the top, with all structural content expressed as a blocks list beneath them. There is no Markdown body because the blocks supply all the visible content, and the file contains no layout logic of its own, only a description of what should appear and in what order. However, the lack of a Markdown body does pose a challenge for spelling and grammar checking using the LanguageTool extension in VSCode, which means that proofreading needs to happen in a different way, such as using the editor that comes with the LanguageTool browser extension.
Template Selection and Lookup Order
Template selection is where Hugo's home page diverges most noticeably from regular sections. In Hugo v0.146.0, the template system was completely overhauled, and the lookup order for the home page kind now follows a straightforward sequence: layouts/home.html, then layouts/list.html, then layouts/all.html. Before that release, the conventional path was layouts/index.html first, falling back to layouts/_default/list.html, and the older form remains supported through backward-compatibility mapping. In every case, baseof.html is a wrapper rather than a page template in its own right, so it surrounds whichever content template is selected without substituting for one.
The choice of template can be guided further through front matter. Setting layout: "home" in content/_index.md, as in the example above, encourages Hugo to pick a template named home.html, while setting type: "home" enables more specific template resolution by namespace. These are useful options when the home page deserves its own template path without disturbing other list pages.
The Home Template in Practice
With the front matter established, the template that renders it is worth examining in its own right. It happens that the home.html for this site reads as follows:
<!DOCTYPE html>
{{- partial "head.html" . -}}
<body>
{{- partial "header.html" . -}}
<div class="container main" id="content">
<div class="row">
<h2 class="centre">{{ .Title }}</h2>
{{- partial "blocks/render.html" . -}}
</div>
{{- partial "recent-snippets-cards.html" . -}}
{{- partial "home-teasers.html" . -}}
{{ .Content }}
</div>
{{- partial "footer.html" . -}}
{{- partial "cc.html" . -}}
{{- partial "matomo.html" . -}}
</body>
</html>
This template is self-contained rather than wrapping a base template. It opens the full HTML document directly, calls head.html for everything inside the <head> element and header.html for site navigation, then establishes the main content container. Inside that container, .Title is output as an h2 heading, drawing from the title field in content/_index.md. The block dispatcher partial, blocks/render.html, immediately follows and is responsible for looping through .Params.blocks and rendering each entry in sequence, handling all the callout and cards blocks described in the front matter.
Below the blocks, two further partials render dynamic content independently of the front matter. recent-snippets-cards.html displays the two most recent news snippets as full-content cards, while home-teasers.html presents a compact linked list of recent musings alongside a weighted list of utility pages. After those, {{ .Content }} outputs any Markdown written below the front matter in content/_index.md, though in this case, the file has no body content, so nothing is rendered at that point. The template closes with footer.html, a cookie notice via cc.html and a Matomo analytics snippet.
Notice that this template does not use {{ define "main" }} and therefore does not rely on baseof.html at all. It owns the full document structure itself, which is a legitimate approach when the home page has a sufficiently distinct shape that sharing a base template would add complexity rather than reduce it.
The Block Dispatcher
The blocks/render.html partial is the engine that connects the front matter to the individual block templates. Its full content is brief but does considerable work:
{{ with .Params.blocks }}
{{ range . }}
{{ $type := .type | default "text" }}
{{ partial (printf "blocks/%s.html" $type) (dict "page" $ "block" .) }}
{{ end }}
{{ end }}
The with .Params.blocks guard means the entire loop is skipped cleanly if no blocks key is present in the front matter, so pages that do not use the system are unaffected. For each block in the list, the type field is read and passed through printf to build the partial path, so type: callout resolves to blocks/callout.html and type: cards resolves to blocks/cards.html. If a block has no type, the fallback is text, so a blocks/text.html partial would handle it. The dict call constructs a fresh context map passing both the current page (as page) and the raw block data (as block) into the partial, keeping the two concerns cleanly separated.
The Callout Blocks
The callout.html partial renders bordered, padded sections that can carry a heading, an image and up to five paragraphs of text. Used for the website introduction, motivation and expanded scope sections, its template is as follows:
{{ $b := .block }}
<section class="mt-4">
<div class="p-4 border rounded">
{{ with $b.heading }}<h3>{{ . }}</h3>{{ end }}
{{ with $b.image }}
<img
src="{{ . }}"
class="img-fluid w-100 rounded"
alt="{{ $b.image_alt | default "" }}">
{{ end }}
<div class="text-columns mt-4">
{{ with $b.text1 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text2 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text3 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text4 }}<p>{{ . }}</p>{{ end }}
{{ with $b.text5 }}<p>{{ . }}</p>{{ end }}
</div>
</div>
</section>
The pattern here is consistent and deliberate. Every field is wrapped in a {{ with }} block, so fields absent from the front matter produce no output and no empty elements. The heading renders as an h3, sitting one level below the page's h2 title and maintaining a coherent document outline. The image uses img-fluid and w-100 alongside rounded, making it fully responsive and visually consistent with the bordered container. According to the Bootstrap documentation, img-fluid applies max-width: 100% and height: auto so the image scales with its parent, while w-100 ensures it fills the container width regardless of its intrinsic size. The image_alt field falls back to an empty string via | default "" rather than omitting the attribute entirely, which keeps the rendered HTML valid.
Text content sits inside a text-columns wrapper, which allows a stylesheet to apply a CSS multi-column layout to longer passages without altering the template. The numbered paragraph fields text1 through text5 reflect the varying depth of the callout blocks in the front matter: the introductory callout uses two paragraphs, while the Motivation callout uses four. Adding another paragraph field to a block requires only a new {{ with $b.text6 }} line in the partial and a matching text6 key in the front matter entry.
The Section Introduction Blocks
The cards.html partial renders a headed grid of linked blocks, with the column width at large viewports driven by a front matter parameter. This is used for the website section introductions and its template is as follows:
{{ $b := .block }}
{{ $colsLg := $b.cols_lg | default 4 }}
<section class="mt-4">
{{ with $b.heading }}<h3 class="h4 mb-3">{{ . }}</h3>{{ end }}
<div class="row">
{{ range $b.items }}
<div class="col-12 col-md-6 col-lg-{{ $colsLg }} mb-3">
<div class="card h-100 ps-2 pe-2 pt-2 pb-2">
<div class="card-body">
<h4 class="h5 card-title mt-1 mb-2">
<a href="{{ .url }}">{{ .title }}</a>
</h4>
{{ with .text }}<p class="card-text mb-0">{{ . }}</p>{{ end }}
</div>
</div>
</div>
{{ end }}
</div>
</section>
The cols_lg value defaults to 4 if not specified, which produces a three-column grid at large viewports using Bootstrap's twelve-column grid. The transport site's cards block sets cols_lg: 6, giving two columns at large viewports and making better use of the wider reading space for six substantial card descriptions. At medium viewports, the col-md-6 class produces two columns regardless of cols_lg, and col-12 ensures single-column stacking on small screens.
The heading uses the h4 utility class on an h3 element, pulling the visual size down one step while keeping the document outline correct, since the page already has an h2 title and h3 headings in the callout blocks. Each card title then uses h5 on an h4 for the same reason. The h-100 class on the card sets its height to one hundred percent of the column, so all cards in a row grow to match the tallest one and baselines align even when descriptions vary in length. The padding classes ps-2 pe-2 pt-2 pb-2 add a small inset without relying on custom CSS.
Brief Snippets of Recent Public Transport Developments
The recent-snippets-cards.html partial sits outside the blocks system and renders the most recent pair of short transport news posts as full-content cards. Here is its template:
<h3 class="h4 mt-4 mb-3">Recent Snippets</h3>
<div class="row">
{{ range ( first 2 ( where .Site.Pages "Type" "news-snippets" ) ) }}
<div class="col-12 col-md-6 mb-3">
<div class="card h-100">
<div class="card-body">
<h4 class="h6 card-title mt-1 mb-2">
{{ .Date.Format "15:04, January 2" }}<sup>{{ if eq (.Date.Format "2") "2" }}nd{{ else if eq (.Date.Format "2") "22" }}nd{{ else if eq (.Date.Format "2") "1" }}st{{ else if eq (.Date.Format "2") "21" }}st{{ else if eq (.Date.Format "2") "3" }}rd{{ else if eq (.Date.Format "2") "23" }}rd{{ else }}th{{ end }}</sup>, {{ .Date.Format "2006" }}
</h4>
<div class="snippet-content">
{{ .Content }}
</div>
</div>
</div>
</div>
{{ end }}
</div>
The where function filters .Site.Pages to the news-snippets content type, and first 2 takes only the two most recently created entries. Notably, this collection does not call .ByDate.Reverse before first, which means it relies on Hugo's default page ordering. Where precise newest-first ordering matters, chaining ByDate.Reverse before first makes the intent explicit and avoids surprises if the default ordering changes.
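A sketch of the adjusted range line would therefore read:
{{ range first 2 ((where .Site.Pages "Type" "news-snippets").ByDate.Reverse) }}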
The date heading warrants attention. It formats the time as 15:04 for a 24-hour clock display, followed by the month name and day number, then appends an ordinal suffix using a chain of if and else if comparisons against the raw day string. The logic handles the irregular days (1st, 21st, 31st, 2nd, 22nd, 3rd and 23rd) before falling back to th for all other days. The suffix is wrapped in a <sup> element so it renders as a superscript. The year follows as a separate .Date.Format "2006" call, separated from the day by a comma. Each card renders the full .Content of the snippet rather than a summary, which suits short-form posts where the entire entry is worth showing on the home page.
Latest Musings and Utility Pages Blocks
The home-teasers.html partial renders a two-column row of linked lists, one for recent long-form articles and one for utility pages. Its template is as follows:
<div class="row mt-4">
<div class="col-12 col-md-6 mb-3">
<div class="card h-100">
<div class="card-body">
<h3 class="h5 card-title mb-3">Recent Musings</h3>
{{ range first 5 ((where .Site.RegularPages "Type" "news-and-musings").ByDate.Reverse) }}
<p class="mb-2">
<a href="{{ .Permalink }}">{{ .Title }}</a>
</p>
{{ end }}
</div>
</div>
</div>
<div class="col-12 col-md-6 mb-3">
<div class="card h-100">
<div class="card-body">
<h3 class="h5 card-title mb-3">Extras & Utilities</h3>
{{ $extras := where .Site.RegularPages "Type" "extras" }}
{{ $extras = where $extras "Title" "ne" "Thank You for Your Message!" }}
{{ $extras = where $extras "Title" "ne" "Whoops!" }}
{{ range $extras.ByWeight }}
<p class="mb-2">
<a href="{{ .Permalink }}">{{ .Title }}</a>
</p>
{{ end }}
</div>
</div>
</div>
</div>
The left column uses .Site.RegularPages rather than .Site.Pages to exclude list pages, taxonomy pages and other non-content pages from the results. The news-and-musings type is filtered, sorted with .ByDate.Reverse and then limited to five entries with first 5, producing a compact, current list of article titles. The heading uses h5 on an h3 for the same visual-scale reason seen in the cards blocks, and h-100 on each card ensures the two columns match in height at medium viewports and above.
The right column builds the extras list through three chained where calls. The first narrows to the extras content type, and the subsequent two filter out utility pages that should never appear in public navigation, specifically the form confirmation and error pages. The remaining pages are then sorted by ByWeight, which respects the weight value set in each page's front matter. Pages without a weight default to zero, so assigning small positive integers to the pages that should appear first gives stable, editorially controlled ordering without touching the template.
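As an illustration, with the title and weight being hypothetical, one of the extras pages might carry front matter like this:
---
title: "Useful Links"
type: "extras"
weight: 10
---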
Diagnosing Template Choices
Diagnosing which template Hugo has chosen is more reliable with tooling than with guesswork. Running the development server with debug output reveals the selected templates in the terminal logs. Another quick technique is to place a visible marker in a candidate file and inspect the page source.
HTML comments are often stripped during minified builds, and Go template comments never reach the output, so an innocuous meta tag makes a better marker because a minifier will not remove it. If the marker does not appear after a rebuild, either the template being edited is not in use because another file higher in the lookup order is taking precedence, or a theme is providing a matching file without it being obvious.
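For instance, a marker along these lines, with the name and value chosen arbitrarily, can be dropped into the head of a candidate template and then searched for in the rendered source:
<meta name="template-marker" content="home.html">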
Front Matter Beyond Layout
Front matter on the home page earns its place when it supplies values that make their way into head tags and structured sections, rather than when it tries to replicate layout logic. A brief description is valuable for metadata and social previews because many base templates output it as a meta description tag. Where a site uses social cards, parameters for images and titles can be added and consumed consistently.
Menu participation also remains available to the home page, with entries in front matter allowing the home to appear in navigation with a given weight. Less common but still useful fields include outputs, which can disable or configure output formats, and cascade, which can provide defaults to child pages when site-wide consistency matters. Build controls can influence whether a page is rendered or indexed, though these are rarely changed on a home page once the structure has settled.
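As a sketch of how such fields might sit together in content/_index.md, here is an illustrative example; the description wording, menu weight, output formats and cascaded parameter are invented rather than taken from the site discussed above:
---
title: "Maximising the Possibilities of Public Transport"
layout: "home"
description: "Route listings, news and commentary on rail and bus travel."
menus:
  main:
    weight: 10
outputs:
  - html
  - rss
cascade:
  params:
    show_share_links: false
---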
Template Hygiene
Template hygiene pays off throughout this process. Whether the home page uses a self-contained template or wraps baseof.html, the principle is the same: each file should own a clearly bounded responsibility. The home template in the example above does this well, with head.html, header.html and footer.html each handling their own concerns, and the main content area occupied by the blocks dispatcher and the two dynamic partials. Column wrappers are easiest to manage when each partial opens and closes its own structure, rather than relying on a sibling to provide closures elsewhere.
That self-containment prevents subtle layout breakage and means that adding a new block type requires only a small partial in layouts/partials/blocks/ and a new entry in the front matter blocks list, with no changes to any existing template. Once the home page adopts this pattern, the need for CSS overrides recedes because the HTML shape finally expresses intent instead of fighting it.
Bootstrap Utility Classes in Summary
Understanding Bootstrap's utility classes rounds off the technique because these classes anchor the modular blocks without the need for custom CSS. h-100 sets height to one hundred percent and works well on cards inside a flex row so that their bottoms align across a grid, as seen in both the cards block and the home teasers. The h4, h5 and h6 utilities apply a different typographic scale to any element without changing the document outline, which is useful for keeping headings visually restrained while preserving accessibility. img-fluid provides responsive behaviour by constraining an image to its container width and maintaining aspect ratio, and w-100 makes an image or any element fill the container width even if its intrinsic size would let it stop short. Together, these classes produce predictable and adaptable blocks that feel consistent across all viewports.
Closing Remarks
The result of combining Hugo's list-page model for the home, a block-driven front matter design and Bootstrap's light-touch utilities is a home page that reads cleanly and remains easy to extend. New block types become a matter of adding a small partial and a new blocks entry, with the dispatcher handling the rest automatically. Dynamic sections such as recent snippets sit in dedicated partials called directly from the template, updating without any intervention in content/_index.md. Existing sections can be reordered without editing templates, shared structure remains in one place, and the need for brittle CSS customisation fades because the templates do the heavy lifting.
A final point returns to content/_index.md. Keeping front matter purposeful makes it valuable. A title, a layout directive and a blocks list that models the editorially controlled page structure are often enough, as we have seen in this example from my public transport website. More seldom-used fields such as outputs, cascade and build remain available should a site require them, but their restraint reflects the wider approach: let content describe structure, let templates handle layout and avoid unnecessary complexity.
Rendering Markdown in WordPress without plugins by using Parsedown
Much of what GenAI generates as articles is output as Markdown, meaning that you need to convert the content when using it in a WordPress website. Naturally, this kind of thing should be done with care to ensure that you are the creator and that it is not all the work of a machine; orchestration is fine, but regurgitation does not add that much. Fact checking is another need as well.
Writing plain Markdown has secured its own following as well, with WordPress plugins switching over the editor to facilitate such a mode of editing. When I tried Markup Markdown, I found it restrictive when it came to working with images within the text, and it needed a workaround for getting links to open in new browser tabs as well. Thus, I got rid of it, only to realise that it had not converted any Markdown as I expected; it merely rendered the Markdown at post or page display time. Rather than attempting to update the affected text, I decided to see if another solution could be found.
This took me to Parsedown, which proved handy for accomplishing what I needed once I had everything set in place. First, that meant cloning its GitHub repo onto the web server. Next, I created a directory called includes under my theme directory and copied Parsedown.php into it. When all was done, I ensured that file and directory ownership were assigned to www-data to avoid execution issues.
Then, I could set to updating the functions.php file. The first line to get added there included the parser file:
require_once get_template_directory() . '/includes/Parsedown.php';
After that, I found that I needed to disable the WordPress rendering machinery because that got in the way of Markdown rendering:
remove_filter('the_content', 'wpautop');
remove_filter('the_content', 'wptexturize');
The last step was to add a filter that parsed the Markdown and passed its output to WordPress rendering to do the rest as usual. This was a simple affair until I needed to deal with code snippets in pre and code tags. Hopefully, the included comments tell you much of what is happening. A possible exception is $pre_matches[0], which itself is an array of the entire <pre>...</pre> blocks including the containing tags, with $i => $block performing a key => value traversal of that array.
add_filter('the_content', function($content) {
// Prepare a store for placeholders
$placeholders = [];
// 1. Extract pre blocks (including nested code) and replace with safe placeholders
preg_match_all('/<pre\b[^>]*>.*?<\/pre>/si', $content, $pre_matches);
foreach ($pre_matches[0] as $i => $block) {
$key = "§PREBLOCK{$i}§";
$placeholders[$key] = $block;
$content = str_replace($block, $key, $content);
}
// 2. Extract standalone code blocks (not inside pre)
preg_match_all('/<code\b[^>]*>.*?<\/code>/si', $content, $code_matches);
foreach ($code_matches[0] as $i => $block) {
$key = "§CODEBLOCK{$i}§";
$placeholders[$key] = $block;
$content = str_replace($block, $key, $content);
}
// 3. Run Parsedown on the remaining content
$Parsedown = new Parsedown();
$content = $Parsedown->text($content);
// 4. Restore both pre and code placeholders
foreach ($placeholders as $key => $block) {
$content = str_replace($key, $block, $content);
}
// 5. Apply paragraph formatting
return wpautop($content);
}, 12);
All of this avoided dealing with extra plugins to produce the required result. Handily, I still use the Classic Editor, which makes this work a lot more easily. There still is a Markdown import plugin that I am tempted to remove as well to streamline things. That can wait, though. It is best not to add any more of them anyway, not least to avoid clashes between them and what is now in the theme.
Moves to Hugo
What amazes me is how things can become more complicated over time. If you knew HTML, CSS and JavaScript, building a website was not especially onerous, as long as web browsers played ball. Since then, things have got easier to use but more complex at the same time. One example is WordPress: in the early days, themes were much simpler than they are now. The web also has got more insecure over time, and that adds to complexity as well. It sometimes feels as if there is a choice to make between ease of use and simplicity.
It is against that background that I reassessed the technology that I was using on my public transport and Irish history websites. The former used WordPress, while the latter used Drupal. The irony was that the simpler website was using the more complex platform, so the act of going simpler probably was not before time. Alternatives to WordPress were being surveyed for the first of the pair, but none had quite the flexibility, pervasiveness and ease of use that WordPress offers.
There is another approach that has been gaining notice recently. One part of this is the use of Markdown for web publishing. This is a simple and distraction-free plain text format that can be transformed into something more readable. It sees usage in blogs hosted on GitHub, but also facilitates the generation of static websites. The clutter is absent for those who have no need of the Gutenberg Editor on WordPress.
With the content written in Markdown, it can be fed to a static website generator like Hugo. Using defined templates and fixed assets like CSS together with images and other static files, it can slot the content into HTML files very speedily since it is written in the Go programming language. Once you get acclimatised, there are no folder structures that cannot be used, so you get full flexibility in how you build out your website. Sitemaps and RSS feeds can be built at the same time, both using the same input as the HTML files.
In a nutshell, it automates what once needed manual effort using a code editor or a visual web page editor. The use of HTML snippets and layouts means that there is no necessity for hand-coding content, like there was at the start of the web. It also helps that Bootstrap can be built in using Node, so that gives a basis for any styling. Then, SCSS can take care of things, giving even more automation.
Given that there is no database involved in any of this, the required information has to be stored somewhere, and neither the Markdown content nor the layout files contain all that is needed. The main site configuration is defined in a single TOML file, and you can have a single one of these for every publishing destination; I have development and production servers, which makes this a very handy feature. Otherwise, every Markdown file needs a YAML header where titles, template references, publishing status and other similar information gets defined. The layouts then are linked to their components, and control logic and other advanced functionality can be added too.
Because static files are being created, it does mean that site searching and commenting, or contact pages cannot work like they would on a dynamic web platform. Often, external services are plugged in using JavaScript. One that I use for contact forms is Forminit. Then, Zapier has had its uses in using the RSS feed to tweet site updates on Twitter when new content gets added. Though I made different choices, Disqus can be used for comments and Algolia for site searching. Generally, though, you can find yourself needing to pay, particularly if you need to remove advertising or gain advanced features.
Some commenting service providers offer open source self-hosted options, but I found these difficult to set up and ended up not offering commenting at all. That was after I tried out Cactus Comments only to find that it was not discriminating between pages, so it showed the same comments everywhere. There are numerous alternatives like Remark42, Hyvor Talk, Commento, FastComments, Utterances, Isso, Mouthful, Muut and HyperComments but trying them all out was too time-consuming for what commenting was worth to me. It also explains why some static websites even send readers to Twitter if they have something to say, though I have not followed this way of working.
For searching, I added a JavaScript/JSON self-hosted component to the transport website, and it works well. However, it adds to the size of what a browser needs to download. That is not a major issue for desktop browsers, but the situation with mobile browsers is such that it has a sizeable effect. Testing with PageSpeed and Lighthouse highlighted this, even if I left things as they are. The solution works well in any case.
One thing that I have yet to work out is how to edit or add content while away from home. Editing files using an SSH connection is as much a possibility as setting up a Hugo publishing setup on a laptop. After that, there is the question of using a tablet or phone, since content management systems make everything web based. These are points that I have yet to explore.
As is natural with a code-based solution, there is a learning curve with Hugo. Reading a book provided some orientation, and looking on the web resolved many conundrums. There is good documentation on the project website, while forum discussions turn up on many a web search. Following any research, there was next to nothing that could not be done in some way.
Migration of content takes some forethought and took quite a bit of time, though there was an opportunity to carry some housekeeping as well. The history website was small, so copying and pasting sufficed. For the transport website, I used Python to convert what was on the database into Markdown files before refining the result. That provided some automation, but left a lot of work to be done afterwards.
The results were satisfactory, and I like the associated simplicity and efficiency. That Hugo works so fast means that it can handle large websites, so it is scalable. The new Markdown method for content production has not proved problematic so far, apart from the need to make it more portable, and it helps that I found a setup that works for me. This also avoids any potential dealbreakers that continued development of publishing platforms like WordPress or Drupal could bring. For the former, I hope to remain with the Classic Editor indefinitely, but now have another option in case things go too far.
Improving a website contact form
On another website, I have had a contact form, but it was missing some functionality. For instance, it stored the input in files on the web server instead of emailing it. That was fixed more easily than expected using the PHP mail function. Even so, it remains useful to survey the corresponding documentation on the W3Schools website.
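As a minimal sketch of the emailing fix, with the field names and addresses invented for illustration, the form handler can pass the submitted values to mail() along these lines:
// field names and addresses below are illustrative only
$name    = strip_tags(trim($_POST["name"] ?? ""));
$email   = filter_var(trim($_POST["email"] ?? ""), FILTER_SANITIZE_EMAIL);
$message = trim($_POST["message"] ?? "");
$to      = "owner@example.com";
$subject = "Website contact form message";
$body    = "From: {$name} <{$email}>\n\n{$message}";
$headers = "From: webserver@example.com\r\nReply-To: {$email}";
// mail() returns true when the message is accepted for delivery
if (mail($to, $subject, $body, $headers)) {
    echo "Thank you for your message.";
} else {
    echo "Something went wrong; please try again later.";
}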
The other changes affected the way the form looked to a visitor. There was a reset button, and that was removed on finding that such things are out of favour these days. Thinking again, there hardly was any need for it any way.
Newer additions that came with HTML5 had their place too. Including user hints using the placeholder attribute should add some user-friendliness, although I have avoided experimenting with browser-powered input validation for now. Use of the required attribute has its uses for telling a visitor that they have forgotten something, but I need to check how that is handled in CSS more thoroughly before I go with that since there are new :required, :optional, :valid and :invalid pseudoclasses that can be used to help.
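As a sketch of how those attributes might be combined, with the field names and wording being illustrative, the markup could read:
<label for="email">Email address</label>
<input type="email" id="email" name="email" placeholder="you@example.com" required>
<label for="message">Message</label>
<textarea id="message" name="message" placeholder="How can I help?" required></textarea>
A stylesheet could then pick out the new pseudoclasses, again only as an illustration:
input:required, textarea:required { border-left: 3px solid #999; }
input:invalid, textarea:invalid { border-color: #b00020; }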
It appears that there is much more to learn about setting up forms since I last checked. This is perhaps a hint that a few books need reading as part of catching up with how things are done these days. There always is something new to learn.
Easier to print?
One matter that really came to light was how well or not the pages on here and on my hill walking and photography website came out on the printed page. After spotting a WordPress Codex article and with an eye on improving things, I have made a distinction between screen and print stylesheets. The code in the XHTML looks like this:
<link rel="stylesheet" href="/style.css" type="text/css" media="screen" />
<link rel="stylesheet" href="/style_print.css" type="text/css" media="print" />
The media attribute seems to be respected by the browsers that I have been using for testing (latest versions of Firefox, MSIE and Opera), so it then was a matter of using CSS to control what was shown and how it was displayed. Extraneous items like sidebars were excluded from the printed page in favour of the real content that visitors would be wanting anyway, and everything else was made as monochrome as possible, with images being the only things to escape. After all, people don't want to be wasting paper and ink in these cash-strapped times, and there's no need to have any more colour than necessary either. Then, there's the distraction caused by non-functioning hyperlinks, which has inspired the sharing of some wisdom on A List Apart. Returning to my implementation, please let me know in the comments what you think of what I have done on here and if there remains any room for improvement.
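For what it is worth, a print stylesheet along those lines could look something like the sketch below; the selectors are illustrative, since they depend entirely on the markup of the theme in question:
/* hide navigation and other extraneous page furniture */
#sidebar, #navigation, #comments { display: none; }
/* keep things monochrome and legible on paper */
body { color: #000; background: #fff; }
a { color: #000; text-decoration: underline; }
/* print the destination after external links, per the A List Apart advice */
a[href^="http"]:after { content: " (" attr(href) ")"; }
img { max-width: 100% !important; }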