Technology Tales

Notes drawn from experiences in consumer and enterprise technology

TOPIC: CLINICAL DATA INTERCHANGE STANDARDS CONSORTIUM

Open Source Tools for Pharmaceutical Clinical Data Reporting, Analysis & Regulatory Submissions

25th March 2026

There was a time when SAS was the predominant technology for clinical data reporting, analysis and submission work in the pharmaceutical industry. Within the last decade, open-source alternatives have gained considerable traction, and the {pharmaverse} initiative has arisen from this. Its packages span dataset creation (SDTM and ADaM) through to output production, with utilities for test data and submission activities along the way. The effort also marks a shift from each company working in isolation to sharing and collaborating across the industry. Here, then, is the outcome of those endeavours.

{admiral}

Designed as an open-source, modular R toolbox, the {admiral} package assists in the creation of ADaM datasets through reusable functions and utilities tailored for pharmaceutical data analysis. Core packages handle general ADaM derivations whilst therapeutic area-specific extensions address more specialised needs, with a structured release schedule divided into two phases. Usability, simplicity and readability are central priorities, supported by comprehensive documentation, vignettes and example scripts. Community contributions and collaboration are actively encouraged, with the aim of fostering a shared, industry-wide approach to ADaM development in R. Related packages for test data and metadata manipulation complement the main toolkit, alongside a commitment to consistent coding practices and accessible code.
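As a hedged illustration of that functional style, the sketch below derives a numeric analysis start date from an ISO 8601 character date using `derive_vars_dt()`; the `ae` input data frame and its `AESTDTC` column are assumptions for the example.

```r
library(admiral)
library(dplyr)

# Hypothetical input: an AE dataset with an ISO 8601 start date (AESTDTC).
# derive_vars_dt() adds a numeric date variable named from the prefix (ASTDT).
adae <- ae %>%
  derive_vars_dt(
    new_vars_prefix = "AST",
    dtc = AESTDTC
  )
```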

{aNCA}

Maintained by contributors from F. Hoffmann-La Roche AG, {aNCA} is an open-source R Shiny application that makes Non-Compartmental Analysis (NCA) accessible to scientists working with clinical and pre-clinical pharmacokinetic datasets. Users can upload their own data, apply pre-processing filters and run NCA with configurable options including half-life calculation rules, manual slope selection and user-defined AUC intervals. Results are explorable through interactive box plots, scatter plots and summary statistics tables, and can be exported in `PP` and `ADPP` dataset domains alongside a reproducible R script. Analysis settings can be saved and reloaded for continuity across sessions. Installation is available from CRAN via a standard install command, from GitHub using the `pak` package manager, or by cloning the repository directly for those wishing to contribute.

{autoslider.core}

The {autoslider.core} package generates standard table templates commonly used in Study Results Endorsement Plans. Its principal purpose is to reduce duplicated effort between statisticians and programmers when creating slides. Available on CRAN, the package can be installed either through the standard installation method or directly from GitHub for the latest development version.

{cards}

Supporting the CDISC Analysis Results Standard, the {cards} package facilitates the creation of analysis results data sets that enhance automation, reproducibility and consistency in clinical research. Structured data sets for statistical summaries are generated to enable tasks such as quality control, pre-calculating statistics for reports and combining results across studies. Tools for creating, modifying and analysing these data sets are provided, with the {cardx} extension offering additional functions for statistical tests and models. Installation is available through CRAN or GitHub, with resources including documentation and community contributions.
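A minimal sketch of the core idea, assuming an `adsl` data frame with `ARM` and `AGE` columns:

```r
library(cards)

# Build an analysis results dataset of continuous summary statistics
# for AGE within each treatment arm; adsl is an assumed input
ard <- ard_continuous(adsl, by = ARM, variables = AGE)
```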

{cardx}

Extending the {cards} package, {cardx} facilitates the creation of Analysis Results Data (ARD) objects in R by leveraging utility functions from {cards} and statistical methods from packages such as {stats} and {emmeans}. These ARDs enable the generation of tables and visualisations for regulatory submissions, support quality control checks by storing both results and parameters, and allow for reproducible analyses through the inclusion of function inputs. Installation options include CRAN and GitHub, with examples demonstrating its use in t-tests and regression models. External statistical library dependencies are not enforced by the package, requiring explicit references in code for tools like {renv} to track them.

{chevron}

A collection of high-level functions for generating standardised outputs in clinical trials reporting, {chevron} covers a broad range of output types including tables for safety summaries, adverse events, demographics, ECG results, laboratory findings, medical history, response data, time-to-event analyses and vital signs, as well as listings and graphs such as Kaplan-Meier and mean plots. Straightforward implementation with limited parameterisation is a defining characteristic of the package. It is available on CRAN, with a development version accessible via GitHub, and those requiring greater flexibility are directed to the related {tern} package and its associated catalogue.

{clinify}

Built on the {flextable} and {officer} packages, {clinify} streamlines the creation of clinical tables, listings and figures whilst addressing challenges such as adherence to organisational reporting standards, the need for flexibility across different clients and the importance of reusable configurations. Compatibility with existing tools is a key priority, ensuring that its features do not interfere with the core functionalities of {flextable} or {officer}, whilst enabling tasks like dynamic page breaks, grouped headers and customisable formatting. Complex documents such as Word files with consistent layouts and tailored elements like footnotes and titles can be produced with reduced effort by building on these established frameworks.

{connector}

Offering a unified interface for establishing connections to various data sources, the {connector} package covers file systems and databases through a central configuration file that maintains consistent references across project scripts and facilitates switching between data sources. Functions such as `connector_fs()` for file system access and `connector_dbi()` for database connections are provided, with additional expansion packages enabling integration with specific platforms like Databricks and SharePoint. Installation is available via CRAN or GitHub, and usage involves defining a YAML configuration file to specify connection details that can then be initialised and utilised to interact with data sources. Operations including reading, writing and listing content are supported, with methods for managing connections and handling data in formats like parquet.
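A hypothetical configuration sketch follows; the exact key names may differ from the package's documented YAML schema, so treat it as an illustration of the idea rather than a working file.

```yaml
# _connector.yml (hypothetical sketch; key names are assumptions)
datasources:
  - name: "adam"
    backend:
      type: "connector_fs"
      path: "data/adam"
```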

{covtracer}

Linking test traces to package code and documentation using coverage data from {covr}, the {covtracer} package enables the creation of a traceability matrix that maps tests to specific documented functions. Installation is via remotes from GitHub with specific dependencies, and configuration of {covr} is required to record tests alongside coverage traces. Untested behaviours can be identified and the direct testing of functions assessed, providing insights into test coverage and software validation. The example workflow demonstrates generating a matrix to show which tests evaluate code related to documented behaviours, highlighting gaps in test coverage.

{datacutr}

An open-source solution for applying data cuts to SDTM datasets within R, the {datacutr} package is designed to support pharmaceutical data analysis workflows. Available via CRAN or GitHub, it offers options for different types of cuts tailored to specific SDTM domains. Supplemental qualifiers are assumed to be merged with their parent domain before processing, allowing users flexibility in defining cut types such as patient, date, or domain-specific cuts. Documentation, contribution guidelines and community support through platforms like Slack and GitHub provide further assistance.

{datasetjson}

Facilitating the creation and manipulation of CDISC Dataset JSON formatted datasets, the {datasetjson} R package enables users to generate structured data files by applying metadata attributes to data frames. Metadata such as file paths, study identifiers and system details can be incorporated into dataset objects and written to disk or returned as JSON text. Reading JSON files back into data frames is also supported, with metadata preserved as attributes for use in analysis. The package currently supports version 1.1.0 of the Dataset JSON standard and is available via CRAN or GitHub.
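A brief sketch of the read path, with the file name as a placeholder:

```r
library(datasetjson)

# Read a Dataset JSON file back into a data frame; CDISC metadata
# is carried along as attributes on the result
adsl <- read_dataset_json("adsl.json")
attributes(adsl)
```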

{dataviewR}

An interactive data viewer for R, {dataviewR} enhances data exploration through a Shiny-based interface that enables users to examine data frames and tibbles with tools for filtering, column selection and generating reproducible {dplyr} code. Viewing multiple datasets simultaneously is supported, and the tool provides metadata insights alongside features for importing and exporting data, all within a responsive and user-friendly design. By combining intuitive navigation with automated code generation, the package aims to streamline data analysis workflows and improve the efficiency of dataset manipulation and documentation.

{docorator}

Generating formatted documents by adding headers, footers and page numbers to displays such as tables and figures, {docorator} exports outputs as PDF or RTF files. Accepted inputs include tables created with the {gt} package, figures generated using {ggplot2}, or paths to existing PNG files, and users can customise document elements like titles and footers. The package can be installed from CRAN or via GitHub, and its use involves creating a display object with specified formatting options before rendering the output. LaTeX libraries are required for PDF generation.

{envsetup}

Providing a configuration system for managing R project environments, the {envsetup} package enables adaptation to different deployment stages such as development, testing and production without altering code. YAML files are used to define paths for data and output directories, and R scripts are automatically sourced from specified locations to reduce the need for manual configuration changes. This approach supports consistent code usage across environments whilst allowing flexibility in environment-specific settings, streamlining workflows for projects requiring multiple deployment contexts.

{ggsurvfit}

Simplifying the creation of survival analysis visualisations using {ggplot2}, the {ggsurvfit} package offers tools to generate publication-ready figures with features such as confidence intervals, risk tables and quantile markers. Seamless integration with {ggplot2} functions allows for extensive customisation of plot elements whilst maintaining alignment between graphical components and annotations. Competing risks analysis is supported through `ggcuminc()`, and specific functions such as `Surv_CNSR()` handle CDISC ADaM `ADTTE` data by adjusting event coding conventions to prevent errors. Installation options are available via CRAN or GitHub, with examples and further resources accessible through its documentation and community links.
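A typical usage pattern, here applied to the `lung` dataset from the {survival} package:

```r
library(ggsurvfit)
library(survival)

# Kaplan-Meier estimate by sex, with a confidence band and a
# risk table rendered beneath the plot
survfit2(Surv(time, status) ~ sex, data = lung) |>
  ggsurvfit() +
  add_confidence_interval() +
  add_risktable()
```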

{gridify}

Addressing challenges in creating consistent and customisable graphical arrangements for figures and tables, the {gridify} package leverages the base {grid} package to facilitate the addition of headers, footers, captions and other contextual elements through predefined or custom layouts. Multiple input types are supported, including {ggplot2}, {flextable} and base R plots, and the workflow involves generating an object, selecting a layout and using functions to populate text elements before rendering the final output. Installation options include CRAN and GitHub, with examples demonstrating its application in enhancing tables with metadata and formatting. Uniformity across different projects is promoted, reducing manual adjustments and aligning visual elements consistently.

{gtsummary}

Offering a streamlined approach to generating publication-quality analytical and summary tables in R, the {gtsummary} package enables users to summarise datasets, regression models and other statistical outputs with minimal code. Variable types are identified automatically, relevant descriptive statistics computed and measures of data incompleteness included, whilst customisation of table formatting such as adjusting labels, adding p-values or merging tables for comparative analysis is also supported. Integration with packages like {broom} and {gt} facilitates the creation of visually appealing tables, and results can be exported to multiple formats including HTML, Word and LaTeX, making the package suitable for reproducible reporting in academic and professional contexts.
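The minimal-code claim is easy to see with the package's bundled `trial` dataset:

```r
library(gtsummary)

# Summarise selected variables by treatment arm, then add p-values
# for between-group comparisons
trial |>
  tbl_summary(by = trt, include = c(age, grade)) |>
  add_p()
```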

{logrx}

Supporting logging in clinical programming environments, the {logrx} package generates detailed logs for R scripts, ensuring code execution is traceable and reproducible. An overview of script execution and the associated environment is provided, enabling users to recreate conditions for verification or further analysis. Available on CRAN, installation is possible via standard methods or from its development repository, offering flexibility for both file-based and scripted usage. Structured logging tailored to the specific requirements of clinical applications is the defining characteristic of the package, with simplicity and minimal intrusion in coding workflows maintained throughout.
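In scripted use, the central function is `axecute()`; the script path below is a placeholder.

```r
library(logrx)

# Execute a script and write a log file alongside it, capturing
# session information, package versions, warnings and errors
axecute("programs/adsl.R")
```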

{metacore}

Providing a standardised framework for managing metadata within R sessions, the {metacore} package is particularly suited to clinical trial data analysis. Metadata is organised into six interconnected tables covering dataset specifications, variable details, value definitions, derivations, code lists and supplemental information, ensuring consistency and ease of access. By centralising metadata in a structured, immutable format, the package facilitates the development of tools that can leverage this information across different workflows, reducing the need for redundant data structures. Reading metadata from various sources, including Define-XML 2.0, is also supported.
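A one-line sketch of building the metadata object from a specification spreadsheet; the file path, and its conformance to the expected layout, are assumptions.

```r
library(metacore)

# Read a metadata specification into a metacore object for use
# by downstream tools such as {metatools}
metacore_obj <- spec_to_metacore("specs/adsl_spec.xlsx")
```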

{metatools}

Working with {metacore} objects, {metatools} enables users to build datasets, enhance columns in existing datasets and validate data against metadata specifications. Installation is available from CRAN or via GitHub. Core functionality includes pulling columns from existing datasets, creating new categorical variables, converting columns to factors and running checks to verify that data conforms to control terminology and that all expected variables are present.

{pharmaRTF}

Developed to address gaps in RTF output capabilities within R, {pharmaRTF} is a package for pharmaceutical industry programmers who produce RTF documents for clinical trial data analysis. Whilst the {huxtable} package offers extensive RTF styling and formatting options, it lacks the ability to set document properties such as page size and orientation, repeat column headers across pages, or create multi-level titles and footnotes within document headers and footers. These limitations are resolved by {pharmaRTF}, which wraps around {huxtable} tables to provide document property controls, proper multipage display and title and footnote management within headers and footers. Two core objects form the basis of the package: `rtf_doc` for document-wide attributes and `hf_line` for creating individual title and footnote lines, each carrying formatting properties such as alignment, font and bold or italic styling. Default output files use Courier New at 12-point size, Letter page dimensions in landscape orientation with one-inch margins, though all of these can be adjusted through property functions. The package is available on CRAN and supports both a {tidyverse} piping style and a more traditional assignment-based coding approach.
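A hedged sketch of the two-object workflow; the table content, title text and output path are placeholders.

```r
library(huxtable)
library(pharmaRTF)

# Wrap a huxtable in an rtf_doc, attach a title line to the document
# header, then write the RTF file
ht <- as_hux(head(mtcars))
doc <- rtf_doc(ht, titles = list(hf_line("Table 1: Example Output")))
write_rtf(doc, file = "example.rtf")
```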

{pharmaverseadam}

Serving as a repository for ADaM test datasets generated by executing templates from related packages such as {admiral} and its extensions, the {pharmaverseadam} package automates dataset creation through a script that installs required packages, runs templates and saves results. Metadata is managed centrally in an XLSX file to ensure consistency in documentation, and updates occur regularly or ad-hoc when templates change. Documentation is generated automatically from metadata and saved as `.R` files, and the package includes contributions from multiple developers with examples provided for each dataset. Preparing metadata, updating configuration files for new therapeutic areas and executing the generation script together keep the datasets and documentation aligned with the latest versions of dependent packages. Installation is available via CRAN or GitHub.

{pharmaverseraw}

Providing raw datasets to support the creation of SDTM datasets, the {pharmaverseraw} package includes examples that are independent of specific electronic data capture systems or data standards such as CDASH. Datasets are named using SDTM domain identifiers with the suffix `_raw`, and installation options include CRAN or direct GitHub access. Updates involve contributing via GitHub issues, generating new or modified datasets through standalone R scripts stored in the `data-raw` folder, and ensuring generated files are saved in the `data` folder as `.rda` files with consistent naming. Documentation is maintained in `R/*.R` files, and changes require updating `NAMESPACE` and `.Rd` files using `devtools::document()`.

{pharmaversesdtm}

A collection of test datasets formatted according to the SDTM standard, the {pharmaversesdtm} package is designed for use within the pharmaverse family of packages. Datasets applicable across therapeutic areas, such as `DM` and `VS`, are included alongside those specific to particular areas, like `RS` and `OE`. Available via CRAN and GitHub, the package provides installation instructions for both stable and development versions, with test data sourced from the CDISC pilot project and ad-hoc datasets generated by the {admiral} team. Naming conventions distinguish between general and therapeutic area-specific categories, with examples such as `dm` for general use and `rs_onco` for oncology-specific data. Updates involve creating or modifying R scripts in the `data-raw` folder, generating `.rda` files and updating metadata in a central JSON file to automate documentation and maintain consistency, including specifying dataset details like labels, descriptions and therapeutic areas.

{pkglite} (R)

Converting R package source code into text files and reconstructing package structures from those files, {pkglite} enables the exchange and management of R packages as plain text. Single or multiple packages can be processed through functions that collate, pack and unpack files, with installation options available via CRAN or GitHub. The tool adheres to a defined format for text files and includes documentation for generating specifications and managing file collections.

{pkglite} (Python)

An open-source framework licensed under the MIT licence, {pkglite} for Python allows source projects written in any programming language to be packed into portable files and restored to their original directory structure. Installation is available via PyPI or as a development version cloned from GitHub, and the package can also be run without installation using `uvx`. A command line interface is provided in addition to the Python API, and the CLI can be installed globally using `pipx`.

{rhino}

Streamlining the development of high-quality, enterprise-grade Shiny applications, {rhino} integrates software engineering best practices, modular code structures and robust testing frameworks. Scalable architecture is supported through modularisation, code quality is enhanced with unit and end-to-end testing, and automation is facilitated via tools for project setup, continuous integration and dependency management. Comprehensive documentation is divided into tutorials, explanations and guides, with examples and resources available for learning.

{risk.assessr}

Evaluating the reliability and security of R packages during validation, the {risk.assessr} package analyses maintenance, documentation and dependencies through metrics such as R CMD check results, unit test coverage and dependency assessments. A traceability matrix linking functions to tests is generated, and risk profiles are based on predefined thresholds including documentation completeness, licence type and code coverage. The tool supports installation from GitHub or CRAN, processes local package files or `renv.lock` dependencies and offers detailed outputs such as risk analysis, dependency lists and reverse dependency information. Advanced features include identifying potential issues in suggested package dependencies and generating HTML reports for risk evaluation, with applications in clinical trial workflows and package validation processes.

{riskassessment}

Built on the {riskmetric} framework, the {riskassessment} application offers a user-friendly interface for evaluating the risk of using R packages within regulated industries, assessing development practices, documentation and sustainability. Non-technical users can review {riskmetric} outputs, add personalised comments, categorise packages into risk levels, generate reports and store assessments securely, with features such as user authentication and role-based access. Alignment with validation principles outlined by the R Validation Hub supports decision-making in regulated settings, though deeper software inspection may be required in some cases. Deployment is possible using tools like Shiny Server or Posit Connect, with installation options including GitHub and local configuration via {renv}.

{riskmetric}

Providing a framework for evaluating the quality of R packages, the {riskmetric} package assesses development practices, documentation, community engagement and sustainability through a series of metrics. Currently operating in a maintenance-only phase, further development is focused on a new tool called {val.metre}. The workflow involves retrieving package information, assessing it against predefined criteria and generating a risk score, with installation available from CRAN or GitHub. An associated application, {riskassessment}, offers a user interface for organisations to review and manage package risk assessments, store metrics and apply organisational rules.

{rlistings}

Designed to create and display formatted listings with a focus on ASCII rendering for tables and regulatory-ready outputs, the {rlistings} R package relies on the {formatters} package for formatting infrastructure. Requirements such as flexible pagination, multiple output formats and repeated key columns informed its development. Available on CRAN and GitHub, the package is under active development and includes features such as adjustable column widths, alignment and support for titles and footnotes.

{rtables}

Tailored for generating submission-ready tables for health authority review, the {rtables} R package creates and displays complex tables with advanced formatting and output options that support regulatory requirements for clinical trial data presentation. Separation of data values from their visualisation is enabled, multiple values can be included within cells, and flexible tabulation and formatting capabilities are provided, including cell spans, rounding and alignment. Output formats include HTML, ASCII, LaTeX, PDF and PowerPoint, with additional formats under development. The package also incorporates features such as pagination, a distinction between data names and labels for CDISC standards, and support for titles and footnotes. Installation is available via CRAN or GitHub, with ongoing community support and training resources.
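The layout-based approach can be sketched as follows, assuming the `ex_adsl` example dataset is available through the package's {formatters} dependency:

```r
library(rtables)

# Build a column layout split by treatment arm, analyse AGE within
# each arm, then tabulate against the example ADSL data
tbl <- basic_table() |>
  split_cols_by("ARM") |>
  analyze("AGE") |>
  build_table(ex_adsl)
tbl
```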

{rtflite}

A lightweight Python library focused on precise formatting of production-quality tables and figures, {rtflite} is designed for composing RTF documents. Installation is available via PyPI or directly from its GitHub repository, with optional dependencies available to enable DOCX assembly support and RTF-to-PDF or RTF-to-DOCX conversion via LibreOffice.

{sdtm.oak}

Offering a modular, open-source solution for generating CDISC SDTM datasets, the {sdtm.oak} R package is designed to work across different electronic data capture systems and data standards. Industry challenges related to inconsistent raw data structures and varying data collection practices are addressed through reusable algorithms that map raw datasets to SDTM domains, with current capabilities covering Findings, Events and Intervention classes. Future developments aim to expand domain support, introduce metadata-driven code generation and enhance automation potential, though sponsor-specific metadata management tasks are not yet handled by the package. Available on CRAN and GitHub, development is ongoing with refinements based on user feedback and evolving SDTM requirements.

{sdtmchecks}

Providing functions to detect common data issues in SDTM datasets, the {sdtmchecks} package is designed to be broadly applicable and useful for analysis. Installation is available from CRAN or via GitHub, with development versions accessible through specific repositories, and users are not required to specify SDTM versions. A range of data check functions stored as R scripts is included, and contributions are encouraged that maintain flexibility across different data standards.

{siera}

Facilitating the generation of Analysis Results Datasets (ARDs) by processing Analysis Results Standard (ARS) metadata, the {siera} package works with parameters such as analysis sets, groupings, data subsets and methods. Metadata is typically provided in JSON format and used to automatically create R scripts that, when executed with corresponding ADaM datasets, produce ARDs in a structured format. The package can be installed from CRAN or GitHub, and its primary function, `readARS`, requires an ARS file, an output directory and access to relevant ADaM data. The CDISC Analysis Results Standard underpins this process, promoting automation and consistency in analysis outcomes.

{teal}

An open-source, Shiny-based interactive framework for exploratory data analysis, {teal} is developed as part of the pharmaverse ecosystem and maintained by F. Hoffmann-La Roche AG alongside a broad community of contributors. Analytical applications are built by combining supported data types, including CDISC clinical trial data, independent or relational datasets and `MultiAssayExperiment` objects, with modular analytical components known as teal modules. These modules can be drawn from dedicated packages covering general data exploration, clinical reporting and multi-omics analysis and define the specific analyses presented within an application. A suite of companion packages handles logging, reproducibility, data loading, filtering, reporting and transformation. The package is available on CRAN and is under active development, with community support provided through the {pharmaverse} Slack workspace.

{tern}

Supporting clinical trial reporting through a broad range of analysis functions, the {tern} R package offers data visualisation capabilities including line plots, Kaplan-Meier plots, forest plots, waterfall plots and Bland-Altman plots. Statistical model fit summaries for logistic and Cox regression are also provided, along with numerous analysis and summary table functions. Many of these outputs can be integrated into interactive {teal} Shiny applications via the {teal.modules.clinical} package.

{tfrmt}

Offering a structured approach to defining and applying formatting rules for data displays in clinical trials, the {tfrmt} package streamlines the creation of mock displays, aligns with industry-standard Analysis Results Data (ARD) formats and integrates formatting tasks into the programming workflow to reduce manual effort and rework. Metadata is leveraged to automate styling and layout, enabling standardised formatting with minimal code, supporting quality control before final output and facilitating the reuse of datasets across different table types. Built on the {gt} package, the tool provides a flexible interface for generating tables and mock-ups, allowing users to focus on data interpretation rather than repetitive formatting tasks.

{tfrmtbuilder}

A tool for defining display-related metadata to streamline the creation and modification of table formats, the {tfrmtbuilder} package supports workflows such as generating tables from scratch, using templates or editing existing ones. Features include a toggle to switch between mock and real data, options to load or create datasets, tools for mapping and formatting data and the ability to export results as JSON, HTML or PNG. Designed for use in study planning and analysis phases, the package allows users to manage table structures efficiently.

{tidyCDISC}

An open-source R Shiny application, {tidyCDISC} is designed to help clinical personnel explore and analyse ADaM-standard data sets without writing any code. Customised clinical tables can be generated through a point-and-click interface, trends across patient populations examined using dynamic figures and individual patient profiles explored in detail. A broad range of users is served, from clinical heads with no programming background to statisticians and statistical programmers, with reported time savings of around 95% for routine trial analysis tasks. The app accepts only `sas7bdat` files conforming to CDISC ADaM standards and includes a feature to export reproducible R scripts from its table generator. A demo version is available without installation using CDISC pilot data, whilst uploading study data requires installing the package from CRAN or via GitHub.

{tidytlg}

Facilitating the creation of tables, listings and graphs using the {tidyverse} framework, the {tidytlg} package offers two approaches: a functional method involving custom scripts for each output and a metadata-driven method that leverages column and table metadata to generate results automatically. Tools for data analysis, including frequency tables and univariate statistics, are included alongside support for exporting outputs to formatted documents.

{Tplyr}

Simplifying the creation of clinical data summaries by breaking down complex tables into reusable layers, {Tplyr} allows users to focus on presentation rather than repetitive data processing. The conceptual approach of {dplyr} is mirrored but applied to common clinical table types, such as counting event-based variables, generating descriptive statistics for continuous data and categorising numerical ranges. Metadata is included with each summary produced to ensure traceability from raw data to final output, and user-acceptance testing documentation is provided to support its use in regulated environments. Installation options are available via CRAN or GitHub, accompanied by detailed vignettes covering features like layer templates, metadata extension and styled table outputs.
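The layered approach looks like this in practice, assuming an `adsl` data frame with `TRT01P` and `SEX` columns:

```r
library(Tplyr)

# Define a table by treatment arm, add a count layer for SEX,
# then build the summary data frame
t <- tplyr_table(adsl, TRT01P) |>
  add_layer(group_count(SEX)) |>
  build()
```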

{valtools}

Streamlining the validation of R packages used in clinical research and drug development, {valtools} offers templates and functions to support tasks such as setting up validation frameworks, managing requirements and test cases and generating reports. Developed by the R Package Validation Framework PHUSE Working Group, the package integrates with standard development tools and provides functions prefixed with `vt` to facilitate structured validation processes including infrastructure setup, documentation creation and automated checks. Generating validation reports, scraping metadata from validation configurations and executing validation workflows through temporary installations or existing packages are all supported.

{whirl}

Facilitating the execution of scripts in batch mode whilst generating detailed logs that meet regulatory requirements, the {whirl} package produces logs including script status, execution timestamps, environment details, package versions and environmental variables, presented in a structured HTML format. Individual or multiple scripts can be run simultaneously, with parallel processing enabled through specified worker counts. A configuration file allows scripts to be executed in sequential steps, ensuring dependencies are respected, and the package produces individual logs for each script alongside a summary log and a tibble summarising execution outcomes. Installation options include CRAN and GitHub, with documentation available for customisation and advanced usage.

{xportr}

Assisting clinical programmers in preparing CDISC compliant XPT files for clinical data sets, the {xportr} package associates metadata with R data frames, performs validation checks and converts data into transportable SAS v5 XPT format. Tools are included to define variable types, set appropriate lengths, apply labels, format data, reorder variables and assign dataset labels, ensuring adherence to standards such as variable naming conventions, character length limits and the absence of non-ASCII characters. A practical example demonstrates how to use a specification file to apply these transformations to an ADSL dataset, ultimately generating a compliant XPT file.
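A condensed sketch of that pipeline; `adsl` and the `var_spec` specification data frame are assumed inputs, and the output path is a placeholder.

```r
library(xportr)

# Apply specification metadata (types, lengths, labels) to an ADaM
# dataset, then write a SAS v5 transport file
adsl |>
  xportr_type(var_spec) |>
  xportr_length(var_spec) |>
  xportr_label(var_spec) |>
  xportr_write("adsl.xpt")
```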
