Technology Tales

Notes drawn from experiences in consumer and enterprise technology

From summary statistics to published reports with R, LaTeX and TinyTeX

19th March 2026

For anyone working across LaTeX, R Markdown and data analysis in R, there comes a point where separate tools begin to converge. Data has to be summarised, those summaries have to be turned into presentable tables and the finished result has to compile into a report that looks appropriate for its audience rather than a console dump. These notes follow that sequence, moving from the practical business of summarising data in R through to tabulation and then on to the publishing infrastructure that makes clean PDF and Word output possible.

Summarising Data with {dplyr}

The starting point for many analyses is a quick exploration of the data at hand. One useful example uses the anorexia dataset from the {MASS} package together with {dplyr}. The dataset contains weight change data for young female anorexia patients, divided into three treatment groups: Cont for the control group, CBT for cognitive behavioural treatment and FT for family treatment.

The basic manipulation starts by loading {MASS} and {dplyr}, then using filter() to create separate subsets for each treatment group. From there, mutate() adds a wtDelta column defined as Postwt - Prewt, giving the weight change for each patient. group_by(Treat) prepares the data for grouped summaries, and arrange(wtDelta) sorts within treatment groups. The notes then show how {dplyr}'s pipe operator, %>%, makes the workflow more readable by chaining these operations. The final summary table uses summarize() to compute the number of observations, the mean weight change and the standard deviation within each treatment group, with the following reported values:

Treat   Count   Mean wtDelta   SD
CBT     29       3.006897      7.308504
Cont    26      -0.450000      7.988705
FT      17       7.264706      7.157421
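
The chained workflow described above can be sketched in a few lines. This is a minimal illustration using the anorexia dataset, not the original notes' exact code.

```r
# Minimal sketch of the grouped summary pipeline
library(MASS)    # provides the anorexia dataset
library(dplyr)

tab <- anorexia %>%
  mutate(wtDelta = Postwt - Prewt) %>%   # weight change per patient
  group_by(Treat) %>%                    # CBT, Cont and FT groups
  summarize(n = n(),
            mean_wtDelta = mean(wtDelta),
            sd_wtDelta = sd(wtDelta))
tab
```

Inserting arrange(wtDelta, .by_group = TRUE) before the summarize() step reproduces the within-group sorting mentioned above.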

That example is not presented as a complete statistical analysis. Instead, it serves as a quick exploratory route into the data, with the wording remaining appropriately cautious and noting that this is only a glance and not a rigorous analysis.

Choosing an R Package for Descriptive Summaries

The question of how best to summarise data opens up a broader comparison of R packages for descriptive statistics. A useful review sets out a common set of needs: a count of observations, the number and types of fields, transparent handling of missing data and sensible statistics that depend on the data type. Numeric variables call for measures such as mean, median, range and standard deviation, perhaps with percentiles. Categorical variables call for counts of levels and some sense of which categories dominate.

Base R's summary() does some of this reasonably well. It distinguishes categorical from numeric variables and reports distributions or numeric summaries accordingly, while also highlighting missing values. Yet, it does not show an overall record count, lacks standard deviation and is not especially tidy or ready for tools such as kable. Several contributed packages aim to improve on that. Hmisc::describe() gives counts of variables and observations, handles both categorical and numerical data and reports missing values clearly, showing the highest and lowest five values for numeric data instead of a simple range. pastecs::stat.desc() is more focused on numeric variables and provides confidence intervals, standard errors and optional normality tests. psych::describe() includes categorical variables but converts them to numeric codes by default before describing them, which the package documentation itself advises should be interpreted cautiously. psych::describeBy() extends this approach to grouped summaries and can return a matrix form with mat = TRUE.
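
The functions compared above can be tried side by side on the same data frame. This sketch uses iris purely as a convenient built-in example and assumes {Hmisc}, {pastecs} and {psych} are installed.

```r
summary(iris)                                  # base R: type-aware summaries
Hmisc::describe(iris)                          # counts, missings, extreme values
pastecs::stat.desc(iris, norm = TRUE)          # SEs, CIs, optional normality tests
psych::describe(iris)                          # factors converted to numeric codes
psych::describeBy(iris, group = iris$Species,
                  mat = TRUE)                  # grouped summaries in matrix form
```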

Among the packages reviewed, {skimr} receives especially strong attention for balancing readability and downstream usefulness. skim() reports record and variable counts clearly, separates variables by type and includes missing data and standard summaries in an accessible layout. It also works with group_by() from {dplyr}, making grouped summaries straightforward to produce. More importantly for analytical workflows, the skim output can be treated as a tidy data frame in which each combination of variable and statistic is represented in long form, meaning the results can be filtered, transformed and plotted with standard tidyverse tools such as {ggplot2}.
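
The long-form behaviour described above is easy to demonstrate. In this illustrative sketch, the grouped skim() result is filtered and selected like any other data frame.

```r
# Grouped skim() output handled as tidy data
library(dplyr)
library(skimr)

skimmed <- iris %>%
  group_by(Species) %>%
  skim()

# One row per variable per group, so standard verbs apply
skimmed %>%
  dplyr::filter(skim_variable == "Sepal.Length") %>%
  dplyr::select(Species, numeric.mean, numeric.sd)
```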

{summarytools} is presented as another strong option, though with a distinction between its functions. descr() handles numeric variables and can be converted to a data frame for use with kable, while dfSummary() works across entire data frames and produces an especially polished summary. At the time of the original notes, dfSummary() was considered slow. As documented in the same review, the package author subsequently traced the issue to an excessive number of histogram breaks being generated for variables with large values, and imposed a limit to resolve it. The package also supports output through view(dfSummary(data)), which yields an attractive HTML-style summary.
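
Both functions can be called directly on a data frame; this is an illustrative sketch rather than the review's own code.

```r
library(summarytools)

d <- descr(iris)   # numeric variables only; Species is skipped with a warning
d                  # stats as rows, variables as columns
dfSummary(iris)    # whole-data-frame summary with distributions and missings
# view(dfSummary(iris)) renders the HTML-style version in the viewer
```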

Grouped Summary Table Packages

Once the data has been summarised, the next step is turning those summaries into formal tables. A detailed comparison covers a number of packages specifically designed for this purpose: {arsenal}, {qwraps2}, {Amisc}, {table1}, {tangram}, {furniture}, {tableone}, {compareGroups} and {Gmisc}. {arsenal} is described as highly functional and flexible, with tableby() able to create grouped tables in only a few lines and then be customised through control objects that specify tests, display statistics, labels and missing value treatment. {qwraps2} offers a lot of flexibility through nested lists of summary specifications, though at the cost of more code. {Amisc} can produce grouped tables and works with pander::pandoc.table(), but is noted as not being on CRAN. {table1} creates attractive tables with minimal code, though its treatment of missing values may not suit every use case. {tangram} produces visually appealing HTML output and allows custom rows such as missing counts to be inserted manually, although only HTML output is supported. {furniture} and {tableone} both support grouped table creation, but {tableone} in particular is notable because it is widely used in biomedical research for baseline characteristics tables.
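
The few-lines claim for {arsenal} is easy to illustrate. Here is a minimal sketch using the anorexia data again; the variable choice is purely illustrative.

```r
# A grouped descriptive table with {arsenal}
library(arsenal)

tab <- tableby(Treat ~ Prewt + Postwt, data = MASS::anorexia)
summary(tab, text = TRUE)   # plain-text rendering; drop text = TRUE in R Markdown
```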

The {tableone} package deserves separate mention because it is designed to summarise continuous and categorical variables in one table, a common need in medical papers. As the package introduction explains, CreateTableOne() can be used on an entire dataset or on a selected subset of variables, with factorVars specifying variables that are coded numerically but should be treated as categorical. The package can display all levels for categorical variables, report missing values via summary() and switch selected continuous variables to non-normal summaries using medians and interquartile ranges instead of means and standard deviations. For grouped comparisons, it prints p-values by default and can switch to non-parametric tests or Fisher's exact test where needed. Standardised mean differences can also be shown. Output can be captured as a matrix and written to CSV for editing in Excel or Word.
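
The options described above map directly onto function arguments. This sketch applies CreateTableOne() to the anorexia data; the choice of non-normal variable is illustrative only.

```r
# Sketch of a {tableone} baseline characteristics table
library(tableone)

tab1 <- CreateTableOne(vars = c("Prewt", "Postwt"),
                       strata = "Treat",
                       data = MASS::anorexia)

# Median/IQR for Prewt, with standardised mean differences shown
print(tab1, nonnormal = "Prewt", smd = TRUE)

# Capture as a matrix and write to CSV for editing elsewhere
m <- print(tab1, printToggle = FALSE)
write.csv(m, "table1.csv")
```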

Styling and Exporting Tables

With tables constructed, the focus shifts to how they are presented and exported. As Hao Zhu's conference slides explain, the {kableExtra} package builds on knitr::kable() and provides a grammar-like approach to adding styling layers, importing the pipe operator %>% from {magrittr} so that formatting functions can be added in the same way that layers are added in {ggplot2}. It supports themes such as kable_paper, kable_classic, kable_minimal and kable_material, as well as options for striping, hover effects, condensed layouts, fixed headers, grouped rows and columns, footnotes, scroll boxes and inline plots.
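
The layered style is best seen in a short pipeline. This is a minimal sketch; the dataset and footnote text are placeholders.

```r
# Layered table styling with {kableExtra}
library(knitr)
library(kableExtra)

out <- kable(head(mtcars[, 1:4]), format = "html") %>%
  kable_paper("hover") %>%                           # one of the built-in themes
  footnote(general = "First six rows of mtcars.")    # a styling layer
out
```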

Table output is often the visible end of an analysis, and a broader review of R table packages covers a range of approaches that go well beyond the default output. In R Markdown, packages such as {gt}, {kableExtra}, {formattable}, {DT}, {reactable}, {reactablefmtr} and {flextable} all offer richer possibilities. Some are aimed mainly at HTML output, others at Word. {DT} in particular supports highly customised interactive tables with searching, filtering and cell styling through more advanced R and HTML code. {flextable} is highlighted as the strongest option when knitting to Word, given that the other packages are primarily designed for HTML.

For users working in Word-heavy settings, older but still practical workflows remain relevant too. One approach is simply to write tables to comma-separated text files and then paste and convert the content into a Word table. Another route is through {arsenal}'s write2 functions, designed as an alternative to SAS ODS. The convenience functions write2word(), write2html() and write2pdf() accept a wide range of objects: tableby, modelsum, freqlist and comparedf from {arsenal} itself, as well as knitr::kable(), xtable::xtable() and pander::pander_return() output. One notable constraint is that {xtable} is incompatible with write2word(). Beyond single tables, the functions accept a list of objects so that multiple tables, headers, paragraphs and even raw HTML or LaTeX can all be combined into a single output document. A yaml() helper adds a YAML header to the output, and a code.chunk() helper embeds executable R code chunks, while the generic write2() function handles formats beyond the three convenience wrappers, such as RTF.
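
The list-of-objects behaviour can be sketched briefly. This example assumes pandoc is available for rendering, and the header and paragraph text are invented placeholders.

```r
# Combining a header, a paragraph and a table into one Word document
library(arsenal)

tab <- tableby(Treat ~ Prewt + Postwt, data = MASS::anorexia)

write2word(
  list(
    "# Anorexia trial",             # a Markdown header
    "Weights by treatment group.",  # a paragraph of text
    tab                             # the table itself
  ),
  "anorexia-report.doc"
)
```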

The Publishing Infrastructure: CTAN and Its Mirrors

Producing PDF output from R Markdown depends on a working LaTeX installation, and the backbone of that ecosystem is CTAN, the Comprehensive TeX Archive Network. CTAN is the main archive for TeX and LaTeX packages and is supported by a large collection of mirrors spread around the world. The purpose of this distributed system is straightforward: users are encouraged to fetch files from a site that is close to them in network terms, which reduces load and tends to improve speed.

That global spread is extensive. The CTAN mirror list organises sites alphabetically by continent and then by country, with active sites listed across Africa, Asia, Europe, North America, Oceania and South America. Africa includes mirrors in South Africa and Morocco. Asia has particularly wide coverage, with many mirrors in China as well as sites in Korea, Hong Kong, India, Indonesia, Japan, Singapore, Taiwan, Saudi Arabia and Thailand. Europe is especially rich in mirrors, with hosts in Denmark, Germany, Spain, France, Italy, the Netherlands, Norway, Poland, Portugal, Romania, Switzerland, Finland, Sweden, the United Kingdom, Austria, Greece, Bulgaria and Russia. North America includes Canada, Costa Rica and the United States, while Oceania covers Australia and South America includes Brazil and Chile.

The details matter because different mirrors expose different protocols. While many support HTTPS, some also offer HTTP, FTP or rsync. CTAN provides a mirror multiplexer to make the common case simpler: pointing a browser to https://mirrors.ctan.org/ results in automatic redirection to a mirror in or near the user's country. There is one caveat. The multiplexer always redirects to an HTTPS mirror, so anyone intending to use another protocol needs to select manually from the mirror list. That is why the full listings still include non-HTTPS URLs alongside secure ones.

There is also an operational side to the network that is easy to overlook when things are working well. CTAN monitors mirrors to ensure they are current, and if one falls behind, then mirrors.ctan.org will not redirect users there. Updates to the mirror list can be sent to ctan@ctan.org. The master host of CTAN is ftp.dante.de in Cologne, Germany, with rsync access available at rsync://rsync.dante.ctan.org/CTAN/ and web access on https://ctan.org/. For those who want to contribute infrastructure rather than simply use it, CTAN also invites volunteers to become mirrors.

TinyTeX: A Lightweight LaTeX Distribution

This infrastructure becomes much more tangible when looking at TinyTeX, a lightweight, cross-platform, portable and easy-to-maintain LaTeX distribution based on TeX Live. It is small in size but intended to function well in most situations, especially for R users. Its appeal lies in not requiring users to install thousands of packages they will never use, installing them as needed instead. This also means installation can be done without administrator privileges, which removes one of the more familiar barriers around traditional TeX setups. TinyTeX can even be run from a flash drive.

For R users, TinyTeX is closely tied to the {tinytex} R package. The distinction is important: tinytex in lower case refers to the R package, while TinyTeX refers to the LaTeX distribution. Installation is intentionally direct. After installing the R package with install.packages('tinytex'), a user can run tinytex::install_tinytex(). Uninstallation is equally simple with tinytex::uninstall_tinytex(). For the average R Markdown user, that is often enough. Once TinyTeX is in place, PDF compilation usually requires no further manual package management.

There is slightly more to know if the aim is to compile standalone LaTeX documents from R. The {tinytex} package provides wrappers such as pdflatex(), xelatex() and lualatex(). These functions detect required LaTeX packages that are missing and install them automatically by default. In practical terms, that means a small example document can be written to a file and compiled with tinytex::pdflatex('test.tex') without much concern about whether every dependency has already been installed. For R users, this largely removes the old pattern of cryptic missing-package errors followed by manual searching through TeX repositories.
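
The whole round trip fits in a few lines. This sketch assumes TinyTeX is already installed; the document content is a placeholder.

```r
# Write a minimal LaTeX file and compile it with automatic package installation
writeLines(c(
  "\\documentclass{article}",
  "\\begin{document}",
  "Hello from TinyTeX.",
  "\\end{document}"
), "test.tex")

tinytex::pdflatex("test.tex")   # installs any missing LaTeX packages, produces test.pdf
```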

Developers may want more than the basics, and TinyTeX has a path for that as well. A helper such as tinytex:::install_yihui_pkgs() installs a collection of packages needed for building the PDF vignettes of many CRAN packages. That is a specific convenience rather than a universal requirement, but it illustrates the design philosophy behind TinyTeX: keep the initial footprint light and offer ways to add what is commonly needed later.

Using TinyTeX Outside R

For users outside R, TinyTeX still works, but the focus shifts to the command-line utility tlmgr. The documentation is direct in its assumptions: if command-line work is unwelcome, another LaTeX distribution may be a better fit. The central command is tlmgr, and much of TinyTeX maintenance can be expressed through it.

On Linux, installation places TinyTeX in $HOME/.TinyTeX and creates symlinks for executables such as pdflatex under $HOME/bin or $HOME/.local/bin if it exists. The installation script is fetched with wget and piped to sh, after first checking that Perl is correctly installed. On macOS, TinyTeX lives in ~/Library/TinyTeX, and users without write permission to /usr/local/bin may need to change ownership of that directory before installation. Windows users can run a batch file, install-bin-windows.bat, and the default installation directory is %APPDATA%/TinyTeX unless APPDATA contains spaces or non-ASCII characters, in which case %ProgramData% is used instead. PowerShell version 3.0 or higher is required on Windows.
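
On Unix-like systems, the installation described above amounts to a single command; the script URL below is the installer path published by the TinyTeX project.

```shell
# Linux: install TinyTeX into $HOME/.TinyTeX (requires Perl and wget)
wget -qO- "https://yihui.org/tinytex/install-bin-unix.sh" | sh

# macOS equivalent, installing into ~/Library/TinyTeX
curl -sL "https://yihui.org/tinytex/install-bin-unix.sh" | sh
```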

Uninstallation follows the same self-contained logic. On Linux and macOS, tlmgr path remove is followed by deleting the TinyTeX folder. On Windows, tlmgr path remove is followed by removing the installation directory. This simplicity is a deliberate contrast with larger LaTeX distributions, which are considerably more involved to remove cleanly.

Maintenance and Package Management

Maintenance is where TinyTeX's relationship to CTAN and TeX Live becomes especially visible. If a document fails with an error such as File 'times.sty' not found, the fix is to search for the package containing that file with tlmgr search --global --file "/times.sty". In the example given, that identifies the psnfss package, which can then be installed with tlmgr install psnfss. If the package includes executables, tlmgr path add may also be needed. An alternative route is to upload the error log to the yihui/latex-pass GitHub repository, where package searching is carried out remotely.

If the problem is less obvious, a full update cycle is suggested: tlmgr update --self --all, then tlmgr path add and fmtutil-sys --all. R users have wrappers for these tasks too, including tlmgr_search(), tlmgr_install() and tlmgr_update(). Some situations still require a full reinstallation. If TeX Live reports Remote repository newer than local, TinyTeX should be reinstalled manually, which for R users can be done with tinytex::reinstall_tinytex(). Similarly, when a TeX Live release is frozen in preparation for a new one, the advice is simply to wait and then reinstall when the next release is ready.
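
Put together, the troubleshooting sequence described above looks like this at the command line, using the times.sty example from the text.

```shell
# Resolve a missing-file error such as: File 'times.sty' not found
tlmgr search --global --file "/times.sty"   # identifies the psnfss package
tlmgr install psnfss
tlmgr path add                              # only if the package ships executables

# Fuller update cycle when the cause is less obvious
tlmgr update --self --all
tlmgr path add
fmtutil-sys --all
```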

The motivation behind TinyTeX is laid out with unusual clarity. Traditional LaTeX distributions often present a choice between a small basic installation that soon proves incomplete and a very large full installation containing thousands of packages that will never be used. TinyTeX is framed as a way around those frustrations by building on TeX Live's portability and cross-platform design while stripping away unnecessary size and complexity. The acknowledgements also underline that TinyTeX depends on the work of the TeX Live team.

Connecting the R Workflow to a Finished Report

Taken together, these notes show how closely summarisation, tabulation and publishing are linked. {dplyr} and related tools make it easy to summarise data quickly, while a wide range of R packages then turn those summaries into tables that are not only statistically useful but also presentable. CTAN and its mirrors keep the TeX ecosystem available and current across the world, and TinyTeX builds on that ecosystem to make LaTeX more manageable, especially for R users. What begins with a grouped summary in the console can end with a polished report table in HTML, PDF or Word, and understanding the chain between those stages makes the whole workflow feel considerably less mysterious.

The Fediverse: A decentralised alternative to centralised social media

27th February 2026

The Fediverse is not a single platform but a network of interconnected services, each operating independently yet communicating through shared open standards. Rather than centralising power in one company or product, it distributes control across thousands of independently run servers, known as instances, that nonetheless talk to one another through a common language. That language has a longer history than most users realise.

Those with long memories of the federated web may recall Identica, one of the earliest federated microblogging services, which ran on the OStatus protocol. In December 2012, Identica transitioned to new underlying software called pump.io, which took a different architectural approach: rather than relying on OStatus, it used JSON-LD and a REST-based inbox system designed to handle general activity streams rather than simple status updates. Pump.io itself was eventually discontinued, but it was not a dead end. Its data model and design decisions fed directly into the development of what became ActivityPub, the protocol that now underpins the modern Fediverse.

ActivityPub became a W3C Recommendation in January 2018, formalising an approach to federated social networking that Identica and pump.io had helped to pioneer. Through this standard, users on different platforms can follow, reply to and interact with one another across server and software boundaries, in much the same way that email allows a Gmail user to correspond with someone on Outlook.

Microblogging at the Core

At the heart of the Fediverse is a cluster of microblogging platforms, each with its own character and community. Mastodon, the most widely used, mirrors much of what Twitter once offered but with a firm emphasis on community governance and decentralised ownership. Its 500-character limit and the absence of algorithmic ranking set it apart from the mainstream.

Misskey, which enjoys particular popularity in Japan, introduces custom emoji reactions and extensive rich-text formatting, appealing to users who want greater expressiveness than Mastodon provides. Pleroma offers a lightweight alternative with a default character limit of 5,000, making it more suitable for longer posts, while Akkoma (a fork of Pleroma) adds features such as a bubble timeline, local-only posting and improved moderation tooling. Both are well regarded among technically minded administrators who want to run their own servers without the resource demands that Mastodon can place on smaller machines.

Beyond Microblogging

The Fediverse extends well beyond short-form text. PeerTube provides a decentralised video-hosting platform comparable in purpose to YouTube, using peer-to-peer technology so that popular videos gain additional bandwidth as viewership grows. Pixelfed fulfils a similar role for photo sharing, operating as an open and federated counterpart to Instagram, with a focus on privacy and user control.

For forum-style discussion, Lemmy takes the role of a decentralised Reddit, built around threaded community posts, voting and link aggregation. Event coordination is handled by Mobilizon, which provides a federated alternative to Facebook Events and allows communities to publish, share and manage gatherings without relying on any proprietary platform.

Audio is covered by Funkwhale, a federated platform for uploading and sharing music, podcasts and other audio content. It operates through ActivityPub and functions as a community-driven alternative to services such as Spotify, Bandcamp and SoundCloud, allowing instance operators to share their libraries with one another across the network.

Each of these services runs independently on its own set of instances but remains interconnected across the wider Fediverse through ActivityPub, meaning a Mastodon user can, for instance, follow a PeerTube channel and see new video posts appear directly in their timeline.

Social Networking and Multi-Protocol Platforms

Some Fediverse platforms aim less at replicating a single mainstream service and more at providing a broad social networking experience. Friendica is perhaps the most ambitious of these, supporting not only ActivityPub but also the diaspora* and OStatus protocols, as well as RSS feed ingestion and two-way email contacts. The result is a platform that can serve as a hub for a user's entire federated social life, pulling in posts from Mastodon, Pixelfed, Lemmy and other networks into a single, unified timeline. Its Facebook-like interface, with threaded comments and no character limit, makes it a natural fit for users who found Twitter-style microblogging too constraining.

Hubzilla takes a similarly expansive approach, but pushes further still, incorporating file hosting, photo sharing, a calendar and website publishing alongside its social networking features. Its distinguishing characteristic is nomadic identity, a system by which a user's account can exist simultaneously across multiple servers and be migrated or cloned without loss of data or followers. Hubzilla federates over ActivityPub, the diaspora* protocol, OStatus and its own native Zot protocol, giving it an unusually wide reach across the federated web.

Launched in 2010, diaspora* is one of the earliest decentralised social networks. It operates through its own diaspora* protocol rather than ActivityPub, making it technically distinct from much of the rest of the Fediverse, though it can still communicate with platforms such as Friendica and Hubzilla that support both standards. Its central design principle is user ownership of data: posts are stored on the user's chosen server (called a pod) and the platform uses an Aspects system to let users control precisely which groups of contacts see any given post, offering fine-grained privacy controls that most other Fediverse platforms do not match.

Infrastructure and Discovery

Navigating the Fediverse is made easier by a range of supporting tools and directories. Fedi.Directory catalogues interesting and active accounts across the network, helping newcomers find communities aligned with their interests. Fediverse.Party offers an overview of the many software projects that make up the ecosystem, acting as a starting point for those deciding which platform or instance to join.

For bloggers who already maintain an RSS feed, tools such as Mastofeed can automatically publish new posts to a Mastodon account, bringing older publishing workflows into the federated network. Those who prefer more control over what gets posted and how it is worded may find a better fit in toot, a command-line and terminal user interface client for Mastodon written in Python. Because toot accepts piped input, it can be combined with a script or an AI model to generate a short, readable announcement for each new article, complete with a link, and post it directly to Mastodon without any manual intervention. This kind of bridging reflects the Fediverse's broader philosophy: existing content and communities should be able to participate without requiring users to abandon what already works for them.
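
As a hypothetical sketch of that pipeline, the announcement text and URL below are invented placeholders; toot accepts the post body on standard input when none is given as an argument.

```shell
# Announce a new article on Mastodon from a script (assumes toot is configured)
echo "New post: From summary statistics to published reports https://blog.example/reports" | toot post
```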

Community Governance and Its Challenges

The challenge of moderating online communities is not new. Website forums, which dominated community discussion through the late 1990s and 2000s, often became ungovernable at scale, with administrators struggling to maintain civility against a tide of bad-faith participation that no small volunteer team could reliably contain. Centralised platforms such as Twitter and Facebook presented themselves as a solution, with algorithmic moderation and corporate policy appearing to offer consistency at scale. That promise has not aged well. Discourse on those platforms has deteriorated markedly, and the tools that were supposed to manage it have proved either ineffective or applied so inconsistently as to erode trust in the platforms themselves.

The Fediverse's instance-based model sits in an instructive position relative to both of those histories. Like the old forum model, each instance is self-governing, with administrators setting their own rules and moderating their own communities. Unlike a standalone forum, however, an instance has a tool that forum administrators never possessed: the ability to defederate, cutting off contact with a badly behaved community entirely rather than having to manage it directly. The European Commission operates its own official Mastodon instance, as does the European Data Protection Supervisor, reflecting a growing interest among public institutions in this kind of platform independence and controlled self-governance.

The model is not without its own difficulties. With no central authority, ensuring consistent moderation across the network is impossible by design. Harmful content that might be removed swiftly on a centralised platform can persist on instances that choose not to act, and defederation, while effective, is a blunt instrument that severs all contact rather than addressing specific behaviour. User experience also varies considerably from one instance to the next, which can make the Fediverse feel fragmented to those accustomed to the uniformity of mainstream social media. Whether that fragmentation is a flaw or a feature depends largely on what one values more: consistency or autonomy.

A Democratic Model for the Open Web

What unifies these varied platforms, tools and governance approaches is a shared commitment to an internet where users are participants rather than products. The Fediverse offers no advertising and no algorithmic manipulation of feeds, and the open-source nature of most of its software means that anyone with the technical means can inspect, fork or improve the code. The network's future will depend on continued developer investment, user education and the willingness of new arrivals to engage with an ecosystem that is deliberately more complex than a single sign-up page.

For now, the Fediverse stands as a working demonstration that a more democratic and user-directed model of online social life is achievable. Whether through microblogging on Mastodon, sharing videos on PeerTube, discovering music on Funkwhale, coordinating events through Mobilizon or managing a rich personal social hub on Friendica, it offers something that centralised platforms structurally cannot: the ability for communities to own their own corner of the internet.
