Technology Tales

Notes drawn from experiences in consumer and enterprise technology

TOPIC: NETHERLANDS

From summary statistics to published reports with R, LaTeX and TinyTeX

19th March 2026

For anyone working across LaTeX, R Markdown and data analysis in R, there comes a point where separate tools begin to converge. Data has to be summarised, those summaries have to be turned into presentable tables and the finished result has to compile into a report that looks appropriate for its audience rather than a console dump. These notes follow that sequence, moving from the practical business of summarising data in R through to tabulation and then on to the publishing infrastructure that makes clean PDF and Word output possible.

Summarising Data with {dplyr}

The starting point for many analyses is a quick exploration of the data at hand. One useful example uses the anorexia dataset from the {MASS} package together with {dplyr}. The dataset contains weight change data for young female anorexia patients, divided into three treatment groups: Cont for the control group, CBT for cognitive behavioural treatment and FT for family treatment.

The basic manipulation starts by loading {MASS} and {dplyr}, then using filter() to create separate subsets for each treatment group. From there, mutate() adds a wtDelta column defined as Postwt - Prewt, giving the weight change for each patient, group_by(Treat) prepares the data for grouped summaries, and arrange(wtDelta) sorts within treatment groups. The notes then show how the pipe operator, %>%, which {dplyr} re-exports from {magrittr}, makes the workflow more readable by chaining these operations. The final summary table uses summarize() to compute the number of observations, the mean weight change and the standard deviation within each treatment group: CBT has 29 observations with a mean weight change of 3.006897 (SD 7.308504), Cont has 26 with a mean of -0.450000 (SD 7.988705) and FT has 17 with a mean of 7.264706 (SD 7.157421).
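The workflow just described can be sketched in a few lines. This is a minimal reconstruction rather than the original code: the column names Treat, Prewt and Postwt come from the {MASS} documentation, wtDelta is the name used in the text, and anorexiaSummary is simply an illustrative object name.

```r
# Sketch of the grouped-summary workflow, assuming {MASS} and {dplyr}
# are installed; wtDelta is the weight change for each patient.
library(MASS)
library(dplyr)

anorexiaSummary <- anorexia %>%
  mutate(wtDelta = Postwt - Prewt) %>%     # weight change per patient
  group_by(Treat) %>%
  arrange(wtDelta, .by_group = TRUE) %>%   # sort within treatment groups
  summarize(count = n(),
            meanDelta = mean(wtDelta),
            sdDelta = sd(wtDelta))

anorexiaSummary
```

Run interactively, this reproduces the three-row summary table quoted above, one row per treatment group.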

That example is not presented as a complete statistical analysis. Instead, it serves as a quick exploratory route into the data, with the wording remaining appropriately cautious and noting that this is only a glance and not a rigorous analysis.

Choosing an R Package for Descriptive Summaries

The question of how best to summarise data opens up a broader comparison of R packages for descriptive statistics. A useful review sets out a common set of needs: a count of observations, the number and types of fields, transparent handling of missing data and sensible statistics that depend on the data type. Numeric variables call for measures such as mean, median, range and standard deviation, perhaps with percentiles. Categorical variables call for counts of levels and some sense of which categories dominate.

Base R's summary() does some of this reasonably well. It distinguishes categorical from numeric variables and reports distributions or numeric summaries accordingly, while also highlighting missing values. Yet it does not show an overall record count, lacks the standard deviation and its output is not especially tidy or ready for tools such as knitr::kable(). Several contributed packages aim to improve on that. Hmisc::describe() gives counts of variables and observations, handles both categorical and numerical data and reports missing values clearly, showing the highest and lowest five values for numeric data instead of a simple range. pastecs::stat.desc() is more focused on numeric variables and provides confidence intervals, standard errors and optional normality tests. psych::describe() includes categorical variables but converts them to numeric codes by default before describing them, which the package documentation itself advises should be interpreted cautiously. psych::describeBy() extends this approach to grouped summaries and can return a matrix form with mat = TRUE.
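The functions named above can be compared side by side on the same dataset. This is a hedged sketch, assuming {Hmisc}, {pastecs} and {psych} are installed; the function names and arguments are those cited in the review, applied here to the anorexia data purely for illustration.

```r
# Comparing descriptive-summary functions on one dataset
data(anorexia, package = "MASS")

summary(anorexia)              # type-aware, but no record count or SD

# Hmisc: variable/observation counts, missing values,
# highest and lowest five values for numeric columns
Hmisc::describe(anorexia)

# pastecs: numeric-focused, with standard errors, confidence
# intervals and optional normality tests
pastecs::stat.desc(anorexia, norm = TRUE)

# psych: grouped summaries, returned in matrix form
psych::describeBy(anorexia, group = "Treat", mat = TRUE)
```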

Among the packages reviewed, {skimr} receives especially strong attention for balancing readability and downstream usefulness. skim() reports record and variable counts clearly, separates variables by type and includes missing data and standard summaries in an accessible layout. It also works with group_by() from {dplyr}, making grouped summaries straightforward to produce. More importantly for analytical workflows, the skim output can be treated as a tidy data frame in which each combination of variable and statistic is represented in long form, meaning the results can be filtered, transformed and plotted with standard tidyverse tools such as {ggplot2}.
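A short sketch shows both sides of {skimr} described above: the grouped summary and the tidy long-form output. The use of anorexia here is illustrative; skim() and its tidy output are as documented by the package.

```r
# Grouped skim, then ordinary tidyverse manipulation of the result
library(dplyr)
library(skimr)

data(anorexia, package = "MASS")

skimmed <- anorexia %>%
  group_by(Treat) %>%
  skim()                       # one row per variable per group

# The skim result behaves as a tidy data frame, so it can be
# filtered and reshaped with standard verbs
skimmed %>%
  filter(skim_variable == "Postwt") %>%
  select(Treat, numeric.mean, numeric.sd)
```

Because the result is a data frame in long form, it can be passed straight to {ggplot2} for plotting grouped statistics.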

{summarytools} is presented as another strong option, though with a distinction between its functions. descr() handles numeric variables and can be converted to a data frame for use with kable, while dfSummary() works across entire data frames and produces an especially polished summary. At the time of the original notes, dfSummary() was considered slow. The package author subsequently traced the issue, as documented in the same review, to an excessive number of histogram breaks being generated for variables with large values, and imposed a limit to resolve it. The package also supports output through view(dfSummary(data)), which yields an attractive HTML-style summary.
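In use, the two {summarytools} functions look like this; the calls are those named in the text, again applied to the anorexia data for illustration.

```r
# summarytools: numeric-only versus whole-data-frame summaries
library(summarytools)

data(anorexia, package = "MASS")

descr(anorexia)             # numeric variables only
dfSummary(anorexia)         # every column, with inline distributions
view(dfSummary(anorexia))   # rendered HTML-style summary in the viewer
```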

Grouped Summary Table Packages

Once the data has been summarised, the next step is turning those summaries into formal tables. A detailed comparison covers a number of packages specifically designed for this purpose: {arsenal}, {qwraps2}, {Amisc}, {table1}, {tangram}, {furniture}, {tableone}, {compareGroups} and {Gmisc}. {arsenal} is described as highly functional and flexible, with tableby() able to create grouped tables in only a few lines and then be customised through control objects that specify tests, display statistics, labels and missing value treatment. {qwraps2} offers a lot of flexibility through nested lists of summary specifications, though at the cost of more code. {Amisc} can produce grouped tables and works with pander::pandoc.table(), but is noted as not being on CRAN. {table1} creates attractive tables with minimal code, though its treatment of missing values may not suit every use case. {tangram} produces visually appealing HTML output and allows custom rows such as missing counts to be inserted manually, although only HTML output is supported. {furniture} and {tableone} both support grouped table creation, but {tableone} in particular is notable because it is widely used in biomedical research for baseline characteristics tables.
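The {arsenal} pattern of "a few lines plus a control object" can be sketched as follows. The mockstudy dataset ships with {arsenal}; the statistics named in the control object are standard tableby options, though the exact selection here is illustrative.

```r
# arsenal: grouped table via tableby(), customised with a control object
library(arsenal)

data(mockstudy, package = "arsenal")   # example data shipped with arsenal

ctl <- tableby.control(
  test = TRUE,                                       # include tests
  numeric.stats = c("meansd", "medianq1q3", "Nmiss") # display statistics
)

tab <- tableby(arm ~ sex + age, data = mockstudy, control = ctl)
summary(tab)   # formatted grouped table, one column per arm
```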

The {tableone} package deserves separate mention because it is designed to summarise continuous and categorical variables in one table, a common need in medical papers. As the package introduction explains, CreateTableOne() can be used on an entire dataset or on a selected subset of variables, with factorVars specifying variables that are coded numerically but should be treated as categorical. The package can display all levels for categorical variables, report missing values via summary() and switch selected continuous variables to non-normal summaries using medians and interquartile ranges instead of means and standard deviations. For grouped comparisons, it prints p-values by default and can switch to non-parametric tests or Fisher's exact test where needed. Standardised mean differences can also be shown. Output can be captured as a matrix and written to CSV for editing in Excel or Word.
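The {tableone} features listed above map directly onto arguments of CreateTableOne() and its print method. A minimal sketch, using the anorexia data as a stand-in for a clinical dataset:

```r
# tableone: one table for continuous and categorical variables
library(tableone)

data(anorexia, package = "MASS")

tab1 <- CreateTableOne(vars = c("Prewt", "Postwt"),
                       strata = "Treat",   # grouped comparison with p-values
                       data = anorexia)

# Median/IQR for a non-normal variable, plus standardised mean differences
print(tab1, nonnormal = "Prewt", smd = TRUE)

# Capture as a matrix and write to CSV for editing in Excel or Word
m <- print(tab1, printToggle = FALSE)
write.csv(m, "tableone.csv")
```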

Styling and Exporting Tables

With tables constructed, the focus shifts to how they are presented and exported. As Hao Zhu's conference slides explain, the {kableExtra} package builds on knitr::kable() and provides a grammar-like approach to adding styling layers, importing the pipe operator %>% from {magrittr} so that formatting functions can be added in the same way that layers are added in {ggplot2}. It supports themes such as kable_paper, kable_classic, kable_minimal and kable_material, as well as options for striping, hover effects, condensed layouts, fixed headers, grouped rows and columns, footnotes, scroll boxes and inline plots.
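The layered, {ggplot2}-like grammar can be sketched in a few piped calls; the theme and options are real {kableExtra} functions, while the caption and footnote text are placeholders.

```r
# kableExtra: styling layers piped onto a kable, ggplot2-style
library(kableExtra)

data(anorexia, package = "MASS")

head(anorexia) %>%
  kbl(caption = "First rows of the anorexia data") %>%  # base table
  kable_paper(c("striped", "hover"), full_width = FALSE) %>% # theme layer
  footnote(general = "Weights as recorded in the MASS dataset.")
```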

Table output is often the visible end of an analysis, and a broader review of R table packages covers a range of approaches that go well beyond the default output. In R Markdown, packages such as {gt}, {kableExtra}, {formattable}, {DT}, {reactable}, {reactablefmtr} and {flextable} all offer richer possibilities. Some are aimed mainly at HTML output, others at Word. {DT} in particular supports highly customised interactive tables with searching, filtering and cell styling through more advanced R and HTML code. {flextable} is highlighted as the strongest option when knitting to Word, given that the other packages are primarily designed for HTML.

For users working in Word-heavy settings, older but still practical workflows remain relevant too. One approach is simply to write tables to comma-separated text files and then paste and convert the content into a Word table. Another route is through {arsenal}'s write2 functions, designed as an alternative to SAS ODS. The convenience functions write2word(), write2html() and write2pdf() accept a wide range of objects: tableby, modelsum, freqlist and comparedf from {arsenal} itself, as well as knitr::kable(), xtable::xtable() and pander::pander_return() output. One notable constraint is that {xtable} is incompatible with write2word(). Beyond single tables, the functions accept a list of objects so that multiple tables, headers, paragraphs and even raw HTML or LaTeX can all be combined into a single output document. A yaml() helper adds a YAML header to the output, and a code.chunk() helper embeds executable R code chunks, while the generic write2() function handles formats beyond the three convenience wrappers, such as RTF.
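The write2 pattern, including the list form that combines several pieces into one document, can be sketched like this. The file names are placeholders; tableby(), write2word(), write2html() and the yaml() helper are the {arsenal} functions described above.

```r
# arsenal write2: from R objects to Word and HTML documents
library(arsenal)

data(mockstudy, package = "arsenal")
tab <- tableby(arm ~ sex + age, data = mockstudy)

# A single table straight to Word
write2word(tab, "report.doc")

# Several pieces combined into one output document:
# a YAML header, a markdown heading, a table and a kable
write2html(list(
  yaml(title = "Combined output"),
  "# Tables",
  summary(tab),
  knitr::kable(head(mockstudy))
), "report.html")
```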

The Publishing Infrastructure: CTAN and Its Mirrors

Producing PDF output from R Markdown depends on a working LaTeX installation, and the backbone of that ecosystem is CTAN, the Comprehensive TeX Archive Network. CTAN is the main archive for TeX and LaTeX packages and is supported by a large collection of mirrors spread around the world. The purpose of this distributed system is straightforward: users are encouraged to fetch files from a site that is close to them in network terms, which reduces load and tends to improve speed.

That global spread is extensive. The CTAN mirror list organises sites alphabetically by continent and then by country, with active sites listed across Africa, Asia, Europe, North America, Oceania and South America. Africa includes mirrors in South Africa and Morocco. Asia has particularly wide coverage, with many mirrors in China as well as sites in Korea, Hong Kong, India, Indonesia, Japan, Singapore, Taiwan, Saudi Arabia and Thailand. Europe is especially rich in mirrors, with hosts in Denmark, Germany, Spain, France, Italy, the Netherlands, Norway, Poland, Portugal, Romania, Switzerland, Finland, Sweden, the United Kingdom, Austria, Greece, Bulgaria and Russia. North America includes Canada, Costa Rica and the United States, while Oceania covers Australia and South America includes Brazil and Chile.

The details matter because different mirrors expose different protocols. While many support HTTPS, some also offer HTTP, FTP or rsync. CTAN provides a mirror multiplexer to make the common case simpler: pointing a browser to https://mirrors.ctan.org/ results in automatic redirection to a mirror in or near the user's country. There is one caveat. The multiplexer always redirects to an HTTPS mirror, so anyone intending to use another protocol needs to select manually from the mirror list. That is why the full listings still include non-HTTPS URLs alongside secure ones.

There is also an operational side to the network that is easy to overlook when things are working well. CTAN monitors mirrors to ensure they are current, and if one falls behind, then mirrors.ctan.org will not redirect users there. Updates to the mirror list can be sent to ctan@ctan.org. The master host of CTAN is ftp.dante.de in Cologne, Germany, with rsync access available at rsync://rsync.dante.ctan.org/CTAN/ and web access on https://ctan.org/. For those who want to contribute infrastructure rather than simply use it, CTAN also invites volunteers to become mirrors.

TinyTeX: A Lightweight LaTeX Distribution

This infrastructure becomes much more tangible when looking at TinyTeX, a lightweight, cross-platform, portable and easy-to-maintain LaTeX distribution based on TeX Live. It is small in size but intended to function well in most situations, especially for R users. Its appeal lies in not requiring users to install thousands of packages they will never use, installing them as needed instead. This also means installation can be done without administrator privileges, which removes one of the more familiar barriers around traditional TeX setups. TinyTeX can even be run from a flash drive.

For R users, TinyTeX is closely tied to the {tinytex} R package. The distinction is important: tinytex in lower case refers to the R package, while TinyTeX refers to the LaTeX distribution. Installation is intentionally direct. After installing the R package with install.packages('tinytex'), a user can run tinytex::install_tinytex(). Uninstallation is equally simple with tinytex::uninstall_tinytex(). For the average R Markdown user, that is often enough. Once TinyTeX is in place, PDF compilation usually requires no further manual package management.
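The whole lifecycle described above fits in three calls, exactly as the {tinytex} documentation presents it:

```r
# Install the R package, then the LaTeX distribution it manages
install.packages("tinytex")    # tinytex, lower case: the R package
tinytex::install_tinytex()     # TinyTeX: the LaTeX distribution

# Removal is equally self-contained (commented out here):
# tinytex::uninstall_tinytex()
```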

There is slightly more to know if the aim is to compile standalone LaTeX documents from R. The {tinytex} package provides wrappers such as pdflatex(), xelatex() and lualatex(). These functions detect required LaTeX packages that are missing and install them automatically by default. In practical terms, that means a small example document can be written to a file and compiled with tinytex::pdflatex('test.tex') without much concern about whether every dependency has already been installed. For R users, this largely removes the old pattern of cryptic missing-package errors followed by manual searching through TeX repositories.
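A small end-to-end sketch makes the point concrete. The document below deliberately loads a package that may not be in a fresh TinyTeX install (xcolor stands in here for any such dependency); tinytex::pdflatex() detects the missing package and installs it before compiling.

```r
# Write a minimal standalone LaTeX document to disk
writeLines(c(
  "\\documentclass{article}",
  "\\usepackage{xcolor}",    # may be absent; fetched on demand
  "\\begin{document}",
  "\\textcolor{blue}{Hello, TinyTeX!}",
  "\\end{document}"
), "test.tex")

# Compile; missing LaTeX packages are installed automatically
tinytex::pdflatex("test.tex")
```

On success the call returns the name of the generated PDF, with no manual trawling through TeX repositories required.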

Developers may want more than the basics, and TinyTeX has a path for that as well. A helper such as tinytex:::install_yihui_pkgs() installs a collection of packages needed for building the PDF vignettes of many CRAN packages. That is a specific convenience rather than a universal requirement, but it illustrates the design philosophy behind TinyTeX: keep the initial footprint light and offer ways to add what is commonly needed later.

Using TinyTeX Outside R

For users outside R, TinyTeX still works, but the focus shifts to the command-line utility tlmgr. The documentation is direct in its assumptions: if command-line work is unwelcome, another LaTeX distribution may be a better fit. The central command is tlmgr, and much of TinyTeX maintenance can be expressed through it.

On Linux, installation places TinyTeX in $HOME/.TinyTeX and creates symlinks for executables such as pdflatex under $HOME/bin or $HOME/.local/bin if it exists. The installation script is fetched with wget and piped to sh, after first checking that Perl is correctly installed. On macOS, TinyTeX lives in ~/Library/TinyTeX, and users without write permission to /usr/local/bin may need to change ownership of that directory before installation. Windows users can run a batch file, install-bin-windows.bat, and the default installation directory is %APPDATA%/TinyTeX unless APPDATA contains spaces or non-ASCII characters, in which case %ProgramData% is used instead. PowerShell version 3.0 or higher is required on Windows.

Uninstallation follows the same self-contained logic. On Linux and macOS, tlmgr path remove is followed by deleting the TinyTeX folder. On Windows, tlmgr path remove is followed by removing the installation directory. This simplicity is a deliberate contrast with larger LaTeX distributions, which are considerably more involved to remove cleanly.

Maintenance and Package Management

Maintenance is where TinyTeX's relationship to CTAN and TeX Live becomes especially visible. If a document fails with an error such as File 'times.sty' not found, the fix is to search for the package containing that file with tlmgr search --global --file "/times.sty". In the example given, that identifies the psnfss package, which can then be installed with tlmgr install psnfss. If the package includes executables, tlmgr path add may also be needed. An alternative route is to upload the error log to the yihui/latex-pass GitHub repository, where package searching is carried out remotely.

If the problem is less obvious, a full update cycle is suggested: tlmgr update --self --all, then tlmgr path add and fmtutil-sys --all. R users have wrappers for these tasks too, including tlmgr_search(), tlmgr_install() and tlmgr_update(). Some situations still require a full reinstallation. If TeX Live reports Remote repository newer than local, TinyTeX should be reinstalled manually, which for R users can be done with tinytex::reinstall_tinytex(). Similarly, when a TeX Live release is frozen in preparation for a new one, the advice is simply to wait and then reinstall when the next release is ready.
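For R users, the maintenance cycle above translates into the {tinytex} wrappers named in the text; the "/times.sty" search is the same example used earlier.

```r
# R-side equivalents of the tlmgr commands discussed above
tinytex::tlmgr_search("/times.sty")   # find the package owning a file
tinytex::tlmgr_install("psnfss")      # install the identified package
tinytex::tlmgr_update()               # update installed packages

# Full reinstallation, for cases such as the frozen-repository error:
# tinytex::reinstall_tinytex()
```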

The motivation behind TinyTeX is laid out with unusual clarity. Traditional LaTeX distributions often present a choice between a small basic installation that soon proves incomplete and a very large full installation containing thousands of packages that will never be used. TinyTeX is framed as a way around those frustrations by building on TeX Live's portability and cross-platform design while stripping away unnecessary size and complexity. The acknowledgements also underline that TinyTeX depends on the work of the TeX Live team.

Connecting the R Workflow to a Finished Report

Taken together, these notes show how closely summarisation, tabulation and publishing are linked. {dplyr} and related tools make it easy to summarise data quickly, while a wide range of R packages then turn those summaries into tables that are not only statistically useful but also presentable. CTAN and its mirrors keep the TeX ecosystem available and current across the world, and TinyTeX builds on that ecosystem to make LaTeX more manageable, especially for R users. What begins with a grouped summary in the console can end with a polished report table in HTML, PDF or Word, and understanding the chain between those stages makes the whole workflow feel considerably less mysterious.

Security or Control? The debate over Google's Android verification policy

7th March 2026

A policy announced by Google in August 2025 has ignited one of the more substantive disputes in mobile technology in recent years. At its surface, the question is about app security. Beneath that, it touches on platform architecture, competition law, the long history of Android's unusual relationship with openness, and the future of independent software distribution. To understand why the debate is so charged, it helps to understand how Android actually works.

An Open Platform With a Proprietary Layer

Android presents a genuinely unusual situation in the technology industry. The base operating system is the Android Open Source Project (AOSP), which is publicly available and usable by anyone. Manufacturers can take the codebase and build their own systems without involvement from Google, as Amazon has with Fire OS and as projects such as LineageOS and GrapheneOS have demonstrated.

Most commercial Android devices, however, do not run pure AOSP. They ship with a proprietary bundle called Google Mobile Services (GMS), which includes Google Play Store, Google Play Services, Google Maps, YouTube and a range of other applications and developer frameworks. These components are not open source and require a licence from Google. Because most popular applications depend on Play Services for functions such as push notifications, location services, in-app payments and authentication, shipping without them is commercially very difficult. This layered architecture gives Google considerable influence over Android without owning it in the traditional proprietary sense.

Google has further consolidated this influence through a series of technical initiatives. Project Treble separated Android's framework from hardware-specific components to make operating system updates easier to deploy. Project Mainline went further, turning important parts of the operating system, including components responsible for media processing, network security and cryptography, into modules that Google can update directly via the Play Store, bypassing manufacturers and mobile carriers entirely. The result is a platform that is open source in its code, but practically centralised in how it evolves and is maintained.

The Policy and Its Rationale

Against this backdrop, Google announced in August 2025 that it would extend its developer identity verification requirements beyond the Play Store to cover all Android apps, including those distributed through third-party stores and direct sideloading. From September 2026, any app installed on a certified Android device in Brazil, Indonesia, Singapore and Thailand must originate from a developer who has registered their identity with Google. A global rollout is planned from 2027 onwards.

Google's stated rationale is grounded in security evidence. The company's own analysis found over 50 times more malware from internet-sideloaded sources than from apps available through Google Play. In 2025, Google Play Protect blocked 266 million risky installation attempts and helped protect users from 872,000 unique high-risk applications. Google has also documented a specific and recurring attack pattern in Southeast Asia, in which scammers impersonate bank representatives during phone calls, coaching victims into sideloading a fraudulent application that then intercepts two-factor authentication codes to drain bank accounts. The company argues that anonymous developer accounts make this kind of attack far easier to sustain.

The registration process requires developers to create an Android Developer Console account, submit government-issued identification and pay a one-time fee of $25. Organisations must additionally supply a D-U-N-S Number from Dun & Bradstreet. Google has stated explicitly that verified developers will retain full freedom to distribute apps through any channel they choose, and is building an "advanced flow" that would allow experienced users to install unverified apps after working through a series of clear warnings. Developers and power users will also retain the ability to install apps via Android Debug Bridge (ADB). Brazil's banking federation FEBRABAN and Indonesia's Ministry of Communications and Digital Affairs have both publicly welcomed the policy as a proportionate response to documented fraud.

What This Means for F-Droid

F-Droid, founded by Ciaran Gultnieks in 2010, operates as a community-run repository of free and open-source software (FOSS) applications for Android. For 15 years, it has demonstrated that app distribution can be transparent, privacy-respecting and accountable, setting a standard that challenges the mobile ecosystem more broadly. Every application listed on the platform undergoes checks for security vulnerabilities, and apps carrying advertising, user tracking or dependence on non-free software are explicitly flagged with an "Anti-Features" label. The platform requires no user accounts and displays no advertising. It still involves a learning curve, as I found when adding an app through it for a secure email service.

F-Droid operates through an unusual technical model that is worth understanding in its own right. Rather than distributing APKs produced by individual developers, it builds applications itself from publicly available source code. The resulting APKs are signed with F-Droid's own keys and distributed through the F-Droid client. This approach prioritises supply-chain transparency, since users can in theory verify that a distributed binary corresponds to the published source code. However, it also means that updates can be slower than other distribution channels, and that apps distributed via F-Droid cannot be updated over a Play Store version. Some developers have also noted that subtle differences in build configuration can occasionally cause issues.

The new verification requirement creates a structural problem that F-Droid cannot resolve independently. Many of the developers who contribute to its repository are hobbyists, academics or privacy-conscious individuals with no commercial motive and no desire to submit government identification to a third party as a condition of sharing software. F-Droid cannot compel those developers to register, and taking over their application identifiers on their behalf would directly undermine the open-source authorship model it exists to protect.

F-Droid is not alone in this concern. The policy equally affects alternative distribution models that have emerged alongside it. Tools such as Obtainium allow users to track and install updates directly from developers' GitHub or GitLab release pages, bypassing app stores entirely. The IzzyOnDroid repository provides a curated alternative to F-Droid's main catalogue. Aurora Store allows users to access the Play Store's catalogue without Google account credentials. All of these models, to varying degrees, depend on the ability to distribute software independently of Google's centralised infrastructure.

The Organised Opposition

On the 24th of February 2026, more than 37 organisations signed an open letter addressed to Google's leadership and copied to competition regulators worldwide. Signatories included the Electronic Frontier Foundation, the Free Software Foundation Europe, the Software Freedom Conservancy, Proton AG, Nextcloud, The Tor Project, FastMail and Vivaldi. Their central argument is that the policy extends Google's gatekeeping authority beyond its own marketplace into distribution channels where it has no legitimate operational role, and that it imposes disproportionate burdens on independent developers, researchers and civil society projects that pose no security risk to users.

The Keep Android Open campaign, initiated by Marc Prud'hommeaux, an F-Droid board member and founder of App Fair, an alternative app store for iOS, has been in contact with regulators in the United States, Brazil and Europe. F-Droid's legal infrastructure has been strengthened in recent years in anticipation of challenges of this kind. The project operates under the legal umbrella of The Commons Conservancy, a nonprofit foundation based in the Netherlands, which provides a clearly defined jurisdiction and a framework for legal compliance.

The Genuine Tension

Both positions have merit, and the debate is not easily resolved. The malware problem Google describes is real. Social engineering attacks of the kind documented in Southeast Asia cause genuine financial harm to ordinary users, and the anonymity afforded by unverified sideloading makes it considerably easier for bad actors to operate at scale and reoffend after being removed. The introduction of similar requirements on the Play Store in 2023 appears to have had some measurable effect on reducing fraudulent developer accounts.

At the same time, critics are right to question whether the policy is proportionate to the problem it is addressing. The people most harmed by anonymous sideloading fraud are not, in the main, the people who use F-Droid. FOSS users tend to be technically experienced, privacy-aware and deliberate in their choices. The open letter from Keep Android Open also notes that Android already provides multiple security mechanisms that do not require central registration, including Play Protect scanning, permission systems and the existing installation warning framework. The argument that these existing mechanisms are insufficient to address sophisticated social engineering, where users are coached to bypass warnings, has some force. The argument that they are insufficient to address independent FOSS distribution is harder to sustain.

There is a further tension between Google's security claims and its competitive interests. Requiring all app developers to register with Google strengthens Google's position as the de facto authority over the Android ecosystem, regardless of whether a developer uses the Play Store. That outcome may be an incidental consequence of a genuine security initiative, or it may reflect a deliberate consolidation of control. The open letter's signatories argue the former cannot be assumed, particularly given that Google faces separate antitrust investigations in multiple jurisdictions.

The Antitrust Dimension

The policy sits in a legally sensitive area. Android holds approximately 72.77 per cent of the global mobile operating system market as of late 2025, running on roughly 3.9 billion active devices. Platforms with that scale of market presence attract a different level of regulatory scrutiny than those operating in competitive markets.

In Europe, the Digital Markets Act (DMA) specifically targets large platforms designated as "gatekeepers" and explicitly requires that third-party app stores must be permitted. If Google were to use developer verification requirements in a manner that effectively prevented alternative stores from operating, European regulators would have grounds to intervene. The 2018 European Commission ruling against Google, which resulted in a €4.34 billion fine for abusing Android's market position through pre-installation requirements, established that Android's dominant position carries real obligations. That decision was largely upheld by the European courts in 2022.

In the United States, the Department of Justice has been pursuing separate antitrust cases relating to Google's search and advertising dominance, within which Android's role in channelling users toward Google services has been a recurring theme. The open letter's decision to copy regulators worldwide was not accidental. Its signatories have concluded that public documentation before enforcement begins creates pressure that private correspondence does not.

The key regulatory question is whether the verification requirements are genuinely necessary for security, and whether less restrictive measures could achieve the same goal. If the answer to either part of that question is no, regulators may conclude that the policy disproportionately disadvantages competing distribution channels.

What the Huawei and Amazon Cases Reveal

The importance of Google's service layer, and the difficulty of replicating it, can be understood by examining what happened when two large technology companies attempted to operate outside it: Amazon and Huawei.

Amazon launched Fire OS in 2011, based on AOSP but with all Google components replaced by Amazon's own services. The platform succeeded in Fire tablets and streaming devices, where users primarily want access to Amazon's content. It failed entirely in smartphones. The Amazon Fire Phone, launched in 2014 and discontinued within a year, could not attract enough developer support to make it viable as a general-purpose device. The absence of Google Play Services meant that many popular applications were missing or required separate builds. This experience showed that Android's openness, at the operating system level, does not automatically translate into a competitive ecosystem. The real power lies in the service layer and the developer infrastructure built around it.

The Huawei case illustrates the same point more sharply. In May 2019, the United States government placed Huawei on its Entity List, restricting American firms from supplying technology to the company. Huawei held roughly a 20 per cent global smartphone market share in 2019, a position that collapsed in international markets after the restrictions took effect. Since Huawei could still use the AOSP codebase, the operating system was not the problem. The problem was Google Mobile Services. Without access to the Play Store, Google Maps, YouTube and the developer APIs that underpin much of the application ecosystem, Huawei phones became commercially unviable in international markets that expected those services.

Huawei's international smartphone market share, which had been among the top three, rapidly fell to outside the top five. The company's consumer business revenue declined by nearly 50 per cent in 2021. Huawei's subsequent efforts to build its own replacement ecosystem, Huawei Mobile Services and AppGallery, achieved limited success outside China, where the domestic mobile ecosystem already operates largely independently of Google. Both the Amazon and Huawei cases confirm that Android's formal openness does not neutralise Google's practical influence over the platform.

The Comparison With Apple

It is worth noting where the comparison with Apple, often invoked in these debates, holds and where it breaks down. Apple designs its hardware, controls its operating system, and has historically permitted application installation only through its App Store. That degree of vertical integration meant that, under the DMA, Apple faced requirements to allow alternative app marketplaces and sideloading mechanisms that represented fundamental changes to how iOS operates. Google already permits these behaviours on Android, which is why the DMA's impact on its platform is more limited.

However, the direction of travel matters. Critics argue that policies like mandatory developer verification, combined with Google's control of the update pipeline and the practical dependency of the ecosystem on Play Services, are gradually moving Android toward a model that is more controlled in practice than its open-source origins would suggest. The formal difference between Android and iOS may be narrowing, even if it has not disappeared.

Where Things Stand

The verification scheme opened to all developers in March 2026, with enforcement beginning in September 2026 in four initial countries. Google has offered assurances that sideloading is not being eliminated and that experienced users will retain a route to install unverified software. Critics point out that this route has not yet been specified clearly enough for independent organisations to assess whether it would serve as a workable mechanism for FOSS distribution. Until it is demonstrated and tested in practice, F-Droid and its allies have concluded that it cannot be relied upon.

F-Droid is not facing immediate closure. It continues to host over 3,800 applications and its governance and infrastructure have been strengthened in recent years. Its continued existence, and the existence of the broader ecosystem of independent Android distribution tools, depends on sideloading remaining practically viable. The outcome will be shaped by how Google implements the advanced flow provision, by the response of competition regulators in Europe and elsewhere, and by whether independent developers in sufficient numbers choose to comply with, work around or resist the new requirements.

Its story is, in this respect, a concrete test case for a broader question: whether the formal openness of a platform is sufficient to guarantee genuine openness in practice, or whether the combination of service dependencies, update mechanisms and registration requirements can produce a functionally closed system without formally becoming one. The answer will have implications well beyond a single FOSS app repository.
