Technology Tales

Adventures in consumer and enterprise technology

TOPIC: SOFTWARE DISTRIBUTION

Managing Python projects with Poetry

4th October 2025

Python Poetry has become a popular choice for managing Python projects because it unifies tasks that once required several tools. Instead of juggling pip for installation, virtualenv for isolation and setuptools for packaging, Poetry brings these strands together and aims to make everyday development feel predictable and tidy. It sits in the same family of all-in-one managers as npm for JavaScript and Cargo for Rust, offering a coherent workflow that spans dependency declaration, environment management and package publishing.

At the heart of Poetry is a simple idea: declare what a project needs in one place and let the tool do the orchestration. Projects describe their dependencies, development tools and metadata in a single configuration file, and Poetry ensures that what is installed on one machine can be replicated on another without nasty surprises. That reliability comes from the presence of a lock file. Once dependencies are resolved, their exact versions are recorded, so future installations repeat the same outcome. The intent here is not only convenience but determinism, helping teams avoid the "works on my machine" refrain that haunts software work.

- Core Concepts: Configuration and Lock Files

Two files do the heavy lifting. The pyproject.toml file is where a project announces its name, version and description, as well as the dependencies required to run and to develop it. The poetry.lock file captures the concrete resolution of those requirements at a particular moment. Together, they give you an auditable, repeatable picture of your environment. The structure of TOML keeps the configuration readable, and it spares developers from spreading equivalent settings across setup.cfg, setup.py and requirements.txt. A minimal example shows how this looks in practice.

[tool.poetry]
name = "my_project"
version = "0.1.0"
description = "Example project using Poetry"
authors = ["John <john@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.31.0"

[tool.poetry.group.dev.dependencies]
pytest = "^8.0.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

- Essential Commands

Working with Poetry day to day quickly becomes a matter of a few memorable commands. Initialising a project configuration starts with poetry init, which steps through the creation of pyproject.toml interactively. Adding a dependency is handled by poetry add followed by the package name. Installing everything described in the configuration is done with poetry install, which writes or updates the lock file. When it is time to refresh dependencies within permitted version ranges, poetry update re-resolves and updates what's installed. Removing a dependency is poetry remove, followed by the package name. For environment management, poetry shell opens a shell inside the virtual environment managed by Poetry, and poetry run allows execution of commands within that same environment without entering a shell. Building distributions is as simple as poetry build, which produces a wheel and a source archive, and publishing to the Python Package Index is managed by poetry publish with credentials or an API token.
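
Collected as a quick reference, the commands just described look like this, reusing package names from the earlier example:

poetry init                # create pyproject.toml interactively
poetry add requests        # add a dependency
poetry install             # install everything and write or update the lock file
poetry update              # refresh dependencies within permitted ranges
poetry remove requests     # remove a dependency
poetry shell               # open a shell inside the managed virtual environment
poetry run pytest          # run a command in the environment without a shell
poetry build               # produce a wheel and a source archive
poetry publish             # upload to the Python Package Index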

- Advantages and Considerations

There are clear advantages to taking this route. The dependency experience is simplified because you do not need to keep updating a requirements.txt file by hand. With a lock file in place, environments are reproducible across developer machines and continuous integration runners, which stabilises builds and testing. Packaging is integrated rather than an extra chore, so producing and publishing a release becomes a repeatable process that sits naturally alongside development. Virtual environments are created and activated on demand, keeping projects isolated from one another with little ceremony. The configuration in TOML has the benefit of being structured and human-readable, which reduces the likelihood of configuration drift.

There are also points to consider before adopting Poetry. Projects that are deeply invested in setup.py or complex legacy build pipelines may need a clean migration to pyproject.toml to avoid clashes. Developers who prefer manual venv and pip workflows can find Poetry opinionated at first because it expects to be responsible for the environment and dependency resolution. It is also designed with modern Python versions in mind, with the examples here using Python 3.10.

- Migration from pip and requirements.txt

For teams arriving from pip and requirements.txt, moving to Poetry can be done in measured steps. The starting point is installation. Poetry provides an installer script that sets up the tool for your user account.

curl -sSL https://install.python-poetry.org | python3 -

If the installer does not add Poetry to your PATH, adding $HOME/.local/bin to PATH resolves that, after which poetry --version confirms the installation. From the root of your existing project, poetry init creates a new pyproject.toml and invites you to provide metadata and dependencies. If you already maintain requirements.txt files for production and development dependencies, Poetry can ingest those in one sweep. A single file can be imported with poetry add $(cat requirements.txt). Where development dependencies live in a separate file, they can be added into Poetry's dev group with poetry add --group dev $(cat dev-requirements.txt). Once added, Poetry resolves and pins exact versions, leaving a lock file behind to capture the resolution. After verifying that everything installs and tests pass, it becomes safe to retire earlier environment artefacts. Many teams remove requirements.txt entirely if they plan to rely solely on Poetry, delete any Pipfile and Pipfile.lock remnants left by Pipenv and migrate metadata away from setup.py or setup.cfg in favour of pyproject.toml. With that done, using the environment becomes routine. Opening a shell inside the virtual environment with poetry shell makes commands such as python or pytest use the isolated interpreter. If you prefer to avoid entering a shell, poetry run python script.py or poetry run pytest executes the command in the right context.
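
Condensed into a sequence, the migration steps described above come down to the following, using the file names from the text:

curl -sSL https://install.python-poetry.org | python3 -
poetry --version                                      # confirm the installation
poetry init                                           # create pyproject.toml interactively
poetry add $(cat requirements.txt)                    # import production dependencies
poetry add --group dev $(cat dev-requirements.txt)    # import development dependencies
poetry run pytest                                     # verify everything in the new environment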

- Package Publishing

Publishing a package is one of the areas where Poetry streamlines the steps. Accurate metadata in pyproject.toml is important, so name, version, description and other fields should be up-to-date. An example configuration shows commonly used fields.

[tool.poetry]
name = "example-package"
version = "1.0.0"
description = "A simple example package"
authors = ["John <john@example.com>"]
license = "MIT"
readme = "README.md"
homepage = "https://github.com/john/example-package"
repository = "https://github.com/john/example-package"
keywords = ["example", "poetry"]

With metadata set, building the distribution is handled by poetry build, which creates a dist directory containing a .tar.gz source archive and a .whl wheel file. Uploading to the official Python Package Index is authenticated with an API token; tokens are preferred because they can be scoped and revoked without affecting account credentials. Configuring a token is done once with poetry config pypi-token.pypi, after which poetry publish will use it to upload. When testing a release before publishing for real, TestPyPI provides a safer target. Poetry supports multiple sources and can be directed to use TestPyPI by declaring it as a repository and then publishing to it.

[[tool.poetry.source]]
name = "testpypi"
url = "https://test.pypi.org/legacy/"

With the source declared in pyproject.toml, publishing is directed at it by name:

poetry publish -r testpypi

Once uploaded, it is sensible to confirm that the package can be installed in a clean environment using pip install example-package, which verifies that dependencies are correctly declared and wheels are intact.
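
Pulled together, and with a placeholder standing in for a real token value, the publishing steps are:

poetry config pypi-token.pypi pypi-XXXXXXXX    # one-off token configuration (placeholder value)
poetry build                                   # write the wheel and source archive to dist/
poetry publish                                 # upload using the stored token
pip install example-package                    # verify from a clean environment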

- Continuous Integration with GitHub Actions

Beyond local steps, automation closes the loop. Adding a continuous integration workflow that installs dependencies, runs tests and publishes on a tagged release keeps quality checks and distribution consistent. GitHub Actions provides a hosted environment where Poetry can be installed quickly, dependencies cached and tests executed. A straightforward workflow listens for tags that begin with v, such as v1.0.0, then builds and publishes the package once tests pass. The workflow file sits under .github/workflows and looks like this.

name: Publish to PyPI

on:
  push:
    tags:
      - "v*"

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"

      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 -
          echo "$HOME/.local/bin" >> $GITHUB_PATH

      - name: Install dependencies
        run: poetry install --no-interaction --no-root

      - name: Run tests with pytest
        run: poetry run pytest --maxfail=1 --disable-warnings -q

      - name: Build package
        run: poetry build

      - name: Publish to PyPI
        if: startsWith(github.ref, 'refs/tags/v')
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }}
        run: poetry publish --no-interaction --username __token__ --password $POETRY_PYPI_TOKEN_PYPI

This arrangement checks out the repository, installs a consistent Python version, brings in Poetry, installs dependencies based on the lock file, runs tests, builds distributions and only publishes when the workflow is triggered by a version tag. The API token used for publishing should be stored as a repository secret named PYPI_TOKEN so it is not exposed in the codebase or logs. Creating the tag is done locally with git tag v1.0.0 followed by git push origin v1.0.0, which triggers the workflow and results in a published package moments later. It is often useful to extend this with a test matrix, so the suite runs across supported Python versions, as well as caching to speed up repeated runs by re-using Poetry and pip caches keyed on the lock file.
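
In command form, triggering a release is just:

git tag v1.0.0
git push origin v1.0.0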

- Project Structure

Package structure is another place where Poetry encourages clarity. A simple, consistent layout makes maintenance and onboarding easier. A typical library keeps its importable code in a package directory named to match the project name in pyproject.toml, with hyphens translated to underscores. Tests live in a separate tests directory, documentation in docs and examples in a directory of the same name. The repository root contains README.md, a licence file, the lock file and a .gitignore that excludes environment directories and build artefacts. The following tree illustrates a balanced structure for a data-oriented utility library.

data-utils/
├── data_utils/
│   ├── __init__.py
│   ├── core.py
│   ├── io.py
│   ├── analysis.py
│   └── cli.py
├── tests/
│   ├── __init__.py
│   ├── test_core.py
│   └── test_analysis.py
├── docs/
│   ├── index.md
│   └── usage.md
├── examples/
│   └── demo.ipynb
├── README.md
├── LICENSE
├── pyproject.toml
├── poetry.lock
└── .gitignore

Within the package directory, __init__.py can define a public interface and hide internal details. This allows users of the library to import the essentials without needing to know the module layout.

from .core import clean_data
from .analysis import summarise_data

__all__ = ["clean_data", "summarise_data"]

If the project offers a command-line interface, Poetry makes it simple to declare an entry point, so users can run a console command after installation. The scripts section in pyproject.toml maps a command name to a callable, in this case the main function in a cli module.

[tool.poetry.scripts]
data-utils = "data_utils.cli:main"

A basic CLI might be implemented using Click, passing arguments to internal functions and relaying progress.

import click

from data_utils import core

@click.command()
@click.argument("path")
def main(path):
    """Simple CLI example."""
    click.echo(f"Processing {path}...")  # relay progress through Click's output helper
    core.clean_data(path)
    click.echo("Done!")

if __name__ == "__main__":
    main()

Git ignores should filter out files that do not belong in version control. A sensible default for a Poetry project is as follows.

__pycache__/
*.pyc
*.pyo
*.pyd
.env
.venv
dist/
build/
*.egg-info/
.cache/
.coverage

- Testing and Documentation

Testing sits comfortably alongside this. Many projects adopt pytest because it is straightforward to use and integrates well with Poetry. Running tests through poetry run pytest ensures the virtual environment is used, and a simple unit test demonstrates the pattern.

from data_utils.core import clean_data

def test_clean_data_removes_nulls():
    data = [1, None, 2, None, 3]
    cleaned = clean_data(data)
    assert cleaned == [1, 2, 3]

Documentation can be kept in plain Markdown or generated with dedicated tools; MkDocs and Sphinx are common choices for building websites from your docs, and both can be installed as development dependencies using Poetry. Including notebooks in an examples directory is helpful for illustrating usage in richer contexts, especially for data science libraries. The README should present the essentials succinctly, covering what the project does, how to install it, a short usage example and pointers for development setup. A licence file clarifies terms of use; MIT and Apache 2.0 are widely used options in open source.
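
For instance, adding MkDocs to the development group is a single command; Sphinx can be swapped in the same way:

poetry add --group dev mkdocs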

- Advanced CI: Quality Checks and Multi-version Testing

Once structure, tests and documentation are in order, quality checks can be expanded in the continuous integration workflow. Adding automated formatting, import sorting and linting tightens consistency across contributions. An enhanced workflow uses Black, isort and Flake8 before running tests and building, and also includes a matrix to test across multiple Python versions. It runs on pull requests as well as on tagged pushes, which means code quality and compatibility are verified before merging changes and again before publishing a release.

name: Lint, Test and Publish

on:
  push:
    tags:
      - "v*"
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest

    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]

    steps:
      - name: Check out repository
        uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}

      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 -
          echo "$HOME/.local/bin" >> $GITHUB_PATH

      - name: Cache Poetry dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/pypoetry
            ~/.cache/pip
          key: poetry-${{ runner.os }}-${{ hashFiles('**/poetry.lock') }}
          restore-keys: |
            poetry-${{ runner.os }}-

      - name: Install dependencies
        run: poetry install --no-interaction --no-root

      - name: Check code formatting with Black
        run: poetry run black --check .

      - name: Check import order with isort
        run: poetry run isort --check-only .

      - name: Run Flake8 linting
        run: poetry run flake8 .

      - name: Run tests with pytest
        run: poetry run pytest --maxfail=1 --disable-warnings -q

      - name: Build package
        run: poetry build

      - name: Publish to PyPI
        if: startsWith(github.ref, 'refs/tags/v')
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }}
        run: poetry publish --no-interaction --username __token__ --password $POETRY_PYPI_TOKEN_PYPI

This workflow builds on the earlier one by checking style and formatting before tests. If any of those checks fail, the process stops and surfaces the problems in the job logs. Caching based on the lock file reduces the time spent installing dependencies by reusing packages where nothing has changed. The matrix section ensures that the library remains compatible with the declared range of Python versions, which is especially helpful just before a release. It is possible to extend this further with coverage reports using pytest-cov and Codecov, static type checking with mypy, or pre-commit hooks to keep local development consistent with continuous integration. Publishing to TestPyPI in a separate job can help validate packaging without affecting the real index, and once outcomes look good, the main publishing step proceeds when a tag is pushed.
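
Those additions start with bringing the tools into the development group; a sketch of the usual first step, with package choices that are common but not prescribed:

poetry add --group dev pytest-cov mypy pre-commit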

- Conclusion

The result of adopting Poetry is a project that states its requirements clearly, installs them reliably and produces distributions without ceremony. For new work, it removes much of the friction that once accompanied Python packaging. For existing projects, the migration path is gentle and reversible, and the gains in determinism often show up quickly in fewer environment-related issues. When paired with a small amount of automation in a continuous integration system, the routine of building, testing and publishing becomes repeatable and visible to everyone on the team. That holds whether the package is destined for internal use on a private index or a public release on PyPI.

Avoiding errors caused by missing Julia packages when running code on different computers

15th September 2024

As part of an ongoing move to multi-location working, I am sharing scripts and other artefacts via GitHub, and this includes some Julia programs that I have written. That has led me to realise that a bit of added automation would help iron out any package dependency issues that arise. Setting things up as projects could help, yet that feels like too much effort for what I have. Thus, I have gone for adding extra code that checks for and installs any missing packages, instead of letting things fail.

For adding those extra packages, I instate the Pkg package as follows:

import Pkg

While it is a bit hackish, I then declare a single array that lists the packages to be checked:

pkglits = ["HTTP", "JSON3", "DataFrames", "Dates", "XLSX"]

After that, there is a function that uses a try/catch construct to find whether a package exists or not, using the built-in @eval macro to attempt a using declaration:

tryusing(pkgsym) = try
    @eval using $pkgsym
    return true
catch e
    return false
end

The above function is called in a loop that both tests the existence of a package and, if missing, installs it:

for i in 1:length(pkglits)
    rslt = tryusing(Symbol(pkglits[i]))
    if rslt == false
        Pkg.add(pkglits[i])
    end
end

Once that has completed, using the following line to instate the packages required by later processing becomes error-free, which is what I sought:

using HTTP, JSON3, DataFrames, Dates, XLSX

Using inventory files with Ansible

28th October 2022

This is the second post on Ansible following my main system's upgrade to Linux Mint 21. Then, I manually ran some Ansible playbooks, only to spot messages that I had not noticed before. Here, I discuss two messages issued because of an issue with an inventory file, which is where one defines the lists of servers against which playbooks are executed. The default is called hosts and is located at /etc/ansible, but the system upgrade had renamed the existing one, which meant that Ansible could not find it. The solution was to take a copy and put it somewhere safer. Then, I needed to add the location of the new file to the affected ansible-playbook commands using the following construct:

ansible-playbook [playbook path] -i [inventory file path]

Before I did this, I was seeing messages including the text "Could not match supplied host pattern" or others with the following text:

[WARNING]: No inventory was parsed, only implicit localhost is available

[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

The cause was the same in each case, and attending to the inventory file got rid of the unwanted messages. The new file also should not be affected by system upgrades in the future.
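
For anyone creating such a file from scratch, a minimal INI-style inventory looks like the following; the group and host names here are placeholders:

[webservers]
web1.example.com
web2.example.com

[databases]
db1.example.com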

Fixing an Ansible warning about boolean type conversion

27th October 2022

My primary use for Ansible is doing system updates using the inbuilt apt module. Recently, I updated my main system to Linux Mint 21 and a few things like Ansible stopped working. Removing instances that I had added with pip3 sorted the problem, but I then ran playbooks manually, only for various warning messages to appear that I had not noticed before. What follows below is one of these.

[WARNING]: The value True (type bool) in a string field was converted to u'True' (type string). If this does not look like what you expect, quote the entire value to ensure it does not change.

The message is not so clear in some ways, not least because it had me looking for a boolean value of True when it should have been yes. A search on the web revealed something about the apt module that surprised me: the value of the upgrade parameter is a string, when others like it take boolean values of yes or no. Thus, I had passed a bareword of yes when it should have been declared in quotes as "yes". To my mind, this is an inconsistency, but I have changed things anyway to get rid of the message.
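
As a sketch of the fix, assuming an update task along these lines, quoting keeps the string field unambiguous while genuine booleans stay as barewords:

- name: Upgrade installed packages
  apt:
    update_cache: yes    # boolean field, so a bareword is fine
    upgrade: "yes"       # string field, so it must be quoted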

Removing obsolete libraries from Flatpak

1st February 2020

Along with various pieces of software, Flatpak also installs the KDE and GNOME libraries needed to support them. However, it does not always remove obsolete versions of those libraries when software gets updated. One result is that messages regarding obsolete versions of GNOME may be issued, and this has been known to cause confusion: there is the GNOME instance that is part of a Linux distro like Ubuntu, and using Flatpak adds another one for its software packages to use. My use of Linux Mint may lessen the chances of misunderstanding.

Thankfully, executing a single command will remove any obsolete Flatpak libraries so the messages no longer appear and there then is no need to touch your actual Linux installation. This then is the command that sorted it for me:

flatpak uninstall --unused && sudo flatpak repair

The first part, which removes any unused libraries, is run as a normal user, so the absence of sudo there is not an error. Administrative privileges are needed for the second section, which does any repairs that are required. It might be better if Flatpak did all this for you using the update command, but that is not how the thing works. At least there is a quick way to address this state of affairs, and there might be some good reasons for having things work as they do.

Ensuring that Flatpak remains up to date on Linux Mint 19.2

25th October 2019

The Flatpak concept offers a useful way of getting the latest version of software like LibreOffice or GIMP on Linux machines because repositories are managed conservatively when it comes to the versions of included software. Ubuntu has Snaps, which are similar in concept. Both options bundle dependencies with the packaged software so that its operation can use later versions of system libraries than what may be available with a particular distribution.

However, even Flatpak depends on what is available through the repositories for a distribution, as I found when a software update needed a newer version of the tool itself. The solution was to add a PPA using the following command and agreeing to the prompts that arise (answering Y, in other words):

sudo add-apt-repository ppa:alexlarsson/flatpak

With the new PPA instated, the usual apt commands were used to update the Flatpak package and continue with the required updates. Since then, all has gone smoothly as expected.
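
For completeness, the usual commands in question were along these lines:

sudo apt-get update
sudo apt-get install flatpak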

Getting Eclipse to start without incompatibility errors on Linux Mint 19.1

12th June 2019

Recent curiosity about Java programming and Groovy scripting got me trying to start up the Eclipse IDE that I had installed on my main machine. What I got instead of a successful application startup was a message that included the following:

!MESSAGE Exception launching the Eclipse Platform:
!STACK
java.lang.ClassNotFoundException: org.eclipse.core.runtime.adaptor.EclipseStarter
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:466)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:566)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:626)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
at org.eclipse.equinox.launcher.Main.main(Main.java:1414)

The cause was a mismatch between Eclipse and the installed version of Java that it needed to run. After all, the software itself is written in the Java language, and the version installed from the usual software repositories was too old to work with Java 11. The solution turned out to be installing a newer version as a Snap (Ubuntu's answer to Flatpak). The following command did the needful, since snapd already was running on my machine:

sudo snap install eclipse --classic

The only part of the command that warrants extra comment is the --classic switch, since that is needed for a tool like Eclipse that needs access to the host file system. On executing, the software was downloaded from Snapcraft and then installed within its own bundle of dependencies. The latter adds a certain detachment from the underlying Linux installation and ensures that no messages appear because of incompatibilities like the one near the start of this post.

Updating Flatpak applications on Linux Mint 19

10th August 2018

Since upgrading to Linux Mint 19, I have installed some software from Flatpak. What sparked my curiosity was that you could have the latest versions of applications like GIMP or LibreOffice without having to depend on a third-party PPA. Installation is straightforward, given the support built into Linux Mint: you just need to download the relevant package from the Flatpak website and run the file through the GUI installer. Because the packages come with extras to ensure cross-compatibility, more disk space is used, but there is no added system overhead beyond that, from what I have seen. Updating should be as easy as running the following single command too:

flatpak update

However, I needed to do a little extra work before this was possible. The first step was to update the configuration file at ~/.local/share/flatpak/repo/config to add the following lines:

[remote "flathub"]
gpg-verify=true
gpg-verify-summary=true
url=https://flathub.org/repo/
xa.title=Flathub

Once that was completed, I ran the following commands to import the required GPG key:

wget https://flathub.org/repo/flathub.gpg
flatpak --user remote-modify --gpg-import=flathub.gpg flathub

With this complete, I was able to run the update process and update any applications as necessary. After that first run, it has been integrated into my normal processes by adding the command to the relevant alias definition.

Trying out a new way to upgrade Linux Mint in situ while going from 17.3 to 18.1

19th March 2017

There was a time when the only recommended way to upgrade Linux Mint from one version to another was to do a fresh installation with back-ups of data and a list of the installed applications created from a special tool.

Even so, it never stopped me doing my own style of in situ upgrade, though some might see that as a risky option. More often than not, that actually worked without causing major problems in a time when Linux Mint releases were more tightly tied to Ubuntu's own six-monthly cycle.

Linux Mint releases now align with Ubuntu's Long Term Support (LTS) editions. This means major changes occur only every two years, with minor releases in between. These minor updates are delivered through Linux Mint's Update Manager, making the process simple. Upgrades are not forced, so you can decide when to upgrade, as all main and interim versions receive the same extended support. The recommendation is to avoid upgrading unless something is broken on your installation.

For a number of reasons, I followed that advice and stayed on Linux Mint 17.3 on my main machine instead of upgrading to Linux Mint 18. The fact that I had broken things on another machine using an older method of upgrading provided even more encouragement.

However, I subsequently discovered another means of upgrading between major versions of Linux Mint that has some endorsement from the project. There still are warnings about testing a live DVD version of Linux Mint on your PC first and backing up your data beforehand. Another task is ensuring that you are upgrading from a fully up-to-date Linux Mint 17.3 installation.

When you are ready, you can install mintupgrade using the following command:

sudo apt-get install mintupgrade

When that is installed, there is a sequence of tasks that you need to complete. The first of these is to simulate an upgrade to test for the appearance of untoward messages and to resolve them; repeating the check until all is well is recommended. The command is as follows:

mintupgrade check

Once you are happy that the system is ready, the next step is to download the updated packages so they are on your machine ahead of their installation. Only then should you begin the upgrade process. The two commands that you need to execute are below:

mintupgrade download
mintupgrade upgrade

After these complete, restart your system. In my case, the process worked well, with only my PHP installation requiring attention. I resolved a clash between different versions of the scripting interpreter by removing the older one, as PHP 7 is best kept for testing. Apart from reinstalling VMware Player and upgrading from version 18 to 18.1, I had almost nothing else to do and experienced minimal disruption. This is fortunate as I rely heavily on my main PC. The alternative of a full installation would have left me sorting things out for several days afterwards because I use a customised selection of software.

Improving Font Display in Fedora 15

30th May 2011

When I first started to poke around Fedora 15 after upgrading my Fedora machine, the font rendering was far from acceptable to me. Thankfully, it was something that I could resolve, and I am writing these words with the text displayed in a way that I find acceptable. The main thing that I did to achieve this was to add a file named 99-autohinter-only.conf in the folder /etc/fonts/conf.d. The file contains the following:

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="autohint" mode="assign">
      <bool>true</bool>
    </edit>
  </match>
</fontconfig>

Enabling autohinting improves font appearance in Fedora 15. The TrueType bytecode interpreter (BCI) was recently added to FreeType after its patent expired, but this actually decreased font quality on my system. I applied Kevin Kofler's autohinting fix and installed GNOME Tweak Tool, which lets you adjust autohinting settings. This combination solved my problems, particularly with letters like "k". Now, I am considering trying the same solution in openSUSE, which also has unsatisfactory font rendering, though I'll have to wait for GNOME Tweak Tool until they release a GNOME 3 version.
