Silencing MLX warnings when running Ollama via Homebrew on macOS
While there is an Ollama app on macOS, I chose to install it using Homebrew instead. That worked well enough, even if I kept seeing a warning message like this on macOS Tahoe:
WARN MLX dynamic library not available error="failed to load MLX dynamic library (searched: [/opt/homebrew/Cellar/ollama/0.16.2/bin
For some reason, the MLX integration in ollama is not what it should be, even though everything runs without any other issues as things stand. While the native app does not have this problem and warnings like these can simply be overlooked, my chosen solution was to define this alias in the .zshrc file:
alias ollama='command ollama "$@" 2> >(grep -v "MLX dynamic library not available" >&2)'
After executing the following command to reload the configuration file, the output from ollama to stderr was much cleaner:
source ~/.zshrc
However, the alias itself still needs some unpacking to explain what is happening. Let us proceed piece by piece, focussing on the less obvious parts of the text within the quotes in the alias definition.
command: Starting the whole aliased statement with this keyword stops anything becoming recursive and is the safer option, even if I have got away without it when aliasing the ls command elsewhere. If aliases were invoked from within functions, the situation could be different, producing an infinite loop in the process.
"$@": Without this, the arguments to the ollama command would not be passed into the alias.
2>: This redirects stderr into the process substitution >(grep -v "MLX dynamic library not available" >&2), which is where the text removal takes place.
grep -v: Within the filtering statement, this command prints every line that does not match the search string, in this case "MLX dynamic library not available".
>&2: Here, the surviving output is sent back to stderr, so the remaining messages still appear in the console.
In all of this, it is important to distinguish between stderr (standard error output) and stdout (standard output). For ollama, the latter is how you receive a response from the LLM, while the former is used for application feedback. This surprised me when I first learned of it, yet it is common behaviour in the world of Linux and UNIX, which includes macOS.
This matters here because suppressing stderr wholesale would leave you with no idea of how an LLM download is proceeding, since that progress reporting goes to stderr rather than stdout, as I might have expected without knowing better as I do now. Hence, the optimal approach is to filter the stderr output rather than discard it.
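As an aside, the same filtering can be written as a shell function in .zshrc, a sketch of which follows; here, "$@" genuinely does the argument passing, while command still guards against recursion:
ollama() {
  command ollama "$@" 2> >(grep -v "MLX dynamic library not available" >&2)
}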
Loading API Keys from Linux shell environment variables in Python with Dotenv
Recently, I ran into trouble getting Python to pick up an API key that I had defined in the underlying bash environment. This was within a Python console running inside the Positron IDE for R and Python scripting. Opening up the folder containing my Python scripts within the IDE was part of the solution. The next part was creating a .env file within the same folder, with a line like this added inside the new file:
export API_KEY="<API key value>"
That meant that code like the following then read in the API key in a more robust manner:
import os
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv('API_KEY', 'default_value')
This imports the os module and the load_dotenv function from the dotenv package. Then, load_dotenv is executed to load the .env file and its contents. After that, the os.getenv function can assign the API key to a Python variable from the value of the environment variable, falling back to the supplied default if it is absent.
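From there, the variable can be used wherever the key is needed. Here is a minimal sketch, assuming the requests package and a hypothetical endpoint:
import requests

response = requests.get(
    "https://api.example.com/v1/data",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {api_key}"},
)
print(response.status_code)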
Since this also was within a Git repository, a .gitignore file needed to be created with .env as its contents, to stop that file being uploaded to GitHub, which is the last place where you should be storing credentials like passwords, passphrases and API keys. While my repository may be private, the state of things in these troubled times means that even that is no failsafe.
Taking control of Ruff checks on Python scripts
Positron is becoming my tool of choice for developing Python code. Along with offering a Python console as a REPL environment, it also includes Ruff for checking code compliance. One of its rules is that Python imports must be declared at the top of a script. However, I want to use some code that checks for the presence of any modules used in a script, installing those that are missing, which means that import statements appear later in the script than Ruff recommends. That made me wish for a way to turn off that check, since things run well anyway. The chosen solution is to create a file called pyproject.toml in the directory where my scripts are stored and add the following lines there to accomplish what I want:
[tool.ruff]
ignore = ["E402"]
Here, it helps if you open a folder in Positron, achieving the same outcome as you would in the VSCode on which the IDE is based. While I have only listed one check here, you can also have a comma-delimited list of quoted strings if you need to switch off more than one rule at once.
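For context, the pattern that falls foul of E402 looks something like the sketch below; the package names are hypothetical, and the idea is simply to install whatever is missing before importing it:
import importlib
import subprocess
import sys

# Install any missing packages before the imports that need them.
for package in ["pandas", "requests"]:
    try:
        importlib.import_module(package)
    except ImportError:
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])

import pandas as pd  # E402: module level import not at top of file
import requests  # E402 again, hence the ignore entry above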
Managing Python projects with Poetry
Python Poetry has become a popular choice for managing Python projects because it unifies tasks that once required several tools. Instead of juggling pip for installation, virtualenv for isolation and setuptools for packaging, Poetry brings these strands together and aims to make everyday development feel predictable and tidy. It sits in the same family of all-in-one managers as npm for JavaScript and Cargo for Rust, offering a coherent workflow that spans dependency declaration, environment management and package publishing.
At the heart of Poetry is a simple idea: declare what a project needs in one place and let the tool do the orchestration. Projects describe their dependencies, development tools and metadata in a single configuration file, and Poetry ensures that what is installed on one machine can be replicated on another without nasty surprises. That reliability comes from the presence of a lock file. Once dependencies are resolved, their exact versions are recorded, so future installations repeat the same outcome. The intent here is not only convenience but determinism, helping teams avoid the "works on my machine" refrain that haunts software work.
Core Concepts: Configuration and Lock Files
Two files do the heavy lifting. The pyproject.toml file is where a project announces its name, version and description, as well as the dependencies required to run and to develop it. The poetry.lock file captures the concrete resolution of those requirements at a particular moment. Together, they give you an auditable, repeatable picture of your environment. The structure of TOML keeps the configuration readable, and it spares developers from spreading equivalent settings across setup.cfg, setup.py and requirements.txt. A minimal example shows how this looks in practice.
[tool.poetry]
name = "my_project"
version = "0.1.0"
description = "Example project using Poetry"
authors = ["John <john@example.com>"]
[tool.poetry.dependencies]
python = "^3.10"
requests = "^2.31.0"
[tool.poetry.group.dev.dependencies]
pytest = "^8.0.0"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
Essential Commands
Working with Poetry day to day quickly becomes a matter of a few memorable commands. Initialising a project configuration starts with poetry init, which steps through the creation of pyproject.toml interactively. Adding a dependency is handled by poetry add followed by the package name. Installing everything described in the configuration is done with poetry install, which writes or updates the lock file. When it is time to refresh dependencies within permitted version ranges, poetry update re-resolves and updates what's installed. Removing a dependency is poetry remove, followed by the package name. For environment management, poetry shell opens a shell inside the virtual environment managed by Poetry, and poetry run allows execution of commands within that same environment without entering a shell. Building distributions is as simple as poetry build, which produces a wheel and a source archive, and publishing to the Python Package Index is managed by poetry publish with credentials or an API token.
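As a quick illustration, a first session with a new project might run along these lines, with requests and pytest standing in for real dependencies and main.py for your own script:
poetry init
poetry add requests
poetry add --group dev pytest
poetry install
poetry run python main.py
poetry build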
Advantages and Considerations
There are clear advantages to taking this route. The dependency experience is simplified because you do not need to keep updating a requirements.txt file by hand. With a lock file in place, environments are reproducible across developer machines and continuous integration runners, which stabilises builds and testing. Packaging is integrated rather than an extra chore, so producing and publishing a release becomes a repeatable process that sits naturally alongside development. Virtual environments are created and activated on demand, keeping projects isolated from one another with little ceremony. The configuration in TOML has the benefit of being structured and human-readable, which reduces the likelihood of configuration drift.
There are also points to consider before adopting Poetry. Projects that are deeply invested in setup.py or complex legacy build pipelines may need a clean migration to pyproject.toml to avoid clashes. Developers who prefer manual venv and pip workflows can find Poetry opinionated at first, because it expects to be responsible for the environment and dependency resolution. It is also designed with modern Python versions in mind, with the examples here using Python 3.10.
Migration from pip and requirements.txt
For teams arriving from pip and requirements.txt, moving to Poetry can be done in measured steps. The starting point is installation. Poetry provides an installer script that sets up the tool for your user account.
curl -sSL https://install.python-poetry.org | python3 -
If the installer does not add Poetry to your PATH, adding $HOME/.local/bin to PATH resolves that, after which poetry --version confirms the installation. From the root of your existing project, poetry init creates a new pyproject.toml and invites you to provide metadata and dependencies. If you already maintain requirements.txt files for production and development dependencies, Poetry can ingest those in one sweep: a single file can be imported with poetry add $(cat requirements.txt), while development dependencies living in a separate file can be added to Poetry's dev group with poetry add --group dev $(cat dev-requirements.txt). Once added, Poetry resolves and pins exact versions, leaving a lock file behind to capture the resolution.
After verifying that everything installs and tests pass, it becomes safe to retire earlier environment artefacts. Many teams remove requirements.txt entirely if they plan to rely solely on Poetry, delete any remnants of Pipfile and Pipfile.lock left behind by Pipenv, and migrate metadata away from setup.py or setup.cfg in favour of pyproject.toml. With that done, using the environment becomes routine: opening a shell inside the virtual environment with poetry shell makes commands such as python or pytest use the isolated interpreter, while poetry run python script.py or poetry run pytest executes the command in the right context without entering a shell.
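Collected together, the commands from this section look like this, assuming the conventional file names:
poetry init
poetry add $(cat requirements.txt)
poetry add --group dev $(cat dev-requirements.txt)
poetry run pytest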
Package Publishing
Publishing a package is one of the areas where Poetry streamlines the steps. Accurate metadata in pyproject.toml is important, so name, version, description and other fields should be up-to-date. An example configuration shows commonly used fields.
[tool.poetry]
name = "example-package"
version = "1.0.0"
description = "A simple example package"
authors = ["John <john@example.com>"]
license = "MIT"
readme = "README.md"
homepage = "https://github.com/john/example-package"
repository = "https://github.com/john/example-package"
keywords = ["example", "poetry"]
With metadata set, building the distribution is handled by poetry build, which creates a dist directory containing a .tar.gz source archive and a .whl wheel file. Uploading to the official Python Package Index can be done with username and password, though API tokens are the recommended method because they can be scoped and revoked without affecting account credentials. Configuring a token is done once with poetry config pypi-token.pypi followed by the token value, after which poetry publish will use it to upload. When testing a release before publishing for real, TestPyPI provides a safer target. Poetry can be directed at TestPyPI by declaring it as a publishing repository and then publishing to it:
poetry config repositories.testpypi https://test.pypi.org/legacy/
poetry publish -r testpypi
Once uploaded, it is sensible to confirm that the package can be installed in a clean environment using pip install example-package, which verifies that dependencies are correctly declared and wheels are intact.
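One way to perform that check is with a throwaway virtual environment, using the example package name from above:
python3 -m venv /tmp/verify-install
/tmp/verify-install/bin/pip install example-package
/tmp/verify-install/bin/python -c "import example_package"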
Continuous Integration with GitHub Actions
Beyond local steps, automation closes the loop. Adding a continuous integration workflow that installs dependencies, runs tests and publishes on a tagged release keeps quality checks and distribution consistent. GitHub Actions provides a hosted environment where Poetry can be installed quickly, dependencies cached and tests executed. A straightforward workflow listens for tags that begin with v, such as v1.0.0, then builds and publishes the package once tests pass. The workflow file sits under .github/workflows and looks like this.
name: Publish to PyPI

on:
  push:
    tags:
      - "v*"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.10"
      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 -
          echo "$HOME/.local/bin" >> $GITHUB_PATH
      - name: Install dependencies
        run: poetry install --no-interaction --no-root
      - name: Run tests with pytest
        run: poetry run pytest --maxfail=1 --disable-warnings -q
      - name: Build package
        run: poetry build
      - name: Publish to PyPI
        if: startsWith(github.ref, 'refs/tags/v')
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }}
        run: poetry publish --no-interaction --username __token__ --password $POETRY_PYPI_TOKEN_PYPI
This arrangement checks out the repository, installs a consistent Python version, brings in Poetry, installs dependencies based on the lock file, runs tests, builds distributions and only publishes when the workflow is triggered by a version tag. The API token used for publishing should be stored as a repository secret named PYPI_TOKEN, so it is not exposed in the codebase or logs. Creating the tag is done locally with git tag v1.0.0 followed by git push origin v1.0.0, which triggers the workflow and results in a published package moments later. It is often useful to extend this with a test matrix, so the suite runs across supported Python versions, as well as caching to speed up repeated runs by reusing Poetry and pip caches keyed on the lock file.
Project Structure
Package structure is another place where Poetry encourages clarity. A simple, consistent layout makes maintenance and onboarding easier. A typical library keeps its importable code in a package directory named to match the project name in pyproject.toml, with hyphens translated to underscores. Tests live in a separate tests directory, documentation in docs and examples in a directory of the same name. The repository root contains README.md, a licence file, the lock file and a .gitignore that excludes environment directories and build artefacts. The following tree illustrates a balanced structure for a data-oriented utility library.
data-utils/
├── data_utils/
│   ├── __init__.py
│   ├── core.py
│   ├── io.py
│   ├── analysis.py
│   └── cli.py
├── tests/
│   ├── __init__.py
│   ├── test_core.py
│   └── test_analysis.py
├── docs/
│   ├── index.md
│   └── usage.md
├── examples/
│   └── demo.ipynb
├── README.md
├── LICENSE
├── pyproject.toml
├── poetry.lock
└── .gitignore
Within the package directory, __init__.py can define a public interface and hide internal details. This allows users of the library to import the essentials without needing to know the module layout.
from .core import clean_data
from .analysis import summarise_data
__all__ = ["clean_data", "summarise_data"]
If the project offers a command-line interface, Poetry makes it simple to declare an entry point, so users can run a console command after installation. The scripts section in pyproject.toml maps a command name to a callable, in this case the main function in a cli module.
[tool.poetry.scripts]
data-utils = "data_utils.cli:main"
A basic CLI might be implemented using Click, passing arguments to internal functions and relaying progress.
import click

from data_utils import core

@click.command()
@click.argument("path")
def main(path):
    """Simple CLI example."""
    print(f"Processing {path}...")
    core.clean_data(path)
    print("Done!")

if __name__ == "__main__":
    main()
The .gitignore file should filter out anything that does not belong in version control. A sensible default for a Poetry project is as follows.
__pycache__/
*.pyc
*.pyo
*.pyd
.env
.venv
dist/
build/
*.egg-info/
.cache/
.coverage
Testing and Documentation
Testing sits comfortably alongside this. Many projects adopt pytest because it is straightforward to use and integrates well with Poetry. Running tests through poetry run pytest ensures the virtual environment is used, and a simple unit test demonstrates the pattern.
from data_utils.core import clean_data

def test_clean_data_removes_nulls():
    data = [1, None, 2, None, 3]
    cleaned = clean_data(data)
    assert cleaned == [1, 2, 3]
Documentation can be kept in Markdown or built with dedicated tools. MkDocs and Sphinx are common choices for generating websites from your docs, and both can be installed as development dependencies using Poetry. Including notebooks in an examples directory is helpful for illustrating usage in richer contexts, especially for data science libraries. The README should present the essentials succinctly, covering what the project does, how to install it, a short usage example and pointers for development setup. A licence file clarifies terms of use; MIT and Apache 2.0 are widely used options in open source.
Advanced CI: Quality Checks and Multi-version Testing
Once structure, tests and documentation are in order, quality checks can be expanded in the continuous integration workflow. Adding automated formatting, import sorting and linting tightens consistency across contributions. An enhanced workflow uses Black, isort and Flake8 before running tests and building, and also includes a matrix to test across multiple Python versions. It runs on pull requests as well as on tagged pushes, which means code quality and compatibility are verified before merging changes and again before publishing a release.
name: Lint, Test and Publish

on:
  push:
    tags:
      - "v*"
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install Poetry
        run: |
          curl -sSL https://install.python-poetry.org | python3 -
          echo "$HOME/.local/bin" >> $GITHUB_PATH
      - name: Cache Poetry dependencies
        uses: actions/cache@v4
        with:
          path: |
            ~/.cache/pypoetry
            ~/.cache/pip
          key: poetry-${{ runner.os }}-${{ hashFiles('**/poetry.lock') }}
          restore-keys: |
            poetry-${{ runner.os }}-
      - name: Install dependencies
        run: poetry install --no-interaction --no-root
      - name: Check code formatting with Black
        run: poetry run black --check .
      - name: Check import order with isort
        run: poetry run isort --check-only .
      - name: Run Flake8 linting
        run: poetry run flake8 .
      - name: Run tests with pytest
        run: poetry run pytest --maxfail=1 --disable-warnings -q
      - name: Build package
        run: poetry build
      - name: Publish to PyPI
        if: startsWith(github.ref, 'refs/tags/v')
        env:
          POETRY_PYPI_TOKEN_PYPI: ${{ secrets.PYPI_TOKEN }}
        run: poetry publish --no-interaction --username __token__ --password $POETRY_PYPI_TOKEN_PYPI
This workflow builds on the earlier one by checking style and formatting before tests. If any of those checks fail, the process stops and surfaces the problems in the job logs. Caching based on the lock file reduces the time spent installing dependencies by reusing packages where nothing has changed. The matrix section ensures that the library remains compatible with the declared range of Python versions, which is especially helpful just before a release. It is possible to extend this further with coverage reports using pytest-cov and Codecov, static type checking with mypy, or pre-commit hooks to keep local development consistent with continuous integration. Publishing to TestPyPI in a separate job can help validate packaging without affecting the real index, and once outcomes look good, the main publishing step proceeds when a tag is pushed.
Conclusion
The result of adopting Poetry is a project that states its requirements clearly, installs them reliably and produces distributions without ceremony. For new work, it removes much of the friction that once accompanied Python packaging. For existing projects, the migration path is gentle and reversible, and the gains in determinism often show up quickly in fewer environment-related issues. When paired with a small amount of automation in a continuous integration system, the routine of building, testing and publishing becomes repeatable and visible to everyone on the team. That holds whether the package is destined for internal use on a private index or a public release on PyPI.
PandasGUI: A simple solution for Pandas DataFrame inspection from within VSCode
One of the things that I miss about Spyder when running Python scripts is the ability to look at DataFrames easily. Recently, I was checking a VAT return, only for tmux to truncate how much of the DataFrame I could see in the output from the print function. While closing tmux might have been an option, I sought a windowed DataFrame viewer instead. That led me to the pandasgui package, which did nearly everything that I needed, the one quirk being that it pauses script execution to show the data. The installation was done using pip:
pip install pandasgui
Once that completed, I could use the following code construct to accomplish what I wanted:
import pandasgui
pandasgui.show(df)
In my case, there were several lines between the two lines above. Nevertheless, the first line made the pandasgui package available to the script, while the second one displayed the DataFrame in a GUI with scrollbars and cells, among other things. That was close enough to what I wanted, leaving me able to complete the task at hand.
Fixing Python path issues after Homebrew updates on Linux Mint
With Python available by default on Linux Mint, it is worth asking why the version on my main Linux workstation comes courtesy of Homebrew. All that I can suggest is that it either was needed by something else or that I fancied having a newer version than was available through the Linux Mint repos. Regardless of the now vague reason for doing so, it meant that I had some work to do after running the following command to update and upgrade all my Homebrew packages:
brew update; brew upgrade
The first result was this message when I tried running a Python script afterwards:
-bash: /home/linuxbrew/.linuxbrew/bin/python3: No such file or directory
The solution was to issue the following command to re-link Python:
brew link --overwrite python@3.13
Since you may have a different version by the time that you read this, just change 3.13 above to whatever you have on your system. All was not quite sorted for me after that, though.
My next task was to make Pylance look in the right place for Python packages, because they had been moved too. Initial inquiries suggested complex, if robust, solutions; instead, I went for a simpler fix. The first step was to navigate to File > Preferences > Settings in the menus. Then, I sought out the Open Settings (JSON) icon in the top right of the interface and clicked on it to open a JSON file containing VSCode settings. Once in there, I edited the file to end up with something like this:
"python.analysis.extraPaths": [
"/home/[account name]/.local/bin",
"/home/[account name]/.local/lib/python[python version]/site-packages"
]
Clearly, your [account name] and [python version] need to be filled in above. That approach works for me so far, leaving the more complex alternative for later should I come to need that.
What to do when the externally-managed environment error appears while using pip to install Python packages on Linux Mint 22
After upgrading to Linux Mint 22, the following message appeared when attempting to install Python packages using the pip command:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
This will frustrate anyone following how-tos on the web, so users need to know about it. On something like Linux Mint, the repositories may not be as up to date as PyPI, so picking up the very latest version has its advantages. Thus, I initially used the unrecommended --break-system-packages switch to get things going as before, since doing so had never broken anything previously. While it feels like overkill in some ways, using pipx probably is the way forward, as long as things work as I want them to.
There is wisdom in using virtual environments too, especially when AI models are involved, though for most of what I get to do, that may be getting too elaborate. Deleting or renaming the marker file at /usr/lib/python3.12/EXTERNALLY-MANAGED is also tempting if that gets around things, as retrograde as that probably is. After all, I never broke anything before this message started to appear, possibly because my interests are data related.
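For reference, the two recommended routes look like this, with example-package standing in for whatever you are installing:
python3 -m venv ~/venvs/example
~/venvs/example/bin/pip install example-package

pipx install example-package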
AttributeError: module 'PIL' has no attribute 'Image'
One of my websites has an online photo gallery. This has been a long-term activity that has taken several forms over the years. Once HTML and JavaScript based, it then was powered by Perl before PHP and MySQL came along to take things from there.
While that remains how it works, the publishing side of things has used its own selection of mechanisms over the same time span. Perl and XML were the backbone until Python and Markdown took over. There was a time when ImageMagick and GraphicsMagick handled image processing, but Python now does that as well.
That was when the error message gracing the title of this post came to my notice. Everything was working well when executed in Spyder, but the message appeared when I tried running things using Python on the command line. PIL is the import name used by the Python 3 Pillow package; the original package actually called PIL dates from the Python 2 days.
For me, Pillow loads, resizes and creates new images, which is handy for adding borders and copyright/source information to each image, as well as creating thumbnails. All this happens in memory, which makes everything go quickly, much faster than disk-based tools like ImageMagick and GraphicsMagick.
Of course, nothing is going to happen if the package cannot be loaded, and that is what the error message is about. Linux is what I mainly use, so that is the context for this scenario. What I was doing was something like the following in the Python script:
import PIL
Then, I referred to PIL.Image when I needed it, and this could not be found when the script was run from the command line (bash). The solution was to add something like the following:
from PIL import Image
That sorted it, and I must have run into trouble with PIL.ImageFilter too, since I now load it in the same manner. The explanation is that importing PIL only binds the top-level package: submodules like PIL.Image are not loaded automatically unless something else has imported them already, which presumably was happening within Spyder. In both cases, I can just refer to Image or ImageFilter as required and without the dot syntax. However, you need to make sure that there is no clash with anything in another loaded Python package when doing this.
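Pulling that together, here is a minimal sketch of the kind of processing described above, with hypothetical file names:
from PIL import Image, ImageFilter

img = Image.open("photo.jpg")
img = img.filter(ImageFilter.SHARPEN)  # process in memory
thumb = img.copy()
thumb.thumbnail((200, 200))  # resize in place, preserving aspect ratio
thumb.save("photo_thumb.jpg")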
Another way to supply the terminal output of one BASH command to another
My usual way of sending the output of one command to another is to place one command after another, separated by the pipe (|) operator, adjusting the second command as needed. However, I recently found that this approach does not work with docker pull, which expects the image name as an argument rather than reading it from standard input, so another option was needed.
The solution is to enclose the input command in $( ) within the output command. Within the parentheses, any kind of command can be declared, including anything with piping as part of it. As long as text is being printed to the terminal, it can be captured and fed to the second command as required. Thus, you can have something like the following:
docker pull $([command outputting name of image to download])
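For instance, assuming a text file with one image name per line, the first entry could be pulled like this:
docker pull $(head -n 1 images.txt)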
This approach has helped with other kinds of automation of docker image and container use and deployment because it is so general. There may be other uses found for the approach yet.
Building a sitemap in XML
While there are many tools that will build XML site maps, there is some satisfaction to be had in creating your own. This is despite there being a multitude of search engine optimisation plugins for content management systems like WordPress or what is built into static site generators like Hugo. Sometimes, building your own allows for added simplicity, and that is shared with recent efforts in WordPress theme development.
The sitemap XML protocol is simple enough to offer a short coding project. The basis was what Hugo generates, and I used Python to create the XML files. The only libraries that I needed were configparser, SQLAlchemy and pandas. The first two of these allowed databases to be queried, and the last on the list was used for data processing. Otherwise, it was a case of using what is built into the Python language, like file writing and looping.
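As a flavour of the file writing and looping just mentioned, here is a minimal sketch with hypothetical URLs; the real scripts pull their URL lists from databases instead:
urls = ["https://www.example.com/", "https://www.example.com/about/"]

with open("sitemap.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    for url in urls:
        f.write(f"  <url><loc>{url}</loc></url>\n")
    f.write("</urlset>\n")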
Once the scripts were ready, they could be uploaded to web servers and executed by scheduled cron jobs to keep things up to date. Along the way, I also uncovered a way to publicise the locations of the sitemap files to search engine bots using robots.txt. The structure of the instruction is as follows:
User-agent: *
Sitemap: https://www.example.com/sitemap.xml
This announces the location of the sitemap file to all bots. In my case, I always included the full URL for the XML file, as the placeholder above suggests, and that clearly varies by website location.