Technology Tales

Notes drawn from experiences in consumer and enterprise technology

TOPIC: HOMEBREW

Silencing MLX warnings when running Ollama via Homebrew on macOS

15th February 2026

While there is an Ollama app on macOS, I chose to install it using Homebrew instead. That worked well enough, even if I kept seeing a warning message like this on macOS Tahoe:

WARN MLX dynamic library not available error="failed to load MLX dynamic library (searched: [/opt/homebrew/Cellar/ollama/0.16.2/bin

For some reason, the MLX integration in Ollama is not all that it should be, even if everything runs without any other issues as things stand. The native app does not have this problem, and warnings like these could simply be ignored; nevertheless, my chosen solution was to define this alias in the .zshrc file:

alias ollama='command ollama "$@" 2> >(grep -v "MLX dynamic library not available" >&2)'

On executing this command to reload the configuration file, the output from ollama to stderr was much cleaner:

source ~/.zshrc

However, the alias itself still needs some unpacking to explain what is happening. Let us proceed piece by piece, focussing on the less obvious parts of the text within the quotes in the alias definition.

command: Starting the whole aliased statement with this keyword makes the shell run the real executable, bypassing any alias or function of the same name, so nothing becomes recursive; that makes everything safer, even if I have got away without it for the ls command elsewhere. Were the same wrapping done in a function, omitting it would produce an infinite loop.

"$@": Without this, the arguments to the ollama command would not be passed into the alias.

2>: This redirects stderr into the process substitution >(grep -v "MLX dynamic library not available" >&2), which is where the text removal takes place.

grep -v: Within the filtering statement, this command prints every line that does not match the search string, which in this case is "MLX dynamic library not available".

>&2: This sends the surviving messages back to stderr so that they still appear in the console.
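
The same filter can also be written as a function, and there "$@" genuinely forwards the arguments. Here is a minimal sketch along those lines:

# Function form of the filter: "$@" forwards the arguments, while
# command bypasses the function itself to avoid infinite recursion.
ollama() {
  command ollama "$@" 2> >(grep -v "MLX dynamic library not available" >&2)
}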

In all of this, it is important to distinguish between stderr (standard error output) and stdout (standard output). For ollama, the latter is how you receive a response from the LLM, while the former is used for application feedback. This surprised me when I first learned of it, yet it is common behaviour in the world of Linux and UNIX, which includes macOS.

This matters here because suppressing stderr wholesale would leave you with no idea of how an LLM download is proceeding, since that progress information goes to stderr rather than stdout, as I might have expected before knowing better. Hence, the better approach is to filter the stderr output rather than discard it.
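
To see the split for yourself, redirect stdout somewhere while leaving stderr alone; the model name below is only an example of one that might be installed:

# The model's response (stdout) lands in the file, while progress
# indicators and warnings (stderr) still appear in the terminal.
ollama run llama3 "Say hello" > response.txt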

Grouping directories first in output from ls commands executed in terminal sessions on macOS and Linux

12th February 2026

This enquiry began with my seeing directories and files sorted in alphabetical order, without regard for type, in macOS Finder. In Windows and Linux file managers, I am accustomed to directories and files being listed in distinct blocks within the same listing, with the former preceding the latter. What I had missed was that the ls command and its aliases behave just as macOS Finder does, which perhaps explains why the operating system and its default apps work like that.

Over to Linux

With the BSD ls command that macOS ships, there is no way to order the output so that directories are listed before files. However, the situation is different on Linux because of its GNU tooling. Here, the --group-directories-first switch is available, and I have started to use this on my own Linux systems, web servers as well as workstations. This can be set up in .bashrc or .bash_aliases like the following:

alias ls='ls --color=auto --group-directories-first'
alias ll='ls -lh --color=auto --group-directories-first'

Above, the --color=auto switch adds colour to the output too. Issuing the following command makes the updates available in a terminal session (~ is the shorthand for the home directory below):

source ~/.bashrc

Back to macOS

While that works well on Linux, additional tweaks are needed to implement the same on macOS. Firstly, you have to install Homebrew using this command (you may be asked for your system password to let the process proceed):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

For the GNU tools to be found ahead of their macOS counterparts, this should be added to the .zshrc file in your home folder:

export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"

Then, you need to install coreutils to get GNU commands like gls (a different name is used to distinguish it from what comes with macOS); the same package bundles dircolors, which gives you coloured text output as well:

brew install coreutils

Once those were in place, I found that adding these lines to the .zshrc file was all that was needed (note those extra g's):

alias ls='gls --color=auto --group-directories-first'
alias ll='gls -lh --color=auto --group-directories-first'

If colours fail to appear in your case, the aliases may need to be preceded by this line in the same configuration file:

eval "$(dircolors -b)"

The final additions then look like this:

export PATH="/opt/homebrew/opt/coreutils/libexec/gnubin:$PATH"
eval "$(dircolors -b)"
alias ls='gls --color=auto --group-directories-first'
alias ll='gls -lh --color=auto --group-directories-first'

Following those, issuing this command will make the settings available in your terminal session:

source ~/.zshrc

Closing Remarks

In summary, you have learned how to list directories before files, rather than intermingled with them as is the default. For me, this discovery was educational, and it adds some extra user-friendliness that was not there before the tweaks. While we may be considering two operating systems and two different shells (bash and zsh), there is enough crossover to make terminal directory and file listing operations behave consistently regardless of where you are working.

Preventing authentication credentials from entering Git repositories

6th February 2026

Keeping credentials out of version control is a fundamental security practice that prevents numerous problems before they occur. Once secrets enter Git history, remediation becomes complex and cannot guarantee complete removal. Understanding how to structure projects, configure tooling, and establish workflows that prevent credential commits is essential for maintaining secure development practices.

Understanding What Needs Protection

Credentials come in many forms across different technology stacks. Database passwords, API keys, authentication tokens, encryption keys, and service account credentials all represent sensitive data that should never be committed. Configuration files containing these secrets vary by platform but share common characteristics: they hold information that would allow unauthorised access to systems, data, or services.

# This should never be in Git
database:
  password: secretpassword123
api:
  key: sk_live_abc123def456
# Neither should this
DB_PASSWORD=secretpassword123
API_KEY=sk_live_abc123def456
JWT_SECRET=mysecretkey

Even hashed passwords require careful consideration. Whilst bcrypt hashes with appropriate cost factors (10 or higher, requiring approximately 65 milliseconds per computation) provide protection against immediate exploitation, they still represent sensitive data. The hash format includes a version identifier, cost factor, salt, and hash components that could be targeted by offline attacks if exposed. Plaintext credentials offer no protection whatsoever and represent immediate critical exposure.

Establishing Git Ignore Rules from the Start

The foundation of credential protection is proper .gitignore configuration established before any sensitive files are created. This proactive approach prevents problems rather than requiring remediation after discovery. Begin every project by identifying which files will contain secrets and excluding them immediately.

# Credentials and secrets
.env
.env.*
!.env.example
config/secrets.yml
config/database.yml
config/credentials/

# Application-specific sensitive files
wp-config.php
settings.php
configuration.php
appsettings.Production.json
application-production.properties

# User data and session storage
/storage/credentials/
/var/sessions/
/data/users/

# Keys and certificates
*.key
*.pem
*.p12
*.pfx
!public.key

# Cache and logs that might leak data
/cache/
/logs/
/tmp/
*.log

The negation pattern !.env.example demonstrates an important technique: excluding all .env files whilst explicitly including example files that show structure without containing secrets. This pattern ensures that developers understand what configuration is needed without exposing actual credentials.

Notice the broad exclusions for entire categories rather than specific files. Excluding *.key prevents any private key files from being committed, whilst !public.key allows the explicit inclusion of public keys that are safe to share. This defence-in-depth approach catches variations and edge cases that specific file exclusions might miss.

Separating Examples from Actual Configuration

Version control should contain example configurations that demonstrate structure without exposing secrets. Create .example or .sample files that show developers what configuration is required, whilst keeping actual credentials out of Git entirely.

# config/secrets.example.yml
database:
  host: localhost
  username: app_user
  password: REPLACE_WITH_DATABASE_PASSWORD

api:
  endpoint: https://api.example.com
  key: REPLACE_WITH_API_KEY
  secret: REPLACE_WITH_API_SECRET

encryption:
  key: REPLACE_WITH_32_BYTE_ENCRYPTION_KEY

Documentation should explain where developers obtain the actual values. For local development, this might involve running setup scripts that generate credentials. For production, it involves deployment processes that inject secrets from secure storage. The example file serves as a template and checklist, ensuring nothing is forgotten whilst preventing accidental commits of real secrets.
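
In practice, local setup can then be a single copy followed by editing in the real values; the copy stays out of Git because of the ignore rules shown earlier:

# Create a private copy of the template; secrets.yml is excluded by .gitignore
cp config/secrets.example.yml config/secrets.yml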

Using Environment Variables

Environment variables provide a standard mechanism for separating configuration from code. Applications read credentials from the environment rather than from files tracked in Git. This pattern works across virtually all platforms and languages.

// Instead of hardcoding
$db_password = 'secretpassword123';

// Read from environment
$db_password = getenv('DB_PASSWORD');

// Instead of requiring a config file with secrets
const apiKey = 'sk_live_abc123def456';

// Read from environment
const apiKey = process.env.API_KEY;

Environment files (.env) provide convenience for local development, but must be excluded from Git. The pattern of .env for actual secrets and .env.example for structure becomes standard across many frameworks. Developers copy the example to create their local configuration, filling in actual values that never leave their machine.
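
As a sketch of that workflow in a shell session:

# Create a local configuration from the committed template
cp .env.example .env

# One way to load the variables into the current shell for ad-hoc testing;
# most frameworks read .env themselves, so this step is often unnecessary.
set -a
source .env
set +a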

Implementing Pre-Commit Hooks

Pre-commit hooks provide automated checking before changes enter the repository. These hooks scan staged files for patterns that match secrets and block commits when suspicious content is detected. This automated enforcement prevents mistakes that manual review might miss.

The pre-commit framework manages hooks across multiple repositories and languages. Installation is straightforward, and configuration defines which checks run before each commit.

pip install pre-commit

Create a configuration file defining which hooks to run:

# .pre-commit-config.yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: check-added-large-files
      - id: check-json
      - id: check-yaml
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: trailing-whitespace

  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.4.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']

Install the hooks in your repository:

pre-commit install

Now every commit triggers these checks automatically. The detect-private-key hook catches SSH keys and other private key formats. The detect-secrets hook uses entropy analysis and pattern matching to identify potential credentials. When suspicious content is detected, the commit is blocked and the developer is alerted to review the flagged content.
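
When adopting this on an existing codebase, it is worth generating the baseline that the configuration above references and then running every hook against all files, not just staged ones:

# Record current findings so that only newly introduced secrets block commits
detect-secrets scan > .secrets.baseline

# Run all configured hooks across the whole repository
pre-commit run --all-files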

Configuring Git-Secrets

The git-secrets tool from AWS specifically targets secret detection. It scans commits, commit messages, and merges to prevent credentials from entering repositories. Installation and configuration establish patterns that identify secrets.

# Install via Homebrew
brew install git-secrets

# Install hooks in a repository
cd /path/to/repo
git secrets --install

# Register AWS patterns
git secrets --register-aws

# Add custom patterns
git secrets --add 'password\s*=\s*["\047][^\s]+'
git secrets --add 'api[_-]?key\s*=\s*["\047][^\s]+'

The tool maintains a list of prohibited patterns and scans all content before allowing commits. Custom patterns can be added to match organisation-specific secret formats. The --register-aws command adds patterns for AWS access keys and secret keys, whilst custom patterns catch application-specific credential formats.
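
The same patterns can also be checked against content that is already present, which is a sensible first step when introducing the tool to an existing repository:

# Scan the working tree, then the full commit history, for matches
git secrets --scan
git secrets --scan-history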

For teams, establishing git-secrets across all repositories ensures consistent protection. Template directories provide a mechanism for automatic installation in new repositories:

# Create a template with git-secrets installed
git secrets --install ~/.git-templates/git-secrets

# Use the template for all new repositories
git config --global init.templateDir ~/.git-templates/git-secrets

Now, every git init automatically includes secret scanning hooks.

Enabling GitHub Secret Scanning

GitHub Secret Scanning provides server-side protection that cannot be bypassed by local configuration. GitHub automatically scans repositories for known secret patterns and alerts repository administrators when matches are detected. This works for both new commits and historical content.

For public repositories, secret scanning is enabled by default. For private repositories, it requires GitHub Advanced Security. Enable it through repository settings under Security & Analysis. GitHub maintains partnerships with service providers to detect their specific secret formats, and when a partner pattern is found, both you and the service provider are notified.

The scanning covers not just code but also issues, pull requests, discussions, and wiki content. This comprehensive approach catches secrets that might be accidentally pasted into comments or documentation. The detection happens continuously, so even old content gets scanned when new patterns are added.

Custom patterns extend detection to organisation-specific secret formats. Define regular expressions that match your internal API key formats, authentication tokens, or other proprietary credentials. These custom patterns apply across all repositories in your organisation, providing consistent protection.

Structuring Projects for Credential Isolation

Project structure itself can prevent credentials from accidentally entering Git. Establish clear separation between code that belongs in version control and configuration that remains environment-specific. Create dedicated directories for credentials and ensure they are excluded from tracking.

project/
├── src/                    # Code - belongs in Git
├── tests/                  # Tests - belongs in Git
├── config/
│   ├── app.yml            # General config - belongs in Git
│   ├── secrets.example.yml # Example - belongs in Git
│   └── secrets.yml        # Actual secrets - excluded from Git
├── credentials/           # Entire directory excluded
│   ├── database.yml
│   └── api-keys.json
├── .env.example           # Example - belongs in Git
├── .env                   # Actual secrets - excluded from Git
└── .gitignore             # Defines exclusions

This structure makes it obvious which files contain secrets. The credentials/ directory is clearly separated from source code, and its exclusion from Git is explicit. Developers can see at a glance that this directory requires different handling.

Documentation should explain the structure and the reasoning behind it. New team members need to understand why certain directories are empty in their fresh clones and where to obtain the configuration files that populate them. Clear documentation prevents confusion and ensures everyone follows the same patterns.

Managing Development Credentials

Development environments require credentials but should never use production secrets. Generate separate development credentials that provide access to development resources only. These credentials can be less stringently protected whilst still not being committed to Git.

Development credential management varies by organisation size and infrastructure. For small teams, shared development credentials stored in a team password manager might suffice. For larger organisations, each developer receives individual credentials for development resources, with access controlled through identity management systems.

Some teams commit development credentials intentionally, arguing that development databases contain no sensitive data and convenience outweighs risk. This approach is controversial and depends on your security model. If development credentials can access any production resources or if development data has any sensitivity, they must be protected. Even purely synthetic development data might reveal business logic or system architecture worth protecting.

The safer approach maintains the same credential handling patterns across all environments. This ensures that developers build habits that prevent production credential exposure. When development and production follow identical patterns, muscle memory built during development prevents mistakes in production.

Provisioning Production Credentials

Production credentials should never touch developer machines or version control. Deployment processes inject credentials at runtime through environment variables, secret management services, or deployment-time configuration.

Continuous deployment pipelines read credentials from secret stores and make them available to applications without exposing them to humans. GitHub Actions, GitLab CI, Jenkins, and other CI/CD systems provide secure variable storage that is injected during builds and deployments.

# .github/workflows/deploy.yml
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Deploy to production
        env:
          DB_PASSWORD: ${{ secrets.DB_PASSWORD }}
          API_KEY: ${{ secrets.API_KEY }}
        run: |
          ./deploy.sh

The secrets.DB_PASSWORD syntax references encrypted secrets stored in GitHub's secure storage. These values are never exposed in logs or visible to anyone except during the deployment process. The deployment script receives them as environment variables and can configure the application appropriately.

Secret management services like HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, or Google Cloud Secret Manager provide centralised credential storage with access controls, audit logging, and automatic rotation. Applications authenticate to these services and retrieve credentials at runtime, ensuring that secrets are never stored on disk or in environment files.
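
As an illustrative sketch only, with a hypothetical secret path and field name, a deployment script might pull a database password from HashiCorp Vault like this:

# Authenticate first (e.g. vault login); the path and field are examples
export DB_PASSWORD="$(vault kv get -field=password secret/app/database)"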

Rotating Credentials Regularly

Regular credential rotation limits exposure duration if secrets are compromised. Establish rotation schedules based on credential sensitivity and access patterns. Database passwords might rotate quarterly, API keys monthly, and authentication tokens weekly or daily. Automated rotation reduces operational burden and ensures consistency.

Rotation requires coordination between secret generation, distribution, and application updates. Secret management services can automate much of this process, generating new credentials, updating secure storage, and triggering application reloads. Manual rotation involves generating new credentials, updating all systems that use them, and verifying functionality before disabling old credentials.

The rotation schedule balances security against operational complexity. More frequent rotation provides better security but increases the risk of service disruption if processes fail. Less frequent rotation simplifies operations but extends exposure windows. Find the balance that matches your risk tolerance and operational capabilities.

Training and Culture

Technical controls provide necessary guardrails, but security ultimately depends on people understanding why credentials matter and how to protect them. Training should cover the business impact of credential exposure, the techniques for keeping secrets out of Git, and the procedures for responding if mistakes occur.

New developer onboarding should include credential management as a core topic. Before developers commit their first code, they should understand what constitutes a secret, why it must stay out of Git, and how to configure their local environment properly. This prevents problems rather than correcting them after they occur.

Regular security reminders reinforce good practices. When new secret types are introduced or new tools are adopted, communicate the changes and update documentation. Security reviews should check credential handling practices, not just looking for exposed secrets, but also verifying that proper patterns are followed.

Creating a culture where admitting mistakes is safe encourages early reporting. If a developer accidentally commits a credential, they should feel comfortable immediately alerting the security team, rather than hoping no one notices. Early detection enables faster response and reduces damage.

Responding to Detection

Despite best efforts, secrets sometimes enter repositories. Rapid response limits damage. Immediate credential rotation assumes compromise and prevents exploitation. Removing the file from future commits whilst leaving it in history provides no security benefit, as the exposure has already occurred.

Tools like BFG Repo-Cleaner can remove secrets from Git history, but this is complex and cannot guarantee complete removal. Anyone who cloned the repository before clean-up retains the compromised credentials in their local copy. Forks, clones on other systems, and backup copies may all contain the secrets.
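
For reference, the typical BFG workflow looks like the following, with my-repo.git and passwords.txt as placeholders; rotating the exposed credential must still come first:

# Mirror-clone the repository, scrub every string listed in passwords.txt,
# expire old references, then force-push the rewritten history
git clone --mirror https://example.com/my-repo.git
bfg --replace-text passwords.txt my-repo.git
cd my-repo.git
git reflog expire --expire=now --all
git gc --prune=now --aggressive
git push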

The most reliable response is assuming the credential is compromised and rotating it immediately. History clean-up can follow as a secondary measure to reduce ongoing exposure, but it should never be the primary response. Treat any secret that entered Git as if it were publicly posted because once in Git history, it effectively was.

Continuous Improvement

Credential management practices should evolve with your infrastructure and team. Regular reviews identify gaps and opportunities for improvement. When new credential types are introduced, update .gitignore patterns, secret scanning rules, and documentation. When new developers join, gather feedback on clarity and completeness of onboarding materials.

Metrics help track effectiveness. Monitor secret scanning alerts, track rotation compliance, and measure time-to-rotation when credentials are exposed. These metrics identify areas needing improvement and demonstrate progress over time.

Summary

Preventing credentials from entering Git repositories requires multiple complementary approaches. Establish comprehensive .gitignore configurations before creating any credential files. Separate example configurations from actual secrets, keeping only examples in version control. Use environment variables to inject credentials at runtime rather than storing them in configuration files. Implement pre-commit hooks and server-side scanning to catch mistakes before they enter history. Structure projects to clearly separate code from credentials, making it obvious what belongs in Git and what does not.

Train developers on credential management and create a culture where security is everyone's responsibility. Provision production credentials through deployment processes and secret management services, ensuring they never touch developer machines or version control. Rotate credentials regularly to limit exposure windows. Respond rapidly when secrets are detected, assuming compromise and rotating immediately.

Security is not a one-time configuration but an ongoing practice. Regular reviews, continuous improvement, and adaptation to new threats and technologies keep credential management effective. The investment in prevention is far less than the cost of responding to exposed credentials, making it essential to get right from the beginning.

Related Resources

Finding a better way to uninstall Mac applications

14th January 2026

If you were to consult an AI about uninstalling software under macOS, you would be given a list of commands to run in the Terminal. That feels far less slick than either Linux or Windows, so I set about looking for a cleaner solution. It came in the form of AppCleaner from FreeMacSoft.

This finds the files to remove once you have supplied the name of the app that you wish to uninstall. Once you have reviewed them, you can set it to move them to the Bin, from where they can be expunged. Handily, this automates the manual graft that otherwise would be needed.

It amazes me that such an operation is not handled within macOS itself, instead of being left to the software providers themselves or to third-party tools like this one. Otherwise, a Mac could get very messy, though Homebrew offers ways of managing software installations in certain cases. Surprisingly, the situation is more free-form than on iOS, too.
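
For anything that arrived via Homebrew, removal stays a one-liner; the app name below is a placeholder:

# Uninstall a GUI application that was installed as a Homebrew cask
brew uninstall --cask appname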

Generating Git commit messages automatically using aicommit and OpenAI

25th October 2025

One of the overheads of using version control systems like Subversion or Git is the need to create descriptive messages for each revision. Now that GitHub has Copilot, it can generate those messages for you. However, that still leaves anyone with a local Git repository out in the cold, even if you are uploading to GitHub as your remote repo.

One thing that a Homebrew update does is highlight other packages that are available, which is how I got to learn of a tool that helps with this: aicommit. Installing it is just a simple command away:

brew install aicommit

Once that is complete, you have a tool that generates a message describing every commit using GPT. For it to work, you do need to get yourself set up with OpenAI's API services and generate a token. That token then needs an environment variable to be set to make it available. On Linux (and macOS), this works:

export OPENAI_API_KEY=<Enter the API token here, without the brackets>

Because I use this API for Python scripting, that part was already in place. Thus, I could proceed to the next stage: inserting it into my workflow. For the sake of added automation, this uses shell scripting on my machines. The basic sequence is this:

git add .
git commit -m "<a default message>"
git push

The first line above stages everything, the second commits the files with an associated message (Git makes this mandatory, much like Subversion), and the third pushes the files into the GitHub repository. Fitting in aicommit then changes the above to this:

git add .
aicommit
git push

There is now no need to define a message because aicommit does that for you, saving some effort. However, token limitations on the OpenAI side mean that the aicommit command can fail, causing the update operation to abort. Thus, it is safer to catch that situation using the following code:

git add .
if ! aicommit 2>/dev/null; then
    echo "  aicommit failed, using fallback"
    git commit -m "<a default message>"
fi
git push

This now informs me what has happened when the AI option is overloaded, and the scripts fall back to a default option that is always available with Git. While there is more to my Git scripting than this, the snippets included here should get across how things can work. They serve well for small push operations, which is what happens most of the time; usually, I do not attempt more than that.

Command line installation and upgrading of VSCode and VSCodium on Windows, macOS and Linux

25th October 2025

Downloading and installing software packages from a website is all very well until you need to update them. Then, a single command streamlines the process significantly. Given that VSCode and VSCodium are updated regularly, this becomes all the more pertinent and explains why I chose them for this piece.

Windows

Now that Windows 10 is more or less behind us, we can focus on Windows 11. That comes with the winget command by default, which is handy because it allows command line installation of anything in its package repository, which includes VSCode and VSCodium. The commands can be as simple as these:

winget install VisualStudioCode
winget install VSCodium.VSCodium

The above is shorthand for specifying the full package identifiers with an explicit --id switch, though:

winget install --id Microsoft.VisualStudioCode
winget install --id VSCodium.VSCodium

If you want exact matches, the above then becomes:

winget install -e --id Microsoft.VisualStudioCode
winget install -e --id VSCodium.VSCodium
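
If you are unsure of a package identifier, winget can look it up first; the query term below is just an example:

winget search vscodium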

For upgrades, this is what is needed:

winget upgrade Microsoft.VisualStudioCode
winget upgrade VSCodium.VSCodium

Even better, you can upgrade everything at once:

winget upgrade --all

That last part certainly beats a round trip to a website followed by a march through an installation GUI. There is a lot less mouse clicking, for one thing.

macOS

On macOS, you need to have Homebrew installed to make things more streamlined. To get that in place, run the following command (which may need you to enter your system password to get things to happen):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then, you can execute one or both of these in the Terminal app, perhaps having to authorise everything with your password when requested to do so:

brew install --cask visual-studio-code
brew install --cask vscodium

The reason for the --cask switch is that these are apps that need to go into the correct locations on macOS, as well as having their icons appear in Launchpad. Omitting it is fine for command line utilities, but not for these.

To update and upgrade everything that you have installed via Homebrew, just issue the following in a terminal session:

brew update && brew upgrade

Debian, Ubuntu & Linux Mint

Like any other Debian or Ubuntu derivative, Linux Mint has its own in-built package management system via apt. Other Linux distributions have their own way of doing things (Fedora and Arch come to mind here), yet the essential idea is similar in many cases. Because there are a number of steps, I have split out VSCode from VSCodium for added clarity. Because of the way that things are set up, one or both apps can be updated using the usual apt commands without individual attention.

VSCode

The first step is to download the repository key and install it where apt can find it, using the following commands:

wget -qO- https://packages.microsoft.com/keys/microsoft.asc \
| gpg --dearmor > packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg

Then, you can add the repository like this:

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/packages.microsoft.gpg] \
https://packages.microsoft.com/repos/code stable main" \
| sudo tee /etc/apt/sources.list.d/vscode.list

With that in place, the last thing that you need to do is issue the command for doing the installation from the repository:

sudo apt update; sudo apt install code

Above, I have put two commands together: one to refresh the package lists and another to do the installation.

VSCodium

Since the VSCodium process is similar, here are the three commands together: one for downloading the repository key, another that adds the new repository and one more to perform the repository updates and subsequent installation:

curl -fSsL https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg \
| sudo gpg --dearmor | sudo tee /usr/share/keyrings/vscodium-archive-keyring.gpg >/dev/null

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/vscodium-archive-keyring.gpg] \
https://download.vscodium.com/debs vscodium main" \
| sudo tee /etc/apt/sources.list.d/vscodium.list

sudo apt update; sudo apt install codium

After the three steps have completed successfully, VSCodium is installed and available to use on your system, and is accessible through the menus too.

Controlling the version of Python used in the Positron console with virtual environments

21st October 2025

Because I have Homebrew installed on my Linux system for getting Hugo and LanguageTool on there, I also have a later version of Python than is available from the Linux Mint repositories. Both 3.12 and 3.13 are on my machine as a consequence. Here is the line in my .bashrc file that makes that happen:

eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

The result is that when I issue the command which python3, this is what I get:

/home/linuxbrew/.linuxbrew/bin/python3

However, Positron looks to /usr/bin/python3 by default. Since this can get confusing, setting up a virtual environment has its uses, as long as you create it with the intended Python version. This is how you can do it, even if I needed to use sudo for some reason:

python3 -m venv .venv
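
To pin the interpreter explicitly, point at the binary you want when creating the environment; the Homebrew path below is the one reported by which python3 earlier:

# Create the venv with a specific interpreter rather than the first
# python3 found on the PATH
/home/linuxbrew/.linuxbrew/bin/python3 -m venv .venv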

When working solely on the command line, activating it becomes a necessity, adding another manual step for someone who had resisted all this until recently:

source .venv/bin/activate

Thankfully, just issuing the deactivate command undoes the activation. Even better, opening a folder containing a venv in Positron saves you from issuing the extra commands and grants you the desired Python version in the console that it opens. Having run into some clashes between package versions, I am beginning to appreciate having a dedicated environment for a set of Python scripts, especially when an IDE makes it easy to work with such an arrangement.

Fixing Python path issues after Homebrew updates on Linux Mint

30th August 2025

Given that Python is available on Linux Mint by default, it is worth asking why the version on my main workstation comes courtesy of Homebrew. All that I can suggest is that it either was needed by something else, or I fancied having a newer version than was available through the Linux Mint repos. Regardless of the now vague reason for doing so, it meant that I had some work to do after running the following command to update and upgrade all my Homebrew packages:

brew update; brew upgrade

The first result was this message when I tried running a Python script afterwards:

-bash: /home/linuxbrew/.linuxbrew/bin/python3: No such file or directory

The solution was to issue the following command to re-link Python:

brew link --overwrite python@3.13

Since you may have a different version by the time that you read this, just change 3.13 above to whatever you have on your system. All was not quite sorted for me after that, though.

My next task was to make Pylance look in the right place for Python packages, because they had been moved too. Initial enquiries suggested complex, if robust, solutions. Instead, I went for a simpler fix. The first step was to navigate to File > Preferences > Settings in the menus. Then, I sought out the Open Settings (JSON) icon in the top right of the interface and clicked on it to open a JSON file containing VSCode settings. Once in there, I edited the file to end up with something like this:

    "python.analysis.extraPaths": [
        "/home/[account name]/.local/bin",
        "/home/[account name]/.local/lib/python[python version]/site-packages"
    ]

Clearly, your [account name] and [python version] need to be filled in above. That approach works for me so far, leaving the more complex alternative for later should I come to need that.
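
When filling in those paths, it helps to see where the interpreter actually searches; this one-liner prints each entry on its own line:

python3 -c "import sys; print('\n'.join(sys.path))"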

Resolving a clash between Homebrew and Python

22nd November 2022

For reasons that I cannot recall now, I installed the Hugo static website generator on my Linux system and web servers using Homebrew. The only reason that I can suggest is that it might have been a way to get the latest version at the time, because Linux Mint only does major changes like that every two years, keeping it in line with long-term support editions of Ubuntu.

When Homebrew was installed, it changed the lookup path for command line executables by adding the following line to my .bashrc file:

eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

This executed the following lines:

export HOMEBREW_PREFIX="/home/linuxbrew/.linuxbrew";
export HOMEBREW_CELLAR="/home/linuxbrew/.linuxbrew/Cellar";
export HOMEBREW_REPOSITORY="/home/linuxbrew/.linuxbrew/Homebrew";
export PATH="/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin${PATH+:$PATH}";
export MANPATH="/home/linuxbrew/.linuxbrew/share/man${MANPATH+:$MANPATH}:";
export INFOPATH="/home/linuxbrew/.linuxbrew/share/info:${INFOPATH:-}";

While the result suits Homebrew, it changed the setup of Python and its packages on my system. Eventually, this had undesirable consequences, like messing up how Spyder started, so I wanted to change things. There are other tasks that I have automated using Python, and these were not working either.

One way that I have seen suggested is to execute the following command, but I cannot vouch for this:

brew unlink python

What I did was to comment out the offending line in .bashrc and replace it with the following:

export PATH="$PATH:/home/linuxbrew/.linuxbrew/bin:/home/linuxbrew/.linuxbrew/sbin"

export HOMEBREW_PREFIX="/home/linuxbrew/.linuxbrew";
export HOMEBREW_CELLAR="/home/linuxbrew/.linuxbrew/Cellar";
export HOMEBREW_REPOSITORY="/home/linuxbrew/.linuxbrew/Homebrew";

export MANPATH="/home/linuxbrew/.linuxbrew/share/man${MANPATH+:$MANPATH}:";
export INFOPATH="${INFOPATH:+${INFOPATH}:}/home/linuxbrew/.linuxbrew/share/info";

The first command adds Homebrew paths to the end of the PATH variable rather than the beginning, which was the previous arrangement. This ensures system folders are searched for executable files before Homebrew folders. It also means Python packages load from my user area instead of the Homebrew location, which happened under Homebrew's default configuration. When working with Python packages, remember not to install one version at the system level and another in your user area, as this creates conflicts.
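
A quick way to confirm the new resolution order is to ask the shell to list every match on the PATH:

type -a python3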

So far, the result of the Homebrew changes is not unsatisfactory, and I will watch for any rough edges that need addressing. If something comes up, then I will set things up in another way.
