Technology Tales

Adventures in consumer and enterprise technology

TOPIC: LINUX

Installing PowerShell on Linux Mint for some cross-platform testing

25th November 2025

Given how well shell scripting works on Linux and my familiarity with it, the need to install PowerShell on a Linux system may seem surprising. However, this was part of some testing that I wanted to do on a machine that I controlled before moving the code to a client's system. The first step was to ensure that any prerequisites were in place:

sudo apt update
sudo apt install -y wget apt-transport-https software-properties-common

After that, the next move was to download and install the package that adds the Microsoft repository details:

wget -q https://packages.microsoft.com/config/ubuntu/24.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb

Then, I could install PowerShell itself:

sudo apt update
sudo apt install -y powershell

When it was in place, issuing the following command started up the extra shell for what I needed to do:

pwsh
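
Once inside the shell, a quick way to confirm which release is running is to query the built-in $PSVersionTable variable, something that can also be done from Bash without opening an interactive session:

pwsh -Command '$PSVersionTable.PSVersion'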

During my investigations, I found that my local version of PowerShell was not the same as on the client's system, meaning that any code was not as portable as I might have expected. Nevertheless, it is good to have this for future reference, and it proves how interoperable Microsoft has needed to become.

Remote access between Mac and Linux: Choosing the right approach

28th October 2025

Connecting from a Mac to a Linux desktop on the same network can be done in several ways, and the right choice depends on whether terminal access suffices or a full graphical session is needed. Terminal access is the simplest to arrange and often the most robust, while graphical access can be provided either by creating a fresh desktop session or by sharing the one already open on the Linux machine. Each approach trades ease of setup, performance and fidelity in different ways, so it helps to understand the options before settling on a configuration.

Understanding Your Requirements

The choice between methods rests primarily on three questions. First, is command-line access sufficient, or is a graphical desktop required? Second, if a desktop is needed, should it be a new session, or must it mirror the existing physical display? Third, how important is responsiveness compared to visual fidelity and feature completeness?

For administrative tasks that involve editing configuration files, managing services, or running scripts, SSH provides everything necessary. When a desktop environment is required, the decision becomes whether to view the exact state of the Linux machine's monitor or to work in a separate session.

The Three Main Options

SSH for Terminal Access

SSH requires no graphical overhead and works reliably over any connection. For many administrative tasks, this is all that is needed. Setting up SSH access is straightforward and forms the foundation for other secure operations, including file transfer and tunnelling.
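
Assuming that the openssh-server package is installed and running on the Linux machine, a session from the Mac's Terminal needs nothing more than a command like the one below, with the username and hostname adjusted to suit; copying a key across with ssh-copy-id beforehand avoids repeated password prompts:

ssh username@linux-machine.local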

RDP for New Desktop Sessions

Remote Desktop Protocol excels at creating new sessions with clean input handling and good performance over imperfect connections. RDP with a lightweight desktop such as Xfce delivers the most responsive experience for new sessions, though it does not support compositing desktops like Cinnamon well. The protocol translates keyboard and mouse input in a way that many clients have optimised for years, making it the most forgiving route when precise input behaviour matters.
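
As a flavour of what is involved ahead of the detailed write-up later in the series, installing xrdp alongside Xfce on a Debian-based system and pointing the session at Xfce might look something like this, after which an RDP client on the Mac connects to the machine's address on port 3389:

sudo apt install xrdp xfce4
echo "xfce4-session" > ~/.xsession
sudo systemctl enable --now xrdp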

VNC for Virtual or Shared Desktops

VNC can either create new virtual desktops or share the physical display. TigerVNC is suitable when a new Cinnamon session is acceptable and continuity with the local environment is valued. It can launch a full Cinnamon desktop in a virtual session, though it may feel less responsive than RDP, particularly when network conditions are suboptimal.
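
Again as a sketch rather than the full treatment, the tigervnc-standalone-server package supplies the vncserver and vncpasswd commands, and a short ~/.vnc/xstartup script is what tells the virtual session to launch Cinnamon. The new session then appears on display :1, which maps to TCP port 5901 for the client on the Mac:

sudo apt install tigervnc-standalone-server
vncpasswd
printf '#!/bin/sh\nexec cinnamon-session\n' > ~/.vnc/xstartup
chmod +x ~/.vnc/xstartup
vncserver :1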

x11vnc mirrors the physical display exactly, making it ideal for monitoring ongoing work or providing remote guidance. This is the only option when the requirement is to see precisely what appears on the Linux machine's screen. However, it shares the same performance characteristics as other VNC solutions and is limited to showing what is already displayed locally.
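
A representative invocation, assuming that x11vnc has been installed from the repositories and a password stored with x11vnc -storepasswd, would be along these lines, attaching to the existing display and staying alive across client disconnections:

x11vnc -display :0 -auth guess -rfbauth ~/.vnc/passwd -forever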

Making the Choice

The decision ultimately comes down to the specific use case. If the goal is to work efficiently in a fresh desktop session with optimal responsiveness, RDP to an Xfce desktop environment is the clear choice. If maintaining the full Cinnamon experience in a new session is important, TigerVNC provides that continuity. When the task requires seeing or controlling the exact desktop session that is already running on the Linux machine, x11vnc is the only viable option.

In the articles that follow, we will examine the practical setup and configuration of x11vnc for sharing physical desktops, followed by detailed guidance on SSH, RDP and TigerVNC for those preferring terminal access or fresh desktop sessions.

What's Next

Part 2 explores x11vnc in detail, covering everything from basic setup to advanced performance tuning, input handling with KVM switches, clipboard troubleshooting and running x11vnc as a system service.

Part 3 examines SSH for terminal access, RDP with Xfce for responsive remote sessions, and TigerVNC for virtual Cinnamon desktops, along with file transfer options and operational considerations.

Command line installation and upgrading of VSCode and VSCodium on Windows, macOS and Linux

25th October 2025

Downloading and installing software packages from a website is all very well until you need to update them. Then, a single command streamlines the process significantly. Given that VSCode and VSCodium are updated regularly, this becomes all the more pertinent and explains why I chose them for this piece.

Windows

Now that Windows 10 is more or less behind us, we can focus on Windows 11. That comes with the winget command by default, which is handy because it allows command line installation of anything in its package repositories, including VSCode and VSCodium. The commands can be as simple as these:

winget install Microsoft.VisualStudioCode
winget install VSCodium.VSCodium

The above is shorthand for this, though:

winget install --id Microsoft.VisualStudioCode
winget install --id VSCodium.VSCodium

If you want exact matches, the above then becomes:

winget install -e --id Microsoft.VisualStudioCode
winget install -e --id VSCodium.VSCodium

For upgrades, this is what is needed:

winget upgrade Microsoft.VisualStudioCode
winget upgrade VSCodium.VSCodium

Even better, you can upgrade everything at once:

winget upgrade --all

The last part certainly beats the round trip to a website followed by a walk through an installation GUI. There is a lot less mouse clicking, for one thing.
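
If there is any doubt about the exact identifiers, winget can look them up before anything gets installed or upgraded:

winget search vscode
winget search vscodium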

macOS

On macOS, you need to have Homebrew installed to make things more streamlined. To put that in place, you need to run the following command (which may ask for your system password along the way):

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Then, you can execute one or both of these in the Terminal app, perhaps having to authorise everything with your password when requested to do so:

brew install --cask visual-studio-code
brew install --cask vscodium

The reason for the --cask switch is that these are apps that need to go into the correct locations on macOS, as well as having their icons appear in Launchpad. Omitting it is fine for command line utilities, but not for these.

To update and upgrade everything that you have installed via Homebrew, just issue the following in a terminal session:

brew update && brew upgrade
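
Should you want to see what is pending first, or to restrict the upgrade to just these two casks, the following also work:

brew outdated
brew upgrade --cask visual-studio-code vscodium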

Debian, Ubuntu & Linux Mint

Like any other Debian or Ubuntu derivative, Linux Mint has its own in-built package management system via apt. Other Linux distributions have their own way of doing things (Fedora and Arch come to mind here), yet the essential idea is similar in many cases. Because there are a number of steps, I have split out VSCode from VSCodium for added clarity. Because of the way that things are set up, one or both apps can be updated using the usual apt commands without individual attention.

VSCode

The first step is to download the repository key and put it in place using the following commands:

wget -qO- https://packages.microsoft.com/keys/microsoft.asc \
| gpg --dearmor > packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg

Then, you can add the repository like this:

echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/packages.microsoft.gpg] \
https://packages.microsoft.com/repos/code stable main" \
| sudo tee /etc/apt/sources.list.d/vscode.list

With that in place, the last thing that you need to do is issue the command for doing the installation from the repository:

sudo apt update; sudo apt install code

Above, I have put two commands together: one to update the package lists and another to do the installation.

VSCodium

Since the VSCodium process is similar, here are the three commands together: one for downloading the repository key, another that adds the new repository and one more to perform the repository updates and subsequent installation:

curl -fSsL https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg \
| sudo gpg --dearmor | sudo tee /usr/share/keyrings/vscodium-archive-keyring.gpg >/dev/null

echo "deb [arch=amd64 signed-by=/usr/share/keyrings/vscodium-archive-keyring.gpg] \
https://download.vscodium.com/debs vscodium main" \
| sudo tee /etc/apt/sources.list.d/vscodium.list

sudo apt update; sudo apt install codium

After the three steps have completed successfully, VSCodium is installed and available to use on your system, and is accessible through the menus too.
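
To confirm that either editor is in place, asking each for its version from the command line does the job:

code --version
codium --version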

Controlling the version of Python used in the Positron console with virtual environments

21st October 2025

Because I have Homebrew installed on my Linux system for getting Hugo and LanguageTool on there, I also have a later version of Python than is available from the Linux Mint repositories. Both 3.12 and 3.13 are on my machine as a consequence. Here is the line in my .bashrc file that makes that happen:

eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"

The result is that when I issue the command which python3, this is what I get:

/home/linuxbrew/.linuxbrew/bin/python3

However, Positron looks to /usr/bin/python3 by default. Since this can get confusing, setting up a virtual environment has its uses, as long as you create it with the intended Python version. This is how you can do it, even if I needed to use sudo mode for some reason:

python3 -m venv .venv
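
Since the whole point is pinning the interpreter, naming the exact Python when creating the environment is safer; for instance, the Homebrew one mentioned above could be used like this, with the path and version adjusted to whatever is on your machine:

/home/linuxbrew/.linuxbrew/bin/python3.12 -m venv .venv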

When working solely on the command line, activating it becomes a necessity, adding another manual step for someone who had resisted all this until recently:

source .venv/bin/activate

Thankfully, just issuing the deactivate command will do the eponymous action. Even better, just opening a folder with a venv in Positron saves you from issuing the extra commands and grants you the desired Python version in the console that it opens. Having run into some clashes between package versions, I am beginning to appreciate having a dedicated environment for a set of Python scripts, especially when an IDE makes it easy to work with such an arrangement.

And then there was one...

25th August 2025

Even with the rise of the internet, magazine publishing was in rude health at the end of the last century. That persisted through the first decade of this century before any decline began to be noticed. Now, things are a poor shadow of what once used to be.

Though a niche area with some added interest, Linux magazine publishing has not been able to escape this trend. At the height of the flourish, we had the launch of Linux Voice by former writers of Linux Format. That only continues as a section in Linux Magazine these days, one of the first casualties from a general decline in readership.

Next, it was the turn of Linux User & Developer. This came into the hands of Future when they acquired Imagine Publishing and remained in print for a time after that due to the conditions of the deal set by the competition regulator. Once beyond that, Future closed it to stick with its own more lightweight title, Linux Format, transferring some of the writers across to the latter. This echoed what they did with Web Designer; they closed that in favour of .Net.

In both cases, dispatching the in-house publications might have been a better idea; without content, you cannot have readers. In time, .Net met its demise and the same fate awaited Linux Format a few months ago after it celebrated its silver jubilee. There was no goodbye edition this time: it went out with a bang, reminding us of glory days that have passed. The content was becoming more diverse with hardware features getting included, perhaps reflecting that something else was happening behind the scenes.

After all that, Linux Magazine soldiers on to outlive the others. It perhaps helps that it is published by an operation centred around the title, Linux New Media, which also publishes Admin Magazine. Even so, it too has needed to trim things a bit. For a time, there was a dedicated Raspberry Pi title that lives on as a section in Linux Magazine.

It was while sprucing up some of the content on here that I was reminded of the changes. Where do you find surveys of what is available? The options could include desktop environments and Linux distributions as much as code editors, web browsers and other kinds of software. While it is true that operations like Future have portals for exactly this kind of thing, they too have a nemesis in the form of GenAI. Asking ChatGPT may be their undoing as much as web publishing triggered a decline in magazine publishing.

Technology upheaval means destruction as much as creation, and there are occasions when spending quiet time reading a paper periodical is just what you need. We all need time away from screen reading, whatever the device happens to be.

File comparison using PowerShell

16th August 2025

In the past, I have compared files on the Linux/UNIX command line as well as the legacy Windows command line. Recently, I decided to try it using PowerShell. Here is the command structure:

Compare-Object (Get-Content ".\[name of one text file]") (Get-Content ".\[name of another text file]") > [path and name of output file]

Admittedly, this is more verbose than the others that I have mentioned above. Nevertheless, it does the job and sends everything to a text file for review. The Compare-Object piece does the comparison once the Get-Content portions have read in the content.
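
In the output, the SideIndicator column shows where each differing line was found: <= for the first file and => for the second. That means the results can be filtered as well as saved; here is an example using hypothetical file names that keeps only the lines unique to the second file:

Compare-Object (Get-Content ".\old.txt") (Get-Content ".\new.txt") | Where-Object SideIndicator -eq "=>"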

Avoiding repeated token requests by installing the Git credential helper on Linux Mint

19th March 2025

On a new machine, I found myself being asked for the same access token repeatedly. Since this is a long string, that is inconvenient and does not take long to become irritating. Thus, I sought a way to make things more streamlined. My initial attempt produced the following message:

git: 'credential-libsecret' is not a git command

The cause of the above was the absence from my system of the libsecret credential helper, which is crucial for managing credentials securely in a keyring. The solution was to install the required packages from the command line:

sudo apt install libsecret-1-0 libsecret-1-dev

Following installation, the next step was to navigate to the appropriate directory and execute the make command to compile the files within the directory, transforming them into an executable credential helper:

cd /usr/share/doc/git/contrib/credential/libsecret; sudo make

With the credential helper fully built, Git needed to be configured to use it by executing the following:

git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret

Since one error message is enough for any new activity, it made sense to confirm that the credential helper resided in the correct location. That was accomplished by issuing this command:

ls -l /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret

All was well in my case, saving the need to reinstall Git or repeat the manual compilation of the credential helper. When all was done, I was ready to automate things further.
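
For reference, the configuration command above writes an entry like this to ~/.gitconfig, and issuing git config --global credential.helper echoes the path back if you ever need to check it:

[credential]
        helper = /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret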

What to do when the externally-managed environment error appears while using pip to install Python packages on Linux Mint 22

16th December 2024

After upgrading to Linux Mint 22, the following message appeared when attempting to install Python packages using the pip command:

error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.

If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.

If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.

See /usr/share/doc/python3.12/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

This will frustrate anyone following how-tos on the web, so users need to know about it. On something like Linux Mint, the repositories may not be as up to date as PyPI, so picking up the very latest version has its advantages. Thus, I initially used the unrecommended --break-system-packages switch to get things going as before, since doing so had never broken anything previously. While that way of working feels like overkill in some respects, using pipx probably is the way forward, as long as things work as I want them to.
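
For completeness, the pipx route on Linux Mint 22 amounts to something like the following, with the bracketed part standing in for whatever package you were after:

sudo apt install pipx
pipx ensurepath
pipx install [name of package]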

There is wisdom in using virtual environments too, especially when AI models are involved. For most of what I get to do, that may be getting too elaborate. Then, deleting or renaming the /usr/lib/python3.12/EXTERNALLY-MANAGED marker file is tempting if that gets around things, as retrograde as that probably is. After all, I never broke anything before this message started to appear, possibly since my interests are data related.

Making the LanguageTool embedded HTTP Server work on Windows 11

11th August 2024

My choice of Markdown editor is VS Code or VSCodium, the latter being a fork of the former with Microsoft telemetry removed. In either case, I use the LanguageTool Linter extension for the required grammar and spelling checks. Pointing that to the remote web service offered by LanguageTool could get punitive, even if I am a subscriber. Thus, I use a locally installed equivalent instead.

On my usual Linux system, that is how I work. However, I have replicated the set-up on a Windows laptop for added flexibility. That needed the JRE, so it was downloaded from the Oracle website and then installed. The next step was to download the LanguageTool embedded HTTP Server zip file and decompress it to a chosen location. To run the server, a command like the following is issued from Windows Terminal (the single line may break over two here):

java -cp "[Chosen Location]\LanguageTool-stable\LanguageTool-6.4\languagetool-server.jar" org.languagetool.server.HTTPServer --port 8081 --allow-origin

That is enough to get things going because it matches the default settings of the LanguageTool Linter extension in VS Code or VSCodium. The fastText application is unavailable for Windows, so I did without it. So far, things are operating acceptably, even if there is a way to allocate more memory should that be required.
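
Should more memory ever be needed, the usual Java switches can be added to the same invocation; for example, something like this caps the heap at 2 GB, a figure chosen purely for illustration:

java -Xmx2g -cp "[Chosen Location]\LanguageTool-stable\LanguageTool-6.4\languagetool-server.jar" org.languagetool.server.HTTPServer --port 8081 --allow-origin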

Catching keyboard interruptions in a Python script for a more orderly exit

17th April 2024

A while back, I was using a Python script to watch a folder and process photos in there, whenever a new one was added. Eventually, I ended up with a few of these because I was unable to work out a way to get multiple folders watched in the same script.

In each of them, though, I needed a tidy way to exit a running script in place of the stream of consciousness that often emerges when you interrupt one. Because I knew what was happening anyway, I wanted the script to terminate quietly and set out to uncover a way to achieve this.

What came up was something like the code that you see below. While I naturally did some customisations, I kept the essential construct to capture keyboard interruption shortcuts, like the use of CTRL + C in a Linux command line interface.

import os
import sys

if __name__ == '__main__':
    try:
        main()  # main() holds the folder-watching logic defined elsewhere in the script
    except KeyboardInterrupt:
        print('Interrupted')
        try:
            sys.exit(130)  # preferred exit: allows clean-up to run
        except SystemExit:
            os._exit(130)  # fallback: terminates immediately without clean-up

What is happening above is that the interruption is captured in a nested TRY/EXCEPT block. The outer block catches the interruption, while the inner one runs through different ways of performing the script termination. The first termination call comes from the SYS module and the second from the OS one, which is why both are imported at the start of the script.

Of the two, SYS.EXIT is preferable because it invokes clean-up activities while OS._EXIT does not, which might explain the "_" prefix in the second of these. In fact, calling SYS.EXIT is not that different to issuing RAISE SYSTEMEXIT instead because that lies underneath it. Here OS._EXIT is the fallback for when SYS.EXIT fails, and it is not all that desirable given the abrupt action that it invokes.

The exit code of 130 is fed to both, since that is what is issued when you terminate a running instance of an application on Linux anyway. Using 0 could negate any means of picking up what has happened if you have downstream processing. In my use case, everything was standalone, so that did not matter so much.
