What to do when an error appears while using pip to install Python packages on Linux Mint 22
16th December 2024

After upgrading to Linux Mint 22, the following message appeared when attempting to install Python packages using the pip command:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
This will frustrate anyone following how-tos on the web, so users will need to know about it. On something like Linux Mint, the repositories may not be as up to date as PyPI, so picking up the very latest version of a package has its advantages. Thus, I initially used the unrecommended --break-system-packages switch to get things going as before, since doing so had never broken anything for me. While that way of working feels like overkill in some respects, using pipx probably is the way forward, as long as things work as I want them to.
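For reference, here is a rough sketch of the two routes that the message itself recommends; the virtual environment path and the package names are only examples, not anything prescribed:

python3 -m venv ~/venvs/tools
~/venvs/tools/bin/pip install requests
sudo apt install pipx
pipx install some-app

The first pair of commands creates a virtual environment and installs into it, while the second pair installs pipx from the repositories and lets it manage a dedicated virtual environment for the named application.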
There is wisdom in using virtual environments too, especially when AI models are involved. For most of what I do, though, that may be getting too elaborate. Then, deleting or renaming the marker file at /usr/lib/python3.12/EXTERNALLY-MANAGED is tempting if that gets around things, as retrograde as that probably is. After all, I never broke anything before this message started to appear, possibly because my interests are data related.
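Should anyone go down that retrograde route, renaming is safer than deleting because it can be reversed; a minimal sketch of the idea follows:

sudo mv /usr/lib/python3.12/EXTERNALLY-MANAGED /usr/lib/python3.12/EXTERNALLY-MANAGED.bak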
Something to try when you get a message like this caused by a filename with a leading hyphen: "mv: illegal option -- u"
3rd December 2024

Recently, I downloaded some WEBP files from Ideogram and attempted to move them to another folder. That is when I got the message that you see in the title of this entry. Because I had not looked at the filenames, I was baffled to get this from a simple command that I had been using with some success until then. Because I was using an iMac, I tried the suggestion of installing coreutils to get GNU mv and cp to see if that would help:
brew install coreutils
The above command gave me gmv and gcp, GNU versions of the mv and cp commands that come with macOS. Trying gmv only got me the following message:
gmv: cannot combine --backup with --exchange, -n, or --update=none-fail
The ls command could list all the files, yet picking out just the WEBP ones needed another approach. Thus, I executed the following to show what I wanted:
ls | grep -i webp
That got around the problem by filtering the full directory listing instead. It was then that I spotted the leading hyphen. To avoid the problem tripping me up again, I renamed the offending file using this command:
mv -- -iunS9U4RFevWpaju6ArIQ.webp iunS9U4RFevWpaju6ArIQ.webp
Here, the -- switch tells the mv command not to look for any more options and only to expect filenames. When I tried enclosing the filename in quotes, I still got problems; quoting cannot help here, since the shell strips the quotes before mv ever sees the argument. Another option is to prefix the filename with ./ so that it no longer begins with a hyphen:
mv ./-iunS9U4RFevWpaju6ArIQ.webp iunS9U4RFevWpaju6ArIQ.webp
Once the offending file was renamed, I could move the files to their final location. That move could have used the -- option too, saving me an extra command, only I wanted a name that would not trip me up again. Naturally, working in Finder might have avoided all this, as would not having a file with a leading hyphen in its name, but there would have been nothing to learn then.
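For completeness, a bulk move using the -- switch might have looked something like this; the destination folder here is only a placeholder:

mv -- *.webp ~/Pictures/ideogram/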
Upgrading a web server from Debian 11 to Debian 12
25th November 2024

While Debian 12 may have been with us since the middle of 2023 and Debian 13 is due in the middle of next year, it has taken me until now to upgrade one of my web servers. The tardiness may have something to do with a mishap on another system that resulted in a rebuild, something to avoid if at all possible.
Nevertheless, I went and had a go with the aforementioned web server after doing some advance research. Thus, I can relate the process here in the knowledge that it worked for me, and I will have it on file for future reference. The first step is to ensure that the system is up to date by executing the following commands:
sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
Next, it is best to remove extraneous packages using these commands:
sudo apt --purge autoremove
sudo apt autoclean
Once you have backed up important data and configuration files, you can move to the first step of the upgrade process proper. This involves changing the repository locations from those for bullseye (Debian 11) to those for bookworm (Debian 12). Issuing the following commands will accomplish this:
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*
In my case, I found the second of these to be extraneous, since everything was included in the single file. Also, Debian 12 has added a new repository component called non-free-firmware. This can be added at this stage by manually editing the files above; in my case, I did it later, because the warning message only began to appear at that stage.
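For illustration, a bookworm entry in /etc/apt/sources.list with the new component added might look like the following; the mirror and the exact list of components will vary between systems:

deb http://deb.debian.org/debian bookworm main contrib non-free non-free-firmware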
Once the repository locations have been changed, it is time to update the package information using the following command:
sudo apt update
Then, it is time to perform a minimal upgrade using the following command, which takes a conservative approach by updating existing packages without installing any new ones:
sudo apt upgrade --without-new-pkgs
Once that has completed, one needs to issue the following command to install new packages where dependencies demand them and even to remove incompatible or unnecessary ones, as well as to perform kernel upgrades:
sudo apt full-upgrade
Given all the changes, the completion of the foregoing commands necessitates a system restart, which can be the most nerve-wracking part of the process when you are dealing with a remote server accessed over SSH. While there are a few options for accomplishing this, here is one that is compatible with the upgrade cycle:
sudo systemctl reboot
Once you can log back into the system, there is one more piece of housekeeping needed. This step not only removes redundant packages that were automatically installed, but also deletes their configuration files, an act that really cleans things up. The command to execute is as follows:
sudo apt --purge autoremove
For added reassurance that the upgrade has completed, issuing the following command will show details like the operating system's distributor ID, description, release version and codename:
lsb_release -a
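On a successfully upgraded system, the output should look something along these lines:

Distributor ID: Debian
Description:    Debian GNU/Linux 12 (bookworm)
Release:        12
Codename:       bookworm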
If you run the above commands as root, the sudo prefix is not needed, yet it is perhaps safer to execute them from a less privileged account anyway. The process needs attention paid to any prompts and questions about configuration files and service restarts, should they arise. Nothing like that came up in my case, possibly because this web server serves flat files created using Hugo, avoiding scripting and databases that would add to the system complexity. Such a simple situation also makes scripting the upgrade more of a possibility. The exercise was speedy enough for me too, though patience is of the essence should a 30–60 minute completion time be your lot, depending on your system and internet speed.
Manually updating Let's Encrypt certificates
8th November 2024

Normally, Let's Encrypt certificates get renewed automatically. Thus, it came as a surprise to me to receive an email telling me that one of my websites had a certificate that was about to expire. The next step was to renew the certificate manually.
That sent me onto the command line in an SSH session to the Ubuntu server in question. Once there, I used the following command to check on my certificates to confirm that the email alert was correct:
sudo certbot certificates
Then, I issued this command to do a test run of the update:
sudo certbot renew --dry-run
With nothing of concern coming up in the dry run, it was time to do the update for real using this command:
sudo certbot renew
Rerunning sudo certbot certificates confirmed that all was in order. All of that did what should have happened automatically; adding a cron job should address that, and adding the --quiet switch should cut down on any system emails too.
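As a sketch of that idea, a root crontab entry like the following would attempt renewal twice a day, with certbot only acting when a certificate is close to expiry; the chosen times are arbitrary:

0 3,15 * * * certbot renew --quiet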
Resolving an issue with printing from a Windows 11 guest running in Parallels Desktop on macOS Sonoma after installing a replacement device
24th October 2024

Recently, I ran into trouble with a Brother multi-function printer while using it with my iMac. It had worked fine with Windows machines before then, so I decided to see if there was a compatibility issue. Since the output was no better, I decided to replace it. After all, it was nearly thirteen years old.
Having not got on well with inkjet printers over the years, I decided on an HP multifunction printer based around a colour laser system. The Brother had been connected using a USB cable, but the HP allowed for Wi-Fi printing, so I opted for that instead. The connection between the device and the network was sorted using the available app on an Android phone.
Then, there was setting the device up on the iMac. Doing that on macOS worked well; going to Printers & Scanners in the System Settings app and clicking on the add button was enough to start that. The crux came when getting the same done on a Windows 11 Home guest that I have running within Parallels Desktop.
While the printer already appeared under Bluetooth & devices > Printers & scanners, attempts to print resulted in errors. The solution was to go back to macOS and open the System Settings app. Going into General > Sharing took me to the Printer Sharing setting. Turning this on, I set it to allow everyone to print. That resolved the issue.
All of this was on macOS Sonoma, where PostScript printing is not supported any more; Internet Printing Protocol (IPP) is used instead. That does mean that printing from older versions of Parallels Desktop may not work any more. Thankfully, my software is the latest version, so I got things working as I needed.
Reactivating Touch ID on an iMac when the options are greyed out in System Settings
23rd October 2024

Recently, when the battery in my iMac keyboard ran out of charge, I merely connected it to the all-in-one system using the supplied cable. However, a software upgrade meant a system restart, which lost the ability to unlock the iMac using Touch ID.
When I went to Touch ID & Password within the System Settings app, I found all the options greyed out, preventing me from restoring things that way. The result was that I needed to disconnect the cable and turn off the keyboard before turning it back on again. That was enough to restore Touch ID usage; the settings were not only reactivated but turned on for me. It is a little lesson in how different things can be for a new Mac user.
What to do when Tuta Mail issues this message when logging into an account on macOS: Could not access Secret Storage
24th September 2024

Two things changed before Tuta Mail stopped working as before: modifying Keychain Access settings and upgrading macOS from Sonoma to Sequoia. Either could have been the cause, or neither; the first was the more likely culprit.
The result was the same either way: logging into Tuta Mail yielded an error like this: Could not access Secret Storage. The solution essentially is a two-step process, removing the app and deleting its settings folder, with reinstallation following afterwards.
In Finder, go to Applications and move Tuta Mail to the Bin before clearing it from there. That uninstalls the app.
The next step needs you to show hidden files and folders using the Command + Shift + . shortcut. Then, go to your home folder (this may need use of the Command + Shift + H shortcut). Open up the Library folder and find the folder called Application Support. Enter that and find the subfolder named tutanota-desktop. That needs to go to the Bin too before expunging it from there. Doing that provides the clean slate for restoration to commence. After this, using the Command + Shift + . shortcut again hides the normally hidden files and folders once more.
Nothing stops you removing /Users/[username]/Library/Application Support/tutanota-desktop with the rm command from the command line instead. That will remove it faster than Finder, though the latter may be easier for many users.
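For anyone taking the command-line route, a single command like this does the job; note that the space in Application Support needs escaping or quoting:

rm -rf ~/Library/Application\ Support/tutanota-desktop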
Clearing the Julia REPL
23rd September 2024

During development, there are times when you need to clear the Julia REPL. It can become so laden with content that debugging your code gets hard. One way to accomplish this is to use the CTRL + L keyboard shortcut while focus is within the REPL; you may need to click on it first. Another is to issue the following in the REPL itself:
print("\033c")
Here, \033 is the escape character in octal notation, which is often used in terminal control sequences; the c character that follows is what resets the terminal to its initial state. Printing this sequence is what does the clearance, and variations can be used to clear other kinds of console screens too. That makes it a more generic solution.
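The same sequence works from a Unix shell too, which underlines how generic it is:

printf '\033c'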
Dropping to an underlying shell using the ; character is another possibility. Then, you can use the clear or cls commands as needed; the latter is for Windows systems.
One last option is to define a Julia function for doing this:
function clear_console()
    run(`clear`)   # use `cls` for Windows
end
Calling the clear_console function then clears the screen programmatically, allowing for greater automation. The run function is what sends the command in backticks to the underlying shell for execution; even using that alone should work too.
Little helpers
22nd September 2024

This could have been a piece that appeared on my outdoors blog until I had second thoughts. One reason why I might have put it there is that I am making more use of Perplexity for searching the web and gaining more value from its output. However, that is proving more useful in writing what you find on here. Knowing the sources for a dynamically generated article adds confidence when fact checking, and it is remarkable what comes up that you would not find quickly with Google. There is added value with this one.
A better candidate would have been Anthropic's Claude. That has come in handy when writing trip reports. Being able to use a stub to prototype a blog entry really has its uses. The reality is that everything gets rewritten before anything gets published; these tools are never so good as to feature everything that you want to mention, even if they do a good job of mimicking your writing tone and style. Nevertheless, being able to work with the content beyond doing a brain dump from one's memory is an undeniable advance.
Sometimes, there are occasions when using Bing's access to OpenAI through Copilot helps with production of images. In reality, I do have an extensive personal library of images, so they possibly should suffice in many ways. However, curiosity about the technology overrides the effort that photo processing requires.
While there may be some level of controversy surrounding the use of AI tools in content creation, using such tooling for proofing content should not raise too much ire. Grammarly comes up a lot, though it is LanguageTool that I use, to avoid excessive intrusion into my writing style. My writing has changed to comply with rules that had passed me by without my noticing, but there are other things that need to be turned off. Configuring the proofing tools in other ways might be better, so that is something to explore; otherwise, we could end up with too much standardisation of writing, and there needs to be room for human creativity at all times.
All of these are just a sample of what is available. Just checking in with The Rundown AI will reveal that there is an onslaught of innovation right now. Hype also is a problem, yet we need to learn to use these tools. The changeover is equivalent to the explosive increase in availability of personal computing a generation ago. That brought its own share of challenges (some were on the curve while others were not) until everything settled down, and it will be the same with what is happening now.
Avoiding errors caused by missing Julia packages when running code on different computers
15th September 2024

As part of an ongoing move to multi-location working, I am sharing scripts and other artefacts via GitHub, including the Julia programs that I have. That has led me to realise that a bit of added automation would help iron out any package dependency issues that arise. Setting things up as projects could help, yet that feels like too much effort for what I have. Thus, I have gone for adding extra code to check for and install any missing packages instead of having failures.
To add those extra packages, I first import the Pkg package as follows:
import Pkg
While it is a bit hackish, I then declare a single array that lists the packages to be checked:

pkglist = ["HTTP", "JSON3", "DataFrames", "Dates", "XLSX"]
After that, there is a function that uses a try-catch construct to find whether a package can be loaded or not, using the built-in @eval macro to attempt a using declaration:
tryusing(pkgsym) = try
    @eval using $pkgsym
    return true
catch
    return false
end
The above function is called in a loop that both tests the existence of a package and, if it is missing, installs it:

for pkg in pkglist
    if !tryusing(Symbol(pkg))
        Pkg.add(pkg)
    end
end
Once that has completed, using the following line to load the packages required by later processing becomes error-free, which is what I sought:
using HTTP, JSON3, DataFrames, Dates, XLSX