TOPIC: UBUNTU
Ansible automation for Linux Mint updates with repository failover handling
7th November 2025
Recently, I had a Microsoft PPA outage disrupt an Ansible playbook-mediated upgrade process for my main Linux workstation. Thus, I ended up creating a failover for this situation, and the first step in the playbook was to define the affected repo:
vars:
  microsoft_repo_url: "https://packages.microsoft.com/repos/code/dists/stable/InRelease"
The next move was to start defining tasks, the first of which tests the repo to pick up any lack of responsiveness and flags that for subsequent operations.
tasks:
  - name: Check Microsoft repository availability
    uri:
      url: "{{ microsoft_repo_url }}"
      method: HEAD
      return_content: no
      timeout: 10
    register: microsoft_repo_check
    failed_when: false

  - name: Set flag to skip Microsoft updates if unreachable
    set_fact:
      skip_microsoft_repos: "{{ microsoft_repo_check.status is not defined or microsoft_repo_check.status != 200 }}"
In the event of a failure, the next task was to disable the repo to allow other processing to take place. This was accomplished by temporarily renaming the relevant files under /etc/apt/sources.list.d/.
  - name: Temporarily disable Microsoft repositories
    become: true
    shell: |
      for file in /etc/apt/sources.list.d/microsoft*.list; do
        # || true stops an unmatched glob from failing the task
        [ -f "$file" ] && mv "$file" "${file}.disabled" || true
      done
      for file in /etc/apt/sources.list.d/vscode*.list; do
        [ -f "$file" ] && mv "$file" "${file}.disabled" || true
      done
    when: skip_microsoft_repos | default(false)
    changed_when: false
With that completed, the rest of the update actions could be performed near enough as usual.
  - name: Update APT cache (retry up to 5 times)
    apt:
      update_cache: yes
    register: apt_update_result
    retries: 5
    delay: 10
    until: apt_update_result is succeeded

  - name: Perform normal upgrade
    apt:
      upgrade: yes
    register: apt_upgrade_result
    retries: 3
    delay: 10
    until: apt_upgrade_result is succeeded

  - name: Perform dist-upgrade with autoremove and autoclean
    apt:
      upgrade: dist
      autoremove: yes
      autoclean: yes
    register: apt_dist_result
    retries: 3
    delay: 10
    until: apt_dist_result is succeeded
After those, another renaming operation restores the files to their earlier names.
  - name: Re-enable Microsoft repositories
    become: true
    shell: |
      for file in /etc/apt/sources.list.d/*.disabled; do
        base="$(basename "$file" .disabled)"
        if [[ "$base" == microsoft* || "$base" == vscode* || "$base" == edge* ]]; then
          mv "$file" "/etc/apt/sources.list.d/$base"
        fi
      done
    args:
      executable: /bin/bash   # the [[ ]] tests need bash rather than sh
    when: skip_microsoft_repos | default(false)
    changed_when: false
Needless to say, this disabling only happens in the event of a repository failure. Otherwise, the steps are skipped and everything else is completed as it should be. While there is some cause for extending the repository disabling actions to other third-party repos as well, that is something that I will leave aside for now. Even this shows just how much can be done using Ansible playbooks and how much automation can be achieved. As it happens, I even get Flatpaks updated in much the same way:
  - name: Ensure Flatpak is installed
    apt:
      name: flatpak
      state: present
      update_cache: yes
      cache_valid_time: 3600

  - name: Update Flatpak remotes
    command: flatpak update --appstream -y
    register: flatpak_appstream
    changed_when: "'Now at' in flatpak_appstream.stdout"
    failed_when: flatpak_appstream.rc != 0

  - name: Update all Flatpak applications
    command: flatpak update -y
    register: flatpak_result
    changed_when: "'Now at' in flatpak_result.stdout"
    failed_when: flatpak_result.rc != 0

  - name: Uninstall unused Flatpak applications
    command: flatpak uninstall --unused -y
    register: flatpak_cleanup
    changed_when: "'Nothing' not in flatpak_cleanup.stdout"
    failed_when: flatpak_cleanup.rc != 0

  - name: Repair Flatpak installations
    command: flatpak repair
    register: flatpak_repair
    changed_when: flatpak_repair.stdout is search('Repaired|Fixing')
    failed_when: flatpak_repair.rc != 0
The ability to call system commands as you see in the above sequence is an added bonus, though getting the response detection completely sorted remains an outstanding task. All this has only scratched the surface of what is possible.
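Running the finished playbook is then a single command. The filename below is hypothetical, standing in for wherever you have saved the playbook; the -K switch prompts for the sudo password that the privileged tasks require:
# system-updates.yml is a hypothetical filename; -K asks for the become (sudo) password
ansible-playbook system-updates.yml -K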
Automating Positron and RStudio updates on Linux Mint 22
6th November 2025
Elsewhere, I have written about avoiding manual updates with VSCode and VSCodium. Here, I come to IDEs produced by Posit, formerly RStudio, for data science and analytics uses. The first is a more recent innovation that works with both R and Python code natively, while the second has been around for much longer and focusses on native R code alone, though there are R packages allowing an interface of sorts with Python. Neither is released via a PPA, necessitating either manual downloading or the scripted approach taken here for a Linux system. Each software tool will be discussed in turn.
Positron
Now, we work through a script that automates the upgrade process for Positron. This starts with a shebang line calling the bash executable before moving to a line that adds safety to how the script works using a set statement. Here, the -e switch triggers exiting whenever there is an error, halting the script before it carries on to perform any undesirable actions. That is followed by the -u switch that causes errors when unset variables are referenced; normally these would expand to empty values, which is not desirable in all cases. Lastly, the -o pipefail switch causes a pipeline (cmd1 | cmd2 | cmd3) to fail if any command in the pipeline produces an error, which can help debugging because the error is associated with the command that fails to complete.
#!/bin/bash
set -euo pipefail
The next step then is to determine the architecture of the system on which the script is running so that the correct download is selected.
ARCH=$(uname -m)
case "$ARCH" in
  x86_64) POSIT_ARCH="x64" ;;
  aarch64|arm64) POSIT_ARCH="arm64" ;;
  *) echo "Unsupported arch: $ARCH"; exit 1 ;;
esac
Once that completes, we define the address of the web page to be interrogated and the path to the temporary file that is to be downloaded.
RELEASES_URL="https://github.com/posit-dev/positron/releases"
TMPFILE="/tmp/positron-latest.deb"
Now, we scrape the page to find the address of the latest DEB file that has been released.
echo "Finding latest Positron .deb for $POSIT_ARCH..."
DEB_URL=$(curl -fsSL "$RELEASES_URL" \
| grep -Eo "https://cdn\.posit\.co/[A-Za-z0-9/_\.-]+Positron-[0-9\.~-]+-${POSIT_ARCH}\.deb" \
| head -n 1)
If that were to fail, an error message is produced before the script is aborted.
if [ -z "${DEB_URL:-}" ]; then
echo "Could not find a .deb link for ${POSIT_ARCH} on the releases page"
exit 1
fi
Should all go well thus far, we download the latest DEB file using curl.
echo "Downloading: $DEB_URL"
curl -fL "$DEB_URL" -o "$TMPFILE"
When the download completes, we try installing the package using apt, much like we do with a repo, apart from specifying an actual file path on our system.
echo "Installing Positron..."
sudo apt install -y "$TMPFILE"
Following that, we delete the installation file and issue a message informing the user of the task's successful completion.
echo "Cleaning up..."
rm -f "$TMPFILE"
echo "Done."
When I do this, I tend to find that the Python REPL console does not open straight away, causing me to shut down Positron and leave things for a while before starting it again. There may be temporary files that need to be expunged, and that needs its own time. Someone else might have a better explanation that I am happy to use if that makes more sense than what I am suggesting. Otherwise, all works well.
RStudio
A lot of the same processing happens in the script updating RStudio, so we will just cover the differences. The set -x statement ensures that every command is printed to the console for the debugging that was needed while this was being developed. Otherwise, much code, including architecture detection, is shared between the two apps.
#!/bin/bash
set -euo pipefail
set -x
# --- Detect architecture ---
ARCH=$(uname -m)
case "$ARCH" in
x86_64) RSTUDIO_ARCH="amd64" ;;
aarch64|arm64) RSTUDIO_ARCH="arm64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
Figuring out the distro version and the web page to scrape was where additional effort was needed, and that is reflected in some of the code that follows. Otherwise, many of the ideas applied with Positron also have a place here.
# --- Detect Ubuntu base ---
DISTRO=$(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || true)
[ -z "$DISTRO" ] && DISTRO="noble"
# --- Define paths ---
TMPFILE="/tmp/rstudio-latest.deb"
LOGFILE="/var/log/rstudio_update.log"
echo "Detected Ubuntu base: ${DISTRO}"
echo "Fetching latest version number from Posit..."
# --- Get version from Posit's official RStudio Desktop page ---
VERSION=$(curl -s https://posit.co/download/rstudio-desktop/ \
| grep -Eo 'rstudio-[0-9]+\.[0-9]+\.[0-9]+-[0-9]+' \
| head -n 1 \
| sed -E 's/rstudio-([0-9]+\.[0-9]+\.[0-9]+-[0-9]+)/\1/')
if [ -z "$VERSION" ]; then
echo "Error: Could not extract the latest RStudio version number from Posit's site."
exit 1
fi
echo "Latest RStudio version detected: ${VERSION}"
# --- Construct download URL (Jammy build for Noble until Noble builds exist) ---
BASE_DISTRO="jammy"
BASE_URL="https://download1.rstudio.org/electron/${BASE_DISTRO}/${RSTUDIO_ARCH}"
FULL_URL="${BASE_URL}/rstudio-${VERSION}-${RSTUDIO_ARCH}.deb"
echo "Downloading from:"
echo " ${FULL_URL}"
# --- Validate URL before downloading ---
if ! curl --head --silent --fail "$FULL_URL" >/dev/null; then
echo "Error: The expected RStudio package was not found at ${FULL_URL}"
exit 1
fi
# --- Download and install ---
curl -L "$FULL_URL" -o "$TMPFILE"
echo "Installing RStudio..."
sudo apt install -y "$TMPFILE" | tee -a "$LOGFILE"
# --- Clean up ---
rm -f "$TMPFILE"
echo "RStudio update to version ${VERSION} completed successfully." | tee -a "$LOGFILE"
When all ended, RStudio worked without a hitch, leaving me to move on to other things. The next time that I am prompted to upgrade the environment, this is the way I likely will go.
Remote access between Mac and Linux, Part 3: SSH, RDP and TigerVNC
30th October 2025
This is Part 3 of a three-part series on connecting a Mac to a Linux Mint desktop. Part 1 introduced the available options, whilst Part 2 covered x11vnc for sharing physical desktops.
Whilst x11vnc excels at sharing an existing desktop, many scenarios call for terminal access or a fresh graphical session. This article examines three alternatives: SSH for command-line work, RDP for responsive remote desktops with Xfce, and TigerVNC for virtual Cinnamon sessions.
Terminal Access via SSH
For many administrative tasks, a secure shell session is enough. On the Linux machine, the OpenSSH server needs to be installed and running. On Debian or Ubuntu-based systems, including Linux Mint, the required packages are available with standard tools.
Installing with sudo apt install openssh-server followed by enabling the service with sudo systemctl enable ssh and starting it with sudo systemctl start ssh is all that is needed. The machine's address on the local network can be identified with ip addr show, and it is the entry under inet for the active interface that will be used.
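Gathered together, the server-side commands from the preceding paragraph are:
sudo apt install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
ip addr show   # note the inet entry for the active interface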
From the Mac, a terminal session to that address is opened with a command of the form ssh username@192.168.1.xxx and this yields a full shell on the Linux machine without further configuration. On a home network, there is no need for router changes and SSH requires no extra client software on macOS.
SSH forms the foundation for secure operations beyond terminal access. It enables file transfer via scp and rsync, and can be used to create encrypted tunnels for other protocols when access from outside the local network is required.
RDP for New Desktop Sessions
Remote Desktop Protocol creates a new login session on the Linux machine and tends to feel smoother over imperfect links. On Linux Mint with Cinnamon, RDP is often the more responsive choice on a Mac, but Cinnamon's reliance on 3D compositing means xrdp does not work with it reliably. The usual workaround is to keep Cinnamon for local use and install a lightweight desktop specifically for remote sessions. Xfce works well in this role.
Setting Up xrdp with Xfce
After updating the package list, install xrdp with sudo apt install xrdp, set it to start automatically with sudo systemctl enable xrdp, and start it with sudo systemctl start xrdp. If a lightweight environment is not already available, install Xfce with sudo apt install xfce4, then tell xrdp to use it by creating a simple session file for the user account with echo "startxfce4" > ~/.xsession. Restarting the service with sudo systemctl restart xrdp completes the server side.
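As a recap, the full server-side sequence looks like this:
sudo apt update
sudo apt install xrdp
sudo systemctl enable xrdp
sudo systemctl start xrdp
sudo apt install xfce4            # if a lightweight desktop is not already present
echo "startxfce4" > ~/.xsession   # point xrdp at Xfce for this user
sudo systemctl restart xrdp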
The Linux machine's IP address can be checked again so it can be entered into Microsoft Remote Desktop, which is a free download from the Mac App Store. Adding a new connection with the Linux IP and the user's credentials often suffices, and the first connection may present a certificate prompt that can be accepted.
RDP uses port 3389 by default, which needs no router configuration on the same network. It creates a new session rather than attaching to the one already shown on the Linux monitor, so it is not a means to view the live Cinnamon desktop, but performance is typically smooth and latency is well handled.
Why RDP with Xfce?
It is common for xrdp on Ubuntu-based distributions to select a simpler session type unless the user instructs it otherwise, which is why the small .xsession file pointing to Xfce helps. The combination of RDP's protocol efficiency and Xfce's lightweight nature delivers the most responsive experience for new sessions. The protocol translates keyboard and mouse input in a way that many clients have optimised for years, making it the most forgiving route when precise input behaviour matters. The trade-off is that what is shown is a separate desktop session, which can be a benefit or a drawback depending on the task.
TigerVNC for New Cinnamon Sessions
Those who want to keep Cinnamon for remote use can do so with a VNC server that creates a new virtual desktop. TigerVNC is a common choice on Linux Mint. Installing tigervnc-standalone-server, setting a password with vncpasswd and creating an xstartup file under ~/.vnc that launches Cinnamon will provide a new session for each connection.
Configuring TigerVNC
A minimal xstartup for Cinnamon sets the environment to X11, establishes the correct session variables and starts cinnamon-session. Making this file executable and then launching vncserver :1 starts a VNC server on port 5901. The server can be stopped later with vncserver -kill :1.
The xstartup script determines what desktop environment a virtual session launches, and setting the environment variables to Cinnamon then starting cinnamon-session is enough to present the expected desktop. Marking that startup file as executable is easy to miss, and it is required for TigerVNC to run it.
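A minimal sketch of that file follows; the exact variable values can differ between Cinnamon releases, so treat this as a starting point rather than a definitive recipe:
#!/bin/sh
# ~/.vnc/xstartup — assumed values for launching a Cinnamon session
export XDG_SESSION_TYPE=x11
export XDG_CURRENT_DESKTOP=X-Cinnamon
exec cinnamon-session
Afterwards, chmod +x ~/.vnc/xstartup marks it executable, vncserver :1 starts the virtual desktop on port 5901 and vncserver -kill :1 stops it again.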
From the Mac, the built-in Screen Sharing app can be used from Finder's Connect to Server entry by supplying vnc://192.168.1.xxx:5901, or a third-party viewer such as RealVNC Viewer can connect to the same address and port. This approach provides the Cinnamon look and feel, though it can be less responsive than RDP when the network is not ideal, and it also creates a new desktop session rather than sharing the one already in use on the Linux screen.
Clipboard Support in TigerVNC
For TigerVNC, clipboard support typically requires the vncconfig helper application to be running on the server. Starting vncconfig -nowin & in the background, often by adding it to the ~/.vnc/xstartup file, enables clipboard synchronisation between the VNC client and server for plain text.
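In the sketch above, that amounts to one extra line just before the session launch:
vncconfig -nowin &    # start the clipboard helper in the background
exec cinnamon-session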
File Transfer
File transfer between the machines is best handled using the command-line tools that accompany SSH. On macOS, scp file.txt username@192.168.1.xxx:/home/username/ sends a file to Linux and scp username@192.168.1.xxx:/home/username/file.txt ~/Desktop/ retrieves one, whilst rsync with -avz flags can be used for larger or incremental transfers.
These tools work reliably regardless of which remote access method is being used for interactive sessions. File copy-paste is not supported by VNC protocols, making scp and rsync the dependable choice for moving files between machines.
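As an illustration, with the paths being purely hypothetical, an rsync transfer from the Mac to the Linux machine might look like this:
# -a preserves permissions and times, -v reports progress, -z compresses in transit
rsync -avz ~/Documents/project/ username@192.168.1.xxx:/home/username/project/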
Operational Considerations
Port Management
Understanding port mappings helps avoid connection issues. VNC display numbers map directly to TCP ports, so :0 means 5900, :1 means 5901 and so on. RDP uses port 3389 by default. When connecting with viewers, supplying the address alone will use the default port for that protocol. If a specific port must be stated, use a single colon with the actual TCP port number.
First Connection Issues
If a connection fails unexpectedly, checking whether a server is listening with netstat can save time. On first-time connections to an RDP server, the client may display a certificate warning that can be accepted for home use.
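One way to run that check, assuming netstat is available (it comes with the net-tools package; ss is the modern stand-in), is:
# shows whether anything is listening on the RDP and VNC ports
sudo netstat -tlnp | grep -E ':(3389|590[0-9])'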
Making Services Persistent
For regular use, enabling services at boot removes the need for manual intervention. Both xrdp and TigerVNC can be configured to start automatically, ensuring that remote access is available whenever the Linux machine is running. The systemd service approach described for x11vnc in Part 2 can be adapted for TigerVNC if automatic startup of virtual sessions is desired.
Security and Convenience
Security considerations in a home setting are straightforward. When both machines are on the same local network, there is no need to adjust router settings for any of these methods. If remote access from outside the home is required, port forwarding and additional protections would be needed.
SSH can be exposed with careful key-based authentication, RDP should be placed behind a VPN or an SSH tunnel, and VNC should not be left open to the internet without an encrypted wrapper. For purely local use, enabling the necessary services at boot or keeping a simple set of commands to hand often suffices.
xrdp can be enabled once and left to run in the background, so the Mac's Microsoft Remote Desktop app can connect whenever needed. This provides a consistent way to access a fresh Xfce session without affecting what is displayed on the Linux machine's monitor.
Summary and Recommendations
The choice between these methods ultimately comes down to the specific use case. SSH provides everything necessary for administrative work and forms the foundation for secure file transfer. RDP into an Xfce session is a sensible choice when responsiveness and clean input handling are the priorities and a separate desktop is acceptable. TigerVNC can launch a full Cinnamon session for those who value continuity with the local environment and do not mind the slight loss of responsiveness that can accompany VNC.
For file transfer, the command-line tools that accompany SSH remain the most reliable route. Clipboard synchronisation for plain text is available in each approach, though TigerVNC typically needs vncconfig running on the server to enable it.
Having these options at hand allows a Mac and a Linux Mint desktop to work together smoothly on a home network. The setup is not onerous, and once a choice is made and the few necessary commands are learned, the connection can become an ordinary part of using the machines. After that, the day-to-day experience can be as simple as opening a single app on the Mac, clicking a saved connection and carrying on from where the Linux machine last left off.
The Complete Picture
Across this three-part series, we have examined the full range of remote access options between Mac and Linux:
- Part 1 provided the decision framework for choosing between terminal access, new desktop sessions and sharing physical displays.
- Part 2 explored x11vnc in detail, including performance tuning, input handling with KVM switches, clipboard troubleshooting and systemd service configuration.
- Part 3 covered SSH for terminal access, RDP with Xfce for responsive remote sessions, TigerVNC for virtual Cinnamon desktops, and file transfer considerations.
Each approach has its place, and understanding the trade-offs allows the right tool to be selected for the task at hand.
Command line installation and upgrading of VSCode and VSCodium on Windows, macOS and Linux
25th October 2025
Downloading and installing software packages from a website is all very well until you need to update them. Then, a single command streamlines the process significantly. Given that VSCode and VSCodium are updated regularly, this becomes all the more pertinent and explains why I chose them for this piece.
Windows
Now that Windows 10 is more or less behind us, we can focus on Windows 11. That comes with the winget command by default, which is handy because it allows command line installation of anything that is in the Windows store, which includes VSCode and VSCodium. The commands can be as simple as these:
winget install VisualStudioCode
winget install VSCodium.VSCodium
The above is shorthand for this, though:
winget install --id Microsoft.VisualStudioCode
winget install --id VSCodium.VSCodium
If you want exact matches, the above then becomes:
winget install -e --id Microsoft.VisualStudioCode
winget install -e --id VSCodium.VSCodium
For upgrades, this is what is needed:
winget upgrade Microsoft.VisualStudioCode
winget upgrade VSCodium.VSCodium
Even better, you can upgrade everything in one operation:
winget upgrade --all
The last part certainly beats the round trip to a website and back through an installation GUI. There is a lot less mouse clicking, for one thing.
macOS
On macOS, you need to have Homebrew installed to make things more streamlined. To complete that, you need to run the following command (which may need you to enter your system password to get things to happen):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Then, you can execute one or both of these in the Terminal app, perhaps having to authorise everything with your password when requested to do so:
brew install --cask visual-studio-code
brew install --cask vscodium
The reason for the --cask switch is that these are apps that need to go into the correct locations on macOS, as well as having their icons appear in Launchpad. Omitting it is fine for command line utilities, but not for these.
To update and upgrade everything that you have installed via Homebrew, just issue the following in a terminal session:
brew update && brew upgrade
Debian, Ubuntu & Linux Mint
Like any other Debian or Ubuntu derivative, Linux Mint has its own in-built package management system via apt. Other Linux distributions have their own way of doing things (Fedora and Arch come to mind here), yet the essential idea is similar in many cases. Because there are a number of steps, I have split out VSCode from VSCodium for added clarity. Because of the way that things are set up, one or both apps can be updated using the usual apt commands without individual attention.
VSCode
The first step is to download the repository key using the following command:
wget -qO- https://packages.microsoft.com/keys/microsoft.asc \
| gpg --dearmor > packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
Then, you can add the repository like this:
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/packages.microsoft.gpg] \
https://packages.microsoft.com/repos/code stable main" \
| sudo tee /etc/apt/sources.list.d/vscode.list
With that in place, the last thing that you need to do is issue the command for doing the installation from the repository:
sudo apt update; sudo apt install code
Above, I have put two commands together: one to update the repository and another to do the installation.
VSCodium
Since the VSCodium process is similar, here are the three commands together: one for downloading the repository key, another that adds the new repository and one more to perform the repository updates and subsequent installation:
curl -fSsL https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg \
| sudo gpg --dearmor | sudo tee /usr/share/keyrings/vscodium-archive-keyring.gpg >/dev/null
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/vscodium-archive-keyring.gpg] \
https://download.vscodium.com/debs vscodium main" \
| sudo tee /etc/apt/sources.list.d/vscodium.list
sudo apt update; sudo apt install codium
After the three steps have completed successfully, VSCodium is installed and available to use on your system, and is accessible through the menus too.
Possibly a retrograde way to keep an old scanner going on Linux Mint 22?
5th March 2025
For making a copy of a document for official purposes, I needed to get my scanner going with the new workstation. The device is an Epson Perfection 4490 Photo that I acquired in 2007 after its Canon predecessor, a CanoScan 5000F, began to malfunction. It has served me well since then, though digital photography has meant that scanning images is not something that I do frequently these days, the last time being in early 2022 to get larger images into my online photo gallery.
The age means that software support is an issue, particularly for Windows 11. However, Linux can leave old devices behind too. It does not help that there is little incentive for Epson to update its drivers either. Thus, Linux Mint's move from libsane to libsane1 makes things less straightforward when the Epson software needs the former.
While you can take apart a DEB file and re-edit its components before creating a new version, that sounds tricky to me. Nevertheless, it may be the way to go for others. Instead, I downloaded a DEB file for libsane from Ubuntu and installed that. With it in place, the Epson software installed fully, allowing VueScan to work as I needed. Thus, the document got copied as I needed, and the rest then could happen as required.
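For anyone retracing those steps, the installation side is ordinary apt fare; the filename below is hypothetical, standing in for whichever libsane DEB matches your release and architecture:
# hypothetical filename: substitute the libsane .deb that you actually downloaded
sudo apt install ./libsane_1.1.1-6_amd64.deb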
When I went looking up solutions to my conundrum on Perplexity, it kept telling me that this was not the best way to go. However, I still took the chance, knowing that I could roll things back if needed. Computers never know you that well without a multitude of data, so the safety-first approach has its merits, even if it can be overly cautious in some cases.
If I ever do need to replace the scanner, I probably would replace the printer with a multi-function device at the same time. The move would save some desk space, and I have had a good experience with such a device elsewhere. For now, though, such a move is on the long finger; securing a new freelance contract is higher up any to-do list.
What to do when the externally-managed environment error appears while using pip to install Python packages on Linux Mint 22
16th December 2024
After upgrading to Linux Mint 22, the following message appeared when attempting to install Python packages using the pip command:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
This will frustrate anyone following how-tos on the web, so users will need to know about it. On something like Linux Mint, the repositories may not be as up-to-date as PyPI, so picking up the very latest version has its advantages. Thus, I initially used the unrecommended --break-system-packages switch to get things going as before, since doing so never broke anything previously. While that way of working feels like overkill in some ways, using pipx probably is the way forward as long as things work as I want them to do.
There is wisdom in using virtual environments too, especially when AI models are involved. For most of what I get to do, that may be getting too elaborate. Then, deleting or renaming the marker file at /usr/lib/python3.12/EXTERNALLY-MANAGED is tempting if that gets around things, as retrograde as that probably is. After all, I never broke anything before this message started to appear, possibly since my interests are data related.
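For reference, here is how the routes suggested by the message look in practice; the package name is a placeholder for whatever you actually want installed:
# the unrecommended override
pip install --break-system-packages some-package
# a virtual environment keeps things contained
python3 -m venv ~/venvs/general
~/venvs/general/bin/pip install some-package
# pipx manages a virtual environment per application
pipx install some-package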
Upgrading a web server from Debian 11 to Debian 12
25th November 2024
While Debian 12 has been with us since the middle of 2023 and Debian 13 is due in the middle of next year, it has taken me until now to upgrade one of my web servers. The tardiness may have something to do with a mishap on another system that resulted in a rebuild, something to avoid if at all possible.
Nevertheless, I went and had a go with the aforementioned web server after doing some advance research. Thus, I can relate the process that you find here in the knowledge that it worked for me. Also, I will have it on file for everyone's future reference. The first step is to ensure that the system is up-to-date by executing the following commands:
sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
Next, it is best to remove extraneous packages using these commands:
sudo apt --purge autoremove
sudo apt autoclean
Once you have backed up important data and configuration files, you can move to the first step of the upgrade process. This involves changing the repository locations from what is there for bullseye (Debian 11) to those for bookworm (Debian 12). Issuing the following commands will accomplish this:
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*
In my case, I found the second of these to be extraneous since everything was included in the single file. Also, Debian 12 has added a new non-free repository component called non-free-firmware. This can be added at this stage by manually editing the above, as illustrated below. In my case, I did it later because the warning message only began to appear at that stage.
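As an illustration, and assuming the default Debian mirror with only the main component enabled (your entries may differ), a bookworm line with the new component might read:
deb http://deb.debian.org/debian bookworm main non-free-firmware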
Once the repository locations have been changed, it is time to update the package information using the following command:
sudo apt update
Then, it is time to first perform a minimal upgrade using the following command, which takes a conservative approach by updating existing packages without installing any new ones:
sudo apt upgrade --without-new-pkgs
Once that has completed, one needs to issue the following command to install new packages if needed for dependencies and even remove incompatible or unnecessary ones, as well as performing kernel upgrades:
sudo apt full-upgrade
Given all the changes, completing the foregoing commands necessitates a system restart, which can be the most nerve-wracking part of the process when you are dealing with a remote server accessed using SSH. While there are a few options for accomplishing this, here is one that is compatible with the upgrade cycle:
sudo systemctl reboot
Once you can log back into the system again, there is one more piece of housekeeping needed. This step not only removes redundant packages that were automatically installed, but also does the same for their configuration files, an act that really cleans up things. The command to execute is as follows:
sudo apt --purge autoremove
For added reassurance that the upgrade has completed, issuing the following command will show details like the operating system's distributor ID, description, release version and codename:
lsb_release -a
If you run the above commands as root, the sudo prefix is not needed, yet it is perhaps safer to execute them from a less privileged account anyway. The process needs attention paid to any prompts and questions about configuration files and service restarts, should they arise. Nothing like that came up in my case, possibly because this web server serves flat files created using Hugo, avoiding the use of scripting and databases, which would add to the system complexity. Such a simple situation makes the use of scripting more of a possibility. The exercise was speedy enough for me too, though patience is of the essence should a 30–60 minute completion time be your lot, depending on your system and internet speed.
Manually updating Let's Encrypt certificates
8th November 2024
Normally, Let's Encrypt certificates get renewed automatically. Thus, it came as a surprise to me to receive an email telling me that one of my websites had a certificate that was about to expire. The next step was to renew the certificate manually.
That sent me onto the command line in an SSH session to the Ubuntu server in question. Once there, I used the following command to check on my certificates to confirm that the email alert was correct:
sudo certbot certificates
Then, I issued this command to do a test run of the update:
sudo certbot renew --dry-run
In the knowledge that nothing of concern came up in the dry run, then it was time to do the update for real using this command:
sudo certbot renew
Rerunning sudo certbot certificates confirmed that all was in order. All of that did manually what should have happened automatically; adding a cron job should address that, and adding the --quiet switch should cut down on any system emails too.
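A sketch of such a crontab entry follows; the twice-daily timing is only a suggestion, since certbot skips any certificate that is not yet due for renewal:
# added via sudo crontab -e; the schedule here is illustrative
17 3,15 * * * certbot renew --quiet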
Resolving "repository doesn't support architecture i386" error when checking for updates to Brave Browser on Linux
7th June 2024
Recently, I started to observe the following message when doing my usual update routine on Linux Mint (Debian, Ubuntu and their variants are likely affected as well):
N: Skipping acquire of configured file 'main/binary-i386/Packages' as repository 'https://brave-browser-apt-release.s3.brave.com stable InRelease' doesn't support architecture 'i386'
As the message suggests, there was something amiss with the repository set up for Brave, a browser that I added for extra privacy. Since Firefox remains the main one that I use, Brave is something that I have in hand for when I need it. Handily, its installation routine adds in repository information for keeping it up to date. However, there is an issue with what you find in /etc/apt/sources.list.d/brave-browser-release.list. By default, the line appears like thus:
deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg] https://brave-browser-apt-release.s3.brave.com/ stable main
To avoid the i386 error, it needs to look like this instead:
deb [signed-by=/usr/share/keyrings/brave-browser-archive-keyring.gpg arch=amd64] https://brave-browser-apt-release.s3.brave.com/ stable main
The difference between the two is the presence of arch=amd64 in the second version. This stops the search for non-existent i386 files, the 32-bit version in other words. With Y2K38 in the offing, the days of 32-bit computing architectures are numbered because there is a real limit to the magnitude of the dates that can be represented in any case. Thus, sticking with 64-bit ones is both the present for many and the future for all.
Getting rid of the "Get more security upgrades through Ubuntu Pro with 'esm-apps' enabled" message when performing a system update
15th April 2024
Not so long ago, I got the above message while running sudo apt upgrade on an Ubuntu Server system. This was not the first time that this kind of thing had happened to me, so I started searching the web for a solution. You do get to see complaints about advertising, but these are never useful.
Accordingly, here are some possible ways of remediating the situation:
- Execute the following commands to disable the responsible services, renaming the configuration file to prevent it from being used (deleting or editing the configuration file to remove the unwanted content are other options):
  sudo systemctl mask apt-news.service
  sudo systemctl mask esm-cache.service
  sudo mv /etc/apt/apt.conf.d/20apt-esm-hook.conf /etc/apt/apt.conf.d/20apt-esm-hook.conf.disabled
- Alternatively, simply remove the ubuntu-advantage-tools package, which contains the /etc/apt/apt.conf.d/20apt-esm-hook.conf file (the removal commands are shown after this list).
- Another option is to remove the ubuntu-pro-client package.
- Lastly, there also is the possibility of enabling ESM, though that was not desirable for me.
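For the package-removal routes, the commands are ordinary apt operations:
sudo apt purge ubuntu-advantage-tools
sudo apt purge ubuntu-pro-client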
In my case, it may have been the penultimate option on the list that I chose. In any case, I was rid of the unwanted message.