TOPIC: LINUX DISTRIBUTIONS
Ansible automation for Linux Mint updates with repository failover handling
7th November 2025

Recently, I had a Microsoft repository outage disrupt an Ansible playbook-mediated upgrade process for my main Linux workstation. Thus, I ended up creating a failover for this situation, and the first step in the playbook was to define the affected repo:
vars:
  microsoft_repo_url: "https://packages.microsoft.com/repos/code/dists/stable/InRelease"
The next move was to start defining tasks, with the first testing the repo to pick up any lack of responsiveness and flag that for subsequent operations.
tasks:
  - name: Check Microsoft repository availability
    uri:
      url: "{{ microsoft_repo_url }}"
      method: HEAD
      return_content: no
      timeout: 10
    register: microsoft_repo_check
    failed_when: false

  - name: Set flag to skip Microsoft updates if unreachable
    set_fact:
      skip_microsoft_repos: "{{ microsoft_repo_check.status is not defined or microsoft_repo_check.status != 200 }}"
In the event of a failure, the next task was to disable the repo to allow other processing to take place. This was accomplished by temporarily renaming the relevant files under /etc/apt/sources.list.d/.
  - name: Temporarily disable Microsoft repositories
    become: true
    shell: |
      for file in /etc/apt/sources.list.d/microsoft*.list; do
        [ -f "$file" ] && mv "$file" "${file}.disabled"
      done
      for file in /etc/apt/sources.list.d/vscode*.list; do
        [ -f "$file" ] && mv "$file" "${file}.disabled"
      done
    when: skip_microsoft_repos | default(false)
    changed_when: false
With that completed, the rest of the update actions could be performed more or less as usual.
  - name: Update APT cache (retry up to 5 times)
    apt:
      update_cache: yes
    register: apt_update_result
    retries: 5
    delay: 10
    until: apt_update_result is succeeded

  - name: Perform normal upgrade
    apt:
      upgrade: yes
    register: apt_upgrade_result
    retries: 3
    delay: 10
    until: apt_upgrade_result is succeeded

  - name: Perform dist-upgrade with autoremove and autoclean
    apt:
      upgrade: dist
      autoremove: yes
      autoclean: yes
    register: apt_dist_result
    retries: 3
    delay: 10
    until: apt_dist_result is succeeded
After those, another renaming operation restores the files to their original names.
  - name: Re-enable Microsoft repositories
    become: true
    # The [[ ... ]] pattern tests below are bashisms, so run under bash rather than sh
    shell: |
      for file in /etc/apt/sources.list.d/*.disabled; do
        base="$(basename "$file" .disabled)"
        if [[ "$base" == microsoft* || "$base" == vscode* || "$base" == edge* ]]; then
          mv "$file" "/etc/apt/sources.list.d/$base"
        fi
      done
    args:
      executable: /bin/bash
    when: skip_microsoft_repos | default(false)
    changed_when: false
Needless to say, this disabling only happens in the event of a repository failure. Otherwise, the steps are skipped and everything else completes as it should. While there is some cause for extending the repository disabling actions to other third-party repos as well, that is something that I will leave aside for now. Even this shows just how much can be done using Ansible playbooks and how much automation can be achieved. As it happens, I even get Flatpaks updated in much the same way:
  - name: Ensure Flatpak is installed
    apt:
      name: flatpak
      state: present
      update_cache: yes
      cache_valid_time: 3600

  - name: Update Flatpak remotes
    command: flatpak update --appstream -y
    register: flatpak_appstream
    changed_when: "'Now at' in flatpak_appstream.stdout"
    failed_when: flatpak_appstream.rc != 0

  - name: Update all Flatpak applications
    command: flatpak update -y
    register: flatpak_result
    changed_when: "'Now at' in flatpak_result.stdout"
    failed_when: flatpak_result.rc != 0

  - name: Remove unused Flatpak applications
    # -y keeps the clean-up non-interactive when run from a playbook
    command: flatpak uninstall --unused -y
    register: flatpak_cleanup
    changed_when: "'Nothing' not in flatpak_cleanup.stdout"
    failed_when: flatpak_cleanup.rc != 0

  - name: Repair Flatpak installations
    command: flatpak repair
    register: flatpak_repair
    changed_when: flatpak_repair.stdout is search('Repaired|Fixing')
    failed_when: flatpak_repair.rc != 0
The ability to call system commands, as you see in the above sequence, is an added bonus, though getting the response detection completely sorted remains an outstanding task. All this has only scratched the surface of what is possible.
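To run the whole thing, the tasks above just need wrapping in a standard play targeting the local machine; assuming the playbook is saved as mint-updates.yml (a filename chosen purely for illustration), the invocation would be along these lines:
# -i localhost, -c local targets this machine; --ask-become-pass prompts for the sudo password needed by the become: true tasks
ansible-playbook -i localhost, -c local --ask-become-pass mint-updates.yml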
Automating Positron and RStudio updates on Linux Mint 22
6th November 2025

Elsewhere, I have written about avoiding manual updates with VSCode and VSCodium. Here, I come to IDEs produced by Posit, formerly RStudio, for data science and analytics uses. The first is a more recent innovation that works with both R and Python code natively, while the second has been around for much longer and focusses on native R code alone, though there are R packages allowing an interface of sorts with Python. Neither is released via a PPA, necessitating either manual downloading or the scripted approach taken here for a Linux system. Each software tool will be discussed in turn.
Positron
Now, we work through a script that automates the upgrade process for Positron. This starts with a shebang line calling the bash executable before moving to a line that adds safety to how the script works using a set statement. Here, the -e switch triggers exiting whenever there is an error, halting the script before it carries on to perform any undesirable actions. That is followed by the -u switch that causes errors when unset variables are referenced; normally these would expand to an empty value, which is not desirable in all cases. Lastly, the -o pipefail switch causes a pipeline (cmd1 | cmd2 | cmd3) to fail if any command in the pipeline produces an error, which can help debugging because the error is associated with the command that fails to complete.
#!/bin/bash
set -euo pipefail
The next step then is to determine the architecture of the system on which the script is running so that the correct download is selected.
ARCH=$(uname -m)
case "$ARCH" in
x86_64) POSIT_ARCH="x64" ;;
aarch64|arm64) POSIT_ARCH="arm64" ;;
*) echo "Unsupported arch: $ARCH"; exit 1 ;;
esac
Once that completes, we define the address of the web page to be interrogated and the path to the temporary file that is to be downloaded.
RELEASES_URL="https://github.com/posit-dev/positron/releases"
TMPFILE="/tmp/positron-latest.deb"
Now, we scrape the page to find the address of the latest DEB file that has been released.
echo "Finding latest Positron .deb for $POSIT_ARCH..."
DEB_URL=$(curl -fsSL "$RELEASES_URL" \
| grep -Eo "https://cdn\.posit\.co/[A-Za-z0-9/_\.-]+Positron-[0-9\.~-]+-${POSIT_ARCH}\.deb" \
| head -n 1)
If that were to fail, we get an error message produced before the script is aborted.
if [ -z "${DEB_URL:-}" ]; then
echo "Could not find a .deb link for ${POSIT_ARCH} on the releases page"
exit 1
fi
Should all go well thus far, we download the latest DEB file using curl.
echo "Downloading: $DEB_URL"
curl -fL "$DEB_URL" -o "$TMPFILE"
When the download completes, we try installing the package using apt, much like we do with a repo, apart from specifying an actual file path on our system.
echo "Installing Positron..."
sudo apt install -y "$TMPFILE"
Following that, we delete the installation file and issue a message informing the user of the task's successful completion.
echo "Cleaning up..."
rm -f "$TMPFILE"
echo "Done."
When I do this, I tend to find that the Python REPL console does not open straight away, causing me to shut down Positron and leave things for a while before starting it again. There may be temporary files that need to be expunged, and that takes its own time. Someone else might have a better explanation, which I am happy to adopt if it makes more sense than what I am suggesting. Otherwise, all works well.
RStudio
A lot of the same processing happens in the script that updates RStudio, so we will just cover the differences. The set -x statement ensures that every command is printed to the console for the debugging that was needed while this was being developed. Otherwise, much code, including architecture detection, is shared between the two apps.
#!/bin/bash
set -euo pipefail
set -x
# --- Detect architecture ---
ARCH=$(uname -m)
case "$ARCH" in
x86_64) RSTUDIO_ARCH="amd64" ;;
aarch64|arm64) RSTUDIO_ARCH="arm64" ;;
*) echo "Unsupported architecture: $ARCH"; exit 1 ;;
esac
Figuring out the distro version and the web page to scrape was where additional effort was needed, and that is reflected in some of the code that follows. Otherwise, many of the ideas applied with Positron also have a place here.
# --- Detect Ubuntu base ---
DISTRO=$(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || true)
[ -z "$DISTRO" ] && DISTRO="noble"
# --- Define paths ---
TMPFILE="/tmp/rstudio-latest.deb"
LOGFILE="/var/log/rstudio_update.log"
echo "Detected Ubuntu base: ${DISTRO}"
echo "Fetching latest version number from Posit..."
# --- Get version from Posit's official RStudio Desktop page ---
VERSION=$(curl -s https://posit.co/download/rstudio-desktop/ \
| grep -Eo 'rstudio-[0-9]+\.[0-9]+\.[0-9]+-[0-9]+' \
| head -n 1 \
| sed -E 's/rstudio-([0-9]+\.[0-9]+\.[0-9]+-[0-9]+)/\1/')
if [ -z "$VERSION" ]; then
echo "Error: Could not extract the latest RStudio version number from Posit's site."
exit 1
fi
echo "Latest RStudio version detected: ${VERSION}"
# --- Construct download URL (Jammy build for Noble until Noble builds exist) ---
BASE_DISTRO="jammy"
BASE_URL="https://download1.rstudio.org/electron/${BASE_DISTRO}/${RSTUDIO_ARCH}"
FULL_URL="${BASE_URL}/rstudio-${VERSION}-${RSTUDIO_ARCH}.deb"
echo "Downloading from:"
echo " ${FULL_URL}"
# --- Validate URL before downloading ---
if ! curl --head --silent --fail "$FULL_URL" >/dev/null; then
echo "Error: The expected RStudio package was not found at ${FULL_URL}"
exit 1
fi
# --- Download and install ---
curl -L "$FULL_URL" -o "$TMPFILE"
echo "Installing RStudio..."
sudo apt install -y "$TMPFILE" | sudo tee -a "$LOGFILE"
# --- Clean up ---
rm -f "$TMPFILE"
echo "RStudio update to version ${VERSION} completed successfully." | tee -a "$LOGFILE"
When all ended, RStudio worked without a hitch, leaving me to move on to other things. The next time that I am prompted to upgrade the environment, this is likely the way that I will go.
Remote access between Mac and Linux, Part 3: SSH, RDP and TigerVNC
30th October 2025

This is Part 3 of a three-part series on connecting a Mac to a Linux Mint desktop. Part 1 introduced the available options, whilst Part 2 covered x11vnc for sharing physical desktops.
Whilst x11vnc excels at sharing an existing desktop, many scenarios call for terminal access or a fresh graphical session. This article examines three alternatives: SSH for command-line work, RDP for responsive remote desktops with Xfce, and TigerVNC for virtual Cinnamon sessions.
Terminal Access via SSH
For many administrative tasks, a secure shell session is enough. On the Linux machine, the OpenSSH server needs to be installed and running. On Debian or Ubuntu-based systems, including Linux Mint, the required packages are available with standard tools.
Installing with sudo apt install openssh-server followed by enabling the service with sudo systemctl enable ssh and starting it with sudo systemctl start ssh is all that is needed. The machine's address on the local network can be identified with ip addr show, and it is the entry under inet for the active interface that will be used.
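Gathered together, the server-side commands from the preceding paragraph are these:
sudo apt install openssh-server
sudo systemctl enable ssh
sudo systemctl start ssh
ip addr show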
From the Mac, a terminal session to that address is opened with a command of the form ssh username@192.168.1.xxx and this yields a full shell on the Linux machine without further configuration. On a home network, there is no need for router changes and SSH requires no extra client software on macOS.
SSH forms the foundation for secure operations beyond terminal access. It enables file transfer via scp and rsync, and can be used to create encrypted tunnels for other protocols when access from outside the local network is required.
RDP for New Desktop Sessions
Remote Desktop Protocol creates a new login session on the Linux machine and tends to feel smoother over imperfect links. On Linux Mint with Cinnamon, RDP is often the more responsive choice on a Mac, but Cinnamon's reliance on 3D compositing means xrdp does not work with it reliably. The usual workaround is to keep Cinnamon for local use and install a lightweight desktop specifically for remote sessions. Xfce works well in this role.
Setting Up xrdp with Xfce
After updating the package list, install xrdp with sudo apt install xrdp, set it to start automatically with sudo systemctl enable xrdp, and start it with sudo systemctl start xrdp. If a lightweight environment is not already available, install Xfce with sudo apt install xfce4, then tell xrdp to use it by creating a simple session file for the user account with echo "startxfce4" > ~/.xsession. Restarting the service with sudo systemctl restart xrdp completes the server side.
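Collected in one place, the commands just described look like this:
sudo apt update
sudo apt install xrdp
sudo systemctl enable xrdp
sudo systemctl start xrdp
sudo apt install xfce4
echo "startxfce4" > ~/.xsession
sudo systemctl restart xrdp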
The Linux machine's IP address can be checked again so it can be entered into Microsoft Remote Desktop, which is a free download from the Mac App Store. Adding a new connection with the Linux IP and the user's credentials often suffices, and the first connection may present a certificate prompt that can be accepted.
RDP uses port 3389 by default, which needs no router configuration on the same network. It creates a new session rather than attaching to the one already shown on the Linux monitor, so it is not a means to view the live Cinnamon desktop, but performance is typically smooth and latency is well handled.
Why RDP with Xfce?
It is common for xrdp on Ubuntu-based distributions to select a simpler session type unless the user instructs it otherwise, which is why the small .xsession file pointing to Xfce helps. The combination of RDP's protocol efficiency and Xfce's lightweight nature delivers the most responsive experience for new sessions. The protocol translates keyboard and mouse input in a way that many clients have optimised for years, making it the most forgiving route when precise input behaviour matters. The trade-off is that what is shown is a separate desktop session, which can be a benefit or a drawback depending on the task.
TigerVNC for New Cinnamon Sessions
Those who want to keep Cinnamon for remote use can do so with a VNC server that creates a new virtual desktop. TigerVNC is a common choice on Linux Mint. Installing tigervnc-standalone-server, setting a password with vncpasswd and creating an xstartup file under ~/.vnc that launches Cinnamon will provide a new session for each connection.
Configuring TigerVNC
A minimal xstartup for Cinnamon sets the environment to X11, establishes the correct session variables and starts cinnamon-session. Making this file executable and then launching vncserver :1 starts a VNC server on port 5901. The server can be stopped later with vncserver -kill :1.
The xstartup script determines what desktop environment a virtual session launches, and setting the environment variables to Cinnamon then starting cinnamon-session is enough to present the expected desktop. Marking that startup file as executable is easy to miss, and it is required for TigerVNC to run it.
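As a minimal sketch of what such a file might contain (the exact variable values can vary between setups), something like this can serve as ~/.vnc/xstartup before it is marked executable:
#!/bin/sh
# Declare an X11 session running Cinnamon
export XDG_SESSION_TYPE=x11
export XDG_CURRENT_DESKTOP=X-Cinnamon
# Hand control to the Cinnamon session manager
exec cinnamon-session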
From the Mac, the built-in Screen Sharing app can be used from Finder's Connect to Server entry by supplying vnc://192.168.1.xxx:5901, or a third-party viewer such as RealVNC Viewer can connect to the same address and port. This approach provides the Cinnamon look and feel, though it can be less responsive than RDP when the network is not ideal, and it also creates a new desktop session rather than sharing the one already in use on the Linux screen.
Clipboard Support in TigerVNC
For TigerVNC, clipboard support typically requires the vncconfig helper application to be running on the server. Starting vncconfig -nowin & in the background, often by adding it to the ~/.vnc/xstartup file, enables clipboard synchronisation between the VNC client and server for plain text.
File Transfer
File transfer between the machines is best handled using the command-line tools that accompany SSH. On macOS, scp file.txt username@192.168.1.xxx:/home/username/ sends a file to Linux and scp username@192.168.1.xxx:/home/username/file.txt ~/Desktop/ retrieves one, whilst rsync with -avz flags can be used for larger or incremental transfers.
These tools work reliably regardless of which remote access method is being used for interactive sessions. File copy-paste is not supported by VNC protocols, making scp and rsync the dependable choice for moving files between machines.
Operational Considerations
Port Management
Understanding port mappings helps avoid connection issues. VNC display numbers map directly to TCP ports, so :0 means 5900, :1 means 5901 and so on. RDP uses port 3389 by default. When connecting with viewers, supplying the address alone will use the default port for that protocol. If a specific port must be stated, use a single colon with the actual TCP port number.
First Connection Issues
If a connection fails unexpectedly, checking whether a server is listening with netstat can save time. On first-time connections to an RDP server, the client may display a certificate warning that can be accepted for home use.
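For example, a quick check along these lines, using the port mappings described above, confirms whether xrdp or a VNC server is listening:
# Look for listeners on the RDP and first VNC display ports
sudo netstat -tlnp | grep -E ':(3389|5901)'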
Making Services Persistent
For regular use, enabling services at boot removes the need for manual intervention. Both xrdp and TigerVNC can be configured to start automatically, ensuring that remote access is available whenever the Linux machine is running. The systemd service approach described for x11vnc in Part 2 can be adapted for TigerVNC if automatic startup of virtual sessions is desired.
Security and Convenience
Security considerations in a home setting are straightforward. When both machines are on the same local network, there is no need to adjust router settings for any of these methods. If remote access from outside the home is required, port forwarding and additional protections would be needed.
SSH can be exposed with careful key-based authentication, RDP should be placed behind a VPN or an SSH tunnel, and VNC should not be left open to the internet without an encrypted wrapper. For purely local use, enabling the necessary services at boot or keeping a simple set of commands to hand often suffices.
xrdp can be enabled once and left to run in the background, so the Mac's Microsoft Remote Desktop app can connect whenever needed. This provides a consistent way to access a fresh Xfce session without affecting what is displayed on the Linux machine's monitor.
Summary and Recommendations
The choice between these methods ultimately comes down to the specific use case. SSH provides everything necessary for administrative work and forms the foundation for secure file transfer. RDP into an Xfce session is a sensible choice when responsiveness and clean input handling are the priorities and a separate desktop is acceptable. TigerVNC can launch a full Cinnamon session for those who value continuity with the local environment and do not mind the slight loss of responsiveness that can accompany VNC.
For file transfer, the command-line tools that accompany SSH remain the most reliable route. Clipboard synchronisation for plain text is available in each approach, though TigerVNC typically needs vncconfig running on the server to enable it.
Having these options at hand allows a Mac and a Linux Mint desktop to work together smoothly on a home network. The setup is not onerous, and once a choice is made and the few necessary commands are learned, the connection can become an ordinary part of using the machines. After that, the day-to-day experience can be as simple as opening a single app on the Mac, clicking a saved connection and carrying on from where the Linux machine last left off.
The Complete Picture
Across this three-part series, we have examined the full range of remote access options between Mac and Linux:
- Part 1 provided the decision framework for choosing between terminal access, new desktop sessions and sharing physical displays.
- Part 2 explored x11vnc in detail, including performance tuning, input handling with KVM switches, clipboard troubleshooting and systemd service configuration.
- Part 3 covered SSH for terminal access, RDP with Xfce for responsive remote sessions, TigerVNC for virtual Cinnamon desktops, and file transfer considerations.
Each approach has its place, and understanding the trade-offs allows the right tool to be selected for the task at hand.
Command line installation and upgrading of VSCode and VSCodium on Windows, macOS and Linux
25th October 2025

Downloading and installing software packages from a website is all very well until you need to update them. Then, a single command streamlines the process significantly. Given that VSCode and VSCodium are updated regularly, this becomes all the more pertinent and explains why I chose them for this piece.
Windows
Now that Windows 10 is more or less behind us, we can focus on Windows 11. That comes with the winget command by default, which is handy because it allows command line installation of anything in the Windows store, including VSCode and VSCodium. The commands can be as simple as these:
winget install VisualStudioCode
winget install VSCodium.VSCodium
The above is shorthand for supplying the full package identifiers, though:
winget install --id Microsoft.VisualStudioCode
winget install --id VSCodium.VSCodium
If you want exact matches, the above then becomes:
winget install -e --id Microsoft.VisualStudioCode
winget install -e --id VSCodium.VSCodium
For upgrades, this is what is needed:
winget upgrade Microsoft.VisualStudioCode
winget upgrade VSCodium.VSCodium
Even better, you can upgrade everything in one operation:
winget upgrade --all
That last part certainly beats the round trip to a website followed by a run through an installation GUI. There is a lot less mouse clicking, for one thing.
macOS
On macOS, you need to have Homebrew installed to make things more streamlined. To accomplish that, you need to run the following command (which may ask for your system password along the way):
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
Then, you can execute one or both of these in the Terminal app, perhaps having to authorise everything with your password when requested to do so:
brew install --cask visual-studio-code
brew install --cask vscodium
The reason for the --cask switch is that these are apps that need to go into the correct locations on macOS, as well as having their icons appear in Launchpad. Omitting it is fine for command line utilities, but not for these.
To update and upgrade everything that you have installed via Homebrew, just issue the following in a terminal session:
brew update && brew upgrade
Debian, Ubuntu & Linux Mint
Like any other Debian or Ubuntu derivative, Linux Mint has its own in-built package management system via apt. Other Linux distributions have their own way of doing things (Fedora and Arch come to mind here), yet the essential idea is similar in many cases. Because there are a number of steps, I have split out VSCode from VSCodium for added clarity. Because of the way that things are set up, one or both apps can be updated using the usual apt commands without individual attention.
VSCode
The first step is to download the repository key and install it where APT can find it, using the following commands:
wget -qO- https://packages.microsoft.com/keys/microsoft.asc \
| gpg --dearmor > packages.microsoft.gpg
sudo install -D -o root -g root -m 644 packages.microsoft.gpg /etc/apt/keyrings/packages.microsoft.gpg
Then, you can add the repository like this:
echo "deb [arch=amd64 signed-by=/etc/apt/keyrings/packages.microsoft.gpg] \
https://packages.microsoft.com/repos/code stable main" \
| sudo tee /etc/apt/sources.list.d/vscode.list
With that in place, the last thing that you need to do is issue the command for doing the installation from the repository:
sudo apt update; sudo apt install code
Above, I have put two commands together: one to update the repository and another to do the installation.
VSCodium
Since the VSCodium process is similar, here are the three commands together: one for downloading the repository key, another that adds the new repository and one more to perform the repository updates and subsequent installation:
curl -fSsL https://gitlab.com/paulcarroty/vscodium-deb-rpm-repo/raw/master/pub.gpg \
| sudo gpg --dearmor | sudo tee /usr/share/keyrings/vscodium-archive-keyring.gpg >/dev/null
echo "deb [arch=amd64 signed-by=/usr/share/keyrings/vscodium-archive-keyring.gpg] \
https://download.vscodium.com/debs vscodium main" \
| sudo tee /etc/apt/sources.list.d/vscodium.list
sudo apt update; sudo apt install codium
After the three steps have completed successfully, VSCodium is installed and available to use on your system, and is accessible through the menus too.
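Should you want confirmation that all went well, each editor can report its version from the command line:
code --version
codium --version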
Controlling the version of Python used in the Positron console with virtual environments
21st October 2025

Because I have Homebrew installed on my Linux system for getting Hugo and LanguageTool on there, I also have a later version of Python than is available from the Linux Mint repositories. Both 3.12 and 3.13 are on my machine as a consequence. Here is the line in my .bashrc file that makes that happen:
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
The result is when I issue the command which python3, this is what I get:
/home/linuxbrew/.linuxbrew/bin/python3
However, Positron looks to /usr/bin/python3 by default. Since this can get confusing, setting up a virtual environment has its uses, as long as you create it with the intended Python version. This is how you can do it, even if I found that I needed sudo mode for some reason:
python3 -m venv .venv
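Because a venv inherits whichever interpreter created it, the Homebrew version can be pinned explicitly by calling it via the full path seen earlier:
/home/linuxbrew/.linuxbrew/bin/python3 -m venv .venv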
When working solely on the command line, activating it becomes a necessity, adding another manual step to a mind that had resisted all this until recently:
source .venv/bin/activate
Thankfully, just issuing the deactivate command will do the eponymous action. Even better, just opening a folder with a venv in Positron saves you from issuing the extra commands and grants you the desired Python version in the console that it opens. Having run into some clashes between package versions, I am beginning to appreciate having a dedicated environment for a set of Python scripts, especially when an IDE makes it easy to work with such an arrangement.
SAS Packages: Revolutionising code sharing in the SAS ecosystem
26th July 2025

In the world of statistical programming, SAS has long been the backbone of data analysis for countless organisations worldwide. Yet, for decades, one of the most significant challenges facing SAS practitioners has been the efficient sharing and reuse of code. Knowledge and expertise have often remained siloed within individual developers or teams, creating inefficiencies and missed opportunities for collaboration. Enter the SAS Packages Framework (SPF), a solution that changes how SAS professionals share, distribute and utilise code across their organisations and the broader community.
The Problem: Fragmented Knowledge and Complex Dependencies
Anyone who has worked extensively with SAS knows the frustration of trying to share complex macros or functions with colleagues. Traditional code sharing in SAS has been plagued by several issues:
- Dependency nightmares: A single macro often relies on dozens of utility macros working behind the scenes, making it nearly impossible to share everything needed for the code to function properly
- Version control chaos: Keeping track of which version of which macro works with which other components becomes an administrative burden
- Platform compatibility issues: Code that works on Windows might fail on Linux systems and vice versa
- Lack of documentation: Without proper documentation and help systems, even the most elegant code becomes unusable to others
- Knowledge concentration: Valuable SAS expertise remains trapped within individuals rather than being shared with the broader community
These challenges have historically meant that SAS developers spend countless hours reinventing the wheel, recreating functionality that already exists elsewhere in their organisation or the wider SAS community.
The Solution: SAS Packages Framework
The SAS Packages Framework, developed by Bartosz Jabłoński, represents a paradigm shift in how SAS code is organised, shared and deployed. At its core, a SAS package is an automatically generated, single, standalone zip file containing organised and ordered code structures, extended with additional metadata and utility files. This solution addresses the fundamental challenges of SAS code sharing by providing:
- Functionality over complexity: Instead of worrying about 73 utility macros working in the background, you simply share one file and tell your colleagues about the main functionality they need to use.
- Complete self-containment: Everything needed for the code to function is bundled into one file, eliminating the "did I remember to include everything?" problem that has plagued SAS developers for years.
- Automatic dependency management: The framework handles the loading order of code components and automatically updates system options like cmplib= and fmtsearch= for functions and formats.
- Cross-platform compatibility: Packages work seamlessly across different operating systems, from Windows to Linux and UNIX environments.
Beyond Macros: A Spectrum of SAS Functionality
One of the most compelling aspects of the SAS Packages Framework is its versatility. While many code-sharing solutions focus solely on macros, SAS packages support a wide range of SAS functionality:
- User-defined functions (both FCMP and CASL)
- IML modules for matrix programming
- PROC PROTO C routines for high-performance computing
- Custom formats and informats
- Libraries and datasets
- PROC DS2 threads and packages
- Data generation code
- Additional content such as documentation PDFs
This comprehensive approach means that virtually any SAS functionality can be packaged and shared, making the framework suitable for everything from simple utility macros to complex analytical frameworks.
Real-World Applications: From Pharmaceutical Research to General Analytics
The adoption of SAS packages has been particularly notable in the pharmaceutical industry, where code quality, validation and sharing are critical concerns. The PharmaForest initiative, led by PHUSE Japan's Open-Source Technology Working Group, exemplifies how the framework is being used to revolutionise pharmaceutical SAS programming. PharmaForest offers a collaborative repository of SAS packages specifically designed for pharmaceutical applications, including:
- OncoPlotter: A comprehensive package for creating figures commonly used in oncology studies
- SAS FAKER: Tools for generating realistic test data while maintaining privacy
- SASLogChecker: Automated log review and validation tools
- rtfCreator: Streamlined RTF output generation
The initiative's philosophy captures perfectly the spirit of the SAS Packages Framework: "Through SAS packages, we want to actively encourage sharing of SAS know-how that has often stayed within individuals. By doing this, we aim to build up collective knowledge, boost productivity, ensure quality through standardisation and energise our community".
The SASPAC Archive: A Growing Ecosystem
The establishment of SASPAC (SAS Packages Archive) represents the maturation of the SAS packages ecosystem. This dedicated repository serves as the official home for SAS packages, with each package maintained as a separate repository complete with version history and documentation. Some notable packages available through SASPAC include:
- BasePlus: Extends BASE SAS with functionality that many developers find themselves wishing was built into SAS itself. With 12 stars on GitHub, it's become one of the most popular packages in the archive.
- MacroArray: Provides macro array functionality that simplifies complex macro programming tasks, addressing a long-standing gap in SAS's macro language capabilities.
- SQLinDS: Enables SQL queries within data steps, bridging the gap between SAS's powerful data step processing and SQL's intuitive query syntax.
- DFA (Dynamic Function Arrays): Offers advanced data structures that extend SAS's analytical capabilities.
- GSM (Generate Secure Macros): Provides tools for protecting proprietary code while still enabling sharing and collaboration.
Getting Started: Surprisingly Simple
Despite the capabilities, getting started with SAS packages is fairly straightforward. The framework can be deployed in multiple ways, depending on your needs. For a quick test or one-time use, you can enable the framework directly from the web:
filename packages "%sysfunc(pathname(work))";
filename SPFinit url "https://raw.githubusercontent.com/yabwon/SAS_PACKAGES/main/SPF/SPFinit.sas";
%include SPFinit;
For permanent installation, you simply create a directory for your packages and install the framework:
filename packages "C:SAS_PACKAGES";
%installPackage(SPFinit)
Once installed, using packages becomes as simple as:
%installPackage(packageName)
%helpPackage(packageName)
%loadPackage(packageName)
Developer Benefits: Quality and Efficiency
For SAS developers, the framework offers numerous advantages that go beyond simple code sharing:
- Enforced organisation: The package development process naturally encourages better code organisation and documentation practices.
- Built-in testing: The framework includes testing capabilities that help ensure code quality and reliability.
- Version management: Packages include metadata such as version numbers and generation timestamps, supporting modern DevOps practices.
- Integrity verification: The framework provides tools to verify package authenticity and integrity, addressing security concerns in enterprise environments.
- Cherry-picking: Users can load only specific components from a package, reducing memory usage and namespace pollution.
The Future of SAS Code Sharing
The growing adoption of SAS packages represents more than just a new tool; it signals a fundamental shift towards a more collaborative and efficient SAS ecosystem. The framework's MIT licensing and 100% open-source nature ensure that it remains accessible to all SAS users, from individual practitioners to large enterprise installations. This democratisation of advanced code-sharing capabilities levels the playing field and enables even small teams to benefit from enterprise-grade development practices.
As the ecosystem continues to grow, with contributions from pharmaceutical companies, academic institutions and individual developers worldwide, the SAS Packages Framework is proving that the future of SAS programming lies not in isolated development, but in collaborative, community-driven innovation.
For SAS practitioners looking to modernise their development practices, improve code quality and tap into the collective knowledge of the global SAS community, exploring SAS packages isn't just an option; it's becoming an essential step towards more efficient and effective statistical programming.
Getting Pylance to recognise locally installed packages in VSCode running on Linux Mint
17th December 2024

When using VSCode on Linux Mint, I noticed that it was not finding any Python packages installed into my user area, as is my usual way of working. Thus, they were being highlighted as missing when they were already there.
The solution was to navigate to File > Preferences > Settings and click on the Open Settings (JSON) icon in the top right-hand corner of the app to add something like the following snippet in there.
"python.analysis.extraPaths": [
"/home/[user]/.local/bin"
]
Once you have added your username to the above (adjusting the Python version in the path if yours differs), saving the file is the next step. Then, VSCode needs a restart to pick up the new location. After that, the false positives get eliminated.
What to do when the externally-managed environment error appears while using pip to install Python packages on Linux Mint 22
16th December 2024

After upgrading to Linux Mint 22, the following message appeared when attempting to install Python packages using the pip command:
error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.12/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
This will frustrate anyone following how-tos on the web, so users need to know about it. On something like Linux Mint, the repositories may not be as up to date as PyPI, so picking up the very latest version has its advantages. Thus, I initially used the unrecommended --break-system-packages switch to get things going as before, since doing so had never broken anything. While that way of working feels like overkill in some ways, using pipx probably is the way forward, as long as things work as I want them to.
There is wisdom in using virtual environments too, especially when AI models are involved. For most of what I get to do, that may be getting too elaborate. Then, deleting or renaming the marker file at /usr/lib/python3.12/EXTERNALLY-MANAGED is tempting if that gets around things, as retrograde as that probably is. After all, I never broke anything before this message started to appear, possibly because my interests are data related.
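For the pipx route that the message recommends, the commands are short; the final package name is only a placeholder:
# Install pipx from the Mint repositories and make sure its apps are on PATH
sudo apt install pipx
pipx ensurepath
# Each application then gets its own managed virtual environment
pipx install some-package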
Upgrading a web server from Debian 11 to Debian 12
25th November 2024

While Debian 12 may have been with us since the middle of 2023 and Debian 13 is due in the middle of next year, it has taken me until now to upgrade one of my web servers. The tardiness may have something to do with a mishap on another system that resulted in a rebuild, something to avoid if at all possible.
Nevertheless, I went and had a go with the aforementioned web server after doing some advance research. Thus, I can relate the process that you find here in the knowledge that it worked for me. Also, I will have it on file for everyone's future reference. The first step is to ensure that the system is up-to-date by executing the following commands:
sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
Next, it is best to remove extraneous packages using these commands:
sudo apt --purge autoremove
sudo apt autoclean
Once you have backed up important data and configuration files, you can move to the first step of the upgrade process. This involves changing the repository locations from what is there for bullseye (Debian 11) to those for bookworm (Debian 12). Issuing the following commands will accomplish this:
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list
sudo sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list.d/*
In my case, I found the second of these to be extraneous, since everything was included in the single file. Also, Debian 12 has added a new non-free repository component called non-free-firmware. This can be added at this stage by manually editing the above, as sketched below. In my case, I did it later because the warning message only began to appear at that stage.
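One way of doing that edit, assuming the common one-line sources format, is a sed command like this, which appends the new component wherever the main one appears for bookworm:
sudo sed -i 's/bookworm main/bookworm main non-free-firmware/g' /etc/apt/sources.list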
Once the repository locations have been changed, it is time to update the package information using the following command:
sudo apt update
Then, it is time to perform a minimal upgrade first, using the following command, which takes a conservative approach by updating existing packages without installing any new ones:
sudo apt upgrade --without-new-pkgs
Once that has completed, one needs to issue the following command to install new packages if needed for dependencies and even remove incompatible or unnecessary ones, as well as performing kernel upgrades:
sudo apt full-upgrade
Given all the changes, the completion of the foregoing commands necessitates a system restart, which can be the most nerve-wracking part of the process when you are dealing with a remote server accessed using SSH. While there are a few options for accomplishing this, here is one that is compatible with the upgrade cycle:
sudo systemctl reboot
Once you can log back into the system again, there is one more piece of housekeeping needed. This step not only removes redundant packages that were automatically installed, but also does the same for their configuration files, an act that really cleans up things. The command to execute is as follows:
sudo apt --purge autoremove
For added reassurance that the upgrade has completed, issuing the following command will show details like the operating system's distributor ID, description, release version and codename:
lsb_release -a
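On a successfully upgraded system, the output should look something like the following, give or take point-release details:
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 12 (bookworm)
Release:        12
Codename:       bookworm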
If you run the above commands as root, the sudo prefix is not needed, yet it is perhaps safer to execute them under a less privileged account anyway. The process needs attention paid to any prompts and questions about configuration files and service restarts, should they arise. Nothing like that came up in my case, possibly because this web server serves flat files created using Hugo, avoiding the use of scripting and databases, which would add to the system complexity. Such a simple situation makes the use of scripting more of a possibility. The exercise was speedy enough for me too, though patience is of the essence should a 30–60 minute completion time be your lot, depending on your system and internet speed.
What to do when a GPG signature becomes invalid for a package repository on Linux Mint
12th September 2024

During a package update on my main Linux system, I encountered the following kind of error message:
An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cli.github.com/packages stable InRelease: The following signatures were invalid: EXPKEYSIG <GPG Key> GitHub CLI
The message indicated a problem with GPG signature verification for the GitHub CLI repository. The cause was that the repository's signing key had expired (hence the EXPKEYSIG code), preventing the package manager from updating the repository's index files. The first step then was to remove the invalid GPG key using the following command:
sudo apt-key del <GPG Key>
With the invalid GPG key removed, the next step is to add the new GPG key for the GitHub CLI repository by issuing the following command:
curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /usr/share/keyrings/githubcli-archive-keyring.gpg > /dev/null
Once I had the new GPG key, I was able to use my usual system update process without any problem. The error message was gone, and updates and upgrades proceeded as intended.