TOPIC: SCRIPTING LANGUAGES
Loading API Keys from Linux shell environment variables in Python with Dotenv
23rd October 2025
Recently, I ran into trouble getting Python to pick up an API key that I had defined in the underlying Bash environment. This was within a Python console running inside the Positron IDE for R and Python scripting. Opening the folder containing my Python scripts within the IDE was part of the solution. The next part was creating a .env file within the same folder, to which a line like this was added:
export API_KEY="<API key value>"
With that in place, code like the following could read in the API key in a more robust manner:
import os
from dotenv import load_dotenv
load_dotenv()
api_key = os.getenv('API_KEY', 'default_value')
This imports the os module and the load_dotenv function from the dotenv package. Then, load_dotenv is executed to load the .env file and its contents. After that, os.getenv reads the environment variable into a Python variable, falling back to the supplied default if the variable is not set.
Since this also was within a Git repository, a .gitignore file needed creating with the contents .env to stop that file being uploaded to GitHub, which is the last place where you should be storing credentials like passwords, passphrases and API keys. While my repository may be private, the state of things in these troubled times means that even that is no failsafe.
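Creating the ignore file takes no more than a line in the shell; the append operator preserves any entries that are there already:
echo ".env" >> .gitignore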
Controlling the version of Python used in the Positron console with virtual environments
21st October 2025
Because I have Homebrew installed on my Linux system for getting Hugo and LanguageTool on there, I also have a later version of Python than is available from the Linux Mint repositories. Both 3.12 and 3.13 are on my machine as a consequence. Here is the line in my .bashrc file that makes that happen:
eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"
The result is that when I issue the command which python3, this is what I get:
/home/linuxbrew/.linuxbrew/bin/python3
However, Positron looks to /usr/bin/python3 by default. Since this can get confusing, setting up a virtual environment has its uses, as long as you create it with the intended Python version. This is how you can do it, though I found that I needed sudo for some reason:
python3 -m venv .venv
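To guarantee the intended version, the Homebrew interpreter can be invoked by its full path instead; this is the same path that which python3 revealed above:
/home/linuxbrew/.linuxbrew/bin/python3 -m venv .venv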
When working solely on the command line, activating it becomes a necessity, adding another manual step for someone who had resisted all this until recently:
source .venv/bin/activate
Thankfully, just issuing the deactivate command will do the eponymous action. Even better, just opening a folder with a venv in Positron saves you from issuing the extra commands and grants you the desired Python version in the console that it opens. Having run into some clashes between package versions, I am beginning to appreciate having a dedicated environment for a set of Python scripts, especially when an IDE makes it easy to work with such an arrangement.
File comparison using PowerShell
16th August 2025
In the past, I have compared files on the Linux/UNIX command line as well as the legacy Windows command line. Recently, I decided to try it using PowerShell. Here is the command structure:
Compare-Object (Get-Content ".\[name of one text file]") (Get-Content ".\[name of another text file]") > [path and name of output file]
Admittedly, this is more verbose than the others that I have mentioned above. Nevertheless, it does the job and sends everything to a text file for review. The Compare-Object piece does the comparison once the Get-Content portions have read in the content.
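As a concrete sketch with hypothetical file names, comparing two text files and sending the differences to a third for review might look like this:
Compare-Object (Get-Content ".\before.txt") (Get-Content ".\after.txt") > .\differences.txt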
Keeping a graphical eye on CPU temperature and power consumption on the Linux command line
20th March 2025
Following my main workstation upgrade in January, some extra monitoring has been needed. This follows on from the experience with building its predecessor more than three years ago.
Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like what you see below using a combination of sensors and nvidia-smi in the following command:
watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo "" | tee /dev/fd/2; nvidia-smi"
Everything is done within a watch command that refreshes the display every two seconds. Then, the panels are built up by a succession of commands separated by semicolons, one for each portion of the display. The grep command picks out the desired output of the sensors command that is piped to it; doing that for each pattern gets us the temperature lines. The next command, echo '' | tee /dev/fd/2, adds an extra blank line by writing one to both standard output and, via tee, standard error before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.
[Screenshot: watch display combining sensors temperature readings with nvidia-smi output]
However, I also came across a more graphical way to do things using commands like turbostat or sensors, along with AWK programming and ttyplot. Converting the temperature output from the above into a rolling plot needs the following:
while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"
This is done in an infinite while loop to keep things refreshing; the watch command does not work for piping output from the sensors command through awk and on to ttyplot on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two decimal places and prints it before flushing the output buffer. The ttyplot command then plots those numbers, as seen in the screenshot below, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).
[Screenshot: ttyplot graph of CPU temperature (Tctl)]
A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:
sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | awk '{ printf("%.2f\n", $1); fflush(); }' | ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"
Handily, turbostat can be made to update every so often (every second in the command above), avoiding the need for an infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though turbostat itself needs superuser privileges, unlike the sensors command earlier; awk and ttyplot can run as a normal user. Then, awk is used as before to process the wattage for plotting; the first field is what is being picked out here. After that, ttyplot displays the plot seen in the screenshot below with appropriate title, units and scaling. All works with output from one command acting as input to another using pipes.
[Screenshot: ttyplot graph of CPU package power from turbostat]
All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.
Finding human balance in an age of AI code generation
12th March 2025
Recently, I was asked about how I felt about AI. Given that the other person was not an enthusiast, I picked on something that happened to me, not so long ago. It involved both Perplexity and Google Gemini when I was trying to debug something: both produced too much code. The experience almost inspired a LinkedIn post, only for some of the thinking to go online here for now. A spot of brainstorming using an LLM sounds like a useful exercise.
Going back to the original question, it happened during a meeting about potential freelance work. Thus, I tapped into experiences with code generators over several decades. The first one involved a metadata-driven tool that I developed; users reported that there was too much imperfect code to debug with the added complexity that dealing with clinical study data brings. That challenge resurfaced with another bespoke tool that someone else developed, and I opted to make things simpler: produce some boilerplate code and let users take things from there. Later, someone else again decided to have another go, seemingly with more success.
It is even more challenging when you are insufficiently familiar with the code that is being produced. That happened to me with shell scripting code from Google Gemini that was peppered with some Awk code. There was no alternative but to learn a bit more about the language from Tutorials Point and seek out an online book elsewhere. That did get me up to speed, and I will return to these when I am in need again.
Then, there was the time when I was trying to get a Julia script to deal with Google Drive needing permissions to be set. This set Google Gemini off adding more and more error-checking code with try/catch blocks. Since I did not have the issue at that point, I opted to halt and wait for its recurrence. When it did recur, I opted for a simpler approach, especially with the gdrive CLI tool starting up a web server to complete the process of reactivation. While there are times when shell scripting suits these things better than Julia, I added extra robustness and user-friendliness anyway.
During that second task, I was using VS Code with the GitHub Copilot plugin. There is a need to be careful, yet that can save time when it adds suggestions for you to include or reject. The latter may apply when it adds conditional logic that needs more checking, while simple code outputting useful text to the console can be approved. While that certainly is how I approach things for now, it brings up an increasingly relevant question for me.
How do we deal with all this code production? In an environment with myriads of unit tests and a great deal of automation, there may be more capacity for handling the output than mere human inspection and review, which can overwhelm the limitations of a human context window. A quick search revealed that there are automated tools for just this purpose, possibly with their own learning curves; otherwise, manual working could be a better option in some cases.
After all, we need to do our own thinking too. That was brought home to me during the Julia script editing. To come up with a solution, I had to step away from LLM output and think creatively about something simpler. There was a tension between the two needs during the exercise, which highlighted how important it is not to be distracted by all the new technology. Being an introvert, I need that solo space, yet now I have to step away from technology to get it, when technology was a refuge in the first place.
Anyone with a programming hobby has to limit all this input to avoid being overwhelmed; learning a programming language could involve stripping the AI extensions out of a code editor, for instance. LLM output has its place, yet it has to be at a human scale too. That perhaps is the genius of a chat interface, and we now have agentic AI too. It is as if the technology curve never slackens, at least not until the current boom ends, possibly when things break because they go too far beyond us. All this acceleration is fine until we need to catch up with what is happening.
Clearing the Julia REPL
23rd September 2024
During development, there are times when you need to clear the Julia REPL. It can become so laden with content that debugging your code gets hard. One way to accomplish this is issuing the CTRL + L keyboard shortcut while focus is within the REPL; you need to click on it first. Another is to issue the following in the REPL itself:
print("\033c")
Here, \033 is the escape character written in octal; it introduces terminal control sequences. The c character that follows tells the terminal to reset to its initial state, and printing the sequence is what does the clearance. Variations can be used to clear other kinds of console screens too, which makes it a more generic solution.
Dropping to an underlying shell using the ; character is another possibility. Then, you can use the clear or cls commands as needed; the latter is for Windows systems.
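In practice, that looks like the following, with the prompt switching to shell mode once the semicolon is pressed:
julia> ;
shell> clear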
One last option is to define a Julia function for doing this:
function clear_console()
    run(`clear`)  # or `cls` for Windows
end
Calling the clear_console function then clears the screen programmatically, allowing for greater automation. The run function is the one that sends the command in backticks to the underlying shell for execution; using that line on its own should work too.
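A minimal cross-platform sketch of the same idea, assuming you want the function to choose the command itself, might look like this; Sys.iswindows comes with Julia's standard library:
function clear_console()
    if Sys.iswindows()
        run(`cmd /c cls`)  # cls is a cmd built-in, so cmd has to invoke it
    else
        run(`clear`)
    end
end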
Unzipping more than one file at a time in Linux and macOS
10th September 2024
To me, it sounded like a task for shell scripting: I wanted to extract three zip archives in one go. They had come from Google Drive and contained different splits of the files that I needed, raw images from a camera. However, I found a more succinct method than the line of code that you see below (it is intended for the BASH shell):
for z in *.zip; do unzip "$z"; done
That loops through each file that matches a glob string. All I needed was something like this:
unzip '*.zip'
Before embarking on a search, I had got close to this myself, but had not quoted the wildcard string; without the quoting, it was not working for me. To be sure that I was not extracting more than I needed, I made the wildcard string more specific for my case.
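As an illustration with a hypothetical naming pattern, the quotes stop the shell expanding the wildcard, so that unzip receives the pattern intact and does the matching itself:
unzip 'camera-batch-*.zip'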
Once the extraction was complete, I moved the files into a Lightroom Classic repository for working on them later. All this happened on an iMac, but the extraction itself should work on any UNIX-based operating system, since it is unzip rather than the shell that expands the quoted wildcard.
Executing PowerShell scripts in Windows 11
14th August 2024
Recently, I have added the capability to update a Hugo-driven website from a laptop running Windows 11. Compared to what you get with Linux, I do feel a little like a fish out of water when it comes to using Windows for tasks that I accomplish more often on the former. That includes running PowerShell scripts instead of their BASH counterparts. While the Windows Subsystem for Linux could be an option, I have not gone down that route on this machine. Learning the ways of the Windows Terminal cannot do any harm in any case.
The default action of not executing PowerShell scripts is not a bad approach when it comes to keeping machines secure for less technical users. For the rest of us, you need to learn how to use the Set-ExecutionPolicy cmdlet. Doing this in a safe way means doing it in a restrictive manner. Thus, I chose the following command and executed it in a terminal running with admin privileges:
Set-ExecutionPolicy -ExecutionPolicy Unrestricted -Scope CurrentUser
The scope here is the currently logged-in user, instead of allowing every user the same capability. Some undoubtedly might suggest an execution policy of AllSigned, but that adds effort that I was unwilling to expend for a machine that is not that critical. There was nothing too complicated about the script logic anyway.
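The result can be checked afterwards by listing the effective policy at each scope:
Get-ExecutionPolicy -List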
Making the script available without needing to specify the path to it was my next step. In my case, I added a new location to the Path environment variable. To accomplish that, you need to find the Control Panel, open it and go to System and Security. Then, move to System (Control Panel\System and Security\System) and click on Advanced System Settings. In the new dialogue box that appears, click on the Environment Variables... button. Next, select the Path entry and click on the Edit button. That spawns another dialogue box where I added the new path. Clicking the OK button in each dialogue box closes them all, one at a time, to get back to the Control Panel window again. That too can be closed, and any open terminals shut down and a new one opened. The process is clunky, yet it works once you know what to do.
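As an alternative to clicking through all those dialogue boxes, the same change can be scripted from PowerShell using the .NET environment API; the folder name below is a placeholder of my own, and a new terminal is still needed afterwards to pick up the change:
$scriptDir = "C:\Scripts"  # placeholder: the folder holding your scripts
$userPath = [Environment]::GetEnvironmentVariable("Path", "User")
[Environment]::SetEnvironmentVariable("Path", "$userPath;$scriptDir", "User")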
At the end of all this, I had a scripted process for updating a Hugo-driven website. It was not as sleek as what I have on my Linux system, yet it works well enough to allow more flexibility. In time, I may refine things further.
AttributeError: module 'PIL' has no attribute 'Image'
11th March 2024
One of my websites has an online photo gallery. This has been a long-term activity that has taken several forms over the years. Once HTML and JavaScript based, it then was powered by Perl before PHP and MySQL came along to take things from there.
While that remains how it works, the publishing side of things has used its own selection of mechanisms over the same time span. Perl and XML were the backbone until Python and Markdown took over. There was a time when ImageMagick and GraphicsMagick handled image processing, but Python now does that as well.
That was when the error message gracing the title of this post came to my notice. Everything was working well when executed in Spyder, but the message appeared when I tried running things using Python on the command line. PIL is the import name for the Python 3 Pillow package; there was a package actually called PIL in the Python 2 days.
For me, pillow loads, resizes and creates new images, which is handy for adding borders and copyright/source information to each image as well as creating thumbnails. All this happens in memory and that makes everything go quickly, much faster than disk-based tools like ImageMagick and GraphicsMagick.
Of course, nothing is going to happen if the package cannot be loaded, and that is what the error message is about. Linux is what I mainly use, so that is the context for this scenario. What I was doing was something like the following in the Python script:
import PIL
Then, I referred to PIL.Image when I needed it, and this could not be found when the script was run from the command line (BASH). The solution was to add something like the following:
from PIL import Image
That sorted it, and I must have run into trouble with PIL.ImageFilter too, since I now load it in the same manner. In both cases, I could just refer to Image or ImageFilter as required, without the dot syntax. However, you need to make sure that there is no clash with anything in another loaded Python package when doing this.
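By way of illustration, here is a minimal sketch of the kind of in-memory processing mentioned earlier; the file names and sizes are placeholders of my own:
from PIL import Image, ImageFilter
img = Image.open("photo.jpg")                  # hypothetical input file
resized = img.resize((800, 533))               # resize in memory
resized = resized.filter(ImageFilter.SHARPEN)  # optional sharpening pass
resized.save("photo_web.jpg")
thumb = img.copy()
thumb.thumbnail((200, 200))                    # in place, keeping the aspect ratio
thumb.save("photo_thumb.jpg")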
Upgrading Julia packages
23rd January 2024
Whenever a new version of Julia is released, I have certain actions to perform. With Julia 1.10, installing and updating it has become more automated thanks to shell scripting or the use of winget, depending on your operating system. Because my environment predates this, I found that the manual route still works best for me, and I will continue with that.
Returning to what needs doing after an update, this includes updating Julia packages. In the REPL, this involves dropping into the Pkg mode using the ] key if you want to avoid using longer commands or filling your history with what is less important for everyday usage.
Previously, I often ran code only to find that a package was missing after updating Julia, so the add command was needed to reinstate it. That may raise its head again, but there is also the up command for upgrading all installed packages. This can be a time saver, since only a single command covers all packages instead of one command for each.
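In the REPL, the whole routine amounts to very little typing; pressing ] switches the prompt into Pkg mode, where up upgrades everything and add reinstates anything that went missing:
julia> ]
(@v1.10) pkg> up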