Technology Tales

Adventures & experiences in contemporary technology

Catching keyboard interruptions in a Python script for a more orderly exit

17th April 2024

A while back, I was using a Python script to watch a folder and process any photo that was added to it. Eventually, I ended up with a few of these scripts because I could not work out a way to watch multiple folders from the same one.

In each of them, though, I needed a tidy way to stop a running script in place of the traceback that Python spills out when you interrupt it. Because I knew what was happening anyway, I wanted the script to terminate quietly, so I set about finding a way to achieve this.

What came up was something like the code that you see below. While I naturally made some customisations, I kept the essential construct for capturing keyboard interruptions, like pressing CTRL + C in a Linux command line interface.

import os
import sys

# main() is assumed to be defined elsewhere in the script

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print('Interrupted')
        try:
            sys.exit(130)
        except SystemExit:
            os._exit(130)

What is happening above is that the interruption is caught using nested try/except blocks. The outer block catches the KeyboardInterrupt exception, while the inner one works through different ways of terminating the script. The first termination call comes from the sys module and the second from the os module, so both need to be imported at the start of the script, as shown in the snippet above.

Of the two, sys.exit is preferable because it allows clean-up activities to run, while os._exit does not, which might explain the “_” prefix on the latter. In fact, calling sys.exit is not that different from issuing raise SystemExit, because that is what lies underneath it. Here, os._exit is the fallback for when sys.exit fails, and it is not all that desirable given the abrupt termination that it causes.

The exit code of 130 is fed to both, since that is the conventional status (128 plus 2, the signal number of SIGINT) when a process is terminated with CTRL + C on Linux anyway. Using 0 would remove any means of detecting what has happened if you have downstream processing. In my use case, everything was standalone, so that did not matter so much.
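To see the effect from the shell, the exit status can be inspected after interrupting the script; the script name below is only a placeholder for whatever is being run.

python3 watcher.py    # placeholder name for a script using the construct above
# press CTRL + C while it is running, then check the status
echo $?               # prints 130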

Automating writing using R and Claude

16th April 2024

Automation of writing using AI has become prominent recently, especially since GPT came to everyone’s notice. This goes beyond automated proofreading to producing the content itself, as Mark Hinkle and Luke Matthews can testify. Figuring out how to use generative AI takes more than one-line prompts, so knowing which multi-line ones will work is what is earning six-figure annual salaries for some.

Recently, I gave this a go when writing a post that used content from a Reddit post thread. The first step was to extract the content from the thread, and I found that I could use R to do this. That meant installing the RedditExtractoR package using the following command:

install.packages("RedditExtractoR")

Then, I created a short script containing the following lines of code with placeholders added in place of the actual locations:

library("RedditExtractoR")

write.csv(get_thread_content("<URL for Reddit post thread>"), "<location of text file>")

The first line above loads the RedditExtractoR package so that its get_thread_content function can be used to scrape the thread’s textual content. The result is then fed to write.csv, which writes out a CSV file with the content.
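As an aside, if the version of RedditExtractoR in use returns a list with separate threads and comments data frames (something that may vary between releases, so treat this as an assumption), the comments could be written out on their own:

library("RedditExtractoR")

thread <- get_thread_content("<URL for Reddit post thread>")
write.csv(thread$comments, "<location of text file>")   # comments component assumed to exist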

Once I had the text file, I could upload it to Anthropic’s Claude for the next steps. Firstly, I got it to give me a summary of the thread discussion before I asked it to give me the suggested solutions to the issue. Impressively, it capably provided me with the latter.

Now armed with these answers, I set to crafting the post from them. Claude did not do all the work for me, but it certainly helped with the writing. This is something that I fancy exploring further, especially given how business computing is likely to proceed in the next few years.

Using a BASH command to count the files in a directory

12th March 2024

As part of my backup workflow, I maintain a machine running OpenMediaVault that I only power up when backups are to be performed. Typically, this happens when I have new photography images to load. I also have a NAS that acts as an online backup system, and the OpenMediaVault machine is a near-offline counterpart to it for added safety.

Recently, I needed to check the number of image files in a directory from an SSH session because I had to create a new repository for 2024. Some files from this year had ended up in the 2023 one, and I needed to be sure that nothing from last year ended up in the 2024 folder, or vice versa. Getting a file count from a trusted source was a quick way of doing exactly this.

Because doing this on the NAS was clumsy, I used the OpenMediaVault machine instead. While I could have mounted drives on an interim basis, it was quicker to work from a BASH session. The trick was to use the wc command to count the lines output by an invocation of the ls command. An example follows:

ls -l | wc -l

The -l (as in l for Lima) switch makes wc count lines, while the counterpart (same letter) for ls makes it list the contents in long form, one item per line. Thus, counting the lines gets you the number of entries, though it is worth remembering that the long listing begins with a “total” summary line, so the figure is one higher than the actual count. The call to the ls command can be customised to add other things like dot files (with the -a or -A switches), but the above was enough for my purposes. When the file counts in both 2023 directories matched, I was satisfied that all was in order.
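If the extra “total” line is a nuisance, a variation that avoids it is to ask ls for a bare one-entry-per-line listing instead; this is just a sketch of an alternative rather than part of the original workflow.

# one name per line with no "total" header; -A also includes dot files, excluding . and ..
ls -1A | wc -l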

AttributeError: module ‘PIL’ has no attribute ‘Image’

11th March 2024

One of my websites has an online photo gallery. This has been a long-term activity that has taken several forms over the years. Once HTML and JavaScript based, it was then powered by Perl before PHP and MySQL came along to take things from there.

While that remains how it works, the publishing side of things has used its own selection of mechanisms over the same time span. Perl and XML were the backbone until Python and Markdown took over. There was a time when ImageMagick and GraphicsMagick handled image processing, but Python now does that as well.

That was when the error message gracing the title of this post came to my notice. Everything was working well when executed in Spyder, but the message appeared when I tried running things using Python on the command line. PIL is the abbreviated name for the Python 3 Pillow package; there was a package actually called PIL in the Python 2 days.

For me, pillow loads, resizes and creates new images, which is handy for adding borders and copyright/source information to each image as well as creating thumbnails. All this happens in memory and that makes everything go quickly, much faster than disk-based tools like ImageMagick and GraphicsMagick.

Of course, nothing is going to happen if the package cannot be loaded, and that is what the error message is about. Linux is what I mainly use, so that is the context for this scenario. What I was doing was something like the following in the Python script:

import PIL

Then, I referred to PIL.Image when I needed it, and this could not be found when the script was run from the command line (BASH). The solution was to add something like the following:

from PIL import Image

That sorted it, and I must have run into trouble with PIL.ImageFilter too, since I now load it in the same manner. In both cases, I could then refer to Image or ImageFilter as required, without the dotted prefix. However, you need to make sure that there is no clash with anything in another loaded Python package when doing this.
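As an illustration of that import style, a minimal sketch along the following lines should work; the file names, blur radius and thumbnail size are placeholders rather than anything from the actual publishing script.

from PIL import Image, ImageFilter

# open an image, soften it slightly and write out a thumbnail copy
img = Image.open("input.jpg")
img = img.filter(ImageFilter.GaussianBlur(radius=1))
img.thumbnail((400, 400))   # resizes in place, preserving the aspect ratio
img.save("thumbnail.jpg")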

Upgrading Julia packages

23rd January 2024

Whenever a new version of Julia is released, I have certain actions to perform. With Julia 1.10, installing and updating it has become more automated thanks to shell scripting or the use of winget, depending on your operating system. Because my environment predates this, I found that the manual route still works best for me, so I will continue with that.

Returning to what needs doing after an update, this includes updating Julia packages. In the REPL, this involves dropping into the Pkg subshell using the “]” key if you want to avoid typing longer commands or filling your history with what is less important for everyday usage.

Previously, I often ran code only to find that a package was missing after updating Julia, so the add command was needed to reinstate it. That may raise its head again, but there is also the up command for upgrading all installed packages. This can be a time saver, since only a single command is needed for everything rather than one command per package.
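As a rough sketch, the interactive route looks something like the following after pressing “]” at the julia> prompt; DataFrames merely stands in for whatever package needs to be reinstated.

(@v1.10) pkg> up
(@v1.10) pkg> add DataFrames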

Using associative arrays in scripting for BASH version 4 and above

29th April 2023

Associative arrays get called different names in different computing languages: dictionaries, hash tables and so on. What they have in common is that they essentially are lists of key-value pairs. In the case of BASH, you need at least version 4 to make use of this facility. In Linux Mint, I get 5.1.16, but macOS users apparently are still on BASH 3, so this post may not help them.

To declare an associative array in a later version of BASH, the following command gets issued:

declare -A hashtable

The code to add a key-value pair then takes the following form:

hashtable[key1]=value1

Several values can be added to an empty array like this:

hashtable=( ["key1"]="value1" ["key2"]="value2" )

Declaration and instantiation of an associative array can be done on the same line as follows:

declare -A hashtable=( ["key1"]="value1" ["key2"]="value2" )

Handily, it is possible to loop through the entries in an associative array, either by keys or by values, once you expand out the appropriate list. The following expands the list of values:

"${hashtable[@]}"

Expanding the list of keys needs something like this:

"${!hashtable[@]}"

Looping through a list of values needs something like the following:

for val in "${hashtable[@]}"; do echo "$val"; done;

The above has been placed on a single line with semicolon delimiters for brevity, but this can be put on several lines with no semicolons for added clarity as long as correct indentation is followed. It is also possible to similarly loop through a list of keys:

for key in "${!hashtable[@]}"; do echo "key: $key, value ${hashtable[$key]}"; done;

For the example associative array declared earlier, the last line produces this output, resolving the value using the supplied key:

key: key2, value value2
key: key1, value value1

All of this found a use in a script that I created for adding new Markdown files to a Hugo instance: because I have more than one content directory in use, there was more than one shortcode to handle, and an associative array was a neat way to map them. A sketch of the idea follows.
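The following is only a rough sketch with assumed shortcodes and directory names, not the actual script, but it shows how an associative array can translate a shortcode supplied on the command line into a Hugo content directory:

#!/bin/bash

# map shortcodes to assumed Hugo content directories
declare -A dirs=( ["tr"]="content/travel" ["ph"]="content/photography" )

shortcode="$1"
target="${dirs[$shortcode]}"

if [ -z "$target" ]; then
    echo "Unknown shortcode: $shortcode"
    exit 1
fi

echo "New Markdown files for this shortcode go under: $target"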

String replacement in BASH scripting

28th April 2023

When creating new posts for a Hugo-deployed website, I found myself typing the same directory names again and again. Since I invariably made typing mistakes when I did so, I fancied the idea of using shortcodes instead.

Because I wanted to convert a shortcode into the actual directory name, I chose text replacement in BASH scripting. Thankfully, this is simple and avoids the use of regular expressions, which can bring their own problems. The essential syntax is as follows:

variable="${variable/search text/replacement}"

Within the variable, the search text is substituted with the replacement very easily. It is even possible to hold the search and replacement text in variables; in the example below, this is achieved using variables called original and replacement.

variable="${variable/$original/$replacement}"

Doing this gave me my translatable shortcodes, converting them into actual directory names for the hugo command to process. There may be other uses for it yet.
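As a worked sketch with made-up names rather than the real directories, the following shows the variable form in action; note too that a single slash replaces only the first match, while a double slash (${variable//search/replacement}) replaces every occurrence.

#!/bin/bash

original="tr"
replacement="content/travel"

post_path="tr/2024/new-post.md"
post_path="${post_path/$original/$replacement}"

echo "$post_path"   # prints content/travel/2024/new-post.md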

Dealing with Error 1064 in MySQL queries

27th April 2023

Recently, I was querying a MySQL database table and got a response like the following:

ERROR 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use

The cause was that one of the table’s columns was named using the MySQL reserved word key. While best practice is not to use reserved words like this at all, this was not something that I could alter. Therefore, I enclosed the offending name in backticks (`) to make the query work.
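For illustration, a query along these lines (with a hypothetical table name) fails without the backticks but works once they are added:

-- fails with Error 1064 because key is a reserved word
SELECT key FROM photo_metadata;

-- works once the column name is quoted with backticks
SELECT `key` FROM photo_metadata;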

There is more information in the MySQL documentation on Schema Object Names and on Keywords and Reserved Words for dealing with or avoiding this kind of situation. While I realise that things change over time and that every implementation of SQL is a little different, it does look as if avoiding established keywords should be a minimum expectation when any database tables get created.

Another way to supply the terminal output of one BASH command to another

26th April 2023

My usual way of sending the output of one command to another is to place one command after the other, separated by the pipe (|) operator, adjusting the second command as needed. However, I recently found that this approach does not work well for docker pull commands, and I then uncovered another option.

That is to enclose the first command in $( ) within the second one. Within the parentheses, any kind of command can appear, including pipelines. As long as text is being printed to standard output, it can be fed to the outer command and used as required. Thus, you can have something like the following:

docker pull $([command outputting name of image to download])

Because it is so general, this approach has helped with other kinds of automation around docker image and container use and deployment. There may yet be other uses found for it.
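For a concrete, if made-up, illustration, the image name could be kept in a small text file and fed to docker pull like this; both file names here are hypothetical.

# image-name.txt is assumed to hold a single image reference such as nginx:latest
docker pull "$(cat image-name.txt)"

# pipelines work inside the substitution too, such as picking the first
# "image:" entry from an assumed manifest file
docker pull "$(grep -m 1 '^image:' manifest.txt | awk '{print $2}')"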

Changing the number of lines produced by the tail command in a Linux or UNIX session

25th April 2023

Since I often use the tail command to look at the end of a log file and occasionally in combination with the watch command for constant updates, I got to wondering if the number of lines issued by the tail command could be changed. That is a simple thing to do with the -n switch. All you need is something like the following:

tail -n 20 logfile.log

Here, the value of 20 is the number of lines produced instead of the default of 10, and logfile.log gets replaced by the path and name of whatever you are examining. One thing to check is whether your terminal emulator can show all the lines being displayed; if you find that you are not seeing as many lines as you expect, that might be the cause.
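Since the watch command was mentioned earlier for constant updates, a sketch of that combination follows; the interval and file name are just examples.

# refresh the last 20 lines of the log every five seconds
watch -n 5 tail -n 20 logfile.log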

While you could find this by looking through the documentation, things do not always register with you during dry reading of something laden with lists of parameters or switches. That is an affliction with tools that do a lot and/or allow a lot of customisation.
