Technology Tales

Adventures & experiences in contemporary technology

AttributeError: module 'PIL' has no attribute 'Image'

11th March 2024

One of my websites has an online photo gallery. This has been a long-term activity that has taken several forms over the years. Once based on HTML and JavaScript, it was powered by Perl for a while before PHP and MySQL came along to take things from there.

While that remains how it works, the publishing side of things has used its own selection of mechanisms over the same time span. Perl and XML were the backbone until Python and Markdown took over. There was a time when ImageMagick and GraphicsMagick handled image processing, but Python now does that as well.

That was when the error message gracing the title of this post came to my notice. Everything worked well when executed in Spyder, but the message appeared when I tried running things using Python on the command line. PIL is the abbreviated name for the Python 3 Pillow package; there was a package called PIL in the Python 2 days.

For me, Pillow loads, resizes and creates new images, which is handy for adding borders and copyright/source information to each image, as well as for creating thumbnails. All this happens in memory, which makes everything go quickly, much faster than disk-based tools like ImageMagick and GraphicsMagick.

Of course, nothing is going to happen if the package cannot be loaded, and that is what the error message is about. Linux is what I mainly use, so that is the context for this scenario. What I was doing was something like the following in the Python script:

import PIL

Then, I referred to PIL.Image when I needed it, and this could not be found when the script was run from the command line (BASH). The solution was to add something like the following:

from PIL import Image

That sorted it, and I must have run into trouble with PIL.ImageFilter too, since I now load it in the same manner. In both cases, I could then refer to Image or ImageFilter as required, without the dot syntax. However, you need to make sure that there is no clash with anything in another loaded Python package when doing this.
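
To show how the corrected imports get used, here is a minimal sketch that opens an image, softens it and saves a thumbnail; the file names are hypothetical ones for illustration:

from PIL import Image, ImageFilter

# Hypothetical file names, purely for illustration.
im = Image.open("photo.jpg")
im = im.filter(ImageFilter.SMOOTH)
im.thumbnail((400, 400))
im.save("photo-thumb.jpg")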

Using associative arrays in scripting for BASH version 4 and above

29th April 2023

Associative arrays get called different names in different computing languages: dictionaries, hash tables and so on. What they hold in common is that they essentially are lists of key-value pairs. In the case of BASH, you need at least version 4 to make use of this facility. In Linux Mint, I get 5.1.16, but macOS users apparently are still on BASH 3, so this post may not help them.

To declare an associative array in a later version of BASH, the following command gets issued:

declare -A hashtable

The code to add a key-value pair then takes the following form:

hashtable[key1]=value1

Several values can be added to an empty array like this:

hashtable=( ["key1"]="value1" ["key2"]="value2" )

Declaration and instantiation of an associative array can be done on the same line as follows:

declare -A hashtable=( ["key1"]="value1" ["key2"]="value2" )

Handily, it is possible to loop through the entries in an associative array, and this can be done for keys and for values once you expand out the appropriate list. The following expands a list of values:

"${hashtable[@]}"

Expanding a list of keys needs something like this:

"${!hashtable[@]}"

Looping through a list of values needs something like the following:

for val in "${hashtable[@]}"; do echo "$val"; done;

The above has been placed on a single line with semicolon delimiters for brevity, but it can be spread over several lines, dropping the semicolons and indenting for added clarity. It is also possible to loop through a list of keys in a similar way:

for key in "${!hashtable[@]}"; do echo "key: $key, value ${hashtable[$key]}"; done;

For the example associative array declared earlier, the last line produces this output, resolving the value using the supplied key:

key: key2, value value2
key: key1, value value1

All of this found a use in a script that I created for adding new Markdown files to a Hugo instance, because I had more than one shortcode to apply on account of having more than one content directory in use.
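
As a minimal illustration of that kind of lookup, with hypothetical shortcode and directory names standing in for the real ones, consider the following:

declare -A dirs=( ["@posts@"]="content/posts" ["@travel@"]="content/travel" )

shortcode="@travel@"
echo "New file goes in: ${dirs[$shortcode]}"

For these hypothetical names, the last line prints this:

New file goes in: content/travel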

String replacement in BASH scripting

28th April 2023

During the creation of new posts for a Hugo-deployed website, I found myself using the same directories again and again. Since I invariably ended up making typing mistakes when I did so, I fancied the idea of using shortcodes instead.

Because I wanted to turn each shortcode into the actual directory name, I chose text replacement in BASH scripting. Thankfully, this is simple and avoids the use of regular expressions, which can bring their own problems. The essential syntax is as follows:

variable="${variable/search text/replacement}"

For the variable, the search text is substituted with the replacement very easily. It is even possible to include the search and replacement text in variables. In the example below, this is achieved using variables called original and replacement.

variable="${variable/$original/$replacement}"

Doing this got me my translatable shortcodes and converted them into actual directory names for the hugo command to process. There may be other uses yet.

Accessing Julia REPL command history

4th October 2022

In the BASH shell used on Linux and UNIX, the history command calls up a list of recently used commands and has many uses. There is a .bash_history file in the root of the user folder that logs and provides all this information, so there are times when you need to exclude some commands from there, but that is another story.

The Julia REPL environment works similarly to many operating system command-line interfaces, so I wondered if there was a way to recall or refer to the history of commands issued. So far, I have not come across an equivalent to the BASH history command for the REPL itself, but the command history is retained in a file like .bash_history. The location varies between operating systems: on Linux, it is ~/.julia/logs/repl_history.jl, while it is %USERPROFILE%\.julia\logs\repl_history.jl on Windows.

While I tend to use scripts that I have written in VSCode rather than entering pieces of code in the REPL, the history retains its uses, so I am sharing it here for others. The location changed in the past, but these are the ones for Julia 1.8.2, the version that I have at the time of writing.
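
Since it is a plain text file, the usual command-line tools can inspect it; for example, the following shows the most recent entries on Linux:

tail -n 40 ~/.julia/logs/repl_history.jl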

Changing the Ansible Vault editor from Vi to Nano

15th August 2022

Recently, I got to experimenting with Ansible after reading about the orchestration tool in a copy of Admin magazine. It came in handy for updating a few web servers that I have, as well as for updating my main Linux workstation. For the former, automated entry of SSH passwords sufficed, but the same did not apply to sudo usage on my local machine. This meant that I needed to use Ansible Vault to store the administrator password, and doing so opened up a file in the Vi editor. Since I am not familiar with Vi and wanted to get things sorted quickly, I fancied using something more user-friendly like Nano.

Doing this meant adding the following line to .bashrc:

export EDITOR=nano

Saving and closing the file followed by reloading the session set me up for what was needed.
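
If you would rather not change the editor globally, the variable can be set for a single command instead; the vault file name below is a hypothetical one:

EDITOR=nano ansible-vault edit vault.yml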

Limiting Google Drive upload & synchronisation speeds using Trickle

9th October 2021

Having had a mishap that lost me some photos in the early days of my dalliance with digital photography, I have been far more careful since then, and that now applies to other files as well. Doing regular backups is a must that you find reiterated by many different authors, and the current computing climate makes doing so more vital than it ever was.

So, as well as having various local backups, I also have remote ones in the form of OneDrive, Dropbox and Google Drive. These are, more correctly, file synchronisation services, but disciplined use can make them useful as additional storage facilities in the interests of maintaining added resilience. There are also dedicated backup services that I have seen reviewed in the likes of PC Pro magazine, but I have yet to make use of those.

Insync

Part of my process for dealing with new digital photo files is to back them up to Google Drive. I did that with a Windows client in the early days, but then moved to Insync running on Linux Mint. One drawback to the approach is that it hogs the upload bandwidth of an internet connection that has yet to move from copper to fibre cabling. Having a fibre connection to a local cabinet helps, but a 100 KiB/s upload speed is easily overwhelmed, and digital photo file sizes keep increasing. It does not help that I insist on using more flexible raw formats like DNG, CR2 or CR3 either.

Making fewer images could help to cut the load, but I still come away from an excursion with many files because I get so besotted with my surroundings. This means that upload sessions take numerous hours and can extend across calendar days. Ultimately, this makes my internet connection far less usable, so I wanted to throttle the upload speed, much like what is possible in the Transmission BitTorrent client or in the Dropbox client. Unfortunately, this is not available in Insync, so I have tried using the trickle command instead; an example is below:

trickle -d 2000 -u 50 insync

Here, the upload speed is limited to 50 KiB/s, while the download speed is limited to 2000 KiB/s. In my case, the latter hardly matters, while the former leaves me with acceptable internet usability. Insync does not work smoothly with this, however, so occasional restarts are needed to keep file uploads progressing, and CPU load is also higher. As rough as the user experience feels, uploads can continue in parallel with other work.

gdrive

One other option that I am exploring is the command-line tool gdrive, which appears to work well with trickle. After downloading and installing the tool, getting going is a matter of issuing the following command and following the instructions:

gdrive about

On web servers, I even have the tool backing things up to Google Drive on a scheduled basis. Because of a Google Drive limitation that I have encountered not only with gdrive but also with Insync and Google's own Windows client, synchronisation can only happen between two new folders, one local and the other remote. Handily, gdrive supports the usual bash-style commands for working with remote directories, so something like the following will create a directory on Google Drive:

gdrive mkdir ttdc [ID for parent folder]

Here, the ID for the parent folder may be omitted, but it can be obtained by going to Google Drive online and getting a link location by right-clicking on a folder and choosing the appropriate context menu item. That gets you something like the following, and the required identifier is found between the last slash and the first question mark in the address string (so as not to share any real links, I have generalised the address below):

https://drive.google.com/drive/folders/[remote folder ID]?usp=sharing

Then, synchronisation uses a command like the following:

gdrive sync upload [local folder or file path] [remote folder ID]

There also is the option to do a one-way upload and this is the form of the command used:

gdrive upload [local folder or file path] -p [remote folder ID]

Because every file or folder object has its own ID on Google Drive, it is possible to create two objects there that appear to have the same name, though that is sure to cause confusion even if you know what is happening. Each of the above can be throttled using trickle as well:

trickle -d 2000 -u 50 gdrive sync upload [local folder or file path] [remote folder ID]
trickle -d 2000 -u 50 gdrive upload [local folder or file path] -p [remote folder ID]

Handily, this works without the added drama seen with Insync and lends itself to scripting too, so it could be something that I will incorporate into my current workflow. One thing that needs to be watched is file upload failures, but there may be ways to catch those and retry them, so that would be another thing that needs doing. This is built into Insync, and it would be a learning opportunity if I were to stick with gdrive instead.
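
As a rough sketch of what catching and retrying failures might look like, with a placeholder local path and remote folder ID, something like this could serve:

# Placeholder path and folder ID; retry a throttled upload a few times.
for attempt in 1 2 3; do
    if trickle -d 2000 -u 50 gdrive sync upload "$HOME/Photos" "remoteFolderId"; then
        break
    fi
    echo "Attempt $attempt failed; waiting before retrying..." >&2
    sleep 60
done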

Reloading .bashrc within a BASH terminal session

3rd July 2016

BASH is a command-line interpreter that is commonly used by Linux and UNIX operating systems. Chances are that you will find yourself in a BASH session if you start up a terminal emulator in many of these, though there are others like KSH and ZSH too.

BASH comes with its own configuration files and one of these is located in your own home directory, .bashrc. Among other things, it can become a place to store command shortcuts or aliases. Here is an example:

alias us='sudo apt-get update && sudo apt-get upgrade'

Such a definition needs there to be no spaces around the equals sign, and the actual command must be declared in single quotes; doing anything other than this will not work, as I have found. Also, there are times when you want to update or add one of these and use it without shutting down a terminal emulator and restarting it.

To reload the .bashrc file to use the updates contained in there, one of the following commands can be issued:

source ~/.bashrc

. ~/.bashrc

Both will read the file and execute its contents, making the updates available so you can continue what you were doing. There appears to be a tendency for this kind of thing in the world of Linux and UNIX, because it also applies to remounting drives after a change to /etc/fstab and to restarting system services like Apache, MySQL or Nginx. The command for the former is below:

sudo mount -a
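
For the latter, restarting a service on a systemd-based distribution looks like the following; the unit name varies between distributions, and older systems use the service command instead:

sudo systemctl restart apache2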

Often, the means of applying the sorts of in-situ changes that you make are simple ones too, and anything that avoids system reboots has to be good, since you get fewer interruptions to your work.

Compressing a VirtualBox VDI file for a Windows guest running on a Linux Host

11th February 2016

Recently, I had a situation where the VDI files for my Windows 10 virtual machine expanded in size all of a sudden, and I needed to reduce them. My downloading maps for use with RouteBuddy may have been the cause, so I moved the ISO installation files onto the underlying Linux Mint drives. With that space freed, I set to uncovering how to compact the virtual disk file, and the Sysinternals sdelete tool was recommended for clearing unused space. After downloading it, I set it to work from its own directory in a PowerShell session running on the guest operating system, using the following command:

.\sdelete -z [drive letter designation; E: is an example]

From the command prompt, the following should do:

sdelete -z [drive letter designation; E: is an example]

Once that had completed, I shut down the VM and executed a command like the following from a bash terminal session:

VBoxManage modifyhd [file location/file name].vdi --compact

Where there was space to release, the VDI files were reduced in size to return more disk space. More could be done, so I will look into the Windows 10 drives to see what else needs to be moved out of them.
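
As an aside, newer VirtualBox releases favour the modifymedium subcommand for this job, though modifyhd has been retained for compatibility; the equivalent command takes this form:

VBoxManage modifymedium disk [file location/file name].vdi --compact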

Copying only new or updated files by command line in Linux or Windows

2nd August 2014

With a growing collection of photographic images, I often find myself making backups of files using copy commands, and the data volumes are such that I do not want to keep copying the same files over and over again, so incremental file transfers are what I need. Thus, commands like the following often get issued from a Linux command line:

cp -pruv [source] [destination]

Because this is in Linux, it is the bash shell that I use, so the switches may not apply to other shells like zsh, fish or ksh. In my case, the p switch preserves file properties such as time and date, something that the cp command does not always do, so it needs adding. The r switch is useful because the copy is then recursive, so only a directory needs to be specified as the source, while the destination needs to be one level up from a folder of the same name there, so as to avoid file duplication. It is the u switch that makes the file copy incremental, and the v one issues messages to the shell that show how the copying is going. Seeing a file name issued by the latter tells you how much more needs to be copied and that the files are going where they should.
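
As a worked example with hypothetical paths, the following copies a year's worth of images into a backup location, the destination being the parent of the folder that gets created or updated there:

cp -pruv ~/Pictures/2014 /media/backup/Pictures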

What inspired this post, though, is my need to do the same in a Windows session, where issuing xcopy commands will achieve the same end. Here are two that will do the needful:

xcopy [source] [destination] /d /s

xcopy [source] [destination] /d /e

In both cases, it is the d switch that ensures that the copy is incremental, and you can add a date too, with a colon between it and the /d, if you see fit. The s switch copies only directories that contain files, while the e one copies even empty directories. Using the d switch without either of those did not trigger any copying action when I tried, so I reckon that you cannot do without one of them. By default, both of these commands issue output to the command line so that you can keep an eye on what is happening, which is especially useful when ensuring that files are going to the right destination, because the behaviour differs from that of the bash shell in Linux.
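
Again with hypothetical paths, a dated incremental copy that includes empty directories would look like the following, the date taking the month-day-year form:

xcopy C:\Pictures D:\Backup\Pictures /d:08-01-2014 /e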

Turning on autocompletion for the bash shell in terminal sessions

26th June 2013

At some point, I managed to lose the ability to use tab-key-based autocompletion in terminal sessions on my Ubuntu GNOME machine. Wanting it back had me turn to the web for an answer, and I found it on a Linux Mint forum; the bash shell is so pervasive in the UNIX and Linux worlds that you can look anywhere for a fix like this.

The problem centred on the .bashrc file in my home area. It does have quite a few handy custom aliases, and I must have done a foolish spring clean on the file at some time. That is the only way that I can explain how the following lines got removed:

if [ -f /etc/bash_completion ]; then
    . /etc/bash_completion
fi

What they do is look to see if /etc/bash_completion can be found on your system and use it for tab-based autocompletion. With the lines not in .bashrc, that could not happen. Others may replace bash_completion with bash.bashrc to get a fuller complement of features, but I'll stick with what I have for now.
