Technology Tales

Adventures & experiences in contemporary technology

AttributeError: module 'PIL' has no attribute 'Image'

11th March 2024

One of my websites has an online photo gallery. This has been a long-term activity that has taken several forms over the years. Once HTML and JavaScript based, it was then powered by Perl before PHP and MySQL came along to take things from there.

While that remains how it works, the publishing side of things has used its own selection of mechanisms over the same time span. Perl and XML were the backbone until Python and Markdown took over. There was a time when ImageMagick and GraphicsMagick handled image processing, but Python now does that as well.

That was when the error message gracing the title of this post came to my notice. Everything worked well when executed in Spyder, but the message appeared when I tried running things using Python on the command line. PIL is the import name used by the Python 3 Pillow package, a fork of the original PIL library from the Python 2 days.

For me, Pillow loads, resizes and creates new images, which is handy for adding borders and copyright/source information to each image, as well as for creating thumbnails. All this happens in memory, and that makes everything go quickly, much faster than disk-based tools like ImageMagick and GraphicsMagick.

Of course, nothing is going to happen if the required module cannot be found, and that is what the error message is about. Linux is what I mainly use, so that is the context for this scenario. What I was doing was something like the following in the Python script:

import PIL

Then, I referred to PIL.Image when I needed it, and this could not be found when the script was run from the command line (Bash). The cause is that importing a package does not necessarily import its submodules, so PIL.Image is only available if something has imported it explicitly, which presumably had happened behind the scenes in Spyder. The solution was to add something like the following:

from PIL import Image

That sorted it, and I must have run into trouble with PIL.ImageFilter too, since I now load it in the same manner. In both cases, I could then refer to Image or ImageFilter as required, without the PIL prefix. However, you need to make sure that there is no clash with anything in another loaded Python package when doing this.
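
To make that more concrete, here is a minimal sketch of the kind of processing described above; the file names are hypothetical, and ImageOps is another Pillow module that I am bringing in here for the border, rather than something mentioned earlier:

from PIL import Image, ImageFilter, ImageOps
# Load the source image and resize it in place to fit within 800 x 800 pixels.
photo = Image.open("source.jpg")
photo.thumbnail((800, 800))
# Apply a slight sharpening filter after the resize.
photo = photo.filter(ImageFilter.SHARPEN)
# Add a plain white border to act as a frame.
framed = ImageOps.expand(photo, border=10, fill="white")
framed.save("framed.jpg")
# Make a thumbnail from a copy so that the framed image stays untouched.
thumb = framed.copy()
thumb.thumbnail((200, 200))
thumb.save("thumbnail.jpg")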

Building a sitemap in XML

24th November 2022

While there are many tools that will build XML sitemaps, there is some satisfaction to be had in creating your own. This is in spite of the multitude of search engine optimisation plugins for content management systems like WordPress, or what is built into static site generators like Hugo. Sometimes, building your own allows for added simplicity, something shared with my recent efforts in WordPress theme development.

The sitemap XML protocol is simple enough to offer a short coding project. The basis was what Hugo generates, and I used Python to create the XML files. The only libraries that I needed were configparser, SQLAlchemy and pandas: the first two allowed the databases to be queried, while the last was used for data processing. Otherwise, it was a case of using what is built into the Python language, like file writing and looping.
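
To give a flavour of the approach, here is a bare-bones sketch along those lines; the configuration file, table and column names are hypothetical stand-ins:

import configparser
import pandas as pd
from sqlalchemy import create_engine
# Read the database connection string from a configuration file.
config = configparser.ConfigParser()
config.read("sitemap.ini")
engine = create_engine(config["database"]["uri"])
# Query the page locations and modification dates.
pages = pd.read_sql("SELECT url, modified FROM pages", engine)
# Write out the sitemap using plain file writing and looping.
with open("sitemap.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n')
    f.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    for row in pages.itertuples():
        f.write("<url><loc>%s</loc><lastmod>%s</lastmod></url>\n" % (row.url, row.modified))
    f.write("</urlset>\n")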

Once the scripts were ready, they could be uploaded to web servers and executed by scheduled cron jobs to keep things up to date. Along the way, I also uncovered a way to publicise the locations of the sitemap files to search engine bots using robots.txt. The structure of the instruction is the following:

User-agent: *
Sitemap: sitemap.xml

This announces the location of the sitemap file to all bots. In my case, I always included the full URL for the XML file (along the lines of https://www.example.com/sitemap.xml), and that clearly varies by website location.

Installing Perl modules using CPAN on Linux Mint 19.2

28th September 2019

My online travel photo gallery is a self-coded set of PHP scripts that read data from tables in a MySQL database. These tables are built from input XML files using a Perl script that itself creates and executes an SQL script. The Perl script also does some image processing using GraphicsMagick commands to resize images and to add copyright information and image framing. Because this processed one image at a time sequentially, it was taking several minutes to complete and only partly used the capacity of my PC.

This led me to look at adding parallel processing, and that is what brought me to the Parallel::ForkManager Perl module. An alternative approach might have been to add new images in such a way as not to need a full run involving hundreds of image files, but that would take more work, and I fancied having a look at parallelising things anyway.

If it is not there already, the first act is to install build-essential so that the cpan command has the make utility and compilers that it needs. The following command accomplishes this:

sudo apt-get install build-essential

Once that is there, the cpan command needs to be run and some questions answered to get things going. The first question asks whether you want the setup to be as automated as possible, and the default answer of yes worked for me. The next question concerns the approach that cpan takes when installing modules; I chose sudo here (local::lib is the default value and manual is another option). After this, cpan drops into its own command shell. Here, I issued two more commands to continue the basic setup, updating CPAN.pm to the latest version and adding Bundle::CPAN to streamline the tool further:

install CPAN
install Bundle::CPAN

Completing the last of these may need extra intervention to confirm the suggested default of exit at one point in its operation, and it takes a little time to finish. It is after this that Parallel::ForkManager can be installed using the following command:

install Parallel::ForkManager

That completed quickly, and the cpan shell was exited using its exit command; the new module was then available for scripting. Its actual use is something that I hope to describe in another post, so I am ending this one here. The same process is just as applicable to setting up cpan and adding any other Perl module from CPAN.

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation applied to my web presences, guided by the results of GTmetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics, while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration, since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is an added sense of control: Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot, which avoids the expense of an SSL certificate provided through my shared hosting provider. A successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system, and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too, and should mean that I have fewer calls on technical support since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me, and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first was to make the MySQL instance accept calls from outside the server so that I could migrate data there from elsewhere; I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the following line that it contained and restarted the database service afterwards using another command to stop the appearance of any error 111 messages:

bind-address = 127.0.0.1
service mysql restart

After that, things worked as required and I moved on to another matter: uploading the requisite files. That meant installing an FTP server, so I chose proftpd since I knew it well from previous tinkering. Once that was in place, file transfer commenced.

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also enabled some Apache modules like mod_rewrite using the a2enmod command, restarting Apache each time I enabled another module.
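
For reference, enabling one module and restarting Apache goes something like the following, with mod_rewrite as the example:

sudo a2enmod rewrite
sudo service apache2 restart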

Then, I discovered that Textpattern needed php7.2-xml installed, so the following command was executed to do this:

apt install php7.2-xml

Then, the following line was uncommented in the correct php.ini configuration file, which I found using the same method as described already for my.cnf, and that was followed by yet another Apache restart:

extension=php_xmlrpc.dll

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on the Certbot website to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected, so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files, and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file, and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The general .htaccess file only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all these achieve is to redirect insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating along with any forms so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle regarding the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix even if it probably is not necessary:

define('FS_METHOD', 'direct');

There also was the non-operation of WP Cron, and that was addressed using WP-CLI and a script from Bjorn Johansen. To make doubly sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP Cron behaviour:

define('DISABLE_WP_CRON', true);
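
For anyone wondering what does the scheduling instead, the general shape, and roughly what the linked script arranges, is a cron job that asks WP-CLI to run any due events; the website path here is a hypothetical one:

*/5 * * * * cd /var/www/example && wp cron event run --due-now >/dev/null 2>&1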

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigations are needed, but what I have now is faster than what was there before, so the effort has been rewarded already.

Customising Nautilus (or Files) in Ubuntu GNOME 13.04

12th September 2013

The changes made to Nautilus, otherwise known as Files, in GNOME Shell 3.6 were contentious, and the response of the Linux Mint team was to create their own variant called Nemo from the previous version of the application. On the Cinnamon or MATE desktop environments, the then-latest version of GNOME’s file manager would have looked like a fish out of water without the application menu that it places in the top panel of the GNOME Shell desktop. It is possible to make a few modifications that help Nautilus look more at home on those Linux Mint desktops, and I have collected them here because they are useful for GNOME Shell users too. Here they are in turn.

Adding Application Menu entries to Location Options Menu

The Location Options menu is what you get on clicking the button with the cog icon on the right-hand side of the application’s location bar. Using Gsettings, it is possible to make that menu include the sort of entries that are in the application menu in the GNOME Shell panel at the top of the screen. These include an entry for closing the whole application as well as setting its preferences (or options). Running the following command does just that (if it does not work as it should, try changing the single and double quotes to those understood by a command shell):

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides '@a{sv} {"Gtk/ShellShowsAppMenu": <int32 0>}'

Adding in the Remove App Menu GNOME Shell extension will clean up the GNOME Shell a little by removing the application menu altogether. If, for some reason, you wish to restore the default behaviour, then the following command does the required reset:

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides '@a{sv} {}'

Stopping Hiding of the Application Title Bar When Maximised

By default, GNOME Shell hides the title bars of GNOME applications such as Nautilus on window maximisation, and this is how Nautilus now works by default. Changing the behaviour so that the title bar is kept on maximised windows can be as simple as adding the ignore_request_hide_titlebar extension. The trouble with GNOME Shell extensions is that they can stop working when a new version of GNOME Shell arrives, so there is another option: editing metacity-theme-3.xml in /usr/share/themes/Adwaita/metacity-1. The file can be opened with superuser privileges using the following command:

gksudo gedit /usr/share/themes/Adwaita/metacity-1/metacity-theme-3.xml

With the file open, it is a matter of replacing instances of ' has_title="false" ' with ' has_title="true" ', saving it and reloading GNOME Shell. This change may persist across different versions of GNOME Shell should the extension not do so.

Disabling Recursive Search

This discovery is what led me to bundle these customisations into a single entry here in the first place. In Nemo and older versions of Nautilus, just typing with the application open would lead you down a list towards the file that you wanted. From GNOME Shell 3.6, this behaviour was replaced by an automatic recursive search, where the search functionality extends beyond the folder that is open in the file manager to its subdirectories. To restrict searching to the open folder or directory, you need to install a patched version of Nautilus using the following commands:

sudo add-apt-repository ppa:dr3mro/personal
sudo apt-get update && sudo apt-get upgrade

The first of these adds a new repository with the patched version of Nautilus while the second combination installs the patched version. With that done, it is time to issue the following command:

gsettings set org.gnome.nautilus.preferences enable-recursive-search false

That sets the value of the new enable-recursive-search option to false, so searching stays within the open directory. The option also can be found using Dconf-Editor in the following hierarchy: org -> gnome -> nautilus -> preferences. The obsession of the GNOME project team with minimalism is robbing users of some options, and this would be a good one to have by default too. Maybe the others should be treated in the same way, even if you need to use Gsettings or Dconf-Editor to change them so as to avoid clutter. Having GNOME Tweak Tool able to set them all would be even better.

Getting an Epson Perfection 4490 Photo scanner going with Ubuntu GNOME Remix 12.10

7th March 2013

My Epson Perfection 4490 Photo scanner has been in my possession for a while now, and it is impossible to justify any replacement given that it works well and that digital photography has taken over from film for me. Every time I install an operating system afresh, I need to reinstate the scanner, and last year’s installation of Ubuntu GNOME Remix 12.10 only saw me do the deed recently. When I did so, it was brought back to me that I had never documented on here how this was done. Given that I sometimes use this place as a repository of things to which I need to refer again in the future, it seemed remiss of me, so here it is for you all.

Though I already had XSane and Simple Scan installed on the system, SANE was not on there, so I added it and a few other extras using the following command:

sudo apt-get install sane sane-utils libsane-extras

Then, it was onto the Epson website for their Perfection 4490 Photo Linux drivers, since SANE’s support for this scanner seemingly remains incomplete, even though the device pre-dates my move to Linux in 2007. Three files were needed, and the following commands install them (depending on when you do this, the file names may be different, so just change them to whatever they are for you; it can be done with a single command too, but there is not enough room for that here):

sudo dpkg -i iscan-data_1.22.0-1_all.deb
sudo dpkg -i iscan_2.29.1-5~usb0.1.ltdl7_i386.deb
sudo dpkg -i iscan-plugin-gt-x750_2.1.2-1_i386.deb

With those in place, there was one other task to do so that scanning could proceed without running the scanning software with sudo privileges. To free up access for a normal user account, I needed a HAL device information file. These normally live in /usr/share/hal/fdi/, but they get replaced on every installation, so any modifications that you make there are going to be lost. Therefore, there is no point modifying either /usr/share/hal/fdi/preprobe/10osvendor/20-libsane.fdi or /usr/share/hal/fdi/preprobe/10osvendor/20-libsane-extras.fdi, where scanner information usually is to be found.

The first task in creating an fdi file was to issue the lsusb command and look for a line corresponding to my scanner. This is the one that I got:

Bus 001 Device 004: ID 04b8:0119 Seiko Epson Corp. Perfection 4490 Photo

From this, I gleaned the manufacturer ID and model ID as 04b8 and 0119, respectively; these are needed later on. Next, I needed to create the hal/fdi/preprobe/ folder structure under /etc, since it was not already there. Then, I created epson4490photo.fdi in the bottom folder of the tree (/etc/hal/fdi/preprobe/epson4490photo.fdi) as follows:

cd /etc/hal/fdi/preprobe/ && sudo touch epson4490photo.fdi

Then, I edited the new file using the following command:

gksu gedit epson4490photo.fdi &

When open, I added in the following text:

<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="usb">
      <!-- Epson Perfection 4490 Photo -->
      <match key="usb.vendor_id" int="0x04b8">
        <match key="usb.product_id" int="0x0119">
          <append key="info.capabilities" type="strlist">scanner</append>
          <merge key="scanner.access_method" type="string">proprietary</merge>
        </match>
      </match>
    </match>
  </device>
</deviceinfo>

It is all in XML, so the place to look is immediately beneath the scanner name comment. The int attributes of the two match elements immediately following the comment line are populated using the information from the lsusb command output, with 0x prefixing both the manufacturer and model identifiers; the element whose key attribute is usb.vendor_id takes the former, and that with usb.product_id takes the latter. With epson4490photo.fdi saved, I rebooted the machine to restart HAL, and all was as I wanted, apart perhaps from XSane making complaints that seemed to be of no actual consequence. With Epson’s Image Scan! and Simple Scan on the PC, there is no need to be bothered by those messages. Choice is good when you have it, especially when you have expended some effort to get that far.

Suffering from neglect?

6th March 2009

There have been several recorded instances of Google acquiring something and then not developing it to its full potential, and FeedBurner is yet another acquisition where this sort of thing has been suspected. Changeovers by monolithic edict and lack of responsiveness on support fora are the sorts of things that breed resentment among those who share opinions on the web. Within the last month, I found that my FeedBurner feeds were not being updated as they should have been, and the service would not accept a new blog feed when I tried adding one. The result of both of these was that I deactivated the FeedBurner FeedSmith plugin to take FeedBurner out of the way for my feed subscribers; regulars on my hillwalking blog were greeted by a splurge of activity following something of a hiatus. There are alternatives such as RapidFeed and Pheedo, but I will stay away from the likes of these for a little while and take advantage of the newly added FeedStats plugin to keep tabs on how many come to see the feeds. The downside to this is that IE6 users will see the pure XML rather than a version with friendlier formatting.

java.net.MalformedURLException: unknown protocol: j

15th December 2007

I know that there are better things to call a blog post than part of an error message that I got from Saxonica’s Saxon while I was converting XML files into PHP equivalents for the visitor information section of my main website. I use the open-source Saxon-B rather than the commercial Saxon-SA, and it fulfils all of my needs; version 8 and later (it has now reached 9.0.0.2) handle the XSLT 2.0 features that I need to make the transformations really clever. Also, because Saxon is available as a jar file, it is cross-platform so long as you have Java on board. There are, however, some slight differences in behaviour: I now run the thing on Linux, and Windows-style file locations are not recognised there. I had a file path in a DTD declaration starting with "J:\", and that was taken to be a protocol like file, http, https, ftp and so on because of the colon. There is no j protocol, so Java gets confused and, voilà, you get the rather obscure error that titles this post. Otherwise, the migration of the Perl script that creates the XSLT files and fires off the required XML to PHP transformations was a fairly straightforward exercise once the file locations and the shebang line were set right.
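
By way of a made-up illustration of the trigger, a document type declaration along these lines is enough to produce the exception:

<!DOCTYPE gallery SYSTEM "J:\data\gallery.dtd">

Everything before the first colon gets parsed as a URL scheme, so a Unix-style path or a full file URL such as file:///home/user/data/gallery.dtd keeps things working.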

Exploring AJAX

7th June 2007

My online photo gallery started out simply as a set of interlinked HTML pages. Over time, I discovered frames (yes, them!) and started to make use of JavaScript to make the slideshows slicker. In those days, I was working off free webspace provided by my ISP, and client-side scripting was the only tool that I had for enhancing functionality. Having tired of the vagaries of client-side scripting (the browser wars were in full swing and incompatibilities reigned supreme), I went with paid hosting in order to get access to tools like Perl and PHP for server-side processing; their flexibility compared to JavaScript was a breath of fresh air to me, and I am still a fan of the server-side approach.

The journey that I have just described is one that I now know was followed by a lot of website builders around the same time. Nevertheless, I have still held onto JavaScript for some things, particularly for updating the DOM as part of making the pages more responsive to user interaction. In the last few years, a hybrid approach has been gaining currency: AJAX. This offers the ability to modify parts of a page without needing to reload the whole thing and that has generated a considerable amount of interest among web application developers.

The world of AJAX is evidently a complex one, though the underlying principle can be explained in simple terms. The essential idea is that you use JavaScript to call a server-side script (PHP is as good an example as any) that returns either text or XML, which can then be used to update part of a web page in situ without reloading the whole thing, as per the traditional way of working. It has opened up so many possibilities from the interface design point of view that AJAX became a hot topic that still receives much attention today. One bugbear is efficiency, because I have seen an AJAX application lock up a PC with a little help from IE6. There will always remain times where server-side processing is the best route, and that needs to be balanced against the client-side approach, and vice versa.

Like its forebear DHTML, AJAX is really a development approach that uses a number of different technologies in combination. The DHTML elements such as (X)HTML, CSS, the DOM and JavaScript are very much part of the AJAX world, but server-side elements such as HTTP, PHP, MySQL and XML are also part of the fabric of the landscape. In fact, while AJAX can use plain text as the transfer format, XML is the one implied by the AJAX acronym, and XSLT can be used to transform XML into HTML. However, AJAX is not limited to the aforementioned technologies; for instance, I cannot see why Perl cannot play a role in place of PHP, and ASP can be used for the same things too.

Even in these standards-compliant days, browser support for AJAX remains diverse, to say the least, and it is akin to having MSIE in one corner and the rest in the other. Mind you, Microsoft did introduce the underlying tools in the first place, but they used ActiveX, while Mozilla created a new object type, XMLHttpRequest, rather than continuing that method of operation. Given that ActiveX is a Windows-only technology, I can see why Mozilla did what they did, and it was a sensible decision. In fact, IE7 appears to have picked up the Mozilla way of doing things.

Even with this apparent convergence, there will continue to be a need for the AJAX JavaScript libraries that are currently out there; incidentally, Adobe has included one called Spry with Dreamweaver CS3. Nevertheless, I still like to find out how things work at the basic level and feel somewhat obstructed when I cannot do so. I remember perusing Wrox’s Professional AJAX and found the constant references to the associated function library rather grating; the writing style did not help either.

My taking a more granular approach has got me reading SAMS Teach Yourself AJAX in 10 Minutes as a means of getting my foot in the door. As with their Teach Yourself … in 24 Hours series, the title is a little misleading, since there are 22 lessons of 10 minutes in duration (the 24 Hours moniker refers to there being 24 lessons, each an hour in length). Anything composed of 10-minute lessons, even 22 of them, is never going to be comprehensive but, as a means of getting started, I have to say that the approach seems effective on the basis of this volume. It has certainly whetted my appetite for giving AJAX a go, and it will be interesting to see how things progress from here.

Photo gallery trouble

4th June 2007

The recent woes at Zooomr (mustn’t forget that it is spelt with three O’s…) have prompted me to ponder photo galleries. My own is a self-hosted affair, with Perl doing the honours of reading and processing data stored in an XML file. It may seem an unsophisticated system, but it has worked well and, apart from the matter of server administration, I am in full control. Yes, there is a development and maintenance overhead, but I enjoy programming and scripting anyway; I just need to find the time for it. If this is not your idea of fun, then using a service like Flickr, Zooomr or Photobucket is attractive, so long as things do not go awry as they have for Zooomr; all of the bad publicity and user frustration cannot have done Zooomr’s future prospects any good at all.
