TOPIC: SCRIPTING LANGUAGES
Harnessing the power of ImageMagick
26th October 2008
Using the command line to process images might sound senseless, but the tools offered by ImageMagick certainly prove that it has its place. I have always been wary of using bulk processing for my digital photo files (some digitised from film prints with a scanner), but I do agree that some of it is needed to free up time for other, more necessary things. With this in mind, it is encouraging to see the results from ImageMagick, and I can see it making a major difference to how I maintain my online photo gallery.
For instance, making thumbnail images for the gallery certainly seems to be one of those operations where command line bulk processing comes into its own, and ImageMagick's own convert command is heaven-sent for this one. For resizing images, all that's needed is the following:
convert -resize 40% input.jpg output.jpg
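Wrap that in a loop and a whole folder can be thumbnailed in one go; here is a minimal sketch, assuming JPEG files and a pre-existing thumbs subdirectory:
# resize every JPEG in the current directory to 40%
# and write the result into the thumbs subdirectory
for f in *.jpg
do
convert -resize 40% "$f" "thumbs/$f"
done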
Add a spot of further shell scripting and even a dash of Perl and the possibilities for this sort of thing become clearer, and this is but the tip of the proverbial iceberg. The -rotate switch will do what the name suggests, while there is a whole plethora of other options on tap. So long as you have Ghostscript on your system, conversion of graphics to PostScript (and Encapsulated PostScript too) and PDF files is possible, with the -page option controlling the margin around the image itself in the resulting outputs. Unfortunately, portrait is the sole orientation on offer, yet a bit of judicious post-processing will turn things around. Here's a command that'll do the trick:
convert -page 792x612+72+72 input.png ps2:output.ps
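The -rotate switch mentioned above is just as terse; a minimal sketch, with the angle chosen arbitrarily:
convert -rotate 90 input.jpg output.jpg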
For retrieving image metadata, like its resolution and size, the identify command comes into play. The -verbose option invokes the output of all manner of image metadata, so using grep or egrep is perhaps advisable, especially for bulk processing with the likes of Perl. Having the ability to stream image metadata makes loading databases like MySQL less of a chore than the manual data entry that has been my way of doing things until now.
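For instance, a sketch like the one below pulls out just the lines of interest; the exact field names can vary between ImageMagick versions, so treat them as assumptions:
# keep only the size and resolution lines from the verbose output
identify -verbose input.jpg | egrep 'Geometry|Resolution|Filesize'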
JavaScript: write it yourself or use a library?
3rd July 2008
I must admit that I have never been a great fan of JavaScript. For one thing, its need to interact with browser objects places you at the mercy of the purveyors of such pieces of software. Debugging is another fine art that can seem opaque to the uninitiated, since the amount and quality of the logging is determined by an interpreter not provided by the language's overseers. All in all, it seems to present a steep and obstacle-strewn learning curve to newcomers. As it happens, I have always found server-side scripting languages like PHP and Perl to be more to my taste, and I have no aversion at all to writing SQL.
In the late 1990s, when I was still using free web hosting, JavaScript probably was the best option for my then new online photo gallery. Whatever the truth, it certainly was the way that I went. While learning Java or Flash might have been useful, I never managed to devote sufficient time to the task, so JavaScript turned out to be the way forward until I got a taste of server-side scripting. Moving to paid hosting allowed that to develop, and the JavaScript option took a back seat.
Based on my experience of the browser wars and working with JavaScript throughout their existence, I was more than a little surprised at the buzz surrounding AJAX. Ploughing part of the way through WROX's Beginning AJAX did nothing to sell the technology to me; it came across as a very dry, jargon-blighted read. Nevertheless, I do see the advantages of web applications being as responsive as their desktop equivalents, but AJAX doesn't always guarantee this; as someone who has seen such applications crawling on IE6, I can certainly vouch for that. In fact, I suspect that this may be behind the appearance of technologies such as AIR and Silverlight, so JavaScript may get usurped yet again, just like my move to a photo gallery powered on the server side.
Even with these concerns, using JavaScript to add a spot more interactivity is never a bad thing even if it can be overdone, hence the speed problems that I have witnessed. In fact, I have been known to use DOM scripting, but I need to have the use in mind before I can experiment with a technology; I cannot do it the other way around. Nevertheless, I am keen to see what JavaScript libraries such as jQuery and Prototype might have to offer (both have been used in WordPress). Since I have happened on their respective websites, they might make good places to start, and who knows where my curiosity might take me?
Automating FTP II: Windows
15th April 2008
Having thought about automating command line FTP on UNIX/Linux, the same idea came to me for Windows too, and you can achieve much the same results, even if the way of getting there is slightly different. The first route to consider is running a script file with the ftp command at the command prompt (you may need %windir%\system32\ftp.exe to call the right FTP program in some cases):
ftp -s:script.txt
The contents of the script are something like the following, where user and password stand in for real login credentials:
open ftp.server.host
user
password
lcd destination_directory
cd source_directory
prompt
get filename
bye
It doesn't take much to turn your script into a batch file that takes the username as its first input and the password as its second for the sake of enhanced security, and which deletes any record thereof for the same reason:
echo open ftp.server.host > script.txt
echo %1 >> script.txt
echo %2 >> script.txt
echo cd htdocs >> script.txt
echo prompt >> script.txt
echo mget * >> script.txt
echo bye >> script.txt
%windir%\system32\ftp.exe -s:script.txt
del script.txt
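Saved under a name of your choosing (getftp.bat is just a hypothetical example), it would be called like this, with the credentials supplied on the command line and never stored for longer than the transfer takes:
getftp.bat myusername mypassword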
The feel of the Windows command line (very primitive in Windows 2000, better in Windows XP, and there's PowerShell now too) can leave a lot to be desired by someone accustomed to its UNIX/Linux counterpart, yet there's still a lot of tweaking that you can do to the above, given a bit of knowledge of the Windows batch scripting language. Any escape from a total dependence on pointing and clicking can only be an advance.
Setting up a test web server on Ubuntu
1st November 2007
Installing all the bits and pieces is painless enough so long as you know what's what; Synaptic does make it thus. Interestingly, Ubuntu's default installation is a lightweight affair, with the addition of any further components involving downloading the packages from the web. The whole process is all very well integrated and doesn't make you sweat every time you need to install additional software. In fact, it resolves any dependencies for you so that those packages can be put in place too; it lists them, you select them and Synaptic does the rest.
Returning to the job in hand, my shopping list included Apache, Perl, PHP and MySQL, the usual suspects in other words. Perl was already there, as it is on many UNIX systems, so installing the appropriate Apache module was all that was needed. PHP needed the base installation as well as the additional Apache module. MySQL needed the full treatment too, though its being split up into different pieces confounded things a little for my tired mind. Then, there were the MySQL modules for PHP to be set in place too.
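For anyone preferring the command line to Synaptic, apt-get does the same job in one go; the line below is a sketch, with package names as they stood in Ubuntu around that time, so treat them as assumptions:
sudo apt-get install apache2 libapache2-mod-perl2 php5 libapache2-mod-php5 mysql-server php5-mysql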
The addition of Apache preceded all of these, but I have left it until now to describe its configuration, something that took longer than for the others; the installation itself was as easy as it was for the rest. However, what surprised me were the differences in its configuration set-up when compared with Windows. There are times when we get the same software but on different operating systems, which means that configuration files get set up differently. The first difference is that the main configuration file is called apache2.conf on Ubuntu rather than httpd.conf as on Windows. Like its Windows counterpart, Ubuntu's Apache does use subsidiary configuration files. However, there is an additional layer of configurability added courtesy of a standard feature of UNIX operating systems: symbolic links. Rather than having a single folder with all the configuration files stored therein, there are two pairs of folders, one pair for module configuration and another for site settings: mods-available/mods-enabled and sites-available/sites-enabled, respectively. In each pair, there is a folder with all the files and another containing symbolic links; it is the presence of a symbolic link for a given configuration file in the latter that activates it. I learned all this when trying to get mod_rewrite going and when changing the web server folder from the default to somewhere less susceptible to wrecking during a re-installation or, heaven forbid, a destructive system crash. It's unusual, but it does work, even if it takes that little bit longer to get things sorted out when you first meet up with it.
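Those symbolic links needn't be created by hand, either; Debian-derived systems supply helper scripts for the purpose, so enabling mod_rewrite and a site can be sketched out like this (the site name is hypothetical):
sudo a2enmod rewrite
sudo a2ensite mysite
sudo /etc/init.d/apache2 reload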
Apart from the Apache set up and finding the right things to install, getting a test web server up and running was a fairly uneventful process. All's working well now, and I'll be taking things forward from here; making website Perl scripts compatible with their new world will be one of the next things that need to be done.
Filename autocompletion on the command line
19th October 2007
The Windows 2000 command line feels austere and primitive when compared with the wonders of the UNIX/Linux equivalent. Windows XP feels a little better, and PowerShell is another animal altogether. With the latter pair, you do get file or folder autocompletion upon hitting the TAB key. What I didn't realise until recently was that continued tabbing cycles through the possibilities; I was hitting it once and retyping when I got the wrong folder or file. I stand corrected. With the shell in Linux/UNIX, you get a listing of possibilities when you hit TAB for the second time, while the first press only gives you completion as far as it can go with certainty; you'll never get to the wrong place, though you may not get anywhere at all. This works for bash, but not ksh88 as far as I can see. It's interesting how you can take two different approaches to reach the same end.
Numeric for loops in Korn shell scripting: from ksh88 to ksh93
18th October 2007
The time-honoured syntax for a for loop in a UNIX script is what you see below, and that is what works with the default shell in Sun's Solaris UNIX operating system, ksh88.
for i in 1 2 3 4 5 6 7 8 9 10
do
if [[ -d dir$i ]]
then
:
else
mkdir dir$i
fi
done
However, there is a much nicer syntax supported since the advent of ksh93. It follows C language conventions found in all sorts of places like Java, Perl, PHP and so on. Here is an example:
for (( i=1; i<11; i++ ))
do
if [[ -d dir$i ]]
then
:
else
mkdir dir$i
fi
done
Detecting file ownership in Korn shell scripts
17th October 2007
I was recently having a play with using a shell script to do some folder creation to help me set up a system for testing, and I started to hit ownership issues that caused some shell script errors. At the time, I didn't realise that there is a test that you can perform for ownership. The "-O" (a capital O) in the code below kicks in the test condition and avoids the error in question.
dirname=test
if [[ -O $dirname ]]
then
cd test
for i in 1 2 3 4 5 6 7 8 9 10
do
if [[ -d study$i ]]
then
:
else
mkdir study$i
fi
done
ls
cd ~
fi
Previously, I shared a way to test for directory (-d operator) and file (-f operator) existence that follows the above coding convention. However, there is a plethora of others, and I have made a list of them here:

Operator | Condition
-e file | File exists
-L file | File is a symbolic link
-r file | User has read access to the file
-s file | File is non-empty
-w file | User has write access to the file
-x file | User has execute access to the file
-G file | User's effective group ID is the same as that of the file
file1 -nt file2 | file1 is newer than file2
file1 -ot file2 | file1 is older than file2
file1 -ef file2 | file1 and file2 refer to the same file

It's all useful stuff when you want to rid the command line output of errors in an above-board way. These are the kinds of things that often make life easier...
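As a quick illustration, here is a minimal sketch using the -nt comparison; the file names are hypothetical:
# refresh the backup only when the source has changed more recently
if [[ source.txt -nt backup.txt ]]
then
cp source.txt backup.txt
fi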
Negative logic in Korn shell scripts
16th October 2007
I was looking for a way to use negative logic, doing something when a condition is not satisfied, that is, and found that one way to do it is to do nothing when the condition is satisfied and something when it isn't. Being used to saying do something when a condition is false, this did come as a surprise. In time, I may find another way on my UNIX shell scripting journey. Meanwhile, the code below will only create a directory when it doesn't already exist.
dirname=test
if [[ -d $dirname ]]
then
: # the colon operator means do nothing
else
mkdir $dirname
fi
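As it turns out, another way does exist: the [[ ]] construct accepts the ! operator for negating a test, which lets the same thing be said more directly. Here is a minimal sketch:
dirname=test
# create the directory only if it is not already there
if [[ ! -d $dirname ]]
then
mkdir $dirname
fi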
Tidying dynamic URLs
15th June 2007
A few years back, I came across a very nice article discussing how you would make a dynamic URL more palatable to a search engine, and I made good use of its content for my online photo gallery. The premise was that URLs which look like the one below are no help to search engines indexing a website. Though this is received wisdom in some quarters, it doesn't seem to have done much to stall the rise of WordPress as a blogging platform.
http://www.mywebsite.com/serversidescript.php?id=394
That said, WordPress does offer a friendlier URL display option too, which you can see in use on this blog; such URLs look a little like the example that you see below, and the approach is equally valid for both Perl and PHP. Since I have been using this approach for the Perl scripts powering my online photo gallery, I now want to apply the same thinking to a gallery written in PHP:
http://www.mywebsite.com/serversidescript.pl/id/394
The way that both expressions work is that a web server will chop pieces from a URL until it reaches a physical file. For a query URL, the extra information after the question mark is retained in its QUERY_STRING variable, while extraneous directory path information is passed in the variable PATH_INFO. For both Perl and PHP, these are extracted from entries in an array-like structure; for Perl, this is the %ENV hash, and $_SERVER is the PHP equivalent. Thus, $ENV{QUERY_STRING} and $_SERVER['QUERY_STRING'] trap what comes after the ?, while $ENV{PATH_INFO} and $_SERVER['PATH_INFO'] pick up the extra information following the file name (/id/394/ in the example). From there on, the usual rules apply regarding the cleaning of any input, but changing from one approach to the other shouldn't be too arduous.
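In PHP, picking apart that trailing path might look like the sketch below; the URL layout is the /id/394 example from above, and the variable names are my own assumptions:
<?php
// trim the surrounding slashes from PATH_INFO, then split it into
// segments, e.g. "/id/394/" becomes ("id", "394")
$parts = explode('/', trim($_SERVER['PATH_INFO'], '/'));
// treat the segments as a key followed by its value, sanitising
// the value by forcing it to an integer
$id = ($parts[0] == 'id') ? intval($parts[1]) : 0;
?>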
Perl vs. PHP: A Personal Experience
11th June 2007
Ever since I converted it from a client-side JavaScript-powered affair, my online photo gallery has been written in Perl. There have been some challenges along the way, figuring out how to use hash tables being one, but everything has worked as expected. However, I am now wondering if it is better to write things in PHP for the sake of consistency with the rest of the website. I had a go at rewriting the random photo page and, unless I have been missing something in the Perl world, things do seem more succinct with PHP. For instance, actions that formerly involved several lines of code can now be achieved in one. Reading the contents of a file into an array and stripping HTML/XML tags from a string fall into this category, and seeing the number of lines of code halve is a striking observation. I am not going to abandon Perl completely, as it's a very nice language, but I do rather suspect that there is now an increased chance of my having a website whose server-side processing needs are served entirely by PHP.
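PHP's built-in file() and strip_tags() functions cover those two tasks; here is a minimal sketch of both, with the file name and markup invented for illustration:
<?php
// read a whole file into an array, one line per element
$lines = file('captions.txt');
// strip the HTML/XML tags from a string in a single call
$plain = strip_tags('<p>A <em>hill</em> above the town</p>');
?>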