TOPIC: SCRIPTING LANGUAGES
Shell swapping in Windows: PowerShell and the legacy command prompt
28th April 2010

Until the advent of PowerShell, Windows had been the poor relation when it came to working from the command line, at least when compared with UNIX, Linux and so on. A recent bit of fiddling had me trying to run FTP from the legacy command prompt when I ran into problems with UNC address resolution (it's unsupported by the old technology) and the mapping of network drives. It turned out that my error 85 was being caused by an unavailable drive letter that the net use command didn't reveal as being in use. Reassuringly, this wasn't a Vista issue that I couldn't circumvent.
During this spot of debugging, I tried running batch files in PowerShell and discovered that you cannot run them there like you would from the old command prompt. In fact, you need a line like the following:
cmd /c script.bat
In other words, you have to call cmd.exe like perl.exe, wscript.exe and cscript.exe for batch files to execute. If I had time, I might have got to exploring the use of .ps1 files for setting up PowerShell cmdlets, but that is something that needs to wait until another time. What I discovered, though, is that UNC addressing can be used with PowerShell without the need for drive letter mappings, not a bad development at all. While on the subject of discoveries, I found that the following command opens up a command prompt shell from PowerShell without any need to resort to the Start Menu:
cmd /k
Entering the exit command returns you to the PowerShell command line again, and entering cmd /? reveals the available options for the command, so you need never be constrained by your own knowledge or its limitations.
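To make the UNC point concrete, here is a minimal sketch, with invented server and share names standing in for real ones:
# change to a UNC location directly; no drive letter mapping is required
Set-Location \\server\share\photos
# hand a legacy batch file to cmd.exe, as described above
cmd /c script.bat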
About Perl's Binding Operator
20th May 2009

While this piece is as much an aide-mémoire for myself as anything else, putting it here seems worthwhile if it answers questions for others. The binding operators, =~ and !~, come in handy when you are framing conditional statements in Perl using regular expressions, for example, testing whether $x =~ /\d+/ is true or not. The =~ variant is also used for changing strings using the s/[pattern1]/[pattern2]/ regular expression construct (here, s stands for "substitute"). What brought this to mind is that I wanted to ensure that something was done for strings that did not contain a certain pattern, and that's where the !~ binding operator came in useful; ^~ might have come to mind for some reason, but it wasn't what I needed.
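By way of a quick illustration, here is a minimal sketch of all three usages, with an invented variable and patterns:
my $x = "abc123";
# =~ is true when the pattern matches
if ($x =~ /\d+/) { print "contains digits\n"; }
# !~ is true when the pattern does not match
if ($x !~ /^\s*$/) { print "not blank\n"; }
# s/// substitutes in place: $x becomes "abcNUM"
$x =~ s/\d+/NUM/;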
Are ten seconds enough?
27th April 2009

Fasthosts, the hosting provider for what you find here, has, in their wisdom, decided to limit the execution time for ASP scripts to 15 seconds and that for any others to 10 seconds. I haven't used Perl sufficiently in this shared hosting set-up to determine how it is affected. In contrast, I can share my experiences on the PHP side, and you may have noticed occasional glitches. They have also disabled the set_time_limit PHP function, so you cannot easily address the matter yourself where you need to do so. You almost get the feeling that they don't trust the abilities, actions and oversight of their users. Personally, I reckon that the ten-second limit is too short and that something of the order of 20 or 30 seconds would be better. If it all gets too restrictive, I suppose that there are other providers, though I think that I would avoid resellers after a previous less than glorious experience. There's the dedicated server option too if I were feeling flush, which is not so likely given the economic times in which we live.
Harnessing the power of ImageMagick
26th October 2008

Using the command line to process images might sound senseless, but the tools offered by ImageMagick certainly prove that it has its place. I have always been wary of using bulk processing for my digital photo files (some digitised from film prints with a scanner), but I do agree that some of it is needed to free up time for other, more necessary things. With this in mind, it is encouraging to see the results from ImageMagick, and I can see it making a major difference to how I maintain my online photo gallery.
For instance, making thumbnail images for the gallery certainly seems to be one of those operations where command line bulk processing comes into its own, and ImageMagick's own convert command is heaven-sent for this one. For resizing images, all that's needed is the following:
convert -resize 40% input.jpg output.jpg
Add a spot of further shell scripting and even a dash of Perl, and the possibilities for this sort of thing become clearer; even then, this is but the tip of the proverbial iceberg. The -rotate switch will do what the name suggests, while a whole plethora of other options are on tap. So long as you have Ghostscript on your system, conversion of graphics to PostScript (and Encapsulated PostScript too) and PDF files is possible, with the -page option controlling the margin around the image itself in the resulting output. Unfortunately, portrait is the sole orientation on offer, yet a bit of judicious post-processing will turn things around. Here's a command that'll do the trick:
convert -page 792x612+72+72 input.png ps2:output.ps
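Should the image itself need turning as part of the same pass, adding the -rotate switch to the command above suggests itself as one untested possibility:
convert -rotate 90 -page 792x612+72+72 input.png ps2:output.ps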
For retrieving image metadata, like its resolution and size, the identify command comes into play. The -verbose option invokes the output of all manner of image metadata, so using grep or egrep is perhaps advisable, especially for bulk processing with the likes of Perl. Having the ability to stream image metadata makes loading databases like MySQL less of a chore than the manual data entry that has been my way of doing things until now.
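To give a flavour of the sort of bulk processing being described, here is a minimal sketch; the folder and file names are purely for illustration:
# batch-create thumbnails at 40% of the original size
for f in *.jpg
do
convert -resize 40% "$f" thumbs/"$f"
done
# pull just the size line out of the full metadata dump
identify -verbose img001.jpg | grep "Geometry"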
JavaScript: write it yourself or use a library?
3rd July 2008

I must admit that I have never been a great fan of JavaScript. For one thing, its need to interact with browser objects places you at the mercy of the purveyors of such pieces of software. Debugging is another fine art that can seem opaque to the uninitiated, since the amount and quality of the logging is determined by an interpreter that is not provided by the language's overseers. All in all, it seems to present a steep and obstacle-strewn learning curve to newcomers. As it happens, I have always found server-side scripting languages like PHP and Perl to be more to my taste, and I have no aversion at all to writing SQL.
In the late 1990s, when I was still using free web hosting, JavaScript probably was the best option for my then new online photo gallery. Whatever the truth, it certainly was the way that I went. While learning Java or Flash might have been useful, I never managed to devote sufficient time to the task, so JavaScript turned out to be the way forward until I got a taste of server-side scripting. Moving to paid hosting allowed that to develop, and the JavaScript option took a back seat.
Based on my experience of the browser wars and of working with JavaScript throughout their existence, I was more than a little surprised at the buzz surrounding AJAX. Ploughing part of the way through WROX's Beginning AJAX did nothing to sell the technology to me; it came across as a very dry, jargon-blighted read. Nevertheless, I do see the advantages of web applications being as responsive as their desktop equivalents, but AJAX doesn't always guarantee this; as someone who has seen such applications crawling on IE6, I can certainly vouch for that. In fact, I suspect that this may be behind the appearance of technologies such as AIR and Silverlight, so JavaScript may get usurped yet again, just like my move to a photo gallery powered on the server side.
Even with these concerns, using JavaScript to add a spot more interactivity is never a bad thing even if it can be overdone, hence the speed problems that I have witnessed. In fact, I have been known to use DOM scripting, but I need to have the use in mind before I can experiment with a technology; I cannot do it the other way around. Nevertheless, I am keen to see what JavaScript libraries such as jQuery and Prototype might have to offer (both have been used in WordPress). Since I have happened on their respective websites, they might make good places to start, and who knows where my curiosity might take me?
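As a taster of what such libraries promise over hand-rolled DOM scripting, compare the two snippets below; the element ID is invented for the example:
// plain DOM scripting: hide an element by its ID
document.getElementById("gallery").style.display = "none";
// the jQuery equivalent: terser, and it smooths over browser differences
$("#gallery").hide();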
Automating FTP II: Windows
15th April 2008

Having thought about automating command line FTP on UNIX/Linux, the same idea came to me for Windows too, and you can achieve much the same results, even if the way of getting there is slightly different. The first route to consider is running a script file with the ftp command at the command prompt (you may need %windir%\system32\ftp.exe to call the right FTP program in some cases):
ftp -s:script.txt
The contents of the script are something like the following, with user and password standing in for real credentials:
open ftp.server.host
user
password
lcd destination_directory
cd source_directory
prompt
get filename
bye
It doesn't take much to turn your script into a batch file that takes the username as its first input and the password as its second, then deletes any record of them afterwards, all for the sake of enhanced security:
REM Build a temporary FTP script, filling in the username (%1) and password (%2)
echo open ftp.server.host > script.txt
echo %1 >> script.txt
echo %2 >> script.txt
echo cd htdocs >> script.txt
echo prompt >> script.txt
echo mget * >> script.txt
echo bye >> script.txt
REM Run the transfer non-interactively, then remove the script to leave no trace
%windir%\system32\ftp.exe -s:script.txt
del script.txt
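Assuming the above were saved as, say, getfiles.bat (a name picked purely for illustration), invoking it would look like this:
getfiles.bat myusername mypassword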
The feel of the Windows command line (very primitive in Windows 2000, somewhat better in Windows XP, and there's PowerShell now too) leaves a lot to be desired for anyone accustomed to its UNIX/Linux counterpart, yet there's still a lot of tweaking that you can do to the above, given a bit of knowledge of the Windows batch scripting language. Any escape from a total dependence on pointing and clicking can only be an advance.
Setting up a test web server on Ubuntu
1st November 2007

Installing all the bits and pieces is painless enough so long as you know what's what; Synaptic does make it thus. Interestingly, Ubuntu's default installation is a lightweight affair, with any further components being downloaded from the web as packages. The whole process is very well integrated and doesn't make you sweat every time you need to install additional software. In fact, it resolves any dependencies for you so that those packages can be put in place too; it lists them, you select them and Synaptic does the rest.
Returning to the job in hand, my shopping list included Apache, Perl, PHP and MySQL, the usual suspects in other words. Perl was already there, as it is on many UNIX systems, so installing the appropriate Apache module was all that was needed. PHP needed the base installation as well as the additional Apache module. MySQL needed the full treatment too, though its being split up into different pieces confounded things a little for my tired mind. Then, there were the MySQL modules for PHP to be set in place too.
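For the record, the same shopping list can be fetched from the terminal instead of Synaptic; the package names below are those from Ubuntu releases of that era and may differ on later ones:
sudo apt-get install apache2 libapache2-mod-perl2
sudo apt-get install php5 libapache2-mod-php5
sudo apt-get install mysql-server php5-mysql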
The addition of Apache preceded all of these, but I have left it until now to describe its configuration, something that took longer than for the others; the installation itself was as easy as it was for the rest. However, what surprised me were the differences in its configuration set-up when compared with Windows. There are times when we get the same software on different operating systems, which means that configuration files get set up differently. The first difference is that the main configuration file is called apache2.conf on Ubuntu rather than httpd.conf as on Windows. Like its Windows counterpart, Ubuntu's Apache does use subsidiary configuration files. However, there is an additional layer of configurability added courtesy of a standard feature of UNIX operating systems: symbolic links. Rather than having a single folder with all the configuration files stored therein, there are two pairs of folders, one pair for module configuration and another for site settings: mods-available/mods-enabled and sites-available/sites-enabled, respectively. In each pair, there is a folder containing all the files and another containing symbolic links. It is the presence of a symbolic link for a given configuration file in the latter that activates it. I learned all this when trying to get mod_rewrite going and when changing the web server folder from the default to somewhere less susceptible to wrecking during a re-installation or, heaven forbid, a destructive system crash. It's unusual, but it does work, even if it takes that little bit longer to get things sorted out when you first meet up with it.
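For instance, getting mod_rewrite going comes down to creating the appropriate symbolic link, either by hand or with the a2enmod helper that Ubuntu supplies for the purpose:
# create the symbolic link by hand...
sudo ln -s /etc/apache2/mods-available/rewrite.load /etc/apache2/mods-enabled/rewrite.load
# ...or let the helper script make the same link, then reload Apache
sudo a2enmod rewrite
sudo /etc/init.d/apache2 reload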
Apart from the Apache set up and finding the right things to install, getting a test web server up and running was a fairly uneventful process. All's working well now, and I'll be taking things forward from here; making website Perl scripts compatible with their new world will be one of the next things that need to be done.
Filename autocompletion on the command line
19th October 2007

The Windows 2000 command line feels austere and primitive when compared with the wonders of its UNIX/Linux equivalent. Windows XP feels a little better, and PowerShell is another animal altogether. With the latter pair, you do get file or folder autocompletion upon hitting the TAB key. What I didn't realise until recently was that continued tabbing cycles through the possibilities; I was hitting it once and retyping when I got the wrong folder or file. I stand corrected. With the shell in Linux/UNIX, you get a listing of possibilities when you hit TAB for the second time, while the first press only completes as far as it can go with certainty; you'll never get to the wrong place, though you may not get anywhere at all. This works for bash, but not ksh88 as far as I can see. It's interesting how you can take two different approaches to reach the same end.
Numeric for loops in Korn shell scripting: from ksh88 to ksh93
18th October 2007

The time-honoured syntax for a for loop in a UNIX script is what you see below, and that is what works with the default shell in Sun's Solaris UNIX operating system, ksh88.
for i in 1 2 3 4 5 6 7 8 9 10
do
if [[ -d dir$i ]]
then
:
else
mkdir dir$i
fi
done
However, there is a much nicer syntax supported since the advent of ksh93. It follows C language conventions found in all sorts of places like Java, Perl, PHP and so on. Here is an example:
for (( i=1; i<11; i++ ))
do
if [[ -d dir$i ]]
then
:
else
mkdir dir$i
fi
done
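As an aside, either version can be condensed by swapping the if/else for the || operator, which runs mkdir only when the test fails; a sketch:
for (( i=1; i<11; i++ ))
do
[[ -d dir$i ]] || mkdir dir$i
done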
Detecting file ownership in Korn shell scripts
17th October 2007

I was recently having a play with using a shell script to do some folder creation to help me set up a system for testing, and I started to hit ownership issues that caused some shell script errors. At the time, I didn't realise that there is a test that you can perform for ownership. The -O operator in the code below (note the capital letter; in most ksh documentation, the lowercase -o tests whether a named shell option is set) kicks in the test condition and avoids the error in question.
if [[ -O $dirname ]]
then
cd test
for i in 1 2 3 4 5 6 7 8 9 10
do
if [[ -d study$i ]]
then
:
else
mkdir study$i
fi
done
ls
cd ~
fi
Previously, I shared a way to test for directory (-d operator) and file (-f operator) existence that follows the above coding convention. However, there is a plethora of others, and I have made a list of them here:
| Operator | Condition |
| --- | --- |
| -e file | File exists |
| -L file | File is a symbolic link |
| -r file | User has read access to file |
| -s file | File is non-empty |
| -w file | User has write access to file |
| -x file | User has execute access to file |
| -G file | User's effective group ID is the same as that of the file |
| file1 -nt file2 | file1 is newer than file2 |
| file1 -ot file2 | file1 is older than file2 |
| file1 -ef file2 | file1 and file2 refer to the same file |
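As a quick illustration of a few of these in action, here is a minimal sketch with invented file names:
# proceed only when the log file is both non-empty and writable
if [[ -s logfile.txt && -w logfile.txt ]]
then
print "logfile.txt is ready for more entries"
fi
# refresh a backup copy only when the original is newer
if [[ script.sh -nt backup/script.sh ]]
then
cp script.sh backup/script.sh
fi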
It's all useful stuff when you want to rid the command line output of errors in an above board way. These are the kinds of things that often make life easier...