A feast of plugins
Here's a useful idea: get your blog login page to look like it's part of your blog. It does work well on my hillwalking blog, but you do have to watch how it behaves with whatever theme you are using. Strangely, I couldn't make it work on my offline blog, the development mirror of what you see online. The ability to set what page is displayed after logging in or logging out is an especially useful inclusion.
These sound like wonderful ideas: being able to control the running order of things on your blog sidebar is a good thing. What scuppered my using them is that you need widgets turned on for the effect to work, and I have seen issues with how IDs have been set when things are widgetised.
When JavaScript table generation becomes a performance bottleneck
Recently, I have seen a web application that displays thousands of records in a scrollable table (please bear with me, there is a decent reason for this). From the appearance of the table, it would be reasonable to assume that the table is generated by the server and output directly to the screen, but this isn't the case. What actually happens is that the server more or less outputs JavaScript code that is then executed.
This takes the form of large arrays that are slotted into the DOM as the contents of the required table by a JavaScript function. With the large amounts of data involved, this means that the browser fully loads the client CPU while the JavaScript processing takes place, something that takes up to a minute to complete.
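To illustrate the pattern (this is a simplified sketch with made-up names, not the application's actual code), the server-emitted arrays and the table-building function look something like this:

```javascript
// Hypothetical sketch of the approach described above: the server emits
// record data as JavaScript arrays, and a client-side function loops over
// them to build the table markup. All names here are illustrative.
var records = [
  ["1001", "Widget", "In stock"],
  ["1002", "Gadget", "Back order"]
];

function buildTableHtml(data) {
  var html = "<table>";
  for (var i = 0; i < data.length; i++) {
    html += "<tr>";
    for (var j = 0; j < data[i].length; j++) {
      html += "<td>" + data[i][j] + "</td>";
    }
    html += "</tr>";
  }
  return html + "</table>";
}

// The result is then slotted into the DOM, which is where the browser
// grinds away when thousands of rows are involved, e.g.:
// document.getElementById("results").innerHTML = buildTableHtml(records);
```

With thousands of rows, it is that loop and the final DOM update that pin the client CPU, whereas server-generated markup arrives ready to render.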
Admittedly, the browser was IE6, but this was all on a PC with a 2.53 GHz Pentium 4 and 512 MB of memory. Getting the server to deliver standards-compliant (X)HTML for what is needed in the first place seems a much, much better approach to me.
Ditching PC Plus?
When I start to lose interest in the features in a magazine that I regularly buy, it's only a matter of time before I stop buying the magazine altogether. Such a predicament is facing PC Plus, a magazine that I have been buying every month for the last ten years. That fate has already befallen titles like Web Designer, Amateur Photographer and Trail, all of which I now buy only sporadically.
Returning to PC Plus, I get the impression that it feels more of a lightweight these days. Over the last decade, Future Publishing has been adding titles to its portfolio that actually take away from its long-established stalwart. Linux Format and .Net are two that come to mind, while there are titles covering Windows Vista and computer music as well. In short, there may be sense in having just a single title for all things computing.
Being a sucker for punishment, I did pick up this month's PC Plus, only for the issue to be as good an example of the malaise as any. Reviews, once a mainstay of the title, are now less prominent than they were. In place of comparison tests, we now find discussions of topics like hardware acceleration, with some reviews mixed in. Topics such as robotics and artificial intelligence do rear their heads in feature articles, though I cannot say that I have a great deal of time for such futurology. The section containing tutorials remains, even if it has been hived off into a separate mini-magazine, and it seemingly fails to escape the lightweight revolution.
All this is leading me to dump PC Plus in favour of PC Pro from Dennis Publishing. This feels reassuringly more heavyweight and, while the basic format has remained unchanged over the years, it still manages to remain fresh. Reviews, of both software and hardware, are very much in evidence, while it still finds room for those value-adding feature articles; this month, digital photography and rip-off Britain come under the spotlight. Add the Real World Computing section, and it all makes a good read in these times of behemoths like Microsoft, Apple and Adobe delivering new things on the technology front. While I don't know if I have changed, PC Pro does seem better than PC Plus these days.
Another Olympus E-system review

I don't buy Amateur Photographer much these days, but the sight of a review of Olympus' E-410 and E-510 SLRs got a copy into my possession. Amateur Photographer review features are usually comprehensive, and this was no exception; there was none of the vitriol directed towards the Live View feature by Practical Photography, a defining trait of what I consider a lop-sided and none too useful review. The verdict was positive in the main, with the E-510 getting the nod over the E-410 because it fared better on the usability side of things. Image quality, my major concern, was said to be impressive, with only dynamic range counting against the results. Following this review, I have to say that the E-510 does tempt me with its combination of good image quality, dust removal and image stabilisation.
Trouble with my Canon CanoScan 5000F
I have had my Canon CanoScan 5000F scanner for nearly four years now, and it performed faultlessly until yesterday. However, it has now developed a fault that may hasten its replacement, and I have to say that my eye is on Epson's Perfection V350 Photo. Looking on the web, I found scanners rather hidden away, and the selection available wasn't what I might have expected it to be. Maybe the digital photography revolution has made the humble scanner a less essential item. And the fault? Scan results feature an unacceptably strong magenta cast. In fact, the first scans produce nothing but pitch black, though allowing things to stay on for a while does improve matters. That suggests a hardware fault to me. I have raised the issue with Canon and will await their reply, even though the problem is stopping me from adding any new photos to my online photo gallery. If Canon comes back to me with the "uneconomical to repair" response, I will be ready to go out and buy the Epson. Time will tell with this one...
Update 1: A spot of further exploration has left me wondering if it is the lamp that's on the way out. If that's replaceable at a reasonable price, then the CanoScan might live on after a spot of repair.
Update 2: Canon's advice included reinstalling the scanner driver and, surprisingly given the symptoms, that seems to have helped. While I'll continue to keep an eye on things, it looks like I'll be hanging onto my money for now.
More on mod_rewrite
Today, I caught sight of an article on anti-plagiarism tools at The Blog Herald, and among the tricks was using mod_rewrite to stop people "borrowing" both your images and your bandwidth. The gist is that you set up one or more conditions that exclude websites from the application of a rule forbidding access to images; the logic is that if the website referencing an image is not one of those listed in the conditions, then it doesn't get to display any of your images.
# let requests with no referer through (e.g. direct visits), then forbid other sites
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?awebsite\.com(/)?.*$ [NC]
RewriteRule \.(gif|jpe?g|png|bmp)$ - [F,NC]
The wonders of mod_rewrite
When I wrote about tidying dynamic URLs a little while back, I had no inkling that there would be a second part to the tale: my discovery of mod_rewrite, an Apache module that facilitates URL translation. The effect is that one URL is presented to the user in the browser address bar, and that very same URL is also seen by search engines, while another is passed to the server for processing. Though it might sound like subterfuge, it works very well once you manage to get it set up properly. While the web host for my hillwalking blog/photo gallery has everything configured so that it is ready to go, the same did not apply to the offline Apache 2.2.x server that I have going on my own Windows XP box. There were two parts to getting it working there:
- Activating mod_rewrite on the server: this is as easy as uncommenting a line in the httpd.conf file for the site (the line in question is: LoadModule rewrite_module modules/mod_rewrite.so).
- Ensuring that the .htaccess file in the root of the web server directory is active. You need to set the values of the AllowOverride directives for the server root and CGI directories to All so that .htaccess is active. Not doing it for the latter will result in an error beginning with the following: "Options FollowSymLinks or SymLinksIfOwnerMatch is off which implies that RewriteRule directive is forbidden". Having Allow from All set for the required directories is another option to consider when you see errors like that.
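Putting those two parts together, the relevant httpd.conf entries might look like the following; the directory path here is illustrative and will differ on your own system:

```apache
# load the rewrite module (uncomment this line if it is already there)
LoadModule rewrite_module modules/mod_rewrite.so

# let .htaccess files in the web root override server settings
<Directory "C:/Program Files/Apache Software Foundation/Apache2.2/htdocs">
    Options FollowSymLinks
    AllowOverride All
</Directory>
```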
Once you have got the above sorted, add this line to .htaccess: RewriteEngine On. Preceding it with an Options directive to ensure that FollowSymLinks and SymLinksIfOwnerMatch are switched on does no harm at all and may even be needed to get things running. That done, you can set about putting mod_rewrite to work with lines like this:
RewriteRule ^pages/(.*)/?$ pages.php?query=$1
The effect of this is to take http://www.website.com/pages/input and convert it into a form for action by the server; in this case, that is http://www.website.com/pages.php?query=input. Anything captured by a bracketed section is assigned to a numbered variable. If you have several bracketed sections, they are assigned sequentially: $1 for the first, $2 for the second and so on. It's all good stuff when you get it going: not only does it make things look much neater, but it also has an advantage when it comes to future-proofing. Web addresses can be kept constant over time, even if things change behind the scenes. It means that any returning visitors will find what they saw the last time that they visited, which surely must ensure good karma in the eyes of those all-important search engines.
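As a hypothetical illustration of several bracketed sections (the address, file names and parameter names here are all made up), the following rule captures two values and passes them on as $1 and $2:

```apache
# http://www.website.com/articles/42/hill-report is served by
# article.php?id=42&slug=hill-report behind the scenes
RewriteRule ^articles/([0-9]+)/([a-z-]+)/?$ article.php?id=$1&slug=$2
```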
Forcing FAVICON.ICO to appear on the browser address bar
Here's a piece of code that should really be unnecessary when you put favicon.ico into the root of your website directory:
<link rel="shortcut icon" href="/favicon.ico" type="image/x-icon" />
The favicon.ico should automatically appear there and in your browser bookmarks (favourites in IE), but there are occasions when the above has to sit in the header section of your web pages. I know because I am doing it for this blog.
IE6 and JavaScript performance
Having been exposed to an application at work that uses a lot of JavaScript, I fully appreciate what some mean when they discuss IE6's inefficient handling of JavaScript. After seeing a web page taking an age to reload and your CPU taking a hammering because of JavaScript processing, the penny does tend to drop...
Needless to say, this very much impacts the world of AJAX-driven web applications with their heavy dependence on client-side JavaScript. While IE7 does come to the rescue, there remain plenty of IE6 users still out there, and this is reflected in website statistics. This demonstrates a certain level of inertia in the browser market that not only afflicts the uptake of IE7 but also the likes of Mozilla, Opera and Safari. It also means that anyone developing AJAX applications very much needs to continue testing in IE6, especially if the product of their labours is for wider public use.
An example of such an application is Zimbra, an open-source web application for messaging and collaboration, and the people behind it have generously shared the results of their browser performance benchmarking. They did comparisons of IE6 vs. IE7 and Firefox 2 vs. IE7. IE6 easily came out as the worst of these, while Firefox 2 was the best.
The next question to be asked could centre on the type of code that is processed inefficiently by IE6. While I wouldn't be at all surprised if a list emerged, here's one candidate: using Microsoft's proprietary innerHTML property to update the DOM of a web page. A quick trawl on Google turned it up as a cause of memory leaks. It is also a Microsoft innovation that never got taken up by those overseeing web standards, hardly a surprise since a spot of DOM scripting achieves the same end. It may be quicker to code than the alternatives, and it does have some support from other browsers, but it does seem to have got a bad name, so it should be avoided if possible. That said, it would be interesting to see a performance comparison between innerHTML and DOM methods in IE6.
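For comparison's sake, here is a rough sketch of the two routes; the element names are made up and this is purely illustrative rather than any sort of benchmark:

```javascript
// The proprietary route replaces a chunk of the page in one go:
// document.getElementById("list").innerHTML =
//   "<li>First</li><li>Second</li>";

// The DOM scripting route builds the same content node by node,
// which keeps to the standards even if it takes a few more lines:
function appendItems(listElement, items) {
  for (var i = 0; i < items.length; i++) {
    var item = document.createElement("li");
    item.appendChild(document.createTextNode(items[i]));
    listElement.appendChild(item);
  }
}

// Usage would be along these lines:
// appendItems(document.getElementById("list"), ["First", "Second"]);
```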
Two reviews of the Olympus E-410 tell very different stories

Recently, I encountered two very different reviews of the newly launched Olympus E-410 DSLR, in Which Digital Camera and Practical Photography, respectively. The review in the former was a positive affair, though it was a first look at the camera, but the impression formed by the latter reviewer was lukewarm in nature.
The camera features a live view screen on its back, a carry-over from digital compacts and a feature that I may never use. While that might be the unique selling point for the camera, good image quality and the fact that it possesses a cleaning mechanism for its sensor are of much more interest to me.
Ironically, the Practical Photography review spent most of its time talking about the very feature of the camera that interests me the least, with only a scant mention of quality; to be frank, I didn't find it a very useful appraisal even if the electronic viewfinder may not be all that it's cracked up to be, and it's picture quality and camera handling that ultimately matter to the photography enthusiast.
In contrast, Which Digital Camera seemed to give a more rounded view and proved to be of more interest, and I'd like to see what the likes of Photography Monthly and Amateur Photographer might have to say. Incidentally, I shall also be awaiting the Which Digital Camera appraisal of Ricoh's Caplio GX100 in their next issue.