Technology Tales

Adventures & experiences in contemporary technology

Installing Citrix Receiver 13.0 in Ubuntu GNOME 13.10 64-bit

28th November 2013

Installing the latest version of Citrix Receiver (13.0 at the time of writing) on 64-bit Ubuntu should be as simple as downloading the required DEB package and double-clicking on the file so that Ubuntu Software Centre can work its magic. Unfortunately, the 64-bit DEB file is faulty, so the Ubuntu community how-to guide for Citrix still is needed. In fact, any user of Linux Mint or another distro that uses Ubuntu as its base would do well to have a look at that Ubuntu link.

For the sake of completeness, I still am going to let you in on the process that worked for me. Once the DEB file has been downloaded, the first task is to create a temporary folder where the DEB file’s contents can be extracted:

mkdir ica_temp

With that in place, it then is time to do the extraction, which needs two commands: the first extracts everything apart from the control file, while the second extracts the control file itself.

sudo dpkg-deb -x icaclient- ica_temp
sudo dpkg-deb --control icaclient- ica_temp/DEBIAN

It is the control file that has been the cause of all the bother because it refers to unavailable dependencies that it really doesn’t need anyway. To open the file for editing, issue the following command:

sudo gedit ica_temp/DEBIAN/control

Then change line 7 (it should begin with Depends:) to: Depends: libc6-i386 (>= 2.7-1), lib32z1, nspluginwrapper. The line otherwise refers to software packages that Ubuntu no longer supports and that are not needed anyway.
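If you would rather not open a text editor at all, a single sed command should make the same change; this is only a sketch and it assumes that the Depends: line is the only one in the file to begin with that keyword:

sudo sed -i 's/^Depends:.*/Depends: libc6-i386 (>= 2.7-1), lib32z1, nspluginwrapper/' ica_temp/DEBIAN/control

With the edit made and the file saved, the next step is to build a new DEB package with the corrected control file: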

dpkg -b ica_temp icaclient-modified.deb

Once you have the package, the next step is to install it using the following command:

sudo dpkg -i icaclient-modified.deb

If it fails, then you have missing dependencies and the following command should sort these before a re-run of the above command:

sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386

With Citrix Receiver installed, there is one more thing that is needed before you can use it freely. This is to put Thawte security certificate files into /opt/Citrix/ICAClient/keystore/cacerts. What I had not realised until recently was that many of these already are in /usr/share/ca-certificates/mozilla and linking to them with the following command makes them available to Citrix Receiver:

sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/
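Should the client still complain about a certificate after that, some releases of the Linux client also ship a certificate rehashing utility at /opt/Citrix/ICAClient/util/ctx_rehash; whether the 13.0 package includes it is something to check on your own installation before trying the following:

sudo /opt/Citrix/ICAClient/util/ctx_rehash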

Another approach is to download the Thawte certificates and extract the archive to /tmp/. From there they can be copied to /opt/Citrix/ICAClient/keystore/cacerts and I copied the Thawte Personal Premium certificate as follows:

sudo cp "/tmp/Thawte Root Certificates/Thawte Personal Premium CA/Thawte Personal Premium CA.cer" /opt/Citrix/ICAClient/keystore/cacerts/

Until I found out about what was in the Mozilla folder, I simply picked out the certificate mentioned in the Citrix error message and copied it over as above. Of course, all of this may seem like a lot of work to non-tinkerers, so I have added a repaired 64-bit DEB package that incorporates all of the above and should need no further intervention aside from installing it using GDebi, Ubuntu’s Software Centre, dpkg or anything else that does what’s needed.

A reappraisal of Windows 8 and 8.1 licensing

15th November 2013

With the release of Windows 8 around this time last year, I thought that the full retail version that some of us got for fresh installations on PCs, real or virtual, had become a thing of the past. In fact, it did seem that every self-respecting technology news website and magazine was saying just that. The release that you would buy from Microsoft or from mainstream computer stores was labelled as an upgrade. That made it look as if you needed the OEM or System Builder edition for those PCs that needed a new Windows installation and that the licence that you bought was then attached to the machine from when it got installed on there.

As is usual with Microsoft, the situation is less clear-cut than that. For instance, there was some back-pedalling to allow OEM editions of Windows to be licensed for personal use on real or virtual PCs. With Windows 7 and its predecessors, it even was possible to install afresh on a PC without Windows by first putting an inactivated copy on there and then upgrading that as if it were a previous version of Windows. Of course, an actual licence for the previous version of Windows was needed for full compliance, if not for the actual installation. At times, Microsoft muddies the waters so as to keep its support costs down.

Even with Microsoft’s track record in mind, it still did surprise me when I noticed that Amazon was selling what appeared to be full versions of both Windows 8.1 and Windows 8.1 Pro. Having set up a 64-bit VirtualBox virtual machine for Windows 8.1, I discovered the same for software purchased from the Microsoft website. However, unlike the DVD versions, you do need an active Windows installation if you fancy a same-day installation of the downloaded software. For those without Windows on a machine, this can be as simple as downloading either the 32-bit or the 64-bit 90-day evaluation edition of Windows 8.1 Enterprise and using that as a springboard for the next steps. Not only can this be an actual in-situ installation, but there are options to create an ISO or USB image of the installation disk for later use.

In my case, I created a 64-bit ISO image and used that to reboot the virtual machine that had Windows 8.1 Enterprise on there before continuing with the installation. By all appearances, there seemed to be little need for a pre-existing Windows instance for it to work so it looks as if upgrades have fallen by the wayside and only full editions of Windows 8.1 are available now. The OEM version saves money so long as you are happy to stick with just one machine and most users probably will do that. As for the portability of the full retail version, that is not something that I have tested and I am unsure that I will go beyond what I have done already.

My main machine has seen a change of motherboard, CPU and memory, so it could have de-activated a pre-existing Windows licence. However, I run Linux as my main operating system and, apart possibly from one surmountable hiccup, this proves surprisingly resilient in the face of such major system changes. For running Windows, I turn to virtual machines and there were no messages about licence activation during the changeover either. Microsoft is anything but forthcoming when it comes to declaring what hardware changes inactivate a licence. Changing a virtual machine from VirtualBox to VMware or vice versa definitely does it, so I tend to avoid doing that. One item that is fundamental to either a virtual or a real PC is the mainboard, and I have seen suggestions that this is the critical component for Windows licence activation; it would make sense if that were the case.

However, this rule is not hard and fast either, since there appears to be room for manoeuvre should your PC break. It might be worth calling Microsoft after a motherboard replacement to see if they can help you and, from what I have seen, it is. All in all, Microsoft often makes what appear to be simple rules only to override them when faced with what happens in the real world. Is that why they can be unclear about some matters at times? Do they still hanker after how they want things to be even when they are impossible to keep like that?

A look at Ubuntu GNOME 13.10

12th October 2013

With its final release near at hand, I decided to have a look at the beta release of Ubuntu GNOME 13.10 to get a sense of what might be coming. A misstep along the way had me inadvertently download and install the 64-bit edition of 13.04 into a VirtualBox virtual machine. The intention to update that to its soon-to-be-released successor was scuppered by instability, so I never did get to try out an in-situ upgrade to 13.10. What I had in mind was to issue the following command:

gksu update-manager -d

However, I found another command when considering how Ubuntu Server might be upgraded without Update Manager, the GUI application. To upgrade to a development version, the following is what you need:

sudo do-release-upgrade -d

To upgrade to a final release of a new version of Ubuntu, drop the -d switch from the above to use the following:

sudo do-release-upgrade

There is one further option that isn’t recommended for moving between Ubuntu versions but I use it to get updates such as new kernel subversions that are released:

sudo apt-get dist-upgrade

Rather than trying out the above, I downloaded the latest ISO image for the beta release of Ubuntu GNOME 13.10 and installed that onto a VM instead. Though it is the 32-bit version of the distro that is installed on my main home PC, it is the 64-bit version that I have been trying. So far, that seems to be behaving itself, even if it feels a little sluggish, but that could be down to the four-year-old PC that hosts the virtual machine. For a while, I have been playing with the possibility of an upgrade involving an Intel Core i5 4670K CPU and 16 GB of RAM (useful for running multiple virtual machines at a time) along with any motherboard that supports those, so looking at a 64-bit operating system has its uses.

The Linux kernel may be 3.11 but that is not my biggest concern. Neither is the fact that LibreOffice 4.1.2.3 was included and GIMP wasn’t, especially when the latter could be added easily anyway and it is version 2.8.6 that you get. The move to GNOME Shell 3.8 was what drew me to seeing what was coming because I have been depending on a number of extensions. As with WordPress and plugins, GNOME Shell seems to have a tempestuous relationship with some of its extensions and I wanted to see which ones still worked. There also has been a change to the backstage application view in that you either get all installed applications displayed when you browse them or you have to start typing the name of the one you want in order to select it. Losing the categorical view that was there until GNOME Shell 3.6 is a step backwards and I hope that version 3.10 has seen some sort of a reinstatement. There is a way to add these categories but the result is not as it once was either; also, it shouldn’t be necessary for anyone to dive into a system’s innards to address things like this. With all the constant change, it is little wonder that Cinnamon has become a standalone entity with the release of its version 2.0 and that Debian toyed with not going with GNOME for its latest version (7.1 at the time of writing; it picked a good GNOME Shell version in 3.4).

Having had a look at other distributions that already have GNOME Shell 3.8, I knew that a few of my extensions worked with it. The list includes Frippery Bottom Panel, Frippery Move Clock, Places Status Indicator, Removable Drive Menu, Remove Rounded Corners (not really needed with the GNOME Shell theme that I use, Elementary Luna 3.4, but I retain it anyway), Show Desktop Button, User Themes and Ignore_Request_Hide_Titlebar. Because of the changes to the backstage view, I added Frippery Applications Menu in preference to Applications Menu because I have found the latter to be unstable. Useful new discoveries have included Curtains Up and GNOME Shell Open Terminal, while Shell Restart User Menu Entry has made a return and found a use this time around too.

There have been some extensions that were not updated to work with GNOME Shell 3.8 but that I have got working regardless. In some cases, it was as simple as adding new version numbers like 3.8 and 3.8.4 to the list associated with the shell-version property in an extension’s metadata.json file. All extensions are to be found in the .local/share/gnome-shell/extensions location in your home directory and each has a dedicated folder containing the aforementioned file.
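As an illustration, here is the sort of thing that such a file contains after an edit like that; the uuid, name and description entries below are hypothetical stand-ins rather than those of any real extension:

{
    "shell-version": ["3.6", "3.8", "3.8.4"],
    "uuid": "example-extension@example.com",
    "name": "Example Extension",
    "description": "An extension used here only for illustration"
}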

With others, it was a matter of looking in the Looking Glass (execute lg in the box that ALT + F2 brings up on your screen to access this) and seeing what error messages were to be found in there before attempting to correct these in either the extensions’ extension.js files or whatever JavaScript (*.js) file was causing the problem. With either or both of these remedies, I managed to port the four extensions below to GNOME Shell 3.8. In fact, you can download these zip files and install them yourself to see how you get on with them.

Advanced Settings in User Menu

Antisocial Menu

Remove App Menu

Restart Shell Entry

There is a Remove Panel App Menu that works with GNOME Shell 3.8 but I found that it got rid of the Places menu instead of the panel’s App Menu so I tried porting the older extension to see if it behaved itself and it does. With these in place, I have bent Ubuntu GNOME 13.10 to my will ahead of its final release next week and that includes customising Nautilus too. Other than a new version of GNOME Shell, it looks as if it will come with less in the way of drama and a breather like that is no bad thing given that personal computing continues to remain in a state of flux these days.

Creating a test web server using Ubuntu Server 13.04 and VirtualBox

1st September 2013

Having seen Linux Format cover tools like Vagrant and Puppet that manage virtual machines, I have been attracted by the prospect of a virtual web server running on my own PC. Certainly, having the LAMP software stack in a VM means that the corresponding tools don’t need to be added to a host system should its operating system need a fresh installation.

As intriguing as tools like Vagrant may be, I decided that I needed to learn a bit more about getting server instances set up in VirtualBox anyway. Thus, I went and downloaded the latest version of Ubuntu Server and gave that a go. One lesson that I learned was that Bridged Networking needs to be added to the VM before installation of the operating system unless you fancy overcoming the challenge of getting Ubuntu Server to recognise an altered or additional network interface. In my case, I added an extra adapter for the Bridged Networking and left the original in place as NAT. The reason for having Bridged Networking set up is that it allows access to the virtual web server from the host once you know the IP address and that information can be obtained by executing the ifconfig command on the virtual machine.
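For instance, if the bridged adapter shows up as eth1 within the virtual machine (that interface name is an assumption and could well differ on your system), a command like the following picks out the address:

ifconfig eth1 | grep "inet addr"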

With the networking sorted, the next step was to install the 64-bit edition of Ubuntu Server. Unlike its desktop counterpart, this is all driven by text menus but remains fairly intuitive and there is hardly anything there that you wouldn’t see with another Linux distribution. A useful touch is the menu for selecting the types of server services that you’d like to see installed. From this, I chose the web server and SSH options and I seem to remember that there was a database server option too. If there had been an FTP server option, I would have chosen that as well, but it was no ordeal to add ProFTPd later on anyway.

All of this setup was done through the VirtualBox GUI just to keep life more straightforward. Even so, I only selected 12 MB of video memory and, though tempted to cut the overall memory back from 512 MB, I left things be for now. However, what I have begun to do is start and stop the virtual machine from the command line since servers are headless operations anyway. With SSH enabled, there is little need to have the VirtualBox GUI going. The command for starting the server is below:

VBoxManage startvm "Ubuntu Server" --type=headless

There is a VBoxHeadless command for the same end too but VBoxManage does what I need. The startvm option is what tells VBoxManage to start the server and the virtual machine’s name is enclosed in quotes. The --type=headless ensures that no window pops up. To stop the virtual web server cleanly, a command like the following is needed:

VBoxManage controlvm "Ubuntu Server" acpipowerbutton

Again, the VBoxManage command gets used and the acpipowerbutton option ensures that a clean shutdown is performed. Not doing so results in the server not fully starting up the next time, according to my experiences thus far. Getting the virtual web server to start and stop along with the host machine itself would be a useful refinement, but this looks more complex, so I plan to leave things a while before trying that experiment.
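Should I get to it, one untested sketch would be to add a line like the one below to the host’s /etc/rc.local ahead of its closing exit 0 line; the username is a placeholder for your own account and the clean shutdown side of things would still need to be handled separately:

su - username -c 'VBoxManage startvm "Ubuntu Server" --type=headless'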

Restoring GNU Parallel Functionality in Ubuntu GNOME 13.04

31st July 2013

There is a handy command line utility called GNU Parallel that allows you to run Linux commands on more than one CPU core at a time to perform parallel processing of the task at hand. Here is a form of the command that is similar to one that I often use:

ls *.* | parallel gm convert -sharpen 1x3 {} sharpened_images/{}

What it does is pipe a list of files in a folder to GraphicsMagick for sharpening and outputting to a sharpened_images directory. The {} in the command is where the filenames go in the sharpening command.

This worked fine in Ubuntu GNOME 12.10 but stopped doing so after I upgraded to the next version. A look on the web set me to running the following command:

parallel --version

That produced output that included the following line:

WARNING: YOU ARE USING --tollef. IF THINGS ARE ACTING WEIRD USE --gnu.

Rerunning the original command with the --gnu option worked but there was a more permanent solution than using something like this:

ls *.* | parallel --gnu gm convert -sharpen 1x3 {} sharpened_images/{}

That was to edit /etc/parallel/config with root privileges and delete the --tollef option from there. With that completed, all was as it should be again, and it makes me wonder why the change was made in the first place. Perhaps because of it, there even is a discussion about the possibility of removing the --tollef option altogether since it is raising more questions than it answers.
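For anyone who prefers to make that edit from the command line, something like the following should do the deed, assuming that the option sits on a line of its own in the file:

sudo sed -i '/--tollef/d' /etc/parallel/config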

Adding Microsoft Core Fonts to Fedora 19

6th July 2013

While I have a previous posting from 2009 that discusses adding Microsoft’s Core Fonts to the then-current version of Fedora, it did strike me that I hadn’t laid out the series of commands that were used. Instead, I referred to an external and unofficial Fedora FAQ. That’s still there, but I also felt that I was leaving things a little to chance given how websites can disappear quite suddenly.

Even after nearly four years, it still amazes me that you cannot install Microsoft’s Core Fonts in Fedora as you would in Ubuntu, Linux Mint or even Debian. Therefore, the following series of steps is as necessary now as it was then.

The first step is to add in a number of precursor applications: wget for command line file downloading from websites, cabextract for extracting the contents of Windows CAB files, rpmbuild for creating RPM installers and xfs, the X font server that chkfontpath needs:

sudo yum -y install rpm-build cabextract ttmkfdir wget xfs

Here, I have gone with terminal commands that use sudo, but you could become the superuser (root) for all of this and there are those who believe you should. The -y switch tells yum to go ahead without prompting you for permission before it does any installations. The next step is to download the spec file for the Microsoft fonts package with wget:

sudo wget http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec

Once that is done, you need to install the chkfontpath package because the RPM for the fonts cannot be built without it:

sudo rpm -ivh http://dl.atrpms.net/all/chkfontpath

Once that is in place, you are ready to create the RPM file using this command:

sudo rpmbuild -ba msttcorefonts-2.0-1.spec

After the RPM has been created, it is time to install it:

sudo yum install --nogpgcheck ~/rpmbuild/RPMS/noarch/msttcorefonts-2.0-1.noarch.rpm

When installation has completed, the process is done. Because I used sudo, all of this happened in my own home area, so there was a need for some housekeeping afterwards. If you did it by becoming the root user, then the files would be in root’s home area instead and that’s the scenario in the online FAQ.
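For what it’s worth, the housekeeping in my case amounted to something like the following; the paths assume that everything was run from your home directory and sudo is used because files created under sudo can end up owned by root:

sudo rm -rf ~/rpmbuild
sudo rm -f ~/msttcorefonts-2.0-1.spec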

A need to update graphics hardware

16th June 2013

Not being a gaming enthusiast, having to upgrade graphics cards in PCs is not something that I do very often or even rate as a priority. However, two PCs in my possession have had that very piece of hardware upgraded on them and it’s not because anything was broken either. My backup machine has seen quite a few Linux distros on there since I built it nearly four years ago. The motherboard is an ASRock K10N78 that I sourced from MicroDirect and it has an onboard NVIDIA graphics chip that has performed well, if not spectacularly. One glitch that always existed was less than optimal text rendering in web browsers, but that never was enough to get me to add a graphics card to the machine.

More recently, I ran into trouble with Sabayon 13.04 with only the 2D variant of the Cinnamon desktop environment working on it and things getting totally non-functional when a full re-installation of the GNOME edition was attempted. Everything went fine until I added the latest updates to the system when a reboot revealed that it was impossible to boot into a desktop environment. Some will relish this as a challenge but I need to admit that I am not one of those. In fact, I tried out two Arch-based distros on the same PC and got the same results following a system update on each. So, my explorations of Antergos and Manjaro have to continue in virtual machines instead.

To get a working system, I gave Linux Mint 15 Cinnamon a go and that worked a treat. However, I couldn’t ignore that the cutting-edge distros that I tried before it all took exception to the onboard NVIDIA graphics. systemd has been implemented in all of these and it seems reasonable to think that it is coming to Linux Mint at some stage in the future, so I went about getting a graphics card to add into the machine. Having had good experiences with ATi’s Radeon in the past, I stuck with it even though it now is in the hands of AMD. Not being that fussed so long as there was Linux driver support, I picked up a Radeon HD 6450 card from PC World. Adding it into the PC was a simple matter of switching off the machine, slotting in the card, closing it up and powering it on again. Only later on did I set the BIOS to look for PCI Express graphics before anything else and I could have got away without doing that. Then, I made use of the Additional Drivers applet in the Linux Mint settings panel to add in the proprietary driver before restarting the machine to see if there were any visual benefits. To sort out the web browser font rendering, I used the Fonts applet in the same settings panel and selected full RGBA hinting. The improvement was unmissable, even if the fonts still did not look quite as they do on my main machine. Overall, there had been an improvement and a spot of future-proofing too.

That tinkering with the standby machine got me wondering about what I had on my main PC. As well as onboard Radeon graphics, it also gained a Radeon 4650 card for which 3D support wasn’t being made available to VMware Player by Ubuntu GNOME 12.10 or 13.04, and VMware wasn’t happy about this when a virtual machine was set to have 3D support. Adding the latest fglrx driver only ensured that I got a command line instead of a graphical interface. Issuing one of the following commands and rebooting was the only remedy:

sudo apt-get remove fglrx

sudo apt-get remove fglrx-updates

Looking at the AMD website revealed that they no longer support 2000, 3000 or 4000 series Radeon cards with their latest Catalyst driver, and the last version that did support them would not install on my machine since it was built for version 3.4.x of the Linux kernel. A new graphics card then was in order if I wanted 3D graphics in VMware VMs, and both GNOME and Cinnamon appear to need this capability. Another ASUS card, a Radeon HD 6670, duly was acquired and installed in a manner similar to the Radeon HD 6450 on the standby PC. Apart from not needing to alter the font rendering (there is a Fonts tab in GNOME Tweak Tool where this can be set), the only real difference was the need to add the Jockey software to my main PC for installation of the proprietary Radeon driver. The following command does this:

sudo apt-get install jockey-kde

When that was done, I issued the jockey-kde command and selected the first entry on the list. The machine worked as it should on restarting, apart from an AMD message in the bottom right-hand corner bemoaning unrecognised hardware. There had been two entries on that Jockey list with exactly the same name, so it was time to select the second of these and see how it went. On restarting, the incompatibility message had gone and all was well. VMware even started virtual machines with 3D support without any messages, so the upgrade did the needful there.

Hearing of someone doing two PC graphics card upgrades in a weekend may make you see them as an enthusiast, but my disinterest in computer gaming belies this. Maybe it highlights that Linux operating systems need 3D more than might be expected. The Cinnamon desktop environment now issues messages if it is operating in 2D mode with software 3D rendering, and GNOME always had the tendency to fall back to classic mode, as it had been doing when Sabayon was installed on my standby PC. However, there remain cases where Linux can rejuvenate older hardware and I installed Lubuntu onto a machine with ten-year-old technology (an 1100 MHz Athlon CPU, 1 GB of RAM and 60 GB of hard drive space in a case dating from 1998) and it works surprisingly well too.

It seems that having fancier desktop environments like GNOME Shell and Cinnamon means having the hardware on which they can run. For a while, I have been tempted by the possibility of a new PC since even my main machine is not far from four years old either. However, I also spied a pre-built bundle featuring an Intel Core i5-4670 CPU, 8 GB of Corsair Vengeance Pro Blue memory and a Gigabyte Z87-HD3 ATX motherboard (with a heatsink and fan for the CPU) for around £420. Even for someone who has used AMD CPUs since 1998, that does look tempting, but I’ll hold off before making any such upgrade decisions. Apart from exercising sensible spending restraint, waiting for Linux UEFI support to mature a little more may be no bad idea either.

Update 2013-06-23: The new graphics card in my main machine is working as it should and has reduced the number of system error report messages turning up too; maybe Ubuntu GNOME 13.04 didn’t fancy the old graphics card all that much. A rogue .fonts.conf file was found in my home area on the standby machine and removing it has improved how fonts are displayed on there immeasurably. If you find one on your system, it’s worth doing the same or renaming it to see if it helps. Otherwise, tinkering with the font rendering settings is another beneficial act and it even helps on Debian 6 too and that uses GNOME 2! Seeing what happens on Debian 7.1 could be something that I go testing sometime.

Getting an Epson Perfection 4490 Photo scanner going with Ubuntu GNOME Remix 12.10

7th March 2013

My Epson Perfection 4490 Photo scanner has been in my possession for a while now and it’s impossible to justify any replacement given that it works well and that digital photography has taken over from its film predecessor for me. Every time I install an operating system afresh, I need to reinstate it and last year’s installation of Ubuntu GNOME Remix 12.10 only saw me do the deed recently. When I did so, it was brought back to me that I’d never documented on here how this was done. Given that I sometimes use this place as a repository of stuff to which I need to refer again in the future, it seemed remiss of me, so here it is for you all.

Though I had XSane and SimpleScan already installed on the system, Sane wasn’t on there so I went and added it and a few other extras using the following command:

sudo apt-get install sane sane-utils libsane-extras

Then, it was on to the Epson website for their Perfection 4490 Photo Linux drivers, since Sane’s support for this scanner seemingly remains incomplete even though it pre-dates my move to Linux in 2007. Three files were needed and the following commands install them (depending on when you do this, the file names may be different, so just change them to whatever they are for you; it can be done with a single command too, as sketched after these three):

sudo dpkg -i iscan-data_1.22.0-1_all.deb
sudo dpkg -i iscan_2.29.1-5~usb0.1.ltdl7_i386.deb
sudo dpkg -i iscan-plugin-gt-x750_2.1.2-1_i386.deb
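For completeness, the single-command form simply passes all three package files to dpkg at once; the same caveat about file names applies here:

sudo dpkg -i iscan-data_1.22.0-1_all.deb iscan_2.29.1-5~usb0.1.ltdl7_i386.deb iscan-plugin-gt-x750_2.1.2-1_i386.deb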

With those in place, there was one other task that needed doing so that scanning could be done without resorting to running scanning software with sudo privileges. To free up the access to a normal user account, I needed a HAL device information file. These normally are in /usr/share/hal/fdi/ but they change with every installation, so any modifications that you may make there are going to be lost. Therefore, there is no point modifying either /usr/share/hal/fdi/preprobe/10osvendor/20-libsane.fdi or /usr/share/hal/fdi/preprobe/10osvendor/20-libsane-extras.fdi, where scanner information usually is to be found.

The first task in creating an fdi file was to issue the lsusb command and look for a line corresponding to my scanner. This is the one that I got:

Bus 001 Device 004: ID 04b8:0119 Seiko Epson Corp. Perfection 4490 Photo

From this, I gleaned the manufacturer ID and model ID as 04b8 and 0119, respectively. These are needed later on. Next, I needed to create the hal/fdi/preprobe/ folder structure under /etc since it wasn’t there. Then, I created epson4490photo.fdi in the bottom folder of the tree (/etc/hal/fdi/preprobe/epson4490photo.fdi) as follows:

sudo mkdir -p /etc/hal/fdi/preprobe/ && cd /etc/hal/fdi/preprobe/ && sudo touch epson4490photo.fdi

Then, I edited the new file using the following command:

gksu gedit epson4490photo.fdi &

When open, I added in the following text:

<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="usb">
      <!-- Epson Perfection 4490 Photo -->
      <match key="usb.vendor_id" int="0x04b8">
        <match key="usb.product_id" int="0x0119">
          <append key="info.capabilities" type="strlist">scanner</append>
          <merge key="scanner.access_method" type="string">proprietary</merge>
        </match>
      </match>
    </match>
  </device>
</deviceinfo>

It’s all in XML so the place to look is immediately beneath the scanner name comment. The int attributes of the two match elements immediately following the comment line are populated using the information from the lsusb command output with 0x prefixing both the manufacturer and model identifiers. The element with a key attribute of usb.vendor_id is the former and that with a key attribute of usb.product_id is the latter. With epson4490photo.fdi saved, I rebooted the machine to restart HAL and all was as I wanted it to be apart maybe from XSane making complaints that seemed not to be of any actual consequence. With Epson’s Image Scan! and Simple Scan on the PC, there’s no need to be bothered with those messages. Choice is good when you have it, especially when you have expended some effort to get that far.
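Incidentally, one way to check that the scanner is visible to a normal user account is SANE’s own device listing command, which comes with the sane-utils package added earlier:

scanimage -L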

Using a variant of Debian’s Iceweasel that keeps pace with Firefox

5th February 2013

Left to its own devices, Debian will leave you with an ever-ageing re-branded version of Firefox that was installed at the same time as the rest of the operating system. From what I have found, the main cause of this was Mozilla’s wanting to retain control of its branding and trademarks in a manner not in keeping with Debian’s Free Software rules. This didn’t affect just Firefox but also Thunderbird, Sunbird and Seamonkey, with Debian’s equivalents for these being IceDove, IceOwl and IceApe, respectively.

While you can download a tarball of Firefox from the web and use that, it’d be nice to get a variant that updates through Debian’s normal apt-get channels. In fact, IceWeasel does get updated whenever there is a new release of Firefox, even if these updates never find their way into the usual repositories. While I have been known to take advantage of the more frozen state of Debian compared with other Linux distributions, I don’t mind getting IceWeasel updated so that it isn’t a security worry.

The first step in so doing is to add the following lines to /etc/apt/sources.list using root access (using sudo, gksu or su to assume root privileges) since the file normally cannot be edited by normal users:

deb http://backports.debian.org/debian-backports squeeze-backports main
deb http://mozilla.debian.net/ squeeze-backports iceweasel-release

With the file updated and saved, the next step is to update the repositories on your machine using the following command:

sudo apt-get update

With the above complete, it is time to overwrite the existing IceWeasel installation with the latest one using an apt-get command that specifies the squeeze-backports repository as its source with the -t switch. While IceWeasel is installed from the iceweasel-release squeeze-backports repository, there are dependencies that need to be satisfied and these come from the main squeeze-backports one. The actual command used is below:

sudo apt-get install -t squeeze-backports iceweasel

While that was all that I needed to do to get IceWeasel 18.0.1 in place, some may need the pkg-mozilla-archive-keyring package installed too. For those needing more information than what’s here, there’s always the Debian Mozilla team.
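To confirm that the newer release is the one in use, asking the browser itself from a terminal should suffice:

iceweasel --version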

Installing the Cinnamon Desktop Environment on Sabayon Linux

26th January 2013

During the week, I did an update on my Sabayon system and GNOME 3.6 came on board without too much of a bother. There was no system meltdown or need for an operating system re-installation. However, there was one matter that rankled: adding and updating extensions from extensions.gnome.org was impossible. The process would create a new folder in ~/.local/share/gnome-shell/extensions/ but not fill it with anything at all. Populating it from my Ubuntu GNOME Remix 12.10 machine didn’t seem to achieve the needful either and I am left wondering if it is down to the version of GNOME Shell being 3.6.2. However, even adding an entry for the current version of GNOME Shell to metadata.json for one plugin didn’t appear to do what I wanted, so resolving this issue needs further enquiry.

In the meantime, I added the Cinnamon desktop environment using the following command and will be using that from now on. If the GNOME Shell extension issue ever gets sorted, I may move back, but there is no rush. GNOME 3.8 sounds like it’s bringing an interesting option that makes use of the approach Linux Mint took for version 12 of that distribution and I can await that, especially if it avoids the need for adding extensions on a personal basis like now.

sudo equo update && sudo equo install cinnamon

With the installation completed by the above command, it was a matter of logging out and choosing the Cinnamon entry (there is a 2D version too) from the session dropdown menu on the login screen to get it going. Then, it was a case of tweaking Cinnamon to my heart’s content. Getting a two-panel layout required logging out and in again, as well as choosing the appropriate setting in the Cinnamon Panel options tab. Next, I decided to check on what themes are available at cinnamon.linuxmint.org before settling on Cinnamint 1.6. It all feels very comfortable apart from not having the automatically growing list of workspaces that is a default offering in GNOME Shell. That goes against the design principles of Cinnamon though, so all that is left is the hope that someone makes an extension for it.
