Technology Tales

Adventures & experiences in contemporary technology

A way to get Rigo working again in Sabayon

23rd October 2013

After having Sabayon running on a PC until it came to pieces following an attempted version upgrade, I went away from the distro for a while and Linux Mint now runs on the aforementioned machine. It was only a certain curiosity that got me installing it into a virtual machine on VirtualBox to see whether my command line method of keeping the system up to date was the cause of the breakage or whether rolling and partially rolling distros have a certain fragility that is not seen in their discrete release counterparts.

Recently, that ran into a hitch: the Sabayon package manager Rigo failed to start up for me. After waiting to see if it sorted itself out on its own, I looked into returning to those command line ways, and that line of enquiry led me to a method of restoring Rigo’s functionality from Sabayon’s wiki page on the underlying Entropy. The first step was to issue a command to become root:

su

That needed the appropriate password and the next command issued updated Sabayon’s repositories:

equo update

Once that had done its thing, it was time to install new versions of Entropy and Rigo:

equo install entropy rigo

With that complete, it was time to exit the root session with the exit command. Then, it was time to try running Rigo and it worked as expected. Any thoughts of adding in the superseded Sulfur (Rigo’s predecessor) were banished on seeing that success.

Creating a test web server using Ubuntu Server 13.04 and VirtualBox

1st September 2013

Having seen Linux Format cover tools like Vagrant and Puppet that manage virtual machines, I have been attracted by the prospect of a virtual web server running on my own PC. Certainly, having the LAMP software stack in a VM means that the corresponding tools don’t need to be added to a host system should its operating system need a fresh installation.

As intriguing as tools like Vagrant may be, I decided that I needed to learn a bit more about getting server instances set up in VirtualBox anyway. Thus, I went and downloaded the latest version of Ubuntu Server and gave that a go. One lesson that I learned was that Bridged Networking needs to be added to the VM before installation of the operating system, unless you fancy overcoming the challenge of getting Ubuntu Server to recognise an altered or additional network interface. In my case, I added an extra adapter for the Bridged Networking and left the original in place as NAT. The reason for having Bridged Networking set up is that it allows access to the virtual web server from the host once you know its IP address, and that information can be obtained by executing the ifconfig command on the virtual machine.
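
For anyone preferring to do this from the command line rather than the VirtualBox GUI, a second bridged adapter can be added with VBoxManage while the VM is powered off. The sketch below assumes the VM is called "Ubuntu Server" and that the host’s wired interface is eth0, so adjust both names to suit your own setup:

VBoxManage modifyvm "Ubuntu Server" --nic2 bridged --bridgeadapter2 eth0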

With the networking sorted, the next step was to install the 64-bit edition of Ubuntu Server. Unlike its desktop counterpart, this is all driven by text menus but remains fairly intuitive, and there is hardly anything there that you wouldn’t see with another Linux distribution. A useful extra is a menu for selecting the types of server services that you’d like installed. From this, I chose the web server and SSH options, and I seem to remember that there was a database server option too. Had there been an FTP server option, I would have chosen that as well, but it was no ordeal to add ProFTPD later on anyway.

All of this setup was done through the VirtualBox GUI just to keep life more straightforward. Even so, I only selected 12 MB of video memory and was tempted to cut the overall memory back from 512 MB, but I have left things be for now. However, what I have begun to do is start and stop the virtual machine from the command line since servers are headless operations anyway. With SSH enabled, there is little need to have the VirtualBox GUI going. The command for starting the server is below:

VBoxManage startvm "Ubuntu Server" --type=headless

There is a VBoxHeadless command for the same end too, but VBoxManage does what I need. The startvm option is what tells VBoxManage to start the server and the virtual machine’s name is enclosed in quotes, while the --type=headless switch ensures that no window pops up. To stop the virtual web server cleanly, a command like the following is needed:

VBoxManage controlvm "Ubuntu Server" acpipowerbutton

Again, the VBoxManage command gets used, and the acpipowerbutton option ensures that a clean shutdown is performed; not doing so results in the server not fully starting up the next time around, according to my experiences thus far. Getting the virtual web server to start and stop along with the host machine itself looks more complex, so I plan to leave that experiment for a while yet.
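
Incidentally, it is easy enough to check from the host what state the virtual machine is in before and after issuing those commands; the following are standard VBoxManage queries rather than anything specific to this setup:

VBoxManage list runningvms
VBoxManage showvminfo "Ubuntu Server" | grep State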

A little look at Debian 7.0

12th May 2013

Having a virtual machine with Debian 6 on there, I was interested to hear that Debian 7.0 is out. In another VM, I decided to give it a go. Installing it using the Net Install CD image took a little while but proved fairly standard with my choice of the GUI-based option. GNOME was the desktop environment with which I went, and all started up without any real fuss after the installation was complete; it even disconnected the CD image from the VM before rebooting, a common failing in many Linux operating system installations that lands you back in the installation cycle again unless you kill the virtual machine.

Though the GNOME desktop looked familiar, a certain amount of conservatism reigned too since the version was 3.4.2. That was no bad thing since raiding the GNOME Extensions site for a set of mature extensions was made all the easier. In fact, a certain number of these were included in the standard installation anyway, and the omission of a power off entry on the user menu was corrected as a matter of course without needing any intervention from this user. Adding to what already was there made for a more friendly desktop experience in a short period of time.

Debian’s variant of Firefox, Iceweasel, is version 10, so a bit of tweaking is needed to get the latest version. LibreOffice is there now too, though it’s version 3.5 rather than 4. Shotwell too is the older 0.12 and not the 0.14 that is found in the likes of Ubuntu 13.04. As it happens, GIMP is about the only piece of software with a current version and that is 2.8; a slower release cycle may be the cause of that though. All in all, the general sense is that older versions of current software are being included for the sake of stability, which is sensible, so I am not complaining very much about this at all.

The reason for not complaining is that the very reason for having a virtual machine with Debian 6 on there is to have Zinio and Dropbox available too. Adobe’s curtailment of support for Linux means that any application needing Adobe AIR may not work on a more current Linux distribution. That affects Zinio, so I’ll be retaining a Debian 6 instance for a while yet unless a bout of testing reveals that a move to the newer version is possible. As for Dropbox, I am not sure that I can recall why I moved it onto Debian, but it’s working well on there so I am in no hurry to move it over either. There are times when slower software development cycles are better…

Using a variant of Debian’s Iceweasel that keeps pace with Firefox

5th February 2013

Left to its own devices, Debian will leave you with an ever-ageing rebranded version of Firefox that was installed at the same time as the rest of the operating system. From what I have found, the main cause of this was Mozilla’s wanting to retain control of its branding and trademarks in a manner not in keeping with Debian’s Free Software rules. This didn’t affect just Firefox but also Thunderbird, Sunbird and Seamonkey, with Debian’s equivalents for these being IceDove, IceOwl and IceApe, respectively.

While you can download a tarball of Firefox from the web and use that, it’d be nice to get a variant that updates through Debian’s normal apt-get channels. In fact, Iceweasel does get updated whenever there is a new release of Firefox, even if these updates never find their way into the usual repositories. While I have been known to take advantage of the more frozen state of Debian compared with other Linux distributions, I don’t mind getting Iceweasel updated so it isn’t a security worry.

The first step in doing so is to add the following lines to /etc/apt/sources.list with root access (via sudo, gksu or su), since the file cannot normally be edited by ordinary users:

deb http://backports.debian.org/debian-backports squeeze-backports main
deb http://mozilla.debian.net/ squeeze-backports iceweasel-release

With the file updated and saved, the next step is to update the repositories on your machine using the following command:

sudo apt-get update

With the above complete, it is time to overwrite the existing Iceweasel installation with the latest one using an apt-get command that specifies the squeeze-backports repository as its source via the -t switch. While Iceweasel itself comes from the iceweasel-release squeeze-backports repository, there are dependencies that need to be satisfied and these come from the main squeeze-backports one. The actual command used is below:

sudo apt-get install -t squeeze-backports iceweasel

While that was all that I needed to do to get Iceweasel 18.0.1 in place, some may need the pkg-mozilla-archive-keyring package installed too. For those needing more information than what’s here, there’s always the Debian Mozilla team.
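
Should the keyring be needed, installing it is a one-liner much like the rest; this is offered as a possibility rather than something that I had to do myself:

sudo apt-get install pkg-mozilla-archive-keyring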

Getting rid of a Dropbox error message on a Linux-powered PC

24th September 2012

One of my PCs has ended up becoming a testing ground for a number of Linux distributions. The list has included openSUSE, Fedora, Arch and LMDE, with Sabayon being the latest incumbent. From Arch onwards in that list though, a message has appeared on loading the desktop with every one of these whenever I have Dropbox’s client set up on there:

Unable to monitor entire Dropbox folder hierarchy. Please run “echo 100000 | sudo tee /proc/sys/fs/inotify/max_user_watches” and restart Dropbox to correct the problem.

Even applying the remedy that the message suggests won’t permanently fix the problem. For that, you need to edit /etc/sysctl.conf with superuser access and add the following line to it:

fs.inotify.max_user_watches = 100000

With that in place, you can issue the following command to fix the problem in the current session (assuming your user account is listed in /etc/sudoers):

sudo sysctl -p && dropbox stop && dropbox start

A reboot should demonstrate that the message no longer appears. For a good while, I had ignored it, but curiosity eventually got me to find out how it could be stopped and that led to what you find above.
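
If you want to confirm that the new limit has stuck after a reboot, a quick look at the relevant file should report 100000; this is just an optional check rather than part of the fix itself:

cat /proc/sys/fs/inotify/max_user_watches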

Removing VMware Player from Linux Mint Debian Edition

4th August 2012

A whole slew of updates has appeared for my Linux Mint Debian Edition PC. However, to instate them, I needed to remove VMware Player and this is the command for doing so:

sudo vmware-installer -u vmware-player

It worked in my case and my system updates are in progress as I write this. The same command should work for other Linux distros where VMware Player was installed using the *.bundle installer. VMware Player remains in place on my main PC though so I am not ditching it just yet, even if I have to be careful when running it on Linux Mint 13 so as not to freeze the system on myself.
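
If you are unsure of the exact product name to pass to the -u switch, the installer should be able to list what it has installed; I did not need this myself, so treat it as a possible extra rather than a tested step:

sudo vmware-installer -l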

Adding Software to Arch Linux from the AUR

3rd December 2011

There are packages absent from the Arch Linux repositories that could come in useful. When you are after one of these, it’s time to search the Arch User Repository (AUR). In here, I have found the likes of Microsoft Core Fonts, Adobe Reader and Dropbox. There may be others, but these examples are what come to mind as I write this. In time, it may be that packages make it from the AUR into the Arch community repository, but you have to use the former if you cannot wait.

Just search the AUR for what you want and download the tarball (tar.gz file) from the webpage where you find it. Then, I recommend extracting it to /tmp, where clearance at boot time means that you don’t need to tidy up yourself. Go into the appropriate subfolder in /tmp (acroread for Adobe Reader, for instance) and issue the following command:

makepkg

This will attempt to create a package file in the working directory for installation by pacman. If dependencies are absent, you will be told, and these may need another AUR search in some cases, though most are included in the repositories. Once dependencies have been sorted, just issue the makepkg command again to create the xz file that pacman needs to perform the installation. To do so, issue the following command from the same directory, either as root or by using sudo if your user account has such privileges:

pacman -U *.xz

There usually is but one xz archive in a package folder so I have been taking the easy route of not looking up the name all of the time. Of course, you can do that for safety if you want.
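
To put all of that in one place, the whole sequence might look something like the following for a hypothetical Dropbox package; the tarball name and folder are examples rather than anything definitive:

cd /tmp
tar -xzf ~/Downloads/dropbox.tar.gz    # example tarball name; yours will differ
cd dropbox                             # folder name matches the package
makepkg                                # build the package
sudo pacman -U *.xz                    # install the result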

With pacman not looking at the AUR, you have to do more work to get upgrades to happen if you want to avoid repeating the above process all of the time. There is a package in the AUR called yaourt that needs package-query from the same place as well. Before either of these, yajl needs to be installed from one of the default repositories. Once yaourt is in place, the following does the updates for you:

yaourt -Syu --aur

Again, it might be best to run this as root or using sudo, though that produces messages from makepkg about not running it as a privileged user; I reckon that those can be ignored. When I tried it, the Citrix update failed though the Dropbox one succeeded, an experience that might be worth bearing in mind. Saying that, I have found installing and updating software from the AUR not to be too onerous a process so far. Anything that gives a little more freedom can only be a good thing.

Sorting out MySQL on Arch Linux

5th November 2011

Seeing Arch Linux running so solidly in a VirtualBox virtual machine has me contemplating whether I should have it installed on a real PC. Saying that, recent announcements regarding the implementation of GNOME 3 in Linux Mint have caught my interest, even if the idea of using a rolling distribution as my main home operating system still has a lot of appeal for me. Having an upheaval come my way every six months when a new version of Linux Mint is released is the main cause of that.

While remaining undecided, I continue to evaluate the idea of Arch Linux acting as my main OS for day-to-day home computing. Towards that end, I have set up a working web server instance on there using the usual combination of Apache, Perl, PHP and MySQL. Of these, it was MySQL that went the least smoothly of all because the daemon wouldn’t start for me.

It was then that I started to turn to Google for inspiration, and a range of actions resulted that combined to give the result that I wanted. One problem was a lack of disk space caused by months of software upgrades. Since package managers in other Linux distros allow you to clear disk space of obsolete installation files, I decided to see if it was possible to do the same with pacman, the Arch Linux command line package manager. The following command, executed as root, cleared about 2 GB of cruft for me:

pacman -Sc

The S in the switch tells pacman to perform package database synchronization while the c instructs it to clear its cache of obsolete packages. In fact, using the following command as root every time an update is performed both updates software and removes redundant or outmoded packages:

pacman -Syuc

So I don’t forget the needful housekeeping, this will be what I use in future with the y being the switch for a refresh and the u triggering a system upgrade. It’s nice to have everything happen together without too much effort.
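
Out of curiosity, you can see how much space the package cache occupies before and after the clear-out with a quick du command; this is merely an optional check:

du -sh /var/cache/pacman/pkg/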

To do the required debugging that led me to the above along with other things, I issued the following command:

mysqld_safe --datadir=/var/lib/mysql/ &

This starts up the MySQL daemon in safe mode if all is working properly, which it wasn’t in my case. Nevertheless, it creates a useful log file called myhost.err in /var/lib/mysql/, and this gave me the messages that allowed the debugging of what was happening. It led me to installing net-tools and inetutils using pacman; it was the latter of these that put hostname on my system and got the MySQL server startup a little further along. Other actions included unlocking the ibdata1 data file and removing the ib_logfile0 and ib_logfile1 files so as to gain something of a clean sheet. The kill command was used to shut down any lingering mysqld sessions too. To ensure that the ibdata1 file was unlocked, I executed the following commands:

mv ibdata1 ibdata1.bad
cp -a ibdata1.bad ibdata1

These renamed the original and then created a new duplicate of it, with the -a switch on the cp command preserving ownership, permissions and timestamps rather than doing a plain copy. Along with the various file operations, I also created a link to my.cnf, the MySQL configuration file on Linux systems, in /etc using the following command executed as root:

ln -s /etc/mysql/my.cnf /etc/my.cnf

While I am unsure if this made a real difference, I also uncommented the lines in the same file that pertained to InnoDB tables. What directed me to these were complaints from mysqld_safe in the myhost.err log file. All I did was to uncomment the lines beginning with “innodb”; these were 116-118, 121-122 and 124-127 in my configuration file, but the numbering may differ in yours.
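
For illustration, those entries correspond to the InnoDB block in the stock sample configuration, so after the edit the lines looked roughly like the ones below; the paths and sizes are the shipped defaults and yours may well differ:

innodb_data_home_dir = /var/lib/mysql
innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
innodb_buffer_pool_size = 16M
innodb_additional_mem_pool_size = 2M
innodb_log_file_size = 5M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
innodb_lock_wait_timeout = 50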

After all of the above, the MySQL daemon ran happily and, more importantly, started when I rebooted the virtual machine. Thinking about it now, I believe that it was a lack of disk space, the locking of a data file and the lack of InnoDB support that were stopping the MySQL service from running. Running commands like mysqld start wasn’t yielding useful messages, so a lot of digging was needed to get the result that I needed. In fact, that’s one of the reasons why I am sharing my experiences here.

In the end, creating databases and loading them with data was all that was needed for me to start seeing functioning websites on my (virtual) Arch Linux system. It turned out to be another step on the way to making it workable as a potential replacement for the Linux distributions that I use most often (Linux Mint, Fedora and Ubuntu).

All Change?

19th September 2011

Could 2011 be remembered as the year when the desktop computing interface got a major overhaul? One part of this, Windows 8, won’t be with us until next year but there has been enough happening so far this year that has resulted in a lot of comment. With many if not all of the changes, it is possible to detect the influence of interfaces used on smartphones. After all, the carryover from Windows Phone 7 to the new Metro interface is unmistakeable.

Two developments in the Linux world have spawned a hell of an amount of comment: Canonical’s decision to develop Unity for Ubuntu and the arrival of GNOME 3. While there have been many complaints about the changes made in both, there must be a fair few folk who are just getting on with using them without complaint. Maybe there are many who even quietly like the new interfaces. While I am not so sure about Unity, I surprised myself by taking to GNOME Shell so much that I installed it on Linux Mint. It remains a work in progress as does Unity but it’ll be very interesting to see it mature. Perhaps a good number of the growing collection of GNOME Shell plugins could make it into the main codebase. If that were to happen, I could see it being welcomed by a good few folk.

There was little doubt that the changes in GNOME 3 looked daunting, so Ubuntu’s taking a different approach is understandable until you come to realise how much change that involves anyway. With GNOME 3 working so well for me, I feel disinclined to dally very much with Unity at all. In fact, I am writing these words on a Toshiba laptop running UGR, effectively Ubuntu running GNOME 3, and that could become my main home computing operating system in time.

For those who find these changes not to their taste, there are alternatives. Some Linux distributions are sticking with GNOME 2 as long as they can and there apparently has been some mention of a fork to keep a GNOME 2 interface available indefinitely. However, there are other possibilities such as LXDE and XFCE out there too. In fact, until GNOME 3 won me over, LXDE was coming to mind as a place of safety until I learned that Linux Mint was retaining its desktop identity. As always, there’s KDE too but I have never warmed to that for some reason.

The latest version of OS X, Lion, also included some changes inspired by iOS, the operating system that powers both the iPhone and the iPad. However, while the current edition of PC Pro highlights some disgruntlement in professional circles regarding Apple’s direction, the changes do not seem to have aroused the kind of ire that has been abroad in the world of Linux. Is it because Linux users want to feel that they are in charge, while iMac and MacBook users are content to have decisions made for them so long as everything just works? Speaking for myself, the former description seems to fit me, though having choices means that I can reject decisions that I do not like so much.

At the time of writing, the release of a developer preview of the next version of Windows has been generating a lot of attention. It also appears that changes are headed for the Windows user too. However, I get the sense that a more conservative interface option will be retained and that could be essential for avoiding the alienation of corporate users. After all, I cannot see the Metro interface gaining much favour in the working environment when so many of us have so much to do. Nevertheless, I plan to get my hands on the developer preview to have a look (the weekend proved too short for this). It will be very interesting to see how the next version of Windows develops and I plan to keep an eye on it as it does so.

It now looks as if many will have their work cut out if they are to avoid where desktop computing interfaces are going. Established paradigms are being questioned, particularly as a result of touch interfaces on smartphones and tablets. Wii and Kinect have involved other ways of interacting with computers too so there’s a lot of mileage in rethinking how we work with computers. So far, I have been able to deal with the changes in the world of Linux but I am left wondering at the changes that Microsoft is making. After Vista, they need to be careful and they know that. Maybe, they’ll be better at getting users through changes in computing interfaces than others but it’ll be very interesting to see what happens. Unlike open source community projects, they have the survival of a massive multinational at stake.

On Upgrading to Linux Mint 11

31st May 2011

For a Linux distribution that focuses on user-friendliness, it does surprise me that Linux Mint offers no seamless upgrade path. In fact, the underlying philosophy is that upgrading an operating system is a risky business. However, I have been doing in-situ upgrades with both Ubuntu and Fedora for a few years without any real calamities. A mishap with a hard drive that resulted in lost data in the days when I mainly was a Windows user puts this into sharp relief. These days, I am far more careful, but I thought nothing of sticking a Fedora DVD into a drive to move my Fedora machine from 14 to 15 recently. Apart from a few rough edges and the need to get used to GNOME 3 and make it a better fit for me, there was no problem to report. The same sort of outcome used to apply to those online Ubuntu upgrades that I was accustomed to doing.

The recommended approach for Linux Mint is to back up your package lists and your data before the upgrade. Doing the former is a boon because it automates the adding of the extras that a standard CD or DVD installation doesn’t include. While I did do a little backing up of data, it wasn’t total because I know how to identify my drives and take my time over things. Apache settings and the contents of MySQL databases were my main concern because of where these are stored.
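
As an aside, a package list can also be captured and replayed by hand with dpkg; this is a generic Debian-family approach rather than the exact method that Mint’s own backup tool uses, so consider it a rough sketch:

dpkg --get-selections > package-list.txt
sudo dpkg --set-selections < package-list.txt
sudo apt-get dselect-upgrade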

When I was ready to do so, I popped a DVD in the drive and carried out a fresh installation into the partition where my operating system files are kept. Being a Live DVD, I was able to set up any drive and partition mappings with reference to Mint’s Disk Utility. What didn’t go so well was the GRUB installation, and it was due to the choice that I made on one of the installation screens. Despite doing an installation of version 10 just over a month ago, I had overlooked an intricacy of the task and placed GRUB on the operating system files partition rather than at the top level for the disk where it is located. Instead of trying to address this manually, I took the easier and more time-consuming step of repeating the installation like I did the last time. If there was a graphical tool for addressing GRUB problems, I might have gone for that instead, but am left wondering at why there isn’t one included at all. Maybe it’s something that the people behind GRUB should consider creating unless there is one out there already about which I know nothing.

With the booting problem sorted, I tried logging in only to find a problem with my desktop that made the system next to unusable. It was back to the DVD and I moved many of the configuration files and folders (the ones with names beginning with a “.”) from my home directory in the belief that there might have been an incompatibility. That action gained me a fully usable desktop environment but I now think that the cause of my problem may have been different to what I initially suspected. Later I discovered that ownership of files in my home area elsewhere wasn’t associated with my user ID though there was no change to it during the installation. As it happened, a few minutes with the chown command were enough to sort out the permissions issue.
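
For anyone meeting the same thing, the command in question amounts to something like the following, with username being a placeholder for your own account name:

sudo chown -R username:username /home/username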

The restoration of the extra software that I had added beyond what standardly gets installed took its share of time, but the use of a previously prepared list made things so much easier. It didn’t work entirely smoothly because some packages couldn’t be found the first time around, so another pass was needed. Nevertheless, that is nothing compared to the effort needed to do the same thing by issuing one installation command at a time. Once the usual distribution software updates were in place, all that was left was to update VirtualBox to the latest version, install a Citrix client and add a PHP plugin to NetBeans. Then, next to everything was in place for me.

Next, Apache settings were restored as were the databases that I used for offline web development. That nearly was all that was needed to get offline websites working but for the need to add an alias for localhost.localdomain. That required installation of the Network Settings tool so that I could add the alias in its Hosts tab. With that out of the way, the system had been settled in and was ready for real work.

Network Settings on Linux Mint 11
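
For reference, the alias amounts to an extra name on the loopback line of /etc/hosts; whether added by the Network Settings tool or by hand, the result looks something like this:

127.0.0.1    localhost localhost.localdomain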

In light of some of the glitches that I saw, I can understand the level of caution regarding a more automated upgrade process on the part of the Linux Mint team. Even so, I still wonder if the more manual alternative that they have pursued brings its own problems in the form of those that I met. The fact that the whole process took a few hours in comparison to the single hour taken by the in-situ upgrades that I mentioned earlier is another consideration that makes you wonder if it is all worth it every six months or so. Saying that, there is something to letting a user decide when to upgrade rather than luring one along to a new version, a point that is more than pertinent in light of the recent changes made to Ubuntu and Fedora. Whichever approach you care to choose, there are arguments in favour as well as counterarguments too.
