Technology Tales

Adventures & experiences in contemporary technology

Getting rid of a Dropbox error message on a Linux-powered PC

24th September 2012

One of my PCs has ended up becoming a testing ground for a number of Linux distributions. The list has included openSUSE, Fedora, Arch and LMDE, with Sabayon being the latest incumbent. From Arch onwards in that list though, a message has appeared on loading the desktop with every one of them whenever I have had Dropbox’s client set up:

Unable to monitor entire Dropbox folder hierarchy. Please run “echo 100000 | sudo tee /proc/sys/fs/inotify/max_user_watches” and restart Dropbox to correct the problem.

Even applying the remedy that the message suggests won’t permanently fix the problem. For that, you need to edit /etc/sysctl.conf with superuser access and add the following line to it:

fs.inotify.max_user_watches = 100000

With that in place, you can issue the following command to fix the problem in the current session (assuming your user account is listed in /etc/sudoers):

sudo sysctl -p && dropbox stop && dropbox start
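If you want to confirm that the new limit has taken hold, the kernel parameter can be queried directly; either of the following should report 100000 once the change is in place:

sysctl fs.inotify.max_user_watches
cat /proc/sys/fs/inotify/max_user_watches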

A reboot should confirm that the message no longer appears. For a good while, I had ignored it, but curiosity eventually got me to find out how it could be stopped, and that led to what you find above.

Listing hardware information for Linux systems

3rd August 2012

Curiosity about the graphics card on my backup PC caused me to look for ways of getting this information without opening up the machine or searching for a manual. In the end, a solitary command did the job:

sudo lshw

If you are running it as root, the “sudo” prefix can be dropped, but the result is the same. As it happened, it gave me the information that I needed.
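Since it was only the graphics card that interested me, it is worth adding that lshw can restrict its output to a single device class, while -short gives a briefer summary of the whole machine; invocations like the following keep the output manageable:

sudo lshw -C display
sudo lshw -short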

Installing VMware Player 4.04 on Linux Mint 13

15th July 2012

Curiosity about the Release Preview of Windows 8 saw me running into bother when trying to see what it’s like in a VirtualBox VM. While doing some investigations on the web, I saw VMware Player being suggested as an alternative. Before discovering VirtualBox, I did have a licence for VMware Workstation and was interested in seeing what Player would have to offer. Then, it was limited to running virtual machines that were created using Workstation. Now, it can create and manage them itself, without any need to pay for the tool either. Registration on VMware’s website is a must for downloading it, but that carries no monetary cost.

Once I had downloaded Player from the website, I needed to install it on my machine. There are Linux and Windows versions, and it was the former that I needed; there also are 32-bit and 64-bit variants, so you need to know what your system is running (a quick way to check is noted below). With the file downloaded, you need to set it as executable, and the following command should do the trick once you are in the right directory:

chmod +x VMware-Player-4.0.4-744019.i386.bundle
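If you are unsure whether the 32-bit or the 64-bit bundle is the one to download, querying the machine’s architecture settles it: an output of x86_64 means 64-bit, while i686 or similar means 32-bit.

uname -m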

Then, the bundle needs execution as a superuser. With sudo access for my user account, it was a matter of issuing the following command and working through the installation screens to instate the Player software on the system:

sudo ./VMware-Player-4.0.4-744019.i386.bundle

Those screens proved easy for me to follow, so life would have been good if that were all that was needed to get Player working on my PC. Having Linux Mint 13 means that the kernel is of the 3.2 stock, and that means using a patch to finish off the Player installation because the required VMware kernel modules seem to fail silently to compile during the installation process. This only manifests itself when you attempt to start Player afterwards and a module installation screen appears. That wouldn’t be an issue in itself were it not for the compilation failure of the vmnet module and the subsequent inability to start VMware services on the machine. There is a prompt to peer into the log file for the operation, but that is a little uninformative for the non-specialist.

Rummaging around the web brought me to the requisite patch, which works for Player 4.0.3 and Workstation 8.0.2 by default. Doing some tweaking allowed me to make it work for Player 4.0.4 too. My first step was to extract the contents of the tarball to /tmp, where I could edit patch-modules_3.2.0.sh. Line 8 was changed to the following:

plreqver=4.0.4
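For anyone who prefers to script the change, the same edit can be made non-interactively; the tarball name below is only a placeholder for whatever the patch archive happens to be called, and the sed command assumes that line 8 holds the plreqver assignment as described above:

cd /tmp
tar -xf patch-archive.tar.gz
sed -i '8s/plreqver=.*/plreqver=4.0.4/' patch-modules_3.2.0.sh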

With the amendment saved, it was time to execute the shell script as a superuser, having made it executable beforehand. This can be accomplished using the following command:

chmod +x patch-modules_3.2.0.sh && sudo ./patch-modules_3.2.0.sh

With that completed successfully, VMware Player ran as it should. An installation of Windows 8 into a new VM ran very smoothly, and I was impressed with the performance and responsiveness of the operating system within a Player VM. There are a few caveats though. First, it doesn’t run at all well with VMware Tools, so it’s best to leave them uninstalled; it doesn’t seem to need them either, since it was possible to set the resolution to the same as my screen and use the CTRL+ALT+ENTER shortcut to drop in and out of full screen mode anyway. Second, the unattended Windows installation wasn’t the way forward for setting up the VM, but it was no big deal to have that experiment thwarted. The feature remains an interesting one though.

With Windows 8 running so well in Player, I was reminded of the sluggish nature of my Windows 7 VM and an issue with a Fedora 17 one too. The result was that I migrated the Windows 7 VM from VirtualBox to VMware and all is so much more responsive. Getting it there took not a little tinkering so that’s a story for another entry. On the basis of my experiences so far, I reckon that VMware Player will remain useful to me for a little while yet. Resolving the installation difficulty was worth that extra effort.

Changing to Nvidia Graphics Drivers on Linux Mint Debian Edition 64-bit

22nd April 2012

One way of doing this is to go to the Nvidia website and download the latest file from the relevant page on there. Then, the next stage is to restart your PC and choose rescue mode instead of the more usual graphical option. This drops you onto a command shell that requests your root password. Once this is done, you can move on to the next stage of the exercise: change to the directory where the *.run file is located and issue a command similar to the following:

bash NVIDIA-Linux-x86_64-295.40.run

The above was the latest file available at the time of writing so the name may have changed by the time that you read this. If the executable asks to modify your X configuration file, I believe that the best course is to let it do that. Editing it yourself or running nvidia-xconfig are alternative approaches if you so prefer.

Proprietary Nvidia drivers are included in the repositories for Linux Mint Debian Edition, so that may be a better course of action since you will get updates through the normal system update channels. In that case, start by issuing the following installation commands:

sudo apt-get install module-assistant
sudo apt-get install nvidia-kernel-common
sudo apt-get install nvidia-glx
sudo apt-get install kernel-source-NVIDIA
sudo apt-get install nvidia-xconfig

Once those have completed, issuing the following in turn will complete the job ahead of a reboot:

sudo m-a a-i nvidia
sudo modprobe nvidia
sudo nvidia-xconfig

If you reboot before running the above, like I did, you will get a black screen with a flashing cursor instead of a full desktop because X fails to load. The remedy is to reboot the machine, choose the rescue mode option, provide the root password and issue the three commands then (at this point, the sudo prefix can be dropped because it’s unneeded). Another reboot will see order restored and the new driver in place. Running the following at that point will check on things, as will the general appearance of everything:

glxinfo | grep render
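Beyond that, a couple of other quick checks can be reassuring: the first shows whether the nvidia kernel module is actually loaded, and the second confirms what display hardware the system sees.

lsmod | grep nvidia
lspci | grep -i vga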

Ridding Fedora of Unwanted Software Repositories

4th November 2010

Like other Linux distributions, Fedora has the software repository scheme of things for software installation and updating. However, it could do with having the ability to remove unwanted repositories through a GUI, but it doesn’t have one. What you need to do instead is switch to root in a terminal by issuing the command su - and entering your root password, before navigating to /etc/yum.repos.d/ to delete the troublesome [file name].repo file. Recently, I needed to do this after upgrading to Fedora 14 or Yum wouldn’t work from the command line, which is the way that I tend to update Fedora (yum -y update is the command that I use and it automatically does all installations unattended until it is finished doing what’s needed). The offending repository, or “Software Source” as these things are called in the GUI, belonged to Dropbox, and even disabling it didn’t make Yum operate from the command line as it should, so it had to go. Maybe Dropbox haven’t caught up with the latest release of Fedora, but that can be resolved another day.
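As a sketch of what that involves (the file name will vary; dropbox.repo is only an example of what the offending entry might be called):

su -
cd /etc/yum.repos.d/
ls *.repo
rm dropbox.repo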

Taking SUDO beyond Ubuntu

27th October 2010

Though some may call it introducing a security risk, being able to execute administrator commands in Ubuntu using SUDO and GKSU by default is handy. It’s not the only Linux distribution with the facility though, because the /etc/sudoers file is found in Debian too, and I plan to have a look into Fedora. The thing that needs to be done is to add the following line to the aforementioned file (you will need to do this as root):

[your user name] ALL=(ALL) ALL

Once that is done, you are all set. Just make sure that you’re using a secure password though, and removing the SUDO/GKSU permissions is as simple as reversing the change.
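Incidentally, the safer way to make that edit is with visudo, which checks the syntax of /etc/sudoers before saving and so guards against locking yourself out of administrative access; after switching to root, running it and adding the line shown above is all that is needed:

su -
visudo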

Update on 2011-12-03: The exact same can be done for both Arch Linux and Fedora. The same file locations apply too.

Ubuntu upgrades: do a clean installation or use Update Manager?

9th April 2009

Part of some recent “fooling” brought on by the investigation of what turned out to be a duff DVD writer was a fresh installation of Ubuntu 8.10 on my main home PC. It might have brought on a certain amount of upheaval, but it was nowhere near as severe as that following the same sort of thing with a Windows system. A few hours was all that was needed, but it raised the question as to whether it is better to do an upgrade every time a new Ubuntu release is unleashed on the world or to go for a complete virgin installation instead. With Ubuntu 9.04 in the offing, that question takes on a more immediate significance than it otherwise might do.

Various tricks make the whole reinstallation idea more palatable. For instance, many years of Windows usage have taught me the benefits of separating system and user files. The result is that my home directory lives on a different disk to my operating system files. Add to that the experience of being able to reuse that home drive across different Linux distros and even swapping from one distro to another becomes feasible. From various changes to my secondary machine, I can vouch that this works for Ubuntu, Fedora and Debian; the latter is what currently powers the said PC. You might have to use superuser powers to attend to ownership and access issues, but the portability is certainly there and it applies to anything kept on other disks too.
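For anyone wondering what that separation looks like in practice, it comes down to an entry in /etc/fstab that mounts the second disk at /home; the device name and filesystem below are only examples and will differ from one system to another:

/dev/sdb1  /home  ext3  defaults  0  2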

Naturally, there’s always the possibility of losing programs that you have had installed, but losing the clutter can be liberating too. However, assembling a script made up of one or more apt-get install commands can allow you to get many things back at a stroke. For example, I have a test web server (Apache/MySQL/PHP/Perl) set up, so this would be how I’d get everything back in place before beginning further configuration. It might be no bad idea to back up your collection of software sources either; I have yet to add all of the ones that I have been using back into Synaptic. Then there are closed source packages such as VirtualBox (yes, I know that there is an open source edition) and Adobe Reader. After reinstating the former, all my virtual machines were available for me to use again without further ado. Restoring the latter allowed me to grab version 9.1 (probably more secure anyway) and it inveigles itself into Firefox now too, so the number of times that I need to go through the download shuffle before seeing the contents of a PDF is much reduced, though not completely eliminated, by the Windows-like ability to see a PDF loaded in a browser tab. Moving from software to hardware for a moment, it looks like any bespoke actions such as my activating an Epson Perfection 4490 Photo scanner need to be repeated, but that was all that I needed to do. Getting things back into order is not so bad, but you need to allow a modicum of time for this.
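As an illustration of such a script, the following would restore a basic Apache/MySQL/PHP/Perl stack in one go; the package names reflect the Ubuntu releases of the time and are offered only as an example of the idea:

#!/bin/bash
# reinstate the test web server stack after a clean installation
sudo apt-get update
sudo apt-get install -y apache2 mysql-server php5 libapache2-mod-php5 perl libapache2-mod-perl2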

What I have discussed so far are what might be categorised as the common or garden aspects of a clean installation but I have seen some behaviours that make me wonder if the usual Ubuntu upgrade path is sufficiently complete in its refresh of your system. The counterpoint to all of this is that I may not have been looking for some of these things before now. That may apply to my noticing that DSLR support seems to be better with my Canon and Pentax cameras both being picked up and mounted for me as soon as they are connected to a PC, the caveat being that they are themselves powered on for this to happen. Another surprise that may be new is that the BBC iPlayer’s Listen Again works without further work from the user, a very useful development. It very clearly wasn’t that way before I carried out the invasive means. My previous tweaking might have prevented the in situ upgrade from doing its thing but I do see the point of not upsetting people’s systems with an overly aggressive update process, even if it means that some advances do not make themselves known.

So what’s my answer regarding which way to go once Ubuntu Jaunty Jackalope appears? For the sake of avoiding initial disruption, I’d be inclined to go down the Update Manager route first while reserving the right to do a fresh installation later on. All in all, I am left with the gut feeling that the jury is still out on this one.
