Technology Tales

Adventures & experiences in contemporary technology

Customising Nautilus (or Files) in Ubuntu GNOME 13.04

12th September 2013

The changes made to Nautilus, otherwise known as Files, in GNOME Shell 3.6 were contentious and the response of the Linux Mint team was to create their own variant, called Nemo, from the previous version of the application. On the Cinnamon or MATE desktop environments, the then latest version of GNOME’s file manager would have looked like a fish out of water without the application menu that it places in the top panel of the GNOME Shell desktop. It is possible to make a few modifications that help Nautilus to look more at home on those Linux Mint desktops, and I have collected them here because they are useful for GNOME Shell users too. Here they are in turn.

Adding Application Menu entries to Location Options Menu

The Location Options menu is what you get on clicking the button with the cog icon on the right-hand side of the application’s location bar. Using Gsettings, it is possible to make that menu include the sort of entries that are in the application menu in the GNOME Shell panel at the top of the screen. These include an entry for closing the whole application as well as setting its preferences (or options). Running the following command does just that (if it does not work as it should, try changing the single and double quotes to those understood by a command shell):

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides '@a{sv} {"Gtk/ShellShowsAppMenu": <int32 0>}'

Adding in the Remove App Menu GNOME Shell extension will clean up the GNOME Shell a little by removing the application menu altogether. If, for some reason, you wish to restore the default behaviour, then the following command does the required reset:

gsettings set org.gnome.settings-daemon.plugins.xsettings overrides '@a{sv} {}'
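
Either way, the override currently in force can be read back with a quick check like the one below, which is harmless because it only inspects the setting:

gsettings get org.gnome.settings-daemon.plugins.xsettings overrides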

Stopping Hiding of the Application Title Bar When Maximised

By default, GNOME Shell hides the title bars of GNOME applications such as Nautilus when their windows are maximised, and that is how Nautilus now behaves out of the box. Changing things so that the title bar is kept on maximised windows can be as simple as adding in the ignore_request_hide_titlebar extension. The trouble with GNOME Shell extensions is that they can stop working when a new version of GNOME Shell arrives, so there’s another option: editing metacity-theme-3.xml in /usr/share/themes/Adwaita/metacity-1. The file can be opened with superuser privileges using the following command:

gksudo gedit /usr/share/themes/Adwaita/metacity-1/metacity-theme-3.xml

With the file open, it is a matter of replacing instances of ' has_title="false" ' with ' has_title="true" ', saving it and reloading GNOME Shell. This change may persist across different versions of GNOME Shell should the extension not do so.
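
If hand editing does not appeal, a sed one-liner should make the same substitution in place; take a backup copy of the file first, just in case:

sudo sed -i 's/has_title="false"/has_title="true"/g' /usr/share/themes/Adwaita/metacity-1/metacity-theme-3.xml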

Disabling Recursive Search

This discovery is what led me to bundle these customisations into an entry on here in the first place. In Nemo and older versions of Nautilus, just typing with the application open would take you down a list towards the file that you wanted. That behaviour was replaced by an automatic recursive search from GNOME Shell 3.6 onwards, with the search functionality extended beyond the folder that was open in the file manager to its subdirectories. To restrict searching to the open folder or directory again, you need to install a patched version of Nautilus using the following commands:

sudo add-apt-repository ppa:dr3mro/personal
sudo apt-get update && sudo apt-get upgrade

The first of these adds a new repository containing the patched version of Nautilus, while the second pair of commands installs it. With that done, it is time to issue the following command:

gsettings set org.gnome.nautilus.preferences enable-recursive-search false

That sets the value of the new enable-recursive-search option to false, so that searching stays within an open directory. The option can also be found using Dconf-Editor in the following hierarchy: org -> gnome -> nautilus -> preferences. The GNOME project team’s devotion to minimalism is robbing users of some options, and this would be a good one to have by default too. Maybe the others should be treated in the same way, even if you need to use Gsettings or Dconf-Editor to change them to avoid clutter. Having GNOME Tweak Tool able to set them all would be even better.
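
For those who prefer the command line to Dconf-Editor, the same setting can be inspected with dconf as well; this only reads the value, so there is no harm in trying it:

dconf read /org/gnome/nautilus/preferences/enable-recursive-search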

Adding Microsoft Core Fonts to Fedora 19

6th July 2013

While I have a previous posting from 2009 that discusses adding Microsoft’s Core Fonts to the then current version of Fedora, it struck me that I hadn’t laid out the series of commands that were used. Instead, I referred to an external and unofficial Fedora FAQ. That’s still there, but I also felt that I was leaving things a little to chance given how websites can disappear quite suddenly.

Even after nearly four years, it still amazes me that you cannot install Microsoft’s Core Fonts in Fedora as you would in Ubuntu, Linux Mint or even Debian. Therefore, the following series of steps is as necessary now as it was then.

The first step is to add in a number of precursor applications such as wget for command line downloading of files from websites, cabextract for extracting the contents of Windows CAB files, rpm-build for creating RPM installers and the X font server (xfs) that chkfontpath needs:

sudo yum -y install rpm-build cabextract ttmkfdir wget xfs

Here, I have gone with terminal commands that use sudo, but you could become the superuser (root) for all of this and there are those who believe you should. The -y switch tells yum to go ahead without prompting you for permission before it does any installations. The next step is to download the Microsoft fonts spec file with wget:

sudo wget http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec

Once that is done, you need to install the chkfontpath package because the RPM for the fonts cannot be built without it:

sudo rpm -ivh http://dl.atrpms.net/all/chkfontpath

Once that is in place, you are ready to create the RPM file using this command:

sudo rpmbuild -ba msttcorefonts-2.0-1.spec

After the RPM has been created, it is time to install it:

sudo yum install --nogpgcheck ~/rpmbuild/RPMS/noarch/msttcorefonts-2.0-1.noarch.rpm

When installation has completed, the process is done. Because I used sudo, all of this happened in my own home area, so there was a need for some housekeeping afterwards. If you did it by becoming the root user, then the files would be in root’s home area instead, and that’s the scenario in the online FAQ.
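
As a quick test that the fonts have been registered, fontconfig can be asked to list them; Arial is as good a candidate as any:

fc-list | grep -i arial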

Relocating the Apache web server document root directory in Fedora 12

9th April 2010

So as not to deface anything that is available on the web, I have a tendency to set up an offline Apache server on a home PC for any tinkering away from the eyes of the unsuspecting public. Though Ubuntu is my mainstay for home computing, I do have a PC with Fedora installed and I have been trying to get an Apache instance starting automatically on there without success for a few months. While I can start it by running the following command as root, I’d rather not have more manual steps than are necessary.

httpd -k start

The command used by the system when it starts is different and, even when run manually as root, it failed with messages saying that it couldn’t find the directory where the web server files are stored. Here it is:

service httpd start

The default document root location on any Linux distribution that I have seen is /var/www and all is very well with this, but it isn’t a safe place to leave things if ever a re-installation is needed. Having needed to wipe /var after having it on a separate disk or partition for the sake of one installation, it doesn’t look so permanent to me. In contrast, you can safeguard /home by having it on another disk or in a dedicated partition, and it can be retained even when you change the distro that you’re using. Thus, I have got into the habit of keeping the web server document root in my home area and that is where I have been seeing the problem.
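
For the record, pointing Apache at the new location is just a matter of editing /etc/httpd/conf/httpd.conf so that the relevant directives refer to the chosen folder; the lines below are only a sketch for the Apache 2.2 syntax of the time, with /home/www standing in for whatever you have picked:

DocumentRoot "/home/www"
<Directory "/home/www">
    AllowOverride None
    Order allow,deny
    Allow from all
</Directory>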

Because of the access message, I tried using chmod and chgrp but to no avail. The remedy has to do with reassigning the security contexts used by SELinux. In Fedora, Apache will not work with the context user_home_t that is usually associated with home directories but needs httpd_sys_content_t instead. To find out what contexts are associated with particular folders, issue the following command:

ls -Z

The final solution was to create a user account whose home directory hosts the root of the web server file system, called www in my case. Then, I executed the following command as root to get things going:

chcon -R -h -t httpd_sys_content_t /home/www

It seems that even the root of the home directory has to have an appropriate security context (/home has home_root_t, so that might do the needful too). Without that, nothing will work even if all is well at the next level down. The switches for the chcon command translate as follows:

-R : recursive; applies changes to all files and folders within a directory.

-h : changes apply only to symbolic links and not to where they refer in the file system.

-t : alters context type.
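
One caveat worth adding: changes made with chcon can be lost if the file system ever gets relabelled, so, if the context needs to survive such an event, registering it with the SELinux policy is the more durable approach. The commands below are a sketch that assumes the policycoreutils-python package is installed:

semanage fcontext -a -t httpd_sys_content_t "/home/www(/.*)?"
restorecon -R /home/www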

It took a while for all of this stuff about SELinux security contexts to percolate through to the point where I was able to solve the problem. A spot of further inspiration was needed too, and that guided my search for the information that I needed. It’s well worth trying Linux Home Networking if you need more information. There are references to an earlier release of Fedora, but the content still applies to later versions right up to the current release, if my experience is typical.

Seeing how things develop

7th October 2009

One of the things that I do out of curiosity and self-interest is to keep tabs on what is happening with development versions of software that I use. It is for this reason that I always have a development version of WordPress on the go so as to ensure that the next stable version doesn’t bring my blog to its knees. There have been contributions from my own self to the development effort, mainly in the form of bug reports with the occasional bug fix too.

In the same vein, I have had a development version of Ubuntu installed in a VirtualBox virtual machine. There have been breakages and reinstallations along the way when an update results in disruption, but it is intriguing too to see how a Linux distribution comes to fruition. In the early days of Karmic Koala (9.10), everything was thrown together more loosely and advances looked less obvious. It is true to say that ext4 file system support was already in place, but the interface looked like a tweaked version of the standard GNOME desktop. Over time, the desktop has been customised and boot messages hidden out of sight. Eye candy like new icons and backgrounds has begun to entice, while other features such as an encrypted home folder, the Software Store and Ubuntu One came into place. Installation screens became slicker and boot times reduced. All of this may seem incremental, but revolutions can break things and you only have to look at the stuttering progress of Windows to see that. Even with all of these previews, I still plan to do a test run of the final revision of 9.10 before committing to putting it in place on my main home PC. Bearing the scars of misadventures over the years has taught me well.

Windows development is a less open process, but I have been partial to development versions there too. In fact, beta and release candidate installations of Windows 7 have convinced me to upgrade from Windows XP for those times when a Windows VM needs to be fired up in anger. A special offer has had me ordering in advance and sitting back and waiting. With my Windows needs being secondary to my Linux activities, I am not so fussed about taking my time and I have no intention of binning Windows XP just yet anyway.

The trouble with all of this previewing is that you get buffeted by the ongoing development. That is very true of Ubuntu 9.10 and has been very much part and parcel of the heave that brought WordPress 2.7 into being last year. Things get added and then removed as development tries to find that sweet spot, or a crash results and you need to rebuild things. It is small wonder that you are told not to put unfinished software on a production system. Another consequence might be that you really question why you are watching all of this and come to decide that what you already have is a place of safety in comparison to what’s coming. So far, that has never turned out to be true, but there’s no harm in looking before you leap either.

/sbin/mount.vboxsf: mounting failed with the error: Protocol error

19th April 2009

These days, my virtualisation needs are being well served by VirtualBox 2.2. It may be the closed source variant, but I have no complaints about it. Along with a number of Windows VMs, I also have one running Ubuntu 9.04 and, for the first time, I seem to have VirtualBox’s Guest Additions playing with a Linux guest as they should. Even the Shared Folders functionality is working.

However, I did get one problem when I tried out the last feature for the first time. The procedure is to issue a command like the following in a terminal session after creating the requisite directory in the file system and adding a host directory as a shared folder:

sudo mount -t vboxsf Music /mnt/host_music/

Above, Music is the name of the folder in the VirtualBox manager and /mnt/host_music is the directory in the guest file system. However, this returned the message at the head of this post at that first attempt:

/sbin/mount.vboxsf: mounting failed with the error: Protocol error

The solution thankfully turns out to be an easy one: reinstalling the Guest Additions, and that certainly did the trick for me. The cause would appear to have been an update to Ubuntu, and 9.04 is understandably in a state of flux at the moment (I suspect kernel upgrades because of my previous experiences). Regardless of this, it is good to know that it’s a problem with a simple fix, and I am seeing the niceties of a larger virtual screen together with automatic grabbing and releasing of the mouse cursor too. There may be a chance to explore the availability of these sorts of features to other Linux guests, but I have other things that I should be doing and there’s sunshine outside to be enjoyed.
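
As an afterthought, anyone tired of mounting shared folders by hand could try an /etc/fstab entry like the one below. I haven’t set this up myself, so treat it as a suggestion that assumes the vboxsf module from the Guest Additions is available at boot time:

Music /mnt/host_music vboxsf defaults 0 0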

Work locally, update remotely

4th December 2008

Here’s a trick that might have its uses: using a local WordPress instance to update your online blog (yes, there are plenty of applications that promise to edit your online blog but these need file permissions to the likes of xmlrpc.php to be opened up). Along with the right database access credentials and the ability to log in remotely, adding the following two lines to wp-config.php does the trick:

define('WP_SITEURL', 'http://localhost/blog');

define('WP_HOME', 'http://localhost/blog');

These two constants override what is in the database and allow you to update the online database from your own PC using WordPress running on a local web server (Apache or otherwise). One thing to remember here is that the online and offline directory structures need to match. For example, if your online WordPress files are in blog in the root of the online web server file system (typically htdocs for Linux), then they need to be in a directory of the same name in the root of the offline server too. Otherwise, things could get confusing and perhaps messy. Another thing to consider is that you are modifying your online blog, so the usual rules about care and attention apply, particularly with respect to using the same version of WordPress both locally and remotely. This is especially a concern if you, like me, run development versions of WordPress to see if there are any upheavals ahead of us, like the overhaul that is coming with WordPress 2.7.

An alternative use of this same trick is to keep a local copy of your online database in case of any problems while using a local WordPress instance to work with it. I used to have to edit the database backup directly (on my main Ubuntu system), first with GEdit but then using a sed command like the following:

sed -e s/www\.onlinewebsite\.com/localhost/g backup.sql > backup_l.sql

The -e switch applies the regular expression substitution that follows it to the input, with the output being directed to a new file. It’s slicker than the interactive GEdit route but has been made redundant by defining constants for a local WordPress installation as described above.
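
For completeness, getting hold of the database copy in the first place is a matter of dumping it from the online server and loading the edited file locally. The commands below are only a sketch with hypothetical user and database names, so substitute your own credentials:

mysqldump -h www.onlinewebsite.com -u dbuser -p blog_db > backup.sql
mysql -u localuser -p local_blog_db < backup_l.sql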

Removing files for which you have no write access from the GNOME Wastebasket in Ubuntu 8.04

2nd September 2008

It might be that GNOME contains a small trap awaiting the unwary: moving files for which you have no write permissions to the Wastebasket using Nautilus. This happened to me in Ubuntu 8.04 and I couldn’t clear the Wastebasket by the normal means. To resolve the situation, I thought of finding where the Wastebasket lives in the normal file system and that isn’t as easy as it might be. One place to look is ~/.Trash, but I didn’t have that at all because the location in Hardy Heron is ~/.local/share/Trash/files. Armed with this knowledge, I turned to the command line and performed the required erasure using sudo. It was all over very quickly once I knew where to look.
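
For anyone caught in the same way, the sort of command involved is below. It is only a sketch and the usual warnings about rm -rf apply; the files and info subfolders go together under the freedesktop.org trash layout, so both get cleared:

sudo rm -rf ~/.local/share/Trash/files/* ~/.local/share/Trash/info/*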

Watch where you store your virtual machines when using VMware on Linux

12th July 2008

My experience is with Ubuntu on this one, but I have found that you need to be careful as regards the file system used by the drive where you keep your virtual machines. If it is NTFS, VMware can fail to start a VM because it cannot create the virtual memory file that it presents as physical memory to a guest operating system. Use ext2 or ext3 and there should be no problem, even if that means formatting a drive to fulfil the need. That’s what I did and all was well thereafter.
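
Should you need to do the same, creating an ext3 file system on a spare partition is a one-liner, though it destroys whatever is on there, so check the device name carefully first; /dev/sdb1 below is only an example:

sudo mkfs.ext3 /dev/sdb1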

The irritation of a 4 GB file size limitation

20th November 2007

I recently got myself a 500 GB Western Digital My Book, an external hard drive in other words. Bizarrely, the thing is formatted using the FAT32 file system. I appreciate that backward compatibility with Windows 9x might seem desirable, but using NTFS would be more understandable, particularly given that the last of the 9x line, Windows ME, is now eight years old (there cannot be anybody who still uses that, can there?). The result is that cp commands issued from the terminal on my Ubuntu system gave me core dump messages last night when copying files in excess of 4 GB. It surprised me at first, but it now seems to be a FAT32 limitation. The idea of formatting the drive as NTFS did occur to me, but GParted would not do that, at least not with my current configuration. The ext3 file system is an option, but I have a spare PC with Windows 2000, so that would be a step too far for now, unless I take the plunge and bring that machine into the Linux universe too.

Other than the 4 GB irritation, the new drive works well and was picked up and supported by Ubuntu without any hassle beyond getting it out of the box, finding a place for it on my desk and plugging in a few cables. While needing judiciousness about file sizes, it played an important role while I converted a 320 GB internal WD drive from NTFS to ext3 and may yet be vital if my Windows 2000 box gets a migration to Linux. In the interim, 500 GB is a lot of space and having an external drive that size is a bonus these days. That is especially the case when you consider that the 1 terabyte threshold is on the verge of being crossed. It certainly makes DVDs, flash drives and other multi-gigabyte media less impressive than they otherwise might appear.

Turning the world on its head: running VMware on Ubuntu

2nd November 2007

When Windows XP was my base operating system, I used VMware Workstation to peer into the worlds of Windows 2000, Solaris and various flavours of Linux, including Ubuntu. Now that I am using Ubuntu instead of what became a very flaky XP instance, VMware is still with me and I am using it to keep a foot in the Windows universe. In fact, I have Windows 2000 and Windows XP virtual machines available to me and they should supply my Windows needs.

An evaluation version of Workstation 6 is what I am using to power them and I must admit that I am likely to purchase a licence before the evaluation period expires. Installation turned out to be a relatively simple affair, starting with my downloading a compressed tarball from the VMware website. The next steps were to decompress the tarball (Ubuntu has an excellent tool, replete with a GUI, for this) and run vmware-install.pl. I didn’t change any of the defaults and everything was set up without a bother.
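
For anyone preferring a terminal over the GUI archive tool, the equivalent steps look something like the following. The exact file name will differ according to the version downloaded, and the extracted directory name is an assumption on my part, so treat this as a sketch rather than gospel:

tar -xzf VMware-workstation-6.*.tar.gz
cd vmware-distrib
sudo ./vmware-install.pl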

In use, a few things have come to light. The first is that virtual machines must be stored on drives formatted with ext3 or some other native Linux file system rather than on NTFS. Do the latter and you get memory errors when you try starting a virtual machine; I know that I did and that every attempt resulted in failure. After a spot of backing up files, I converted one of my SATA drives from NTFS to ext3. For the sake of safety, I also mounted it as my home directory; the instructions on Ubuntu Unleashed turned out to be invaluable for this. I moved my Windows 2000 VM over and it worked perfectly.

Next on the list was a series of peculiar errors that came to light when I was attempting to install Windows XP in a VM created for it. VMware was complaining about a CPU not being able to run fast enough; 2 MHz was being stated for an Athlon 64 3000+ chip running at 1.58 GHz! Clearly, something was getting confused. Also, my XP installation came to a halt with a BSOD stating that a driver had gone into a loop, with Framebuf fingered as the suspect. I was seeing two symptoms of the same problem and its remedy was unclear. A message on a web forum put the idea of rebooting Ubuntu into my head and that resolved the problem. I’ll be keeping an eye on it, though.

Otherwise, everything seems to be going well with this approach and that’s an encouraging sign. It looks as if my current Linux-based set-up is one with which I am going to stay. This week has been an interesting one already and I have no doubt that I’ll continue to learn more as time goes on.
