A Practical Linux Administration Toolkit: Kernels, Storage, Filesystems, Transfers and Shell Completion
Linux command-line administration has a way of beginning with a deceptively simple question that opens into several possible answers. Whether the task is checking which kernels are installed before an upgrade, mounting an NFS share for backup access, diagnosing low disk space, throttling a long-running sync job or wiring up tab completion, the right answer depends on context: the distribution, the file system type, the transport protocol and whether the need is a one-off action or a persistent configuration. This guide draws those everyday administrative themes into a single continuous reference.
Identifying Your System and Installed Kernels
Reading Distribution Information
A sensible place to begin any administration session is knowing exactly what you are working with. One quick approach is to read the release files directly:
cat /etc/*-release
On systems where bat is available (sometimes installed as batcat), the same files can be read with syntax highlighting using batcat /etc/*-release. Typical output on Ubuntu includes /etc/lsb-release and /etc/os-release, with values such as DISTRIB_ID=Ubuntu, VERSION_ID="20.04" and PRETTY_NAME="Ubuntu 20.04.6 LTS". Three additional commands, cat /etc/os-release, lsb_release -a and hostnamectl, each present the same underlying facts in slightly different formats, while uname -r reports the currently running kernel release in isolation. Adding more flags with uname -mrs extends the output to include the kernel name and machine hardware class, which on an older RHEL system might return something like Linux 2.6.18-8.1.14.el5 x86_64.
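As a quick sweep, the identification commands mentioned above can be run in sequence; the output of each varies by distribution, so any values seen in one session are illustrative only:
cat /etc/os-release    # canonical key=value release data
lsb_release -a         # LSB-formatted summary
hostnamectl            # systemd view, including kernel and architecture
uname -mrs             # kernel name, release and machine hardware class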
Querying Installed Kernels by Package Manager
On Red Hat Enterprise Linux, CentOS, Rocky Linux, AlmaLinux, Oracle Linux and Fedora, installed kernels are managed by the RPM package database and are queried with:
rpm -qa kernel
This may return entries such as kernel-5.14.0-70.30.1.el9_0.x86_64. The same information is also accessible through yum list installed kernel or dnf list installed kernel. On Debian, Ubuntu, Linux Mint and Pop!_OS the package manager differs, so the command changes accordingly:
dpkg --list | grep linux-image
Output may include versioned packages, such as linux-image-2.6.20-15-generic, alongside the metapackage linux-image-generic. Arch Linux users can query with pacman -Q | grep linux, while SUSE Enterprise Linux and openSUSE users can turn to rpm -qa | grep -i kernel or use zypper search -i kernel, which presents results in a structured table. Alpine Linux takes yet another approach with apk info -vvv | grep -E 'Linux' | grep -iE 'lts|virt', which may return entries such as linux-virt-5.15.98-r0 - Linux lts kernel.
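For quick reference, here are the distribution-specific queries collected in one place; each line applies only to its own distribution family:
rpm -qa kernel                                          # RHEL, CentOS, Rocky, AlmaLinux, Oracle Linux, Fedora
dpkg --list | grep linux-image                          # Debian, Ubuntu, Linux Mint, Pop!_OS
pacman -Q | grep linux                                  # Arch Linux
zypper search -i kernel                                 # SUSE Enterprise Linux and openSUSE
apk info -vvv | grep -E 'Linux' | grep -iE 'lts|virt'   # Alpine Linux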
Finding Kernels Outside the Package Manager
Package databases do not always tell the whole story, particularly where custom-compiled kernels are involved. A kernel built and installed manually will not appear in any package manager query at all. In that case, /lib/modules/ is a useful place to look, since each installed kernel generally has a corresponding module directory. Running ls -l /lib/modules/ may show entries such as 4.15.0-55-generic, 4.18.0-25-generic and 5.0.0-23-generic. A further check is:
sudo find /boot/ -iname "vmlinuz*"
This may return files such as /boot/vmlinuz-5.4.0-65-generic and /boot/vmlinuz-5.4.0-66-generic, confirming precisely which versions exist on disk.
A Brief History of vmlinuz
That naming convention is worth understanding because it appears on virtually every Linux system. vmlinuz is the compressed, bootable Linux kernel image stored in /boot/. The name traces back through computing history: early Unix kernels were simply called /unix, but when the University of California, Berkeley ported Unix to the VAX architecture in 1979 and added paged virtual memory, the resulting system, 3BSD, was known as VMUNIX (Virtual Memory Unix) and its kernel images were named /vmunix. Linux inherited vmlinuz as a mutation of vmunix, with the trailing z denoting gzip compression (though other algorithms such as xz and lzma are also supported). The counterpart vmlinux refers to the uncompressed, non-bootable kernel file, which is used for debugging and symbol table generation but is not loaded directly at boot. Running ls -l /boot/ will show the full set of boot files present on any given system.
Examining and Investigating Disk Usage
Why ls Is Not the Right Tool for Directory Sizes
Storage management is an area where a familiar command can mislead. Running ls -l on a directory typically shows it occupying 4,096 bytes, which reflects the directory entry metadata rather than the combined size of its contents. For real space consumption, du is the appropriate tool.
sudo du -sh /var
The above command produces a summarised, human-readable total such as 85G /var. The -s flag limits output to a single grand total and -h formats values in K, M or G units. For an individual file, du -sh /var/log/syslog might report 12M /var/log/syslog, while ls -lh /var/log/syslog adds ownership and timestamps to the same figure.
Drilling Down to Find Where Space Has Gone
When a file system is full and the need is to locate exactly where the space has accumulated, du can be made progressively more revealing. The command sudo du -h --max-depth=1 /var lists first-level subdirectories with sizes, potentially showing 77G /var/lib, 5.0G /var/cache and 3.3G /var/log. To surface the biggest consumers quickly, piping to sort and head works well:
sudo du -h /var/ | sort -rh | head -10
Adding the -a flag includes individual files alongside directories in the same output:
sudo du -ah /var/ | sort -rh | head -10
Apparent Size Versus Allocated Disk Space
There is a subtle distinction that sometimes causes confusion. By default, du reports allocated disk usage, which is governed by the file system block size. A single-byte file on a file system with 4 KB blocks still consumes 4 KB of disk. To see the amount of data actually stored rather than allocated, sudo du -sh --apparent-size /var reports the apparent size instead. The df command answers a different question altogether: it shows free and used space per mounted file system, such as /dev/sda1 at 73 per cent usage or /dev/sdb1 mounted on /data with 70 GB free. In practice, du is for locating what consumes space and df is for checking how much remains on each volume.
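A small experiment makes the distinction concrete. The sketch below assumes a file system with 4 KB blocks, so the allocated figure may differ elsewhere:
printf 'x' > /tmp/onebyte                # write a single-byte file
du -h /tmp/onebyte                       # allocated size: typically 4.0K
du -h --apparent-size /tmp/onebyte       # apparent size: 1 byte
rm /tmp/onebyte                          # tidy up afterwards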
gdu: A Faster Interactive Alternative
Some administrators prefer a more modern tool for storage investigations, and gdu is a notable option. It is a fast disk usage analyser written in Go with an interactive console interface, designed primarily for SSDs where it can exploit parallel processing to full effect, though it functions on hard drives too with less dramatic speed gains. The binary release can be installed by extracting its .tgz archive:
curl -L https://github.com/dundee/gdu/releases/latest/download/gdu_linux_amd64.tgz | tar xz
chmod +x gdu_linux_amd64
sudo mv gdu_linux_amd64 /usr/bin/gdu
It can also be run directly via Docker without installation:
docker run --rm --init --interactive --tty --privileged \
    --volume /:/mnt/root ghcr.io/dundee/gdu /mnt/root
In use, gdu scans a directory interactively when run without flags, summarises a target with gdu -ps /some/dir, shows top results with gdu -t 10 / and runs without interaction using gdu -n /. It supports apparent size display, hidden file inclusion, item counts, modification times, exclusions, age filtering and database-backed analysis through SQLite or BadgerDB. The project documentation notes that hard links are counted only once and that analysis data can be exported as JSON for later review.
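Collecting the invocations mentioned above into one place (the /some/dir path is a placeholder):
gdu                  # interactive scan of the current directory
gdu -ps /some/dir    # summarise a target without the interactive interface
gdu -t 10 /          # show the top 10 largest items under /
gdu -n /             # run non-interactively over the root file system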
Unpacking TGZ Archives
A brief note on the tar command is useful here, since it appears throughout Linux administration, including in the gdu installation step above. A .tgz file is simply a GZIP-compressed tar archive, and the standard way to extract one is:
tar zxvf archive.tgz
Modern GNU tar can detect the compression type automatically, so the -z flag is often optional:
tar xvf archive.tgz
To extract into a specific directory rather than the current working directory, the -C option takes a destination path:
tar zxvf archive.tgz -C /path/to/destination/
To inspect the contents of a .tgz file without extracting it, the t (list) flag replaces x (extract):
tar ztvf archive.tgz
The tar command was first introduced in the seventh edition of Unix in January 1979 and its name comes from its original purpose as a Tape ARchiver. Despite that origin, modern tar reads from and writes to files, pipes and remote devices with equal facility.
Mounting NFS Shares and Optical Media
Installing NFS Client Tools
NFS remains common on Linux and Unix-like systems, allowing remote directories to be mounted locally and treated as though they were native file systems. Before a client can mount an NFS export, the client packages must be installed. On Ubuntu and Debian, that means:
sudo apt update
sudo apt install nfs-common
On Fedora and RHEL-based distributions, the equivalent is:
sudo dnf install nfs-utils
Once installed, showmount -e 10.10.0.10 can list available exports from a server, returning output such as /backups 10.10.0.0/24 and /data *.
Mounting an NFS Share Manually
Mounting an NFS share follows the same broad pattern as mounting any other file system. First, create a local mount point:
sudo mkdir -p /var/backups
Then mount the remote export, specifying the file system type explicitly:
sudo mount -t nfs 10.10.0.10:/backups /var/backups
A successful command produces no output. Verification is done with mount | grep nfs or df -h, after which the local directory acts as the root of the remote file system for all practical purposes.
Persisting NFS Mounts Across Reboots
Since a manual mount does not survive a reboot, persistent setups use /etc/fstab. An appropriate entry looks like:
10.10.0.10:/backups /var/backups nfs defaults,nofail,_netdev 0 0
The nofail option prevents a boot failure if the NFS server is unavailable when the machine starts. The _netdev flag marks the mount as network-dependent, ensuring the system defers the operation until the network stack is available. Running sudo mount -a tests the entry without rebooting.
Troubleshooting Common NFS Errors
NFS problems are often predictable. A "Permission denied" error usually means the server export in /etc/exports does not include the client, and reloading exports with sudo exportfs -ar is frequently the remedy. "RPC: Program not registered" indicates the NFS service is not running on the server, in which case sudo systemctl restart nfs-server applies. A "Stale file handle" error generally follows a server reboot or a deleted file and is cleared by unmounting and remounting. Timeouts and "Server not responding" messages call for checking network connectivity, confirming that firewall rules permit access to port 111 (rpcbind, required for NFSv3) and port 2049 (NFS itself), and verifying NFS version compatibility using the vers=3 or vers=4 mount option. NFSv4 requires only port 2049, while NFSv2 and NFSv3 also require port 111. To detach a share, sudo umount /var/backups is the standard route, with fuser -m /var/backups helping to identify the processes that are blocking the unmount.
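As a rough sketch of those connectivity checks, reusing the example server address from earlier and assuming the standard rpcinfo and nc utilities are installed:
rpcinfo -p 10.10.0.10      # list RPC services registered on the server (uses port 111)
nc -zv 10.10.0.10 2049     # confirm that the NFS port itself is reachable
sudo mount -t nfs -o vers=4 10.10.0.10:/backups /var/backups   # pin the NFS version explicitly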
Mounting Optical Media
CDs and DVDs are less central than they once were, but some systems still need to read them. After inserting a disc, blkid can identify the block device path, which is typically /dev/sr0, and will report the file system type as iso9660. With a mount point created using sudo mkdir /mnt/cdrom, the disc is mounted with:
sudo mount /dev/sr0 /mnt/cdrom
The warning device write-protected, mounted read-only is expected for optical media and can be disregarded. CDs and DVDs use the ISO 9660 file system, a data-exchange standard designed to be readable across operating systems. Once mounted, the disc contents are accessible under /mnt/cdrom, and sudo umount /mnt/cdrom detaches it cleanly when work is complete.
Transferring Files Securely and Efficiently
Copying Files with scp
scp (Secure Copy) transfers files and directories between hosts over SSH, encrypting both data and authentication credentials in transit. Its basic syntax is:
scp [OPTIONS] [[user@]host:]source [[user@]host:]destination
The colon is how scp distinguishes between local and remote paths: a path without a colon is local. A typical upload from a local machine to a remote host looks like:
scp file.txt remote_username@10.10.0.2:/remote/directory
A download from a remote host to the local machine reverses the argument order:
scp remote_username@10.10.0.2:/remote/file.txt /local/directory
Commonly used options include -r for recursive directory copies, -p to preserve metadata such as modification times and permissions, -C for compression, -i for a specific private key, -l to cap bandwidth in Kbit/s and the uppercase -P to specify a non-standard SSH port. It is also possible to copy between two remote hosts directly, routing the transfer through the local machine with the -3 flag.
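Putting those options together, a few illustrative invocations (host names, ports and paths are placeholders):
scp -P 2222 -C file.txt admin@host1:/srv/uploads/   # non-standard port, with compression
scp -rp /local/dir admin@host1:/srv/uploads/        # recursive copy, preserving metadata
scp -l 4096 big.iso admin@host1:/srv/uploads/       # cap bandwidth at 4096 Kbit/s
scp -3 admin@host1:/srv/a.txt admin@host2:/srv/     # remote-to-remote via the local machine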
The Protocol Change in OpenSSH 9.0
There is an important change in modern OpenSSH that administrators should be aware of. From OpenSSH 9.0 onward, the scp command uses the SFTP protocol internally by default rather than the older SCP/RCP protocol, which is now considered outdated. The command behaves identically from the user's perspective, but if an older server requires the legacy protocol, the -O flag forces it. For advanced requirements such as resumable transfers or incremental directory synchronisation, rsync is generally the better fit, particularly for large directory trees.
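Where a legacy server is involved, the flag is simply prefixed to an otherwise normal command (host and path are placeholders):
scp -O file.txt admin@legacyhost:/srv/uploads/   # force the original SCP/RCP protocol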
Throttling rsync to Protect Bandwidth
Even with rsync, raw speed is not always desirable. A backup script consuming all available bandwidth can disrupt other services on the same network link, so --bwlimit is often essential. The basic syntax is:
rsync --bwlimit=KBPS source destination
The value is in units of 1,024 bytes unless an explicit suffix is added. A fractional value is also valid: --bwlimit=1.5m sets a cap of 1.5 MB/s. A local transfer capped at 1,000 KB/s looks like:
rsync --bwlimit=1000 /path/to/source /path/to/dest/
And a remote backup:
rsync --bwlimit=1000 /var/www/html/ backups@server1.example.com:~/mysite.backups/
The man page for rsync explains that --bwlimit works by limiting the size of the blocks rsync writes and then sleeping between writes to achieve the target average. Some fluctuation around the configured rate is therefore normal in practice.
Managing I/O Priority with ionice
Bandwidth is only one dimension of the load a transfer places on a system. Disk I/O scheduling may also need attention, particularly on busy servers running other workloads. The ionice utility adjusts the I/O scheduling class and priority of a process without altering its CPU priority. For instance:
/usr/bin/ionice -c2 -n7 rsync --bwlimit=1000 /path/to/source /path/to/dest/
This runs the rsync process in best-effort I/O class (-c2) at the lowest priority level (-n7), combining transfer rate limiting with reduced I/O priority. The scheduling classes are: 0 (none), 1 (real-time), 2 (best-effort) and 3 (idle), with priority levels 0 to 7 available for the real-time and best-effort classes.
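Two further illustrative uses: the idle class suits jobs that should yield to everything else, and ionice can also inspect a process that is already running (the PID below is a placeholder):
ionice -c3 rsync -a /path/to/source /path/to/dest/   # idle class: only uses I/O when nothing else needs it
ionice -p 1234                                       # report the I/O class and priority of PID 1234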
Together, --bwlimit and ionice provide complementary controls over exactly how much resource a routine transfer is permitted to consume at any given time.
Setting Up Bash Tab Completion
On Ubuntu and related distributions, Bash programmable completion is provided by the bash-completion package. If tab completion does not function as expected in a new installation or container environment, the following commands will install the necessary support:
sudo apt update
sudo apt upgrade
sudo apt install bash-completion
The package places a shell script at /etc/profile.d/bash_completion.sh. To ensure it is loaded in shell startup, the following appends the source line to .bashrc:
echo "source /etc/profile.d/bash_completion.sh" >> ~/.bashrc
A conditional form avoids duplicating the line on repeated runs:
grep -wq '^source /etc/profile.d/bash_completion.sh' ~/.bashrc \
|| echo 'source /etc/profile.d/bash_completion.sh' >> ~/.bashrc
The script is typically loaded automatically in a fresh login shell, but source /etc/profile.d/bash_completion.sh activates it immediately in the current session. Once active, pressing Tab after partial input such as sudo apt i or cat /etc/re completes commands and paths against what is actually installed. Bash also supports simple custom completions: complete -W 'google.com cyberciti.biz nixcraft.com' host teaches the shell to offer those three domains after typing host and pressing Tab, which illustrates how the feature can be extended to match the patterns of repeated daily work.
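For anything beyond a fixed word list, Bash also supports completion functions via complete -F. A minimal sketch, built around a hypothetical mybackup command and a ~/.backup_hosts file, both invented for illustration:
# Hypothetical example: complete the arguments of "mybackup" from a hosts file
_mybackup() {
    local cur=${COMP_WORDS[COMP_CWORD]}
    COMPREPLY=( $(compgen -W "$(cat ~/.backup_hosts 2>/dev/null)" -- "$cur") )
}
complete -F _mybackup mybackup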
Installing Snap on Debian
Snap is a packaging format developed by Canonical that bundles an application together with all of its dependencies into a single self-contained package. Snaps update automatically, roll back gracefully on failure and are distributed through the Snap Store, which carries software from both Canonical and independent publishers. The background service that manages them, snapd, is pre-installed on Ubuntu but requires a manual setup step on Debian.
On Debian 9 (Stretch) and newer, snap can be installed directly from the command line:
sudo apt update
sudo apt install snapd
After installation, logging out and back in again, or restarting the system, is necessary to ensure that snap's paths are updated correctly in the environment. Once that is done, install the snapd snap itself to obtain the latest version of the daemon:
sudo snap install snapd
To verify that the setup is working, the hello-world snap provides a straightforward test:
sudo snap install hello-world
hello-world
A successful run prints Hello World! to the terminal. Note that snap is not available on Debian versions before 9. If a snap installation produces an error such as snap "lxd" assumes unsupported features, the resolution is to ensure the core snap is present and current:
sudo snap install core
sudo snap refresh core
On desktop systems, the Snap Store graphical application can then be installed with sudo snap install snap-store, providing a point-and-click interface for browsing and managing snaps alongside the command-line tools.
Increasing the Root Partition Size on Fedora with LVM
Fedora's default installer has used LVM (Logical Volume Manager) for many years, dividing the available disk into a volume group containing separate logical volumes for root (/), home (/home) and swap. This arrangement makes it straightforward to redistribute space between volumes without repartitioning the physical disk, which is a significant advantage over a fixed partition layout. Note that Fedora 33 and later default to Btrfs without LVM for new installations, so the steps below apply to systems that were installed with LVM, including pre-Fedora 33 installs and any system where LVM was selected manually.
Because the root file system is in active use while the system is running, resizing it safely requires booting from a Fedora Live USB stick rather than the installed system. Once booted from the live environment, open a terminal and begin by checking the volume group:
sudo vgs
Output such as the following shows the volume group name, total size and, crucially, how much free space (VFree) is unallocated:
VG #PV #LV #SN Attr VSize VFree
fedora 1 3 0 wz--n- <237.28g 0
Before proceeding, confirm the exact device mapper paths for the root and home logical volumes by running fdisk -l, since the volume group name varies between installations. Common names include /dev/mapper/fedora-root and /dev/mapper/fedora-home, though some systems use fedora00 or another prefix.
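The LVM tools themselves offer a cleaner view of the same information; sudo lvs lists each logical volume alongside its volume group and size. The names and figures below are illustrative only:
sudo lvs
#  LV   VG     Attr       LSize
#  home fedora -wi-ao---- 150.00g
#  root fedora -wi-ao----  70.00g
#  swap fedora -wi-ao----   7.80g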
When Free Space Is Already Available
If VFree shows unallocated space in the volume group, the root logical volume can be extended directly and the file system resized in a single command:
sudo lvresize -L +5G --resizefs /dev/mapper/fedora-root
The --resizefs flag instructs lvresize to resize the file system at the same time as the logical volume, removing the need to run resize2fs separately.
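For comparison, the equivalent two-step form separates the volume and file system operations; this sketch assumes an ext4 root file system:
sudo lvextend -L +5G /dev/mapper/fedora-root   # grow the logical volume only
sudo resize2fs /dev/mapper/fedora-root         # then grow the ext4 file system to match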
When There Is No Free Space
If VFree is zero, space must first be reclaimed from another logical volume before it can be given to root. The most common approach is to shrink the home logical volume, which typically holds the most available headroom. Shrinking a file system involves data moving on disk, so the operation requires the volume to be unmounted, which is why the live environment is essential. To take 10 GB from home:
sudo lvresize -L -10G --resizefs /dev/mapper/fedora-home
Once that completes, the freed space appears as VFree in vgs and can be added to the root volume:
sudo lvresize -L +10G --resizefs /dev/mapper/fedora-root
Both steps use --resizefs so that the file system boundaries are updated alongside the logical volume boundaries. After rebooting back into the installed system, df -h will confirm the new sizes are in effect.
Keeping a Linux System Well Maintained
The commands and configurations covered above form a coherent body of everyday Linux administration practice. Knowing where installed kernels are recorded, how to measure real disk usage rather than directory metadata, how to attach local and network file systems correctly, how to extract archives and move data securely without disrupting shared resources, how to make the shell itself more productive, how to extend a Debian system with snap packages and how to redistribute disk space between LVM volumes on Fedora converts a scattered collection of one-liners into a reliable working toolkit. Each topic interconnects naturally with the others: a kernel query clarifies what system you are managing, disk investigation reveals whether a file system has room for what you plan to transfer, NFS mounting determines where that transfer will land and bandwidth control determines what impact it will have while it runs.
Avoiding repeated token requests by installing the Git credential helper on Linux Mint
On a new machine, I found Git asking for the same access token repeatedly. Since this is a long string, that is not convenient and does not take long to become irritating. Thus, I sought a way to streamline things. My initial attempt produced the following message:
git: 'credential-libsecret' is not a git command
The main cause of the above was the absence from my system of the libsecret credential helper, which is crucial for managing credentials securely in a keyring. The solution was to install the required packages from the command line:
sudo apt install libsecret-1-0 libsecret-1-dev
Following installation, the next step was to navigate to the appropriate directory and run make there, compiling the files within into an executable credential helper:
cd /usr/share/doc/git/contrib/credential/libsecret; sudo make
With the credential helper fully built, Git needed to be configured to use it by executing the following:
git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
Since one error message is enough for any new activity, it made sense to confirm that the credential helper resided in the correct location. That was accomplished by issuing this command:
ls -l /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
All was well in my case, saving the need to reinstall Git or repeat the manual compilation of the credential helper. When all was done, I was ready to automate things further.
Sorting out sluggish start-up and shutdown times in Linux Mint 19
The Linux Mint team never forces users to upgrade to the latest version of their distribution, but curiosity often provides a strong enough impulse for me to do so. When I encounter rough edges, the wisdom of leaving things unchanged becomes apparent. Nevertheless, the process brings learning opportunities, which I am sharing in this post. It also allows me to collect various useful titbits that might help others.
Again, I went with the in-situ upgrade option, though the addition of the Timeshift backup tool means that it is less frowned upon than once would have been the case. It worked well too, apart from slow start-up and shutdown times, so I set about tracking down the causes on the two machines that I have running Linux Mint. As it happens, the cause was different on each machine.
On one PC, it was networking that was holding things up. The cause was my specifying a fixed IP address in /etc/network/interfaces instead of using the Network Settings GUI tool. Resetting the configuration file back to its defaults and using the Cinnamon settings interface took away the delays. It was inspecting /var/log/boot.log that highlighted the problem, so that is worth checking if I ever encounter slow start times again.
As I mentioned earlier, the second PC had a very different problem, though it also involved a configuration file. What had happened was that /etc/initramfs-tools/conf.d/resume contained the wrong UUID for my system's swap drive, so I was seeing messages like the following:
W: initramfs-tools configuration sets RESUME=UUID=<specified UUID for swap partition>
W: but no matching swap device is available.
I: The initramfs will attempt to resume from <specified file system location>
I: (UUID=<specified UUID for swap partition>)
I: Set the RESUME variable to override this.
Correcting the file and executing the following command, which updates the initramfs images for all installed kernels, resolved the issue and speeded up PC start-up times:
sudo update-initramfs -u -k all
Though it was not a cause of system sluggishness, I also sorted out another niggle that had been there for a while: warning messages about my system locale not being recognised, which kept appearing during kernel updates and removals on both machines. The problem has been described elsewhere as follows: /usr/share/initramfs-tools/hooks/root_locale is expecting to see individual locale directories in /usr/lib/locale, but locale-gen is configured to generate an archive file by default. Issuing the following command sorted that:
sudo locale-gen --purge --no-archive
Following these fixes, my new Linux Mint 19 installations have stabilised, with speedier start-up and shutdown times. That allows me to look at what is on Flathub, to see what applications are available and whether they get updated to the latest versions on an ongoing basis. That may be a topic for another entry on here, but the applications that I have tried work well so far.
Restoring GRUB for dual booting of Linux and Windows
Once Windows has overwritten your master boot record (MBR), you have lost the ability to use GRUB, so it is handy to know how to get it back if you want to start Linux again. Though the loss of GRUB from the MBR was a deliberate act of mine, I knew that I'd have to restore GRUB to get Linux working again. So, I have been addressing the situation with a Live DVD for the likes of Ubuntu or Linux Mint. Once one of those has loaded its copy of the distribution, issuing the following command in a terminal session gets things back again:
sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda
When there were error messages, I tried this one to see if I could get additional information:
sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda --recheck
Also, it is possible to mount a partition on the boot drive and use that in the command to restore GRUB. Here is the required combination:
sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda
Either of these will get GRUB working without a hitch, and they are far snappier than downloading Boot-Repair and using that; I was doing the latter for a while until a feature on triple booting appeared in an issue of Linux User & Developer that reminded me of the more readily available option. Once, there was a need to manually add an entry for Windows 7 to the GRUB menu too and, with that instated, I was able to dual-boot Ubuntu and Windows using GRUB to select which one was to start for me. Since then, I have been able to dual boot Linux Mint and Windows 8.1, with GRUB finding the latter all by itself. Your experience may show the same variation, so it is worth bearing in mind.
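On that last point, an operating system that GRUB has not picked up automatically can usually be found by regenerating the configuration from the installed system; a minimal sketch, assuming a Debian-family installation with os-prober present:
sudo update-grub   # regenerates grub.cfg; os-prober adds any detected Windows installations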
How to compile and install Nightingale when PPA repositories fail on Ubuntu 13.10
When I upgraded to Ubuntu GNOME 13.10 and went for the 64-bit variant, I tried a previously tried-and-tested approach for installing Nightingale that used a PPA, only for it not to work. At that point, the repository had not caught up with the latest Ubuntu release (it has by the time of writing) and other pre-compiled packages would not work either. However, there was one further possibility left, and that was downloading a copy of the source code and compiling it. My previous experiences of doing that kind of thing have not been universally positive, so it was not my first choice, but I gave it a go anyway.
To get the source code, I first needed to install Git so that I could take a copy from the version-controlled repository; the following command added the tool along with the other build dependencies:
sudo apt-get install git autoconf g++ libgtk2.0-dev libdbus-glib-1-dev libtag1-dev libgstreamer-plugins-base0.10-dev zip unzip
With that lot installed, it was time to check out a copy of the latest source code, and I went with the following:
git clone https://github.com/nightingale-media-player/nightingale-hacking.git
The next step was to go into the nightingale-hacking sub-folder and issue the following command:
./build.sh
That should produce a subdirectory named nightingale that contains the compiled executable files. If it exists, it can be copied into /opt. If not, create a folder named nightingale under /opt and copy the files from ~/nightingale-hacking/compiled/dist into that location; the commands are sketched after the desktop entry below. Since Ubuntu GNOME 13.10 comes with GNOME Shell 3.8, the next step took a little fiddling before it was sorted: adding an icon to the application menu or dashboard. This involved adding a file called nightingale.desktop in /usr/share/applications/ with the following contents:
[Desktop Entry]
Name=Nightingale
Comment=Play music
TryExec=/opt/nightingale/nightingale
Exec=/opt/nightingale/nightingale
Icon=/usr/share/pixmaps/nightingale.xpm
Type=Application
X-GNOME-DocPath=nightingale/index.html
X-GNOME-Bugzilla-Bugzilla=Nightingale
X-GNOME-Bugzilla-Product=nightingale
X-GNOME-Bugzilla-Component=BugBuddyBugs
X-GNOME-Bugzilla-Version=1.1.2
Categories=GNOME;Audio;Music;Player;AudioVideo;
StartupNotify=true
OnlyShowIn=GNOME;Unity;
Keywords=Run;
Actions=New
X-Ubuntu-Gettext-Domain=nightingale
[Desktop Action New]
Name=Nightingale
Exec=/opt/nightingale/nightingale
OnlyShowIn=Unity
It was created from a copy of another *.desktop file, and the categories in there, together with the link to the icon, were as important as the title and took a little tinkering before all was in place. Also, you may find that /opt/nightingale/chrome/icons/default/default.xpm needs to become /usr/share/pixmaps/nightingale.xpm using the cp command before your new menu entry gains an icon to go with it; that step appears in the sketch below too. While the steps that I describe here worked for me, there is more information on the Nightingale wiki if you need it.
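As a sketch of the copy steps referred to above, using the paths from this post:
sudo mkdir -p /opt/nightingale                                   # create the destination if the build did not
sudo cp -r ~/nightingale-hacking/compiled/dist/* /opt/nightingale/
sudo cp /opt/nightingale/chrome/icons/default/default.xpm /usr/share/pixmaps/nightingale.xpm   # icon for the menu entry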
Getting an Epson Perfection 4490 Photo scanner going with Ubuntu GNOME Remix 12.10
My Epson Perfection 4490 Photo scanner has been in my possession for a while now, and it is impossible to justify any replacement, given that it works well and that digital photography has taken over from film for me. Every time I install an operating system afresh, I need to reinstate the scanner; last year's installation of Ubuntu GNOME Remix 12.04 only saw me do the deed recently. When I did, it was brought back to me that I'd never documented on here how this was done. Given that I sometimes use this place as a repository of stuff to which I need to refer again in the future, it seemed remiss of me, so here it is for you all.
Though I had XSane and SimpleScan already installed on the system, Sane wasn't on there. Hence, I went and added it and a few other extras using the following command:
sudo apt-get install sane sane-utils libsane-extras
Then, it was onto the Epson website for their Perfection 4490 Photo Linux drivers, since Sane's support for this scanner seemingly remains incomplete even though it pre-dates my move to Linux in 2007. Three files were needed, and the following commands install them (depending on when you do this, the file names may be different, so just change them to whatever they are for you):
sudo dpkg -i iscan-data_1.22.0-1_all.deb
sudo dpkg -i iscan_2.29.1-5~usb0.1.ltdl7_i386.deb
sudo dpkg -i iscan-plugin-gt-x750_2.1.2-1_i386.deb
With those in place, there was one other task that needed doing so that scanning could be done without resorting to running scanning software with sudo privileges. To free up access for a normal user account, I needed a HAL device information file. These normally are in /usr/share/hal/fdi/, but they change every time there is an installation, so any modifications that you may make there will be lost. Therefore, there is no point modifying either /usr/share/hal/fdi/preprobe/10osvendor/20-libsane.fdi or /usr/share/hal/fdi/preprobe/10osvendor/20-libsane-extras.fdi, where scanner information usually is to be found.
The first task in creating an FDI file was to issue the lsusb command and look for a line corresponding to my scanner. This is the one that I got:
Bus 001 Device 004: ID 04b8:0119 Seiko Epson Corp. Perfection 4490 Photo
From this, I gleaned the manufacturer ID and model ID as 04b8 and 0119, respectively. These are needed later on. Next, I needed to create the hal/fdi/preprobe/ folder structure under /etc since it was not already there. Then, I created epson4490photo.fdi in the bottom folder of the tree (/etc/hal/fdi/preprobe/epson4490photo.fdi) as follows:
cd /etc/hal/fdi/preprobe/ && sudo touch epson4490photo.fdi
Then, I edited the new file using the following command:
gksu gedit epson4490photo.fdi &
With the file open, I added in the following text:
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
<device>
<match key="info.subsystem" string="usb">
<!-- Epson Perfection 4490 Photo -->
<match key="usb.vendor_id" int="0x04b8">
<match key="usb.product_id" int="0x0119">
<append key="info.capabilities" type="strlist">scanner</append>
<merge key="scanner.access_method" type="string">proprietary</merge>
</match>
</match>
</match>
</device>
</deviceinfo>
Since it's all in XML, the place to look is immediately beneath the scanner name comment. The int attributes of the two match elements immediately following the comment line are populated using the information from the lsusb command output, with 0x prefixing both the manufacturer and model identifiers. The element with a key attribute of usb.vendor_id is the former, and that with a key attribute of usb.product_id is the latter. With epson4490photo.fdi saved, I rebooted the machine to restart HAL and all was as I wanted it to be, apart maybe from XSane making complaints that seemed not to be of any actual consequence. With Epson's Image Scan! and Simple Scan on the PC, there's no need to be bothered with those messages. Choice is good when you have it, especially when you have expended some effort to get that far.
Sorting a kernel upgrade error in Linux Mint 13
Linux Mint 14 may be out now, but I'll be sticking with its predecessor for now. Being a user of GNOME Shell instead of Cinnamon or Mate, I'll wait for extensions to get updated for 3.6 before making a move away from 3.4 where the ones that I use happily work. Given that Linux Mint 13 is set to get support until 2017, it's not as if there is any rush either. Adding the back-ported packages repository to my list of software sources means that I will not miss out on the latest versions of MDM, Cinnamon and Mate anyway. With Ubuntu set to stick to GNOME 3.6 until after 13.04 is released, adding the GNOME 3 Team PPA will be needed if 3.8 arrives with interesting goodies; there are interesting noises that suggest the approach taken in Linux Mint 12 may be used to give more of a GNOME 2 desktop experience. Options abound and there are developments in the pipeline that I hope to explore too.
However, there is one issue that I have had to fix which stymies upgrades within the 3.2 kernel branch. A configuration file (/etc/grub.d/10_linux) points to /usr/share/grub/grub-mkconfig_lib instead of /usr/lib/grub/grub-mkconfig_lib so I have been amending it every time I needed to do a kernel update. However, it just reverts to the previous state, so I thought of another solution: creating a symbolic link in the incorrect location that points to the correct one so that updates complete without manual intervention every time. The command that does the needful is below:
sudo ln -s /usr/lib/grub/grub-mkconfig_lib /usr/share/grub/grub-mkconfig_lib
Of course, figuring out what causes the reversion would be good too, but the symbolic link fix works so well that there's little point in exploring it further. Still, if anyone can add how you'd do that, I'd welcome the advice. New knowledge is always good.