Technology Tales

Adventures in consumer and enterprise technology

TOPIC: FILE SYSTEM

Sorting out sluggish start-up and shutdown times in Linux Mint 19

9th August 2018

The Linux Mint team never forces users to upgrade to the latest version of their distribution, but curiosity often provides a strong enough impulse for me to do so. When I encounter rough edges, the wisdom of leaving things unchanged becomes apparent. Nevertheless, the process brings learning opportunities, which I am sharing in this post. It also allows me to collect various useful titbits that might help others.

Again, I went with the in-situ upgrade option, though the addition of the Timeshift backup tool means that this is less frowned upon than it once would have been. It worked well too, apart from slow start-up and shutdown times, so I set about tracking down the causes on the two machines that I have running Linux Mint. As it happens, the cause was different on each machine.

On one PC, it was networking that was holding things up. The cause was my specifying a fixed IP address in /etc/network/interfaces instead of using the Network Settings GUI tool. Resetting the configuration file to its defaults and using the Cinnamon settings interface took away the delays. It was inspecting /var/log/boot.log that highlighted the problem, so that is worth checking if I ever encounter slow start times again.
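For illustration, the kind of static stanza that causes the hold-up looks like the following in /etc/network/interfaces; the interface name and addresses here are only placeholders:

auto eth0
iface eth0 inet static
address 192.168.1.10
netmask 255.255.255.0
gateway 192.168.1.1

Removing a block like that, leaving just the loopback entries, hands the interface back to NetworkManager and thus to the Cinnamon settings tool.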

As I mentioned earlier, the second PC had a very different problem, though it also involved a configuration file. What had happened was that /etc/initramfs-tools/conf.d/resume contained the wrong UUID for my system's swap drive, so I was seeing messages like the following:

W: initramfs-tools configuration sets RESUME=UUID=<specified UUID for swap partition>
W: but no matching swap device is available.
I: The initramfs will attempt to resume from <specified file system location>
I: (UUID=<specified UUID for swap partition>)
I: Set the RESUME variable to override this.

Correcting the file and executing the following command resolved the issue by updating the affected initramfs image for all installed kernels and speeded up PC start-up times:

sudo update-initramfs -u -k all
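Should you need to find the correct UUID in the first place, blkid can list it for the swap partition:

sudo blkid | grep swap

The value reported there then goes into /etc/initramfs-tools/conf.d/resume as a single line of the form RESUME=UUID=<UUID of swap partition> before the above update-initramfs command gets run.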

Though it was not a cause of system sluggishness, I also sorted out another message that I kept seeing during kernel updates and removals on both machines. It had been there for a while, producing warnings about my system locale not being recognised. The problem has been described elsewhere as follows: /usr/share/initramfs-tools/hooks/root_locale expects to see individual locale directories in /usr/lib/locale, but locale-gen is configured to generate an archive file by default. Issuing the following command sorted that:

sudo locale-gen --purge --no-archive

Following these fixes, my new Linux Mint 19 installations have stabilised with speedier start-up and shutdown times. That allows me to look at what is on Flathub, to see what applications are available and whether they get updated to the latest versions on an ongoing basis. That may be a topic for another entry on here, but the applications that I have tried work well so far.

Copying a directory tree on a Windows system using XCOPY and ROBOCOPY

17th September 2016

My usual method for copying a directory tree without any of the files in there involves the use of the Windows command line tool XCOPY and the command takes the following form:

xcopy /t /e <source> <destination>

The /t switch tells XCOPY to copy only the directory structure, while the /e one tells it to include empty directories too. Substituting /s for /e would ensure that only non-empty directories are copied. <source> and <destination> are the directory paths that you want to use and need to be enclosed in quotes if you have a space in a directory name.
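As a quick illustration with made-up paths, the following copies the folder structure from one drive to another, with the quotes handling the space in the directory name:

xcopy /t /e "C:\My Projects" "D:\Backup\My Projects"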

There is one drawback to this approach that I have discovered. When you have long directory paths, messages about insufficient memory are issued and the command fails. This has nothing to do with the machine that you are using; it is a limitation of XCOPY itself.

After discovering that, I checked whether ROBOCOPY could do the same thing without the file path length limitation, because I did not have the liberty of shortening folder names to get the whole path within the length expected by XCOPY. The following is the form of the command that I found did what I needed:

robocopy <source1> <destination1> /e /xf *.* /r:0 /w:0 /fft

Here, <source1> and <destination1> are the directory paths that you want to use and need to be enclosed in quotes if you have a space in a directory name. The /e switch copies all subdirectories, not just non-empty ones. Then, the /xf *.* portion excludes all files from the copying process. The remaining options help with getting around access issues and with copying only those directories that do not exist in the destination location. The /fft switch addresses the latter by causing ROBOCOPY to assume FAT file times. To get around folder permission delays, the /r:0 switch stops any operation being retried, with /w:0 setting wait times to 0 seconds. All this was enough to achieve what I wanted, and I am keeping it on file for my future reference, as well as sharing it with you.

Creating soft and hard symbolic links using the Windows command line

19th August 2015

In the world of UNIX and Linux, symbolic links are shortcuts, but they do not work like normal Windows shortcuts because you do not jump from one location to another with the file manager's address bar changing what it shows. Instead, it is as if you see the contents of the directory at another, quicker-to-access location in the file system, and the same sort of thinking applies to files too. In some ways, it is like giving files and directories alternative aliases. There are soft links, which point to the name of a given directory or file, and hard links, which point to the actual files or directories.

For a long time, I was under the mistaken impression that such things did not exist on Windows until I came across the mklink command, which came with the launch of Windows Vista at the start of 2007. While this feature might not be widely known, it demonstrates that Windows did adopt some UNIX and Linux capability long before other UNIX-like features, such as virtual desktops, were introduced in Windows 10.

By default, the aforementioned command sets up symbolic links to files and the /D switch allows the same to be done for directories too. The /H switch makes a hard link instead of a soft link, so we get much of the functionality of the ln command in UNIX and Linux. Here is an example that creates a soft symbolic link for a directory:

mklink /D shortcut target_directory

Above, shortcut is the name of the symbolic link file and target_directory is the destination to which it links. In my experience, it works best for destinations beyond your home folder and, from what I have read, hard links may not be possible across different disks either.
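For completeness, here is the hard link counterpart for a file; the file names are made up for the example:

mklink /H shortcut.txt target_file.txt

Since hard links operate on files, /H does not combine with /D; linking directories in that fashion is the business of junctions, which mklink creates with its /J switch.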

Setting up a WD My Book Live NAS on Ubuntu GNOME 13.10

1st December 2013

The official line from Western Digital is this: they do not support the use of their My Book Live NAS drives with Linux or UNIX. However, all that means is that they only develop tools for accessing their products for Windows and maybe OS X. It does not mean that you cannot access the drive's configuration settings by pointing your web browser at http://mybooklive.local/. In fact, not having those extra tools is no drawback at all, since the drive can be accessed through your file manager of choice under the Network section; the default name is MyBookLive too, so you can easily find the thing once it is connected to a router or switch.

Once you are in the server's web configuration area, you can do things like changing its name, updating its firmware, finding out what network address has been assigned to it, creating and deleting file shares, and password-protecting file shares, among other things. These are the kinds of things that come in handy if you are going to have a more permanent connection to the NAS from a PC that runs Linux. The steps that I describe have worked on Ubuntu 12.04 and 13.10 with the GNOME desktop environment.

What surprised me was the discovery that you cannot just set up a symbolic link that points to a file share. Instead, the share needs to be mounted, and this can be done from the command line using mount or at start-up with /etc/fstab. For this to happen, you need the Common Internet File System utilities, which are added as follows if you need them (check in the Software Centre or in Synaptic):

sudo apt-get install cifs-utils

Once these are added, you can add a line like the following to /etc/fstab:

//[NAS IP address]/[file share name] /[file system mount point] cifs
credentials=[full file location]/.creds,
iocharset=utf8,
sec=ntlm,
gid=1000,
uid=1000,
file_mode=0775,
dir_mode=0775
0 0

Though I have broken it over several lines above, this is one unwrapped line in /etc/fstab with all the fields in square brackets populated for your system and with no brackets around these. Though there are other ways to specify the server, using its IP address is what has given me the most success; this is found under Settings > Network on the web console. Next up is the actual file share name on the NAS; I have used a custom term instead of the default of Public. The NAS file share needs to be mounted to an actual directory in your file system like /media/nas or whatever you like; however, you will need to create this beforehand. After that, you have to specify the file system, and it is cifs instead of more conventional alternatives like ext4 or swap. After this and before the final two space delimited zeroes in the line comes the chunk that deals with the security of the mount point.

What I have done in my case is to have a password-protected file share, with the user ID and password placed in a file in my home area that only the owner can read and write (600 in chmod-speak). Preceding the filename with a "." also affords extra invisibility. That file is then populated with the user ID and password like the following. Of course, the bracketed values have to be replaced with what you have in your case.

username=[NAS file share user ID]
password=[NAS file share password]
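Setting the 600 permissions mentioned above is a one-line job; the file location follows the bracketed conventions used elsewhere in this post:

chmod 600 /home/[user name]/.creds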

With the credentials file created and locked down, the remaining mount options have to be set. First, there is the character set of the file (usually UTF-8; I got error code 79 when I mistyped this) and the security that is to be applied to the credentials (ntlm in this case). To avoid having no write access to the mounted file share, the uid and gid for your user need specifying, with 1000 being the values for the first non-root user created on a Linux system. After that, it does no harm to set the file and directory permissions, because they can only be set at mount time; using chmod, chown and chgrp afterwards has no effect whatsoever. Here, I have set permissions to read, write and execute for the owner and the user group, while only allowing read and execute access for everyone else (that's 775 in the world of chmod).
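With everything in place, the mount can be tried out without a reboot; the mount point location matches the /media/nas example given earlier:

sudo mkdir /media/nas
sudo mount -a

Since mount -a attempts everything in /etc/fstab that is not already mounted, any mistake in the new line shows up immediately rather than at the next start-up.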

All of what I have described here worked for me and had to be gleaned from disparate sources like Mount Windows Shares Permanently from the Ubuntu Wiki, another blog entry regarding the permissions settings for a CIFS mount point, and an Ubuntu forum posting on mounting CIFS with UTF-8 support. Because of that scattering of information, I felt that it all needed to be together in one place for others to use, and I hope it fulfils someone else's needs as it did mine.

Relocating the Apache web server document root directory in Fedora 12

9th April 2010

So as not to deface anything that is available on the web, I have a tendency to set up an offline Apache server on a home PC to do any tinkering away from the eyes of the unsuspecting public. Though Ubuntu is my mainstay for home computing, I do have a PC with Fedora installed, and I had been trying unsuccessfully for a few months to get an Apache instance to start automatically on there. While I can start it by running the following command as root, I'd rather not have more manual steps than are necessary.

httpd -k start

The command used by the system when it starts is different and, even when run manually as root, it failed with messages saying that it couldn't find the directory where the web server files are stored. Here it is:

service httpd start

The default document root location on any Linux distribution that I have seen is /var/www and all is very well with this, but it isn't a safe place to leave things if ever a re-installation is needed. Having needed to wipe /var after keeping it on a separate disk or partition for the sake of one installation, it doesn't look so persistent to me. In contrast, you can safeguard /home by having it on another disk or in a dedicated partition, which means that it can be retained even when you change the distro that you're using. Thus, I have got into the habit of keeping the web server document root in my home area, and that is where I have been seeing the problem.
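For reference, the relocation itself is a matter of pointing Apache at the new location in /etc/httpd/conf/httpd.conf; this is only a sketch, assuming the /home/www location that features later in this post and the Apache 2.2 syntax of the Fedora releases of the time:

DocumentRoot "/home/www"

<Directory "/home/www">
    Order allow,deny
    Allow from all
</Directory>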

Because of the access message, I tried using chmod and chgrp, but to no avail. The remedy has to do with reassigning the security contexts used by SELinux. In Fedora, Apache will not work with the context user_home_t that is usually associated with home directories, but needs httpd_sys_content_t instead. To find out what contexts are associated with particular folders, issue the following command:

ls -Z

The final solution was to create a user account whose home directory hosts the root of the web server file system, called www in my case. Then, I executed the following command as root to get things going:

chcon -R -h -t httpd_sys_content_t /home/www

It appears that even the root of the home directory has to have an appropriate security context (/home has home_root_t, so that might do the needful too). Without that, nothing will work, even if all is well at the next level down. The switches for the chcon command translate as follows:

-R : recursive; applies changes to all files and folders within a directory.

-h : changes apply only to symbolic links and not to where they refer in the file system.

-t : alters context type.
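One caveat worth noting: a context applied with chcon can be undone by a file system relabel. To make the assignment persistent, the policy itself can be told about it, as in this sketch using semanage (from the policycoreutils tools) and restorecon, both run as root:

semanage fcontext -a -t httpd_sys_content_t "/home/www(/.*)?"
restorecon -R /home/www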

It took a while for all of this stuff about SELinux security contexts to percolate through to the point where I was able to solve the problem. A spot of further inspiration was needed too, and it even guided my search for the information that I needed. It's well worth trying Linux Home Networking if you need further details. Though there are references to an earlier release of Fedora, the content still applies to later versions up to the current release, if my experience is typical.

Watch where you store your virtual machines when using VMware on Linux

12th July 2008

My experience is with Ubuntu on this one, but I have found that you need to be careful about the file system used by the drive where you keep your virtual machines. If it is NTFS, VMware can fail to start a VM because it cannot create the virtual memory file that it presents as physical memory to a guest operating system. Use ext2 or ext3 and there should be no problem, even if that means formatting a drive to fulfil the need. That's what I did, and all was well thereafter.
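If you are unsure what file system a mounted drive is using, df can report it; the location here is only an example:

df -T /media/external

The -T switch adds a Type column to the usual listing, which is enough to tell NTFS apart from ext2 or ext3.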

The irritation of a 4 GB file size limitation

20th November 2007

Recently, I got myself a 500 GB Western Digital My Book, an external hard drive in other words. Bizarrely, the thing is formatted using the FAT32 file system. While I appreciate that backward compatibility for Windows 9x might seem desirable, using NTFS would be more understandable, particularly given that the last of the 9x line, Windows ME, is now seven years old (there cannot be anybody who still uses that, can there?). The result is that I got core dump messages from cp commands issued from the terminal on my Ubuntu system when copying files exceeding 4 GB in size last night. It surprised me at first, but it turns out to be a FAT32 limitation. The idea of formatting the drive as NTFS did occur to me, only for GParted not to offer that, at least not with my current configuration. While the ext3 file system is an option, I have a spare PC with Windows 2000, so that would be a step too far for now, unless I take the plunge and bring that machine into the Linux universe too.

Other than the 4 GB irritation, the new drive works well and was picked up and supported by Ubuntu without any hassle beyond getting it out of the box, finding a place for it on my desk and plugging in a few cables. While needing judiciousness about file sizes, it played an important role while I converted a 320 GB internal WD drive from NTFS to ext3, and it may yet be vital if my Windows 2000 box gets a migration to Linux. In the interim, 500 GB is a lot of space, and having an external drive that size is a bonus these days, especially when you consider that the 1 terabyte threshold is on the verge of being crossed. It certainly makes DVDs, flash drives and other multi-gigabyte media less impressive than they otherwise might appear.
