Technology Tales

Adventures & experiences in contemporary technology

Avoiding permissions, times or ownership failure messages when using rsync

22nd April 2023

The rsync command is one that I use heavily for doing backups and web publishing. The latter means that it is part of how I update websites built using Hugo because new and/or updated files need uploading. The command also sees use when uploading files onto other websites. During one of these operations, and I am unsure now as to which type is relevant, I encountered errors about being unable to set permissions.

The cause was the all-encompassing -a option. This is shorthand for -rltpgoD, and the individual options perform the following:

-r: recursive transfer, copying all contents within a directory hierarchy

-l: symbolic links copied as symbolic links

-t: preserve times

-p: preserve permissions

-g: preserve groups

-o: preserve owners

-D: preserve device and special files

The solution is to drop some of the options when they are inappropriate. The minimum is to omit the option for permissions preservation, but others may not apply between different servers either, especially when their operating systems differ. Removing the options for preserving permissions, groups and owners results in something like this:

rsync -rltD [rest of command]
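
As an illustration, here is a hypothetical invocation for publishing a Hugo site; the public/ directory, the user name, the host and the remote path are all made-up values:

rsync -rltD --delete public/ user@example.com:/var/www/site/

The --delete switch removes remote files that no longer exist locally, which suits web publishing but should be used with care.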

While it can be good to have a more powerful command with the setting of a single option, it can mean trying to do too much. Another way to avoid permissions and similar errors is to have consistency between the source and destination file systems, but that is not always possible.

Getting Eclipse to start without incompatibility errors on Linux Mint 19.1

12th June 2019

Recent curiosity about Java programming and Groovy scripting got me trying to start up the Eclipse IDE that I had installed on my main machine. What I got instead of a successful application start-up was a message that included the following:

!MESSAGE Exception launching the Eclipse Platform:
!STACK
java.lang.ClassNotFoundException: org.eclipse.core.runtime.adaptor.EclipseStarter
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:466)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:566)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:626)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
at org.eclipse.equinox.launcher.Main.main(Main.java:1414)

The cause was a mismatch between Eclipse and the installed version of Java that it needed in order to run. After all, the software itself is written in the Java language, and the version installed from the usual software repositories was too old to work with Java 11. The solution turned out to be installing a newer version as a Snap (Ubuntu's answer to Flatpak). The following command did the needful since Snapd already was running on my machine:

sudo snap install eclipse --classic

The only part of the command that warrants extra comment is the --classic switch, since that is needed for a tool like Eclipse that has to access the host file system. On execution, the software was downloaded from Snapcraft and then installed within its own bundle of dependencies. The latter adds a certain detachment from the underlying Linux installation and ensures that no messages appear because of incompatibilities like the one near the start of this post.
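
Should you want to confirm what version got installed afterwards, the snap list command reports it:

snap list eclipse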

Sorting out sluggish start-up and shutdown times in Linux Mint 19

9th August 2018

The Linux Mint team never pushes anyone into upgrading to the latest version of their distribution, but curiosity often is a strong enough impulse to make me do just that. When it brings me up against some rough edges, the wisdom of leaving things alone becomes evident. Nevertheless, doing so also brings its share of learning, and that is what I am sharing in this post. It also allows me to collect a number of titbits that may be of use to others.

Again, I went with the in-situ upgrade option, though the addition of the Timeshift backup tool means that this is less frowned upon than once would have been the case. It worked well too, apart from slow start-up and shutdown times, so I set about tracking down the causes on the two machines that I have running Linux Mint. As it happens, the cause was different on each machine.

On one PC, it was networking that was holding things up. The cause was my specifying a fixed IP address in /etc/network/interfaces instead of using the Network Settings GUI tool. Resetting the configuration file back to its defaults and using the Cinnamon settings interface took away the delays. It was inspecting /var/log/boot.log that highlighted the problem, so that is worth checking if I ever encounter slow start-up times again.
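
On a systemd-based distribution like Linux Mint 19, another quick way of seeing what is delaying start-up is the following command, which lists units in decreasing order of initialisation time:

systemd-analyze blame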

As I mentioned earlier, the second PC had a very different problem, though it also involved a configuration file. What had happened was that /etc/initramfs-tools/conf.d/resume contained the wrong UUID for my system's swap partition, so I was seeing messages like the following:

W: initramfs-tools configuration sets RESUME=UUID=<specified UUID for swap partition>
W: but no matching swap device is available.
I: The initramfs will attempt to resume from <specified file system location>
I: (UUID=<specified UUID for swap partition>)
I: Set the RESUME variable to override this.

Correcting the file and executing the following command fixed the issue by updating the affected initramfs images for all installed kernels, speeding up PC start-up times:

sudo update-initramfs -u -k all
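
To find the correct UUID for the swap partition in the first place, the blkid tool can be used, with the output filtered here so that only swap entries remain:

sudo blkid | grep -i swap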

Though it was not a cause of system sluggishness, I also sorted out another message that I kept seeing during kernel updates and removals on both machines. This had been there for a while and caused warning messages about my system locale not being recognised. The problem has been described elsewhere as follows: /usr/share/initramfs-tools/hooks/root_locale expects to see individual locale directories in /usr/lib/locale, but locale-gen is configured to generate an archive file by default. Issuing the following command sorted that:

sudo locale-gen --purge --no-archive

Following these fixes, my new Linux Mint 19 installations have stabilised with speedier start-up and shutdown times. That allows me to look at what is on Flathub to see what applications are available and whether they get updated to the latest versions on an ongoing basis. That may be a topic for another entry on here, but the applications that I have tried work well so far.

Copying a directory tree on a Windows system using XCOPY and ROBOCOPY

17th September 2016

My usual method for copying a directory tree without any of the files in there involves the use of the Windows command line tool XCOPY, and the command takes the following form:

xcopy /t /e <source> <destination>

The /t switch tells XCOPY to copy only the directory structure while the /e one tells it to include empty directories too. Substituting /s for /e would ensure that only non-empty directories are copied. <source> and <destination> are the directory paths that you want to use and need to be enclosed in quotes if you have a space in a directory name.
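
As an illustration with hypothetical paths, and with the /i switch added so that XCOPY assumes the destination is a directory rather than prompting about it:

xcopy /t /e /i "C:\Projects" "D:\Skeleton"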

There is one drawback to this approach that I have discovered. When you have long directory paths, messages about there being insufficient memory are issued and the command fails. The limitation has nothing to do with the machine that you are using but is a limitation of XCOPY itself.

After discovering that, I checked whether ROBOCOPY could do the same thing without the file path length limitation, because I did not have the liberty of shortening folder names to get the whole path within the length expected by XCOPY. The following is the form of the command that I found did what I needed:

robocopy <source> <destination> /e /xf *.* /r:0 /w:0 /fft

Again, <source> and <destination> are the directory paths that you want to use, and they need to be enclosed in quotes if you have a space in a directory name. The /e switch copies all subdirectories, not just non-empty ones. Then, the /xf *.* portion excludes all files from the copying process. The remaining options were added to help with getting around access issues and to try to copy only those directories that do not exist in the destination location. The /fft switch addresses the latter by causing ROBOCOPY to assume FAT file times. To get around folder permission delays, the /r:0 switch stops any operation being retried, with /w:0 setting wait times to 0 seconds. All this was enough to achieve what I wanted, and I am keeping it on file for my future reference as well as sharing it with you.
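
To make the form concrete, here is the same command instantiated with hypothetical paths:

robocopy "C:\Projects" "D:\Skeleton" /e /xf *.* /r:0 /w:0 /fft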

Batch conversion of DNG files to other file types with the Linux command line

8th June 2016

At the time of writing, Google Drive is unable to accept DNG files, the Adobe file type for RAW images from digital cameras. The uploads themselves work fine but the additional processing at the end that I believe is needed for Google Photos appears to be failing. Because of this, I thought of other possibilities like uploading them to Dropbox or enclosing them in ZIP archives instead; of these, it is the first that I have been doing and with nothing but success so far. Another idea is to convert the files into an image format that Google Drive can handle and TIFF came to mind because it keeps all the detail from the original image. In contrast, JPEG files lose some information because of the nature of the compression.

Handily, a one-line command does the conversion for all files in a directory once you have all the required software installed:

find -type f | grep -i "DNG" | parallel mogrify -format tiff {}

The find and grep commands are standard, with the first getting you a list of all the files in the current directory and sending (piping) these to the grep command so the list only retains the names of DNG files. The last part uses two commands for which I found installation was needed on my Linux Mint machine. The parallel package is the first of these; it distributes the heavy workload across all the cores in your processor, and this command will add it to your system:

sudo apt-get install parallel

The mogrify command is part of the ImageMagick suite along with others like convert and this is how you add that to your system:

sudo apt-get install imagemagick

In the command at the top, the parallel command works through all the files in the list provided to it and feeds them to mogrify for conversion. Without the use of parallel, the basic command is like this:

mogrify -format tiff *.DNG

In both cases, the -format switch specifies the output file type, with tiff triggering the creation of TIFF files. The *.DNG portion captures all DNG files in a directory, while {} does this in the main command at the top of this post. If you wanted JPEG ones, you would replace tiff with jpg. Should you ever need it, a full list of the supported file types is produced using the identify command (also part of ImageMagick) as follows:

identify -list format
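
Since that list is long, the output can be piped through grep to check for a particular format, with tiff here being just an example:

identify -list format | grep -i tiff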

Creating soft and hard symbolic links using the Windows command line

19th August 2015

In the world of UNIX and Linux, symbolic links are shortcuts, but they do not work like normal Windows shortcuts because you do not jump from one location to another with the file manager's address bar changing what it shows. Instead, it is as if you see the contents of the directory at another, quicker-to-access location in the file system, and the same sort of thinking applies to files too. In some ways, it is like giving files and directories alternative aliases. There are soft links that point to the name of a given directory or file, and hard links that point to the actual files or directories.

For a long time, I was under the mistaken impression that such things did not exist in Windows until I came across the mklink command, which arrived with the launch of Windows Vista at the start of 2007. While the then-new feature may not be obvious to most of us, it does show that Windows incorporated a feature from UNIX and Linux well before the advent of virtual desktops in Windows 10.

By default, the aforementioned command sets up symbolic links to files and the /D switch allows the same to be done for directories too. The /H switch makes a hard link instead of a soft link so we get much of the functionality of the ln command in UNIX and Linux. Here is an example that creates a soft symbolic link for a directory:

mklink /D shortcut target_directory

Above, shortcut is the name of the symbolic link file and target_directory is the destination to which it links. In my experience, it works best for destinations beyond your home folder and, from what I have read, hard links may not be possible across different disks either.
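
For completeness, here is what creating a hard link might look like, using made-up file names; hard links apply to files rather than directories, so no /D switch is involved:

mklink /H alias.txt original.txt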

A few more shell commands

8th July 2015

Here are some Linux commands that I had not met before and that I encountered in a feature article in the current issue of Linux User & Developer:

cd -

This returns you to the previous directory where you were before, without having to go back through the folder hierarchy to get there, and is handy if you are jumping around a file system and any other means is far from speedy.

lsb_release -a

It can be useful to uncover what version of a distro you have from the command line, and the above works for distros as diverse as Linux Mint, Debian, Fedora (where it installs automatically in Fedora 22 if it is not present already, a more advanced approach than just showing you the installation command, as Linux Mint or Ubuntu do), openSUSE and Manjaro. These days, the version may not change too often, but it still is good to uncover what you have.

yum install fedora-upgrade

This one can be run either with sudo or in a root session started with su, and it is specific to Fedora. The command performs an upgrade of the Fedora distro itself, and I wonder if the functionality has been ported to the dnf command that has taken over from yum. My experiences with dnf in Fedora 22 so far suggest that it should be the case, though I need to check that further with the VirtualBox VM that I have created.

Smarter file renaming using PowerShell

14th November 2014

It seems that the Rename-Item cmdlet in PowerShell is a very useful tool when it comes to smarter renaming of files. Even text substitution is a possibility, and what follows is an example that takes the output of the Dir command for listing the files in a directory and replaces hyphens with underscores in each name.

Dir | Rename-Item -NewName { $_.Name -replace "-","_" }

The result is that something like the-file.txt becomes the_file.txt. This behaviour is reminiscent of the rename command found on Linux and UNIX systems, where regular expressions can be used, as in the following example that has the same result as the above:

rename 's/-/_/g' *

In both cases, you do need to be careful as to what files are in a directory before doing this, though the wildcard syntax on Linux or UNIX will be more familiar to anyone who has worked with files via almost any command line. Another thing to watch in the UNIX world is that ** parses the whole directory structure, and that could be something that is not wanted much of the time.
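
Given that need for care, PowerShell's common -WhatIf parameter offers a safeguard by previewing the renames without actually performing them:

Dir | Rename-Item -NewName { $_.Name -replace "-","_" } -WhatIf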

All of this is a far cry from the capabilities of the ren or rename command used in the days of MS-DOS and what has become the legacy Windows command line. Apart from simple renaming, any attempt at tweaking a filename through substitution ended up with the extra string getting appended to filenames when I tried it. Thus, the PowerShell option looks better in comparison.

Changing the working directory in a SAS session

12th August 2014

It appears that PROC SGPLOT and other statistical graphics procedures create image files even if you are creating RTF or PDF files. By default, these are PNG files, but there are other possibilities. When working with PC SAS, I have seen them written to the current working directory, and that could clutter up your folder structure, especially if they are unwanted.

Being unable to track down a setting that controls this behaviour, I resolved to find a way around it by sending the files to the SAS work directory so they are removed when a SAS session is ended. One option is to set the session’s working directory to be the SAS work one and that can be done in SAS code without needing to use the user interface. As a result, you get some automation.

The method is implicit though in that you need to use an X statement to tell the operating system to change folder for you. Here is the line of code that I have used:

x "cd %sysfunc(pathname(work))";

The X statement passes commands to an operating system's command line, and they are enclosed in quotes. %sysfunc is a macro function that allows certain data step functions or call routines, as well as some SCL functions, to be executed. An example of the latter is pathname, which resolves library or file references; here, it is interrogating the location of the SAS work library so that this can be passed to the operating system's cd (change directory) command for processing. This method works on Windows and UNIX, so Linux should be covered too, offering a certain amount of automation since you do not have to specify the location of the SAS work library in every session, its folder name changing all the while.
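
As a quick sanity check, the same pathname function can be echoed to the SAS log so that you can see where the image files will land:

%put NOTE: SAS work library is at %sysfunc(pathname(work));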

Of course, if someone were to tell me of another way to declare the location of the generated PNG files that works with RTF and PDF ODS destinations, then I would be all ears. Even direct output without image file creation would be even better. Until then though, the above will do nicely.

Setting up a WD My Book Live NAS on Ubuntu GNOME 13.10

1st December 2013

The official line from Western Digital is that they do not support the use of their My Book Live NAS drives with Linux or UNIX. However, what that means is that they only develop tools for accessing their products on Windows and maybe OS X. It still doesn't mean that you cannot access the drive's configuration settings by pointing your web browser at http://mybooklive.local/. In fact, not having those extra tools is no drawback at all since the drive can be accessed through your file manager of choice under the Network section, and the default name is MyBookLive too, so you easily can find the thing once it is connected to a router or switch anyway.

Once you are in the server's web configuration area, you can do things like changing its name, updating its firmware, finding out what network address has been assigned to it, creating and deleting file shares, password protecting file shares, and other things. These are the kinds of things that come in handy if you are going to have a more permanent connection to the NAS from a PC that runs Linux. The steps that I describe have worked on Ubuntu 12.04 and 13.10 with the GNOME desktop environment.

What I was surprised to discover is that you cannot just set up a symbolic link that points to a file share. Instead, it needs to be mounted, and this can be done from the command line using mount or at start-up with /etc/fstab. For this to happen, you need the Common Internet File System utilities, and these are added as follows if you need them (check in the Software Centre or in Synaptic):

sudo apt-get install cifs-utils

Once these are added, you can add a line like the following to /etc/fstab:

//[NAS IP address]/[file share name] /[file system mount point] cifs
credentials=[full file location]/.creds,
iocharset=utf8,
sec=ntlm,
gid=1000,
uid=1000,
file_mode=0775,
dir_mode=0775
0 0

Though I have broken it over several lines above, this is one unwrapped line in /etc/fstab, with all the fields in square brackets populated for your system and with no brackets around them. Though there are other ways to specify the server, using its IP address is what has given me the most success, and this is found under Settings > Network on the web console. Next up is the actual file share name on the NAS, and I have used a custom one instead of the default of Public. The NAS file share needs to be mounted to an actual directory in your file system, like /media/nas or whatever you like, and you will need to create this beforehand. After that, you have to specify the file system, and it is cifs instead of more conventional alternatives like ext4 or swap. After this, and before the final two space-delimited zeroes in the line, comes the chunk that deals with the security of the mount point.
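
Incidentally, for a one-off mount without editing /etc/fstab, the same options can be passed to mount directly; here is a sketch with made-up values for the IP address, share name, mount point and credentials file:

sudo mount -t cifs //192.168.1.10/Files /media/nas -o credentials=/home/username/.creds,iocharset=utf8,sec=ntlm,uid=1000,gid=1000,file_mode=0775,dir_mode=0775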

What I have done in my case is to have a password protected file share, with the user ID and password placed in a file in my home area where only the owner has read and write permissions (600 in chmod-speak). Preceding the filename with a "." also affords extra invisibility. That file then is populated with the user ID and password like the following; of course, the bracketed values have to be replaced with what you have in your case.

username=[NAS file share user ID]
password=[NAS file share password]
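
Setting those restrictive permissions on the credentials file, here given the hypothetical name .creds in the home area, is a one-line job:

chmod 600 ~/.creds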

With the credentials file created, the remaining mount options need to be set. First, there is the character set of the file (usually UTF-8, and I got error code 79 when I mistyped this) and the security that is to be applied to the credentials (ntlm in this case). To save having no write access to the mounted file share, the uid and gid for your user need specifying, with 1000 being the values for the first non-root user created on a Linux system. After that, it does no harm to set the file and directory permissions because they only can be set at mount time; using chmod, chown and chgrp later on has no effect whatsoever. Here, I have set permissions to read, write and execute for the owner and the user group, while only allowing read and execute access for everyone else (that's 775 in the world of chmod).

All of what I have described here worked for me and had to be gleaned from disparate sources like Mount Windows Shares Permanently from the Ubuntu Wiki, another blog entry regarding the permissions settings for a CIFS mount point, and an Ubuntu forum posting on mounting CIFS with UTF-8 support. Because of the scattering of information, I just felt that it needed to be put all together in one place for others to use, and I hope that it fulfils someone else's needs in a similar way to mine.
