Adventures & experiences in contemporary technology
Recently, a message appeared on some web servers of mine, exhorting me to upgrade to Ubuntu 22.04.1 using the do-release-upgrade command. In the interests of remaining current, I did just that, only to get another message like the following:
The upgrade needs a total of [amount of space with units] free space on disk `/boot`.
Please free at least an additional [amount of space with units] of disk space on `/boot`.
Empty your trash and remove temporary packages of former installations
using `sudo apt-get clean`.
Using sudo apt-get clean did not resolve the problem, so the advice given was of no use. The actual problem, as a search around the web revealed, was that there were too many old kernels cluttering up /boot. What also came up was a single command for fixing the problem. However, removing the wrong kernel can trash a system, so I took a more cautious approach. First, I listed the kernels to be removed and checked that they did not include the currently running one. That was done with the following command (broken up over several lines for clarity, using the backslash character to denote continuation), while running uname -r revealed the details of the running kernel:
dpkg -l linux-{image,headers}-"[0-9]*" \
| awk '/ii/{print $2}' \
| grep -ve "$(uname -r \
| sed -r 's/-[a-z]+//')"
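As a quick aside, the uname and sed portion of the pipeline produces the string that grep uses for exclusion. Assuming a hypothetical kernel release of 5.15.0-48-generic, the transformation looks like this:
uname -r
# 5.15.0-48-generic (hypothetical example)
uname -r | sed -r 's/-[a-z]+//'
# 5.15.0-48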
The dpkg command lists the installed kernels, with awk, grep and sed filtering out unwanted sections of the text. The awk command takes the tabular output from dpkg and turns it into a list. The -v switch on the grep command gets the lines that do not match the search expression created by the sed command, while the -e switch makes grep look for patterns. The sed command strips the alphabetic suffix (such as -generic) from the output of the uname command, where the -r switch produces the kernel release details, to leave only the version number of the current kernel. On being satisfied that nothing untoward would happen, the full command below (also broken up over several lines for clarity using the backslash character to denote continuation) could be executed.
sudo apt purge $(dpkg -l linux-{image,headers}-"[0-9]*" \
| awk '/ii/{print $2}' \
| grep -ve "$(uname -r \
| sed -r 's/-[a-z]+//')")
This allowed apt to purge the unwanted kernels, thus freeing up enough space for the upgrade to continue. That happened without significant incident, though some remediation was needed on the PHP side to get the website working smoothly again.
Having an older PC to upgrade, I decided to install OpenMediaVault on it a few years ago, after adding 6 TB and 4 TB hard drives for storage, a Gigabit network card to speed up backups and a new BeQuiet! power supply to make it quieter. It has been working smoothly since then, and the release of OpenMediaVault 6.x had me wondering how to move to it.
Usefully, I had enabled an SSH service for remote logins and set up an account for anything that I needed to do. This includes upgrades, taking backups of what is on my NAS drives and even shutting down the machine when I am done with it.
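For anyone wanting to do likewise, here is a minimal sketch of that sort of remote housekeeping over SSH; the hostname, user name and paths are placeholders that would need to match your own setup:
# pull a copy of a shared folder from the NAS to a local backup drive
rsync -av user@nas.local:/export/shared/ /backup/nas/
# shut the machine down remotely once finished
ssh user@nas.local 'sudo poweroff'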
Using an SSH session, the first step was to switch to the administrator account and issue the following command to ensure that my OpenMediaVault 5.x installation was as up to date as it could be:
omv-update
Once that had completed what it needed to do, the next step was to do the upgrade itself with the following command:
omv-release-upgrade
With that complete, it was time to reboot the system. I then fired up the web administration interface and spotted a kernel update, which I applied. The system was restarted again, further updates were noticed and these too were applied through the web interface. The whole thing is based on Debian 11.x, but I am not complaining since it quietly does exactly what I need of it.
Much of the past weekend was spent getting a working Debian 10 installation up and running in a VirtualBox virtual machine. Because I chose the Cinnamon desktop environment, the process was not as smooth as I would have liked, so a minimal installation was performed before I started to embellish things. Along the way, I got to wondering if I could create virtual hard drives using the command line, and I found that something like the following did what was needed:
VBoxManage createmedium disk --filename <full path including file name without extension> --size <size in MiB> --format VDI --variant Standard
Most of the options are self-explanatory apart from the one named variant. This defines whether the VDI file grows on demand up to the maximum given by the size parameter (Standard) or is allocated in full at that size from the start (Fixed). Two VDI files were created in this way and I used these to replace their Debian 8 predecessors, even saving a bit of space too. If you want, you can find out more in the user documentation, but this post hopefully gets you started anyway.
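For a concrete, if hypothetical, example, the following would create a dynamically allocated 20 GiB VDI file at a path of your choosing (the file name here is just a placeholder); the new disk then can be attached to a virtual machine through the VirtualBox interface or with VBoxManage storageattach:
VBoxManage createmedium disk --filename ~/vms/debian10-data --size 20480 --format VDI --variant Standard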
After a recent upgrade to Linux Mint Debian Edition 3 in a VirtualBox virtual machine that had been running its predecessor, I began to notice that background images were being loaded with washed-out or faded colours. This happened at startup, so selecting another background image worked as intended until the same thing happened to that one after a system restart.
This problem is not new and has affected the Cinnamon desktop in the main Linux Mint variant (the one that is based on Ubuntu) and issuing the following command in a terminal session is a suggested solution:
gsettings set org.cinnamon.muffin background-transition fade-in
In my case, that solved the problem, and desktop background image display has been as it should be since I executed the above. All it took was a change to a system setting.
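If you want to confirm what the setting holds before or after making the change, gsettings can read the value back as well:
gsettings get org.cinnamon.muffin background-transition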
Recently, I noticed that the disk in my WD My Cloud NAS was active all the time, which reminded me of another time when this happened. Then, I needed to activate the SSH service on the device and log in as root with the password welc0me. That default password was changed before doing anything else; since the device runs on Debian Linux, that was a simple case of using the passwd command and following the prompts. One word of caution is in order: only root can be used for SSH connections to a WD My Cloud NAS, and any other user that you set up will not have these privileges.
The cause of all the activity was two services: wdmcserverd and wdphotodbmergerd. One way to halt their actions is to stop the services using these commands:
/etc/init.d/wdmcserverd stop
/etc/init.d/wdphotodbmergerd stop
The above only works until the next system restart, so these commands should make for a more persistent disabling of the culprits:
update-rc.d -f wdmcserverd remove
update-rc.d -f wdphotodbmergerd remove
If all else fails, removing execute permissions from the normally executable files that the services need will also work, and it is a solution that I have tried with success between system updates:
cd /etc/init.d
chmod 644 wdmcserverd
chmod 644 wdphotodbmergerd
reboot
Between all of these, it should be possible to have your WD My Cloud NAS go into power saving mode as it should, though turning off additional services such as DLNA may be what some need to do. Having turned those off already, I only needed to disable the photo thumbnail services that were the cause of my machine’s troubles.
In a previous posting, I talked about compressing a virtual hard disk for a Windows guest system running in VirtualBox on a Linux system. Since then, I have needed to do the same for a Linux guest following some housekeeping. The Linux distribution used is Debian so the instructions are relevant to that and maybe its derivatives such as Ubuntu, Linux Mint and their kind.
While there are alternatives like dd, I am going to stick with a utility named zerofree, which overwrites the newly freed-up disk space with zeroes to aid compression later on in the process. The first step is to install it using the following command:
apt-get install zerofree
Once that has completed, the next step is to unmount the relevant disk partition. Luckily for me, what I needed to compress was an area that I had reserved for synchronisation with Dropbox. If it were the root area where the operating system files are kept, a live distro would be needed instead. In any event, the required command takes the following form, with the mount point being whatever it is on your system (/home, for instance):
sudo umount [mount point]
With the disk partition unmounted, zerofree can be run by issuing a command that looks like this:
zerofree -v /dev/sdxN
Above, the -v switch tells zerofree to display its progress, and a continually updating percentage count tells you how it is going. The /dev/sdxN piece is generic, with the x corresponding to the letter assigned to the disk on which the partition resides (a, b, c or whatever) and the N being the partition number (1, 2, 3 or whatever; on pre-GPT disks, the maximum number of primary partitions was 4). Putting all this together, we get an example like /dev/sdb2.
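If you are unsure which device name applies, lsblk gives a quick overview of the disks and partitions that the system can see, along with their sizes and mount points:
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT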
Once that has completed, the next step is to shut down the VM and execute a command like the following on the host Linux system ([file location/file name] needs to be replaced with whatever applies on your system):
VBoxManage modifyhd [file location/file name].vdi --compact
With the zero filling in place, there was a lot of space released when I tried this. While it would be nice for dynamic virtual disks to reduce in size automatically, I accept that there may be data integrity risks with those so the manual process will suffice for now. It has not been needed that often anyway.
Here are some Linux commands that I encountered in a feature article in the current issue of Linux User & Developer that I had not met before:
cd -
This returns you to the previous directory where you were before, without having to go back through the folder hierarchy to get there, and it is handy if you are jumping around a file system and any other means is far from speedy.
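As a quick illustration, the following hops into /var/log and then straight back to wherever you started (the destination is just an example):
cd /var/log
cd -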
lsb_release -a
It can be useful to uncover which version of a distro you have from the command line, and the above works for distros as diverse as Linux Mint, Debian, Fedora (where it installs itself automatically in Fedora 22 if it is not present already, a more advanced approach than just showing you the installation command as Linux Mint or Ubuntu do), openSUSE and Manjaro. These days, the version may not change too often, but it still is good to know what you have.
yum install fedora-upgrade
This one can be run either with sudo or in a root session started with su, and it is specific to Fedora. The command installs a tool that performs an upgrade of the Fedora distro itself, and I wonder if the functionality has been ported to the dnf command that has taken over from yum. My experiences with dnf in Fedora 22 so far suggest that it should be the case, though I need to check that further with the VirtualBox VM that I have created.
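For what it is worth, dnf gained this ability through a system-upgrade plugin, and the rough shape of the process is sketched below; the release number is a placeholder and the steps are untested here:
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=23
sudo dnf system-upgrade reboot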
When I recently did my usual system update for the stable version of Ubuntu GNOME, there were some updates pertaining to apt, and the process failed when I executed the following command:
sudo apt-get upgrade
Usefully, some messages were issued and here’s a flavour:
Setting up apt (0.9.9.1~ubuntu3.1) …
ERROR: Can’t find the archive-keyring
Is the ubuntu-keyring package installed?
dpkg: error processing apt (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
apt
E: Sub-process /usr/bin/dpkg returned an error code (1)
Some searching on the web revealed that the problem was that there were no files in /usr/share/keyrings when there should have been, and I had not removed them myself, so I have no idea how they disappeared. Various remedies were tried, and any that needed software installed were non-starters because apt was disabled by the lack of keyring files. The workaround that restored things for me was to take a copy of the files in /usr/share/keyrings from an Ubuntu GNOME 14.04 installation in a VirtualBox VM and copy them into the same location on its Ubuntu GNOME 13.10 host. For those without such resources, I have packaged them in a zip file below. Other remedies like Y PPA also were suggested where I was reading, but that software package needed installing beforehand, so it was of little use to me when the likes of Synaptic were disabled. If there are other remedies that do not involve an operating system re-installation, I would like to know about them, as well as possible causes for the file loss in the first place and how to avoid these.
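For illustration, and assuming the rescued files have been placed somewhere like /media/share/keyrings (a placeholder path), restoring them is a simple copy followed by letting dpkg finish off what the failed update started:
sudo cp /media/share/keyrings/*.gpg /usr/share/keyrings/
sudo dpkg --configure -a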
More than a decade of computer upgrades and rebuilds can leave obsolete kit in your hands, and the arrival of legislation controlling the dumping of electronic goods during this time can leave one wondering how anyone can dispose of it. Thankfully, I discovered that the local council refuse site, only a few miles away from me, accepts such things for recycling, and it saw me a good few times over the last summer with obsolete and non-working gadgets that had stayed with me far too long. Some were as bulky as a computer monitor and a printer, but others were relatively diminutive.
Disposing of non-working and utterly obsolete equipment is an easy choice, but I find this harder when a device still works as intended and might even have a use yet. When you realise that computer motherboards still come with PS/2, floppy and IDE ports, things get trickier. My Gigabyte Z87-HD3 mainboard has just one PS/2 port where predecessors would have had two, the same applies to IDE sockets, and there still is a floppy drive socket on there too, a surprising sight for anyone used to thinking that such things are utterly outmoded these days. So, PC technology isn’t relinquishing backwards compatibility just yet, given that this mainboard is part of a system with an Intel Core i5-4670K CPU and 24 GB of RAM.
Even with the presence of an IDE port, I was not tempted to use the leftover 10 GB and 20 GB hard drives that I have had for just over a decade. Ten years ago, that sort of capacity would have been respectable were it not for our voracious appetite for data storage thanks to photography, video and music. Apart from the size constraints, the speed of those drives cannot compare well with what we have today either, and I quickly saw that when I replaced a Samsung 160 GB hard drive of a similar age with a Samsung SSD.
The result of this line of thought was that I was minded to recycle the drives, so I started to think about wiping them, and Linux has a good tool for this in the form of the dd command. It can overwrite data on the disks so as to render the information virtually irretrievable. Also, Linux has a number of dummy devices that can supply junk data for overwriting purposes. They are akin to /dev/null, which is used to suppress output from a command. The first is /dev/zero, which supplies a stream of zero bytes and is the one that I have used. However, there also are /dev/random and /dev/urandom for those wanting a more random element to the overwriting.
To overwrite data on a disk with zeroes while having feedback on progress, the following command achieves the required result:
sudo dd if=/dev/zero | pv | sudo dd of=/dev/sdd bs=16M
The whole operation needs to be executed with root privileges. The if parameter of dd specifies the input data, and this is sent to a pv command that shows a progress bar that dd would not produce by itself, before the output passes on to another dd command, with the disk to be overwritten specified using the of parameter. The bs parameter in that second dd command specifies the block size for the disk writing job. Unfortunately, pv is not installed by default, so you need to add it yourself. On a Debian, Ubuntu or Linux Mint system, the command is the following:
sudo apt-get install pv
That pv sandwich also is invaluable for those times when dd is needed to copy partitions between different physical or virtual (in a virtual machine) disks. Without it, you might wonder what exactly is happening in the silence, which is especially concerning when you are retrying an operation that failed previously and it takes a while to complete each time.
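For the sake of illustration, a partition copy using the same sort of pv sandwich might look like this, with the source and destination partitions being placeholders that need to be checked carefully against your own disks before running anything:
sudo dd if=/dev/sda1 | pv | sudo dd of=/dev/sdb1 bs=16M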
Installing the latest version of Citrix Receiver (13.0 at the time of writing) on 64-bit Ubuntu should be as simple as downloading the required DEB package and double-clicking on the file so that Ubuntu Software Centre can work its magic. Unfortunately, the 64-bit DEB file is faulty so that means that the Ubuntu community how-to guide for Citrix still is needed. In fact, any user of Linux Mint or another distro that uses Ubuntu as its base would do well to have a look at that Ubuntu link.
For the sake of completeness, I still am going to let you in on the process that worked for me. Once the DEB file has been downloaded, the first task is to create a temporary folder where the DEB file’s contents can be extracted:
mkdir ica_temp
With that in place, it then is time to do the extraction, and it needs two commands, with the second of these needed to extract the control file while the first extracts everything else.
sudo dpkg-deb -x icaclient- ica_temp
sudo dpkg-deb --control icaclient- ica_temp/DEBIAN
It is the control file that has been the cause of all the bother because it refers to unavailable dependencies that it really doesn’t need anyway. To open the file for editing, issue the following command:
sudo gedit ica_temp/DEBIAN/control
Then change line 7 (it should begin with Depends:) to: Depends: libc6-i386 (>= 2.7-1), lib32z1, nspluginwrapper. There are other software packages in there that Ubuntu no longer supports and they are not needed anyway. With the edit made and the file saved, the next step is to build a new DEB package with the corrected control file:
dpkg -b ica_temp icaclient-modified.deb
Once you have the package, the next step is to install it using the following command:
sudo dpkg -i icaclient-modified.deb
If it fails, then you have missing dependencies and the following command should sort these out before a re-run of the above command:
sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386
With Citrix Receiver installed, there is one more thing that is needed before you can use it freely. This is to put Thawte security certificate files into /opt/Citrix/ICAClient/keystore/cacerts. What I had not realised until recently was that many of these already are in /usr/share/ca-certificates/mozilla, and linking to them with the following command makes them available to Citrix Receiver:
sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/
Another approach is to download the Thawte certificates and extract the archive to /tmp/. From there they can be copied to /opt/Citrix/ICAClient/keystore/cacerts and I copied the Thawte Personal Premium certificate as follows:
sudo cp "/tmp/Thawte Root Certificates/Thawte Personal Premium CA/Thawte Personal Premium CA.cer" /opt/Citrix/ICAClient/keystore/cacerts/
Until I found out about what was in the Mozilla folder, I simply picked out the certificate mentioned in the Citrix error message and copied it over like the above. Of course, all of this may seem like a lot of work to those who are non-tinkerers, so I have added a repaired 64-bit DEB package that incorporates all of the above and should not need any further intervention, aside from installing it using GDebi, Ubuntu’s Software Centre, dpkg or anything else that does what’s needed.