One thing that is very handy for shell scripting is automated entry of passwords when logging into other servers. This can involve storing passwords in plain text files, which is not always ideal, so it was good to find an alternative. The first step is to use the key generation tool that comes with SSH. The command is given below and the -t switch specifies the type of key to be made, RSA in this case. There is the option to add a passphrase, but I decided against this for the sake of convenience; you do need to assess your own security needs before embarking on such a course of action.
ssh-keygen -t rsa
The next step is to use the ssh-copy-id command to copy the public key to the server for a given set of login credentials. For this, it is better to use a user account with restricted access, so as to keep as much server security as you can. Otherwise, the process is as simple as executing a command like the following and entering the password when prompted.
ssh-copy-id [user ID]@[server address]
Getting this set up has been useful for running a file upload script to keep a web server synchronised and it is better to have the credentials encrypted rather than kept in a plain text file.
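As an illustration of the sort of thing this enables, a minimal upload script might run rsync over SSH along the following lines; the local path, remote path and credentials are placeholders and the options are just one possible choice:

rsync -avz --delete /home/me/website/ [user ID]@[server address]:/var/www/html/

With the key in place, no password prompt appears, so a command like this can run unattended from a script or a scheduled job.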
Recently, I tried using Commento with a static website that I was developing and this needed PostgreSQL rather than MySQL or MariaDB, which many content management tools use. That meant a learning curve that had me buying a book, as well as creating a system account for administering PostgreSQL. These are not the kind of things that you want to be too visible, so I wanted to hide them.
Since Linux Mint uses AccountsService, you cannot use lightdm to do this (the comments in /etc/lightdm/users.conf suggest as much). Instead, you need to go to /var/lib/AccountsService/users and look for a file named after the user in question. If one exists, all that is needed is for you to add the following line under the [User] section:
SystemAccount=true
If there is no file present for the user in question, then you need to create one with the following lines in there:
[User]
SystemAccount=true
Once the configuration files are set up as needed, AccountsService needs to be restarted and the following command does that deed:
sudo systemctl restart accounts-daemon.service
Logging out should reveal that the user in question is not listed on the logon screen as required.
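Putting the steps together, something like the following would hide a hypothetical account named dbadmin in one go, overwriting any existing AccountsService file for that user before restarting the service:

sudo tee /var/lib/AccountsService/users/dbadmin > /dev/null << 'EOF'
[User]
SystemAccount=true
EOF
sudo systemctl restart accounts-daemon.service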
Having had a mishap that lost me some photos in the early days of my dalliance with digital photography, I have been far more careful since then and that now applies to other files as well. Doing regular backups is a must that you find reiterated by many different authors and the current computing climate makes doing that more vital than it ever was.
So, as well as having various local backups, I also have remote ones in the form of OneDrive, Dropbox and Google Drive. These more correctly are file synchronisation services, but disciplined use can make them useful as additional storage facilities in the interests of added resilience. There also are dedicated backup services that I have seen reviewed in the likes of PC Pro magazine, but I have yet to make use of those.
Part of my process for dealing with new digital photo files is to back them up to Google Drive and I did that with a Windows client in the early days but then moved to Insync running on Linux Mint. One drawback to the approach is that this hogs the upload bandwidth of an internet connection that has yet to move to fibre from copper cabling. Having fibre connections to a local cabinet helps but a 100 KiB/s upload speed is easily overwhelmed and digital photo file sizes keep increasing. It does not help that I insist on using more flexible raw formats like DNG, CR2 or CR3 either.
Making fewer images could help to cut the load but I still come away from an excursion with many files because I get so besotted with my surroundings. This means that upload sessions take numerous hours and can extend across calendar days. Ultimately, this makes my internet connection far less usable so I want to throttle upload speed much like what is possible in the Transmission BitTorrent client or in the Dropbox client. Unfortunately, this is not available in Insync so I have tried using the trickle command instead and an example is below:
trickle -d 2000 -u 50 insync
Here, the upload speed is limited to 50 KiB/s while the download speed is limited to 2000 KiB/s. In my case, the latter of these hardly matters while the former leaves me with acceptable internet usability. Insync does not work smoothly with this, however, so occasional restarts are needed to keep file uploads progressing and CPU load also is higher. As rough as the user experience feels, uploads can continue in parallel with other work.
One other option that I am exploring is the use of the command-line tool gdrive and this appears to work well with trickle. After downloading and installing the tool, getting going is a matter of issuing the following command and following the instructions:
gdrive about
On web servers, I even have the tool backing up things to Google Drive on a scheduled basis. Because of a Google Drive limitation that I have encountered not only with gdrive but also with Insync and Google’s own Windows Google Drive client, synchronisation only can happen with two new folders, one local and the other remote. Handily, gdrive supports the usual bash style commands for working with remote directories so something like the following will create a directory on Google Drive:
gdrive mkdir ttdc [ID for parent folder]
Here, the ID for the parent folder may be omitted but it can be obtained by going to Google Drive online and getting a link location by right-clicking on a folder and choosing the appropriate context menu item. This gets you something like the following and the required identifier is found between the last slash and the first question mark in the address string (so as not to share any real links, I made the address more general below):
https://drive.google.com/drive/folders/[remote folder ID]?usp=sharing
Then, synchronisation uses a command like the following:
gdrive sync upload [local folder or file path] [remote folder ID]
There also is the option to do a one-way upload and this is the form of the command used:
gdrive upload [local folder or file path] -p [remote folder ID]
Because every file or folder object has its own ID on Google Drive, it is possible to create two objects there that appear to have the same name, though that is sure to cause confusion even if you know what is happening. Each of the above commands can be throttled using trickle as well:
trickle -d 2000 -u 50 gdrive sync upload [local folder or file path] [remote folder ID]
trickle -d 2000 -u 50 gdrive upload [local folder or file path] -p [remote folder ID]
Handily, this works without the added drama seen with Insync and lends itself to scripting as well, so it could be something that I will incorporate into my current workflow. One thing that needs to be watched is file upload failures, but there may be ways to catch those and retry them, so that would be another thing that needs doing. This is built into Insync and it would be a learning opportunity if I were to stick with gdrive instead.
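As a sketch of where this could go, a wrapper script along the following lines would retry a throttled upload a few times before giving up; the local path and remote folder ID are placeholders and the retry logic is something that I have yet to prove in practice:

#!/bin/bash
# Hypothetical wrapper: throttled gdrive upload with a simple retry loop.
# The local path and the remote folder ID below are placeholders.
SRC="[local folder path]"
REMOTE_ID="[remote folder ID]"
for attempt in 1 2 3; do
    if trickle -d 2000 -u 50 gdrive sync upload "$SRC" "$REMOTE_ID"; then
        echo "Upload completed on attempt $attempt"
        break
    fi
    echo "Upload failed on attempt $attempt; trying again in five minutes" >&2
    sleep 300
done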
During the past week, I rebooted my system only to find that a number of things no longer worked and my Pi-hole DNS server was among them. Having exhausted other possibilities by testing things out on another machine, I spotted a line like the following in my system logs during a status check and went investigating further:
cron[322]: (root) INSECURE MODE (mode 0600 expected) (crontabs/root)
It turned out to be more significant than I had expected because this was why every CRON job was failing, and that included the network setup needed by Pi-hole; a script is executed using the @reboot directive to accomplish this, and I got Pi-hole working again by manually executing it. The evening before, I did make some changes to file permissions under /var/www, but I was not expecting that to affect other parts of /var, though it may have something to do with some forgotten heavy-handedness. The cure was to issue a command like the following for execution in a terminal session:
sudo chmod -R 600 /var/spool/cron/crontabs/
Then, CRON itself needed starting since it was not running at all, and executing this command did the needful without restarting the system:
sudo systemctl start cron
That outcome was proved by executing the following command to produce some terminal output that included the welcome text “active (running)” highlighted in green:
sudo systemctl status cron
There was newly updated output from a frequently executing job that checks on web servers for me, which was added confirmation. It was a simple cure for a perplexing situation that led me up all sorts of blind alleys before I alighted on the right solution to the problem.
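For reference, the Pi-hole network setup mentioned above is triggered by a line of this general form in the root crontab (edited using sudo crontab -e); the script path here is illustrative rather than the real one:

@reboot /usr/local/bin/pihole-network-setup.sh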
You should not do a new PC build in the middle of a heatwave if you do not want to be concerned about how fast fans are spinning and how hot things are getting. Yet, that is what I did last month after delaying the act for numerous months.
My efforts mean that I have a system built around an AMD Ryzen 9 5950X CPU and a Gigabyte X570 Aorus Pro motherboard with 64 GB of memory, and things are settling down after the initial upheaval. That also meant some adjustments to the CPU fan profile in the BIOS for quieter running, while the use of a Be Quiet! Dark Rock 4 cooler helps too, as does a Be Quiet! Silent Wings 3 case fan. All are components from trusted brands, though I wonder how much abuse they got during their installation and subsequent running in.
Fan noise, like touch, is only a qualitative indicator of heat levels, so more quantitative means are in order. Aside from using a thermocouple device, there are built-in sensors too. Using Linux Mint means that I have the sensors command from the lm-sensors package for checking on CPU and other temperatures, though hddtemp is what you need for doing the same for hard drives. The latter can be used as follows:
sudo hddtemp /dev/sda /dev/sdb
This has to happen with administrator access, and a list of drives needs to be provided because it cannot find them by itself. In my case, I have no mechanical hard drives installed in non-NAS systems and I even got around to replacing a 6 TB Western Digital Green disk with an 8 TB SSD, but I got the following when I tried checking on things with hddtemp:
WARNING: Drive /dev/sda doesn't seem to have a temperature sensor.
WARNING: This doesn't mean it hasn't got one.
WARNING: If you are sure it has one, please contact me ([email protected]).
WARNING: See --help, --debug and --drivebase options.
/dev/sda: Samsung SSD 870 QVO 8TB: no sensor
The cause of the message for me was that there is no entry for the Samsung SSD 870 QVO 8TB in /etc/hddtemp.db, so one needed to be added there. Before that could be rectified, I needed to get some additional information using smartmontools, which had to be installed using the following command:
sudo apt-get install smartmontools
What I needed to do was check the drive’s SMART data output for extra information and that was achieved using the following command:
sudo smartctl /dev/sda -a | grep -i Temp
What this does is to look for the temperature information from smartctl output using the grep command with output from the first being passed to the second through a pipe. This yielded the following:
190 Airflow_Temperature_Cel 0x0032 072 050 000 Old_age Always - 28
The first number in the above (190) is the thermal sensor’s attribute identifier and that was needed in what got added to /etc/hddtemp.db. The following command added the necessary data to the aforementioned file:
echo \"Samsung SSD 870 QVO 8TB\" 190 C \"Samsung SSD 870 QVO 8TB\" | sudo tee -a /etc/hddtemp.db
Here, the output of the echo command was passed to the tee command for adding to the end of the file. In the echo command output, the first part is the name of the drive, the second is the heat sensor identifier, the third is the temperature scale (C for Celsius or F for Fahrenheit) and the last part is the label (it can be anything that you like but I kept it the same as the name). On re-running the hddtemp command, I got output like the following so all was as I needed it to be.
/dev/sda: Samsung SSD 870 QVO 8TB: 28°C
Since then, temperatures may have cooled and the weather become more like what we usually get but I am still keeping an eye on things, especially when the system is put under load using Perl, R, Python or SAS. There may be further modifications such as changing the case or even adding water cooling, not least to have a cooler power supply unit, but nothing is being rushed as I monitor things to my satisfaction.
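For keeping an eye on things while a longer job runs, a crude loop like the following does the needful every five minutes; the drive path is the one from my own system and may well differ elsewhere:

while true; do
    date
    sensors
    sudo hddtemp /dev/sda
    sleep 300
done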
Recent experimentation centring around getting my hands on a test version of Windows 11 had me duplicating virtual machines and virtual disk images though VirtualBox still is not ready for the next Windows version; it has no TPM capability at the moment. Nevertheless, I was able to get something after a fresh installation that removed whatever files were on the disk image. That meant that I needed to mount the old version to get at those files again.
Renaming partially helped with this, but what I really needed to do was change the UUID of one of the disk images so that VirtualBox would not report a collision between two images with the same UUID. A command like the following was used to accomplish this:
VBoxManage internalcommands sethduuid [Virtual Disk Image Name].vdi
Because I was doing this on Linux Mint, I could call VBoxManage without needing to tell the system where it was, as would be the case in Windows. Otherwise, it is the sethduuid portion that changes the UUID as required. Another way around this is to clone the VDI file using the following command, but I had not realised that at the time:
VBoxManage clonevdi [old virtual disk image].vdi [new virtual disk image].vdi
It seems that there can be more than one way to do things in VirtualBox at times, so the second way will remain on record for future reference.
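Either way, the UUID of a disk image can be inspected before and after any change using the showmediuminfo subcommand, which is a handy way of confirming that a collision really has been avoided:

VBoxManage showmediuminfo disk [Virtual Disk Image Name].vdi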
While some Linux distros like Fedora install VirtualBox drivers during installation time, I prefer to install the VirtualBox Guest Additions themselves. Before doing this, it is best to remove the virtualbox-guest-additions package from Fedora to avoid conflicts. After that, execute the following command to ensure that all prerequisites for the VirtualBox Guest Additions are in place prior to mounting the VirtualBox Guest Additions ISO image and installing from there:
sudo dnf -y install gcc automake make kernel-headers dkms bzip2 libxcrypt-compat kernel-devel perl
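With the prerequisites in place, the installation itself goes something like the following once the Guest Additions image has been inserted from the Devices menu of the VirtualBox window; the mount point here is an illustrative choice and the optical device may be named differently on some systems:

sudo mkdir -p /mnt/cdrom
sudo mount /dev/cdrom /mnt/cdrom
sudo sh /mnt/cdrom/VBoxLinuxAdditions.run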
During the installation, you may encounter a message like the following:
ValueError: File context for /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf already defined
This is generated by SELinux so the following commands need executing before the VirtualBox Guest Additions installation is repeated:
sudo semanage fcontext -d /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf
sudo restorecon /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf
Without doing the above step and fixing the preceding error message, I had an issue with mounting of Shared Folders whereby the mount point was set up but no folder contents were displayed. This happened even when my user account was added to the vboxsf group and it proved to be the SELinux context issue that was the cause.
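For completeness, adding a user account to the vboxsf group is done with a command like the following, with a log out and back in needed before it takes effect:

sudo usermod -aG vboxsf [user ID]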
By default, a GNOME Terminal instance does not display a menu bar and that applies not only in GNOME Shell but also on the Cinnamon Desktop environment. In the latter, it is easy enough to display the menu bar using the context menu produced by right clicking in the window before going to Edit > Preferences and ticking the box for Show menubar by default in new terminals in the General section. After closing the Preferences dialogue, every new GNOME Terminal session will show the menu bar.
Unfortunately, it is not so easy in GNOME Shell, though the context menu route does allow you to unhide the menu bar on a temporary basis. That is because the requisite tickbox is missing from the Preferences dialogue box displayed after navigating to Edit > Preferences in the menus. To address this, you need to execute the following command in a terminal session:
gsettings set org.gnome.Terminal.Legacy.Settings headerbar false
This not only adds the menu bar on a permanent basis but it also adds the missing tickbox, populated as required. GNOME Shell may be minimalist in some ways but making this action harder looks like going a little too far.
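The setting can be verified afterwards with the corresponding gsettings get command, which should report false once the change has been made:

gsettings get org.gnome.Terminal.Legacy.Settings headerbar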
The Flatpak concept offers a useful way of getting the latest version of software like LibreOffice or GIMP on Linux machines because repositories are managed conservatively when it comes to the versions of included software. Ubuntu has Snaps, which are similar in concept. Both options bundle dependencies with the packaged software so that its operation can use later versions of system libraries than what may be available with a particular distribution.
However, even Flatpak depends on what is available through the repositories for a distribution, as I found when a software update needed a newer version of the tool than the one installed. The solution was to add a PPA using the following command and agreeing to the prompts that arise (answering Y, in other words):
sudo add-apt-repository ppa:alexlarsson/flatpak
With the new PPA instated, the usual apt commands were used to update the Flatpak package and continue with the required updates. Since then, all has gone smoothly as expected.
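For reference, the update itself amounted to commands along these lines, though the exact package operations may have differed a little:

sudo apt-get update
sudo apt-get install --only-upgrade flatpak
flatpak update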
Over the weekend, I finally got to fixing a problem with the automatic mounting of shared folders that had affected an Ubuntu 18.04 virtual machine for quite a while. The usual checks on the Guest Additions installation and vboxsf group assignment were performed, but neither was the cause of the issue. Also, no other VM (Windows 7 & 10 and Linux Mint Debian Edition) on the same Linux Mint 19.2 machine was experiencing the same issue, which suggested that the problem was intrinsic to the Ubuntu VM itself.
Because I install the Guest Additions software from the included virtual CD, I executed the following command to open the relevant file for editing:
sudo systemctl edit --full vboxadd-service
If I had installed virtualbox-guest-dkms and virtualbox-guest-utils from the Ubuntu repositories instead, then this would have been the command to execute instead of the above:
sudo systemctl edit --full virtualbox-guest-utils
Whichever configuration gets opened, the line that needs attention is the one beginning with Conflicts (line 6 in the file on my system). The required edit removes systemd-timesyncd.service from the list following the equals sign. It also is worth checking that any file paths in the unit include the correct version number for the Guest Additions software that is installed, fixing them if they do not. The only change that was needed on my Ubuntu VM was to the Conflicts line, and rebooting it got the Shared Folder automatically mounted under the /media directory as expected.
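As an illustration of the change, and allowing that the exact contents vary between Guest Additions versions, the Conflicts line goes from something like this:

Conflicts=shutdown.target systemd-timesyncd.service

to this:

Conflicts=shutdown.target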