Technology Tales

Adventures & experiences in contemporary technology

Upgrading from OpenMediaVault 6.x to OpenMediaVault 7.x

8th March 2024

Having an older PC to upgrade, I decided to install OpenMediaVault on it a few years ago, after adding 6 TB and 4 TB hard drives for storage, a Gigabit network card to speed up backups and a new Be Quiet! power supply to make it quieter. It has been working smoothly since then, and the release of OpenMediaVault 7.x had me wondering how to move to it.

Usefully, I had enabled an SSH service for remote logins and set up an account for anything that I needed to do. This includes upgrades, taking backups of what is on my NAS drives, and even shutting down the machine when I am done with it.

In an SSH session, the first step was to switch to the administrator account and issue the following command to ensure that my OpenMediaVault 6.x installation was as up-to-date as it could be:

omv-update
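As an aside, the switch to the administrator account can be done within the SSH session itself; a minimal sketch, assuming that the login account has sudo rights:

sudo -i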

Once the update had completed its work, the next step was to do the upgrade itself with the following command:

omv-release-upgrade

With that complete, it was time to reboot the system. Afterwards, I fired up the web administration interface and spotted a kernel update, which I applied. The system was restarted again, and further updates appeared and were applied, once more through the web interface. The whole thing is based on Debian 12.x, but I am not complaining, since it quietly does exactly what I need of it. There was one slight glitch when doing an update after the changeover, but that was quickly sorted.
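For anyone preferring to stay in the SSH session, the reboot and the follow-up updates can be handled there as well; a small sketch that reuses the omv-update command from earlier:

reboot
# after logging back in as the administrator:
omv-update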

Getting custom Python imports to work in Visual Studio Code

18th February 2022

While I continue to use Spyder as my preferred Python code editor, I have also tried out Visual Studio Code. Handily, this Integrated Development Environment also has facilities for working with R and Julia code, as well as Markdown text editing, and adding the required extensions is enough to support all of these; it helps that there is an unofficial Grammarly extension for content creation.

My Python code development makes use of the Pylance extension, and it works a little differently from Spyder when it comes to including files using import statements. Spyder will look in the folder where the base script is located, but the default behaviour of Pylance is to look in the root path of your workspace. This meant that code which ran successfully in Spyder failed in Visual Studio Code.

The way around this was to add the required location to the python.analysis.extraPaths setting for the workspace. That meant opening Settings by navigating to File > Preferences > Settings in the menu system and entering python.analysis.extraPaths into the search box. That took me to the section that I needed, where I clicked on Add Item, entered the required path and clicked on the OK button. That was enough to fix the problem, and all worked as it should afterwards.
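For what it is worth, the same setting can also be added directly to the workspace settings.json file instead of going through the GUI; a minimal sketch, where the src folder name is only an example of where extra modules might live:

{
    "python.analysis.extraPaths": ["./src"]
}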

When a hard drive is unrecognised by the Linux hddtemp command

15th August 2021

You should not do a new PC build in the middle of a heatwave if you do not want to be concerned about how fast fans are spinning and how hot things are getting. Yet, that is what I did last month, after delaying the act for numerous months.

My efforts mean that I have a system built around an AMD Ryzen 9 5950X CPU and a Gigabyte X570 Aorus Pro motherboard with 64 GB of memory, and things are settling down after the initial upheaval. That also meant some adjustments to the CPU fan profile in the BIOS for quieter running, while the use of a Be Quiet! Dark Rock 4 cooler also helps, as does a Be Quiet! Silent Wings 3 case fan. All are components from trusted brands, though I wonder how much abuse they got during their installation and subsequent running in.

Fan noise, like touch, is only a qualitative indicator of heat levels, so more quantitative means are in order. Aside from using a thermocouple device, there are built-in sensors too. Using Linux Mint means that I have the sensors command from the lm-sensors package for checking on CPU and other temperatures, though hddtemp is what you need for checking the same on hard drives. The latter can be used as follows:

sudo hddtemp /dev/sda /dev/sdb

This has to happen with administrator access, and a list of drives needs to be provided because the command cannot find them by itself. In my case, I no longer have any mechanical hard drives installed in non-NAS systems, having replaced a 6 TB Western Digital Green disk with an 8 TB SSD, but I got the following when I tried checking on things with hddtemp:

WARNING: Drive /dev/sda doesn't seem to have a temperature sensor.
WARNING: This doesn't mean it hasn't got one.
WARNING: If you are sure it has one, please contact me ([email protected]).
WARNING: See --help, --debug and --drivebase options.
/dev/sda: Samsung SSD 870 QVO 8TB: no sensor

The cause of the message was that there is no entry for the Samsung SSD 870 QVO 8TB in /etc/hddtemp.db, so one needed to be added there. Before that could be rectified, I needed to get some additional information using smartmontools, which had to be installed with the following command:

sudo apt-get install smartmontools

What I needed to do was check the drive’s SMART data output for extra information and that was achieved using the following command:

sudo smartctl /dev/sda -a | grep -i Temp

This looks for the temperature information in the smartctl output using the grep command, with the output of the first being passed to the second through a pipe. It yielded the following:

190 Airflow_Temperature_Cel 0x0032 072 050 000 Old_age Always - 28

The first number in the above (190) is the thermal sensor’s attribute identifier and that was needed in what got added to /etc/hddtemp.db. The following command added the necessary data to the aforementioned file:

echo \"Samsung SSD 870 QVO 8TB\" 190 C \"Samsung SSD 870 QVO 8TB\" | sudo tee -a /etc/hddtemp.db

Here, the output of the echo command was passed to the tee command to append it to the end of the file. In the echo output, the first part is the name of the drive, the second is the heat sensor identifier, the third is the temperature scale (C for Celsius or F for Fahrenheit) and the last part is the label (it can be anything that you like, but I kept it the same as the name). On re-running the hddtemp command, I got output like the following, so all was as I needed it to be.

/dev/sda: Samsung SSD 870 QVO 8TB: 28°C
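Should anyone want to double-check what got appended, the new entry can be inspected with a quick grep against the database file:

grep "870 QVO" /etc/hddtemp.db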

Since then, temperatures may have cooled and the weather become more like what we usually get, but I am still keeping an eye on things, especially when the system is put under load using Perl, R, Python or SAS. There may be further modifications, such as changing the case or even adding water cooling, not least to have a cooler power supply unit, but nothing is being rushed while I monitor things to my satisfaction.
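For keeping an eye on drive temperatures while a system is under load, something like the following can help; a sketch that assumes the watch utility is present:

sudo watch -n 60 hddtemp /dev/sda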

Changing the UUID of a VirtualBox Virtual Disk Image in Linux

11th July 2021

Recent experimentation centring around getting my hands on a test version of Windows 11 had me duplicating virtual machines and virtual disk images, though VirtualBox still is not ready for the next Windows version; it has no TPM capability at the moment. Nevertheless, I was able to get something working after a fresh installation that removed whatever files were on the disk image. That meant that I needed to mount the old version to get at those files again.

Renaming the file partially helped with this, but what I really needed to do was change the UUID, so VirtualBox would not report a collision between two disk images carrying the same identifier. To avoid this, the UUID of one of the disk images had to be changed, and a command like the following was used to accomplish it:

VBoxManage internalcommands sethduuid [Virtual Disk Image Name].vdi

Because I was doing this on Linux Mint, I could call VBoxManage without needing to tell the system where it was, as would be the case in Windows. Otherwise, it is the sethduuid portion that changes the UUID as required. Another way around this is to clone the VDI file using the following command, though I had not realised that at the time:

VBoxManage clonevdi [old virtual disk image].vdi [new virtual disk image].vdi

It seems that there can be more than one way to do things in VirtualBox at times, so the second way will remain on reference for the future.
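Either way, the outcome can be verified afterwards, since VBoxManage has a showhdinfo subcommand that reports the UUID of a disk image among other details:

VBoxManage showhdinfo [Virtual Disk Image Name].vdi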

Getting rid of the Windows Resizing message from a Manjaro VirtualBox guest

27th July 2020

Like Fedora, Manjaro also installs a package for VirtualBox Guest Additions when you install the Linux distro in a VirtualBox virtual machine. However, it does have certain expectations when doing this. On many systems, and my own is one of these, Linux guests are forced to use the VMSVGA virtual graphics controller while Windows guests are allowed to use the VBoxSVGA one. It is the latter that Manjaro expects, so you get a message like the following appearing when the desktop environment has loaded:

Windows Resizing
Set your VirtualBox Graphics Controller to enable windows resizing

After ensuring that gcc, make, perl and the kernel headers are installed, I usually install VirtualBox Guest Additions myself from the included ISO image, and so I did the same with Manjaro. Doing that and restarting the virtual machine got me extra functionality like screen resizing and being able to copy and paste between the VM and elsewhere, after choosing the Bidirectional setting in the menus under Devices > Shared Clipboard.
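For reference, those prerequisites can be put in place with pacman before mounting the ISO image; a sketch where the headers package must match the running kernel, so linux66-headers is only an example:

# list installed kernels to find the matching headers package
mhwd-kernel -li
sudo pacman -S gcc make perl linux66-headers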

That still left an unwanted message popping up on startup. To get rid of it, I just needed to remove /etc/xdg/autostart/mhwd-vmsvga-alert.desktop. It can be deleted, but I just moved it somewhere else, and a restart proved that the message was gone as needed. Now everything is working as I wanted.
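A minimal sketch of that removal, moving the file aside rather than deleting it outright:

sudo mv /etc/xdg/autostart/mhwd-vmsvga-alert.desktop ~/mhwd-vmsvga-alert.desktop.bak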

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation applied to my web presences, guided by the results of GTmetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics, while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is the added sense of control, so Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot. That avoids the expense of using an SSL certificate provided through my shared hosting provider and a successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system, and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too, and should mean that I have fewer calls on technical support, since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me, and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first was to make the MySQL instance accept calls from outside the server, so that I could migrate data there from elsewhere; I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the following line that it contained and restarted the database service afterwards using the second command below, which stopped the appearance of any error 111 messages:

bind-address = 127.0.0.1
service mysql restart

After that, things worked as required, and I moved on to another matter: uploading the requisite files. That meant installing an FTP server, and I chose proftpd since I knew it well from previous tinkering. Once that was in place, file transfer commenced.
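For reference, getting the server in place is a one-line affair; on Ubuntu 18.04 the package goes by the name proftpd-basic (run as root, as with the other commands in this piece):

apt install proftpd-basic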

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also enabled some Apache modules like mod_rewrite using the a2enmod command, restarting Apache each time I enabled another module, as shown below.
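As an illustration, enabling the rewrite module and restarting Apache goes like this:

a2enmod rewrite
systemctl restart apache2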

Then, I discovered that Textpattern needed php7.2-xml installed, so the following command was executed to do this:

apt install php7.2-xml

Then, the following line was uncommented in the correct php.ini configuration file, which I found using the same method as described already for the my.cnf configuration; that was followed by yet another Apache restart:

extension=xmlrpc

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on the Certbot website to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected, so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files, and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file, and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The general .htaccess only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all of these achieve is to redirect insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating, along with any forms, so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle: the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix, even if it probably is not necessary:

define('FS_METHOD', 'direct');
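The ownership change itself is a single command; a sketch assuming that Apache runs as www-data and that the site lives under /var/www/html:

chown -R www-data:www-data /var/www/html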

There was also the non-operation of WP Cron, and that was addressed using WP-CLI and a script from Bjorn Johansen. To make double sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP Cron behaviour:

define('DISABLE_WP_CRON', true);
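The replacement is then a scheduled job that triggers WP Cron from the system side; a sketch for the crontab, assuming that WP-CLI is installed as wp and that the site lives under /var/www/html:

*/5 * * * * cd /var/www/html && wp cron event run --due-now >/dev/null 2>&1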

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigation is needed, but what I have now is faster than what was there before, so the effort has been rewarded already.

Lightening of desktop background images on Linux Mint Debian Edition running in VirtualBox

22nd October 2018

After a recent upgrade to Linux Mint Debian Edition 3 in a VirtualBox virtual machine that had been running its predecessor, I began to notice that background images were being loaded with washed-out or faded colours. This happened at startup, so selecting another background image worked as intended until the same thing happened to that one after a system restart.

This problem is not new and has affected the Cinnamon desktop in the main Linux Mint variant (the one that is based on Ubuntu), and issuing the following command in a terminal session is a suggested solution:

gsettings set org.cinnamon.muffin background-transition fade-in

In my case, that solved the problem, and desktop background image display has been as it should be since I executed the above. All it took was a change to a system setting.
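Should the problem ever recur, the current value of the setting can be checked in much the same way:

gsettings get org.cinnamon.muffin background-transition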

Updating Flatpak applications on Linux Mint 19

10th August 2018

Since upgrading to Linux Mint 19, I have installed some software from Flatpak. The cause of my curiosity was that you could have the latest versions of applications like GIMP or LibreOffice without having to depend on a third-party PPA. Installation is straightforward, given the support built into Linux Mint: you just need to download the relevant package from the Flathub website and run the file through the GUI installer. Because the packages come with extras to ensure cross-compatibility, more disk space is used, but there is no added system overhead beyond that from what I have seen. Updating should be as easy as running the following single command too:

flatpak update

However, I needed to do a little extra work before this was possible. The first step was to update the configuration file at ~/.local/share/flatpak/repo/config to add the following lines:

[remote "flathub"]
gpg-verify=true
gpg-verify-summary=true
url=https://flathub.org/repo/
xa.title=Flathub

Once that was completed, I ran the following commands to import the required GPG key:

wget https://flathub.org/repo/flathub.gpg
flatpak --user remote-modify --gpg-import=flathub.gpg flathub

With this complete, I was able to run the update process and update any applications as necessary. Since that first run, it has been integrated into my normal processes by adding the command to the relevant alias definition.
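For illustration, the alias definition now looks something like the following; the name and the apt portion are only how mine happens to be arranged:

alias sysupdate='sudo apt update && sudo apt upgrade -y && flatpak update -y'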

Halting constant disk activity on a WD My Cloud NAS

6th June 2018

Recently, I noticed that the disk in my WD My Cloud NAS was active all the time, which reminded me of another time when this happened. Back then, I needed to activate the SSH service on the device and log in as root with the password welc0me. That default password was changed before doing anything else. Since the device runs on Debian Linux, that was a simple case of using the passwd command and following the prompts. One word of caution is in order: only root can be used for SSH connections to a WD My Cloud NAS, and any other user that you set up will not have these privileges.

The cause of all the activity was two services: wdmcserverd and wdphotodbmergerd. One way to halt their actions is to stop the services using these commands:

/etc/init.d/wdmcserverd stop
/etc/init.d/wdphotodbmergerd stop

The above act only works until the next system restart, so these commands should make for a more persistent disabling of the culprits:

update-rc.d -f wdmcserverd remove
update-rc.d -f wdphotodbmergerd remove

If all else fails, removing executable privileges from the normally executable files that the services need will work, and it is a solution that I have tried with success between system updates:

cd /etc/init.d
chmod 644 wdmcserverd
chmod 644 wdphotodbmergerd
reboot

Between all of these, it should be possible to have your WD My Cloud NAS go into power saving mode as it should, though turning off additional services such as DLNA may be what some need to do. Having turned off these already, I only needed to disable the photo thumbnail services that were the cause of my machine’s troubles.
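To confirm that the disk does spin down afterwards, its power state can be queried over SSH; a sketch assuming that hdparm is available on the device and that the disk appears as /dev/sda:

hdparm -C /dev/sda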

Installing Firefox Developer Edition in Linux Mint

22nd April 2018

Having moved beyond the slow response and larger memory footprint of Firefox ESR, I am using Firefox Developer Edition in its place even if it means living without a status bar at the bottom of the window. Hopefully, someone will create an equivalent of the old add-on bar extensions that worked before the release of Firefox Quantum.

Firefox Developer Edition may be pre-release software with some extras for web developers, like being able to drill into an HTML element and see its properties, but I am finding it stable enough for everyday use. It is speedy too, which helps, and it has its own profile, so it can co-exist on the same machine as regular releases of Firefox like its ESR and Quantum variants.

Installation takes a little added effort, though, and there are various options available. My chosen method involved Ubuntu Make. Installing this involves setting up a new PPA as the first step, and the following commands added the software to my system:

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make

With the above completed, it was simple to install Firefox Developer Edition using the following command:

umake web firefox-dev

Where things got a bit more complicated was getting entries added to the Cinnamon Menu and Docky. The former was sorted using the cinnamon-menu-editor command, but the latter needed some tinkering with my firefox-developer.desktop file, found in .local/share/applications/ within my user area, to get the right icon shown. Discovering this took me into .gconf/apps/docky-2/Docky/Interface/DockPreferences/%gconf.xml, where I found the location of the firefox-developer.desktop file that needed changing. Once this was completed, there was nothing else to do from the operating system side.
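For anyone curious, such a .desktop file looks something like the following; the Exec and Icon paths are only assumptions based on where Ubuntu Make places things, so check your own installation before copying them:

[Desktop Entry]
Type=Application
Name=Firefox Developer Edition
Exec=/home/[user]/.local/share/umake/web/firefox-dev/firefox
Icon=/home/[user]/.local/share/umake/web/firefox-dev/browser/chrome/icons/default/default128.png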

Within Firefox itself, I opted to turn off warnings about password logins on non-HTTPS websites by going to about:config using the address bar, looking for security.insecure_field_warning.contextual.enabled and changing its value from True to False. Some may decry this, but there are some local websites on my machine that need attention at times. Otherwise, Firefox is installed with user access, so I can update it as if it were a Windows or macOS application, and that is useful given that there are frequent new releases. All is going as I want it so far.
