Technology Tales

Adventures & experiences in contemporary technology

Upgrading from OpenMediaVault 6.x to OpenMediaVault 7.x

8th March 2024

Having an older PC to upgrade, I decided to install OpenMediaVault on there a few years ago after adding 6 TB and 4 TB hard drives for storage, a Gigabit network card to speed up backups and a new BeQuiet! power supply to make it quieter. It has been working smoothly since then, and the release of OpenMediaVault 7.x had me wondering how to move to it.

Usefully, I enabled an SSH service for remote logins and set up an account for anything that I needed to do. This includes upgrades, taking backups of what is on my NAS drives, and even shutting down the machine when I am done with what I need to do with it.
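Getting onto the machine for any of this is just a standard SSH login using that account; the hostname and username below are placeholders for whatever applies on your own network:

ssh myuser@omv-nas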

Using an SSH session, the first step was to switch to the administrator account and issue the following command to ensure that my OpenMediaVault 6.x installation was as up-to-date as it could be:

omv-update

Once that had completed what it needed to do, the next step was to do the upgrade itself with the following command:

omv-release-upgrade

With that complete, it was time to reboot the system. Afterwards, I fired up the web administration interface and spotted a kernel update, which I applied. The system was restarted again, and further updates were noticed and applied, also through the web interface. The whole thing is based on Debian 12.x, but I am not complaining, since it quietly does exactly what I need of it. There was one slight glitch when doing an update after the changeover, but that was quickly sorted.

When CRON is stalled by incorrect file and folder permissions

8th October 2021

During the past week, I rebooted my system only to find that a number of things no longer worked, my Pi-hole DNS server among them. Having exhausted other possibilities by testing things out on another machine, I did a status check, spotted a line like the following in my system logs and went investigating further:

cron[322]: (root) INSECURE MODE (mode 0600 expected) (crontabs/root)

It turned out to be more significant than I had expected because it was why every CRON job was failing, including the network set-up needed by Pi-hole; a script is executed using the @reboot directive to accomplish that, and I got Pi-hole working again by executing it manually. The evening before, I had made some changes to file permissions under /var/www, but I was not expecting them to affect other parts of /var, though that may have something to do with some forgotten heavy-handedness. The cure was to issue a command like the following in a terminal session:

sudo chmod -R 600 /var/spool/cron/crontabs/
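To confirm that the files ended up with the expected permissions, a listing of the folder shows the mode of each crontab; 0600 appears as -rw-------:

sudo ls -l /var/spool/cron/crontabs/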

Then, CRON itself needed starting since it was not running at all, and executing this command did the needful without restarting the system:

sudo systemctl start cron

That outcome was proved by executing the following command, whose terminal output included the welcome text “active (running)” highlighted in green:

sudo systemctl status cron

There was also newly updated output from a frequently executing job that checks on web servers for me, which was added confirmation. It was a simple solution to a perplexing situation that had led me up all sorts of blind alleys before I alighted on the right answer.

Shared folders not automounting on an Ubuntu 18.04 guest in a VirtualBox virtual machine

1st October 2019

Over the weekend, I finally got to fixing a problem that had affected an Ubuntu 18.04 virtual machine for quite a while. The usual checks on the Guest Additions installation and vboxsf group membership were performed, but neither revealed the cause of the issue. Also, no other VM (Windows 7 & 10 and Linux Mint Debian Edition) on the same Linux Mint 19.2 machine was experiencing the same problem. That latter observation suggested the issue was intrinsic to the Ubuntu VM itself.

Because I install the Guest Additions software from the included virtual CD, I executed the following command to open the relevant file for editing:

sudo systemctl edit --full vboxadd-service

If I had installed virtualbox-guest-dkms and virtualbox-guest-utils from the Ubuntu repositories instead, then this would have been the command to execute in place of the one above:

sudo systemctl edit --full virtualbox-guest-utils

Whichever configuration gets opened, the line that needs attention is the one beginning with Conflicts (line 6 in the file on my system). The required edit removes systemd-timesync.service from the list following the equals sign, as illustrated below. It is also worth checking that file paths in the unit refer to the correct version number of the Guest Additions software that is installed, in case they do not. The only change needed on my Ubuntu VM was to the Conflicts line, and rebooting it got the shared folder automatically mounted under the /media directory as expected.
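For clarity, the change amounts to something like the following; the exact contents of the line may differ between Guest Additions versions, so treat this as a sketch rather than something to copy verbatim. The first line is the original and the second is what remains after the removal:

Conflicts=shutdown.target systemd-timesync.service
Conflicts=shutdown.target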

Halting constant disk activity on a WD My Cloud NAS

6th June 2018

Recently, I noticed that the disk in my WD My Cloud NAS was active all the time, which reminded me of another time when this happened. Then, I needed to activate the SSH service on the device and log in as root with the default password of welc0me. That password was changed before doing anything else. Since the device runs on Debian Linux, that was a simple case of using the passwd command and following the prompts. One word of caution is in order: only root can be used for SSH connections to a WD My Cloud NAS, and any other user that you set up will not have these privileges.
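For illustration, the session went something like this; the hostname is a placeholder for whatever your device answers to on your network, and the second command prompts for the new password twice before applying the change:

ssh root@mycloud
passwd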

The cause of all the activity was two services: wdmcserverd and wdphotodbmergerd. One way to halt their actions is to stop the services using these commands:

/etc/init.d/wdmcserverd stop
/etc/init.d/wdphotodbmergerd stop

The above only works until the next system restart, so these commands should make for a more persistent disabling of the culprits:

update-rc.d -f wdmcserverd remove
update-rc.d -f wdphotodbmergerd remove
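Should the indexing ever be wanted back, the change can presumably be reversed by re-adding the startup links along these lines, though that is not something I have needed to do:

update-rc.d wdmcserverd defaults
update-rc.d wdphotodbmergerd defaults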

If all else fails, removing execute permissions from the otherwise executable files that the services need will also work; it is a solution that I have tried with success between system updates:

cd /etc/init.d
chmod 644 wdmcserverd
reboot

Between all of these, it should be possible to have your WD My Cloud NAS go into power-saving mode as it should, though turning off additional services such as DLNA may be what some need to do. Having turned those off already, I only needed to disable the photo thumbnail services that were the cause of my machine’s troubles.

Reloading .bashrc within a BASH terminal session

3rd July 2016

BASH is a command-line interpreter that is commonly used by Linux and UNIX operating systems. Chances are that you will find yourself in a BASH session if you start up a terminal emulator in many of these, though there are others like KSH and ZSH too.

BASH comes with its own configuration files and one of these is located in your own home directory, .bashrc. Among other things, it can become a place to store command shortcuts or aliases. Here is an example:

alias us='sudo apt-get update && sudo apt-get upgrade'

Such a definition needs there to be no spaces around the equals sign, and the actual command needs to be declared in single quotes; doing anything else will not work, as I have found. Also, there are times when you want to update or add one of these and use it without shutting down a terminal emulator and restarting it.

To reload the .bashrc file to use the updates contained in there, one of the following commands can be issued:

source ~/.bashrc

. ~/.bashrc

Both will read the file and execute its contents, making those updates available so that you can continue with what you are doing. There appears to be a tendency for this kind of thing in the world of Linux and UNIX, because it also applies to remounting drives after a change to /etc/fstab and to restarting system services like Apache, MySQL or Nginx. The command for the former is below:

sudo mount -a
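For the latter, the exact command depends on the init system in use; on a systemd-based distribution, restarting one of those services might look like this, using Apache as the example (the unit is named apache2 on Debian and Ubuntu, httpd on some other distributions):

sudo systemctl restart apache2

On older setups with SysV init scripts, sudo service apache2 restart does the same job.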

Often, the means for applying the sorts of in-situ changes that you make are simple ones too, and anything that avoids a system reboot has to be good, since it means fewer interruptions to your work.

Migrating to Windows 10

10th August 2015

While I have had preview builds of Windows 10 in various virtual machines for the best part of twelve months, actually upgrading physical and virtual devices that you use for more critical work is a very different matter. Also, Windows 10 is set to be a rolling release with enhancements arriving on an occasional basis, so I would like to see what comes before it hits the actual machines that I need to use. That means that a VirtualBox instance of the preview build is being retained to see what happens to it over time.

Some might call it incautious, but I have taken the plunge and completely moved from Windows 8.1 to Windows 10. The first machine that I upgraded was more expendable, and success with that encouraged me to move onto others, eventually including a Windows 7 machine to see how that went. The 30-day restoration period allows an added degree of comfort when doing all this. The machines that I upgraded were a VMware VM with 32-bit Windows 8.1 Pro (itself part of a 32-bit upgrade cascade involving Windows 7 Home and Windows 8 Pro), a VirtualBox VM with 64-bit Windows 8.1, a physical PC that dual-booted Linux Mint 17.2 and 64-bit Windows 8.1, and an HP Pavilion dm4 laptop (Intel Core i3 with 8 GB RAM and a 1 TB SSHD) running Windows 7.

The main issue that I uncovered with the virtual machines is that the Windows 10 update tool that gets downloaded onto Windows 7 and 8.x does not accept the graphics capability available there. This looks like a bug, because the functionality works fine on the Windows Insider builds. The solution was to download the appropriate Windows 10 ISO image for use in the ensuing upgrade. There are 32-bit and 64-bit disk images, with Windows 10 and Windows 10 Pro installation files on each. My own actions used both disk images.

During the virtual machine upgrades, most of the applications that I considered important were carried over from Windows 8.1 to Windows 10 without a bother. Anyone would expect Microsoft’s own software like Word, Excel and others to make the transition, but others like Adobe’s Photoshop and Lightroom made it too, as did Mozilla’s Firefox, albeit requiring a trip to Settings to set it as the default option for opening web pages. Less well-known desktop applications like Zinio (digital magazines) or Mapyx Quo (maps for cycling, walking and the like) were the same. Classic Shell was an exception, but the Windows 10 Start Menu suffices for now anyway. Also, there was a need to reinstate Bitdefender Antivirus Plus using its new Windows 10 compatible installation file. Still, the experience was a big change from the way things used to be in the days when you had to reinstall nearly all your software following a Windows upgrade.

The Windows 10 update tool worked well for the Windows 8.1 PC, so no installation disks were needed. Neither was the bootloader overwritten, so the Windows option needed selecting from GRUB every time there was a system reboot as part of the installation process, a temporary nuisance that was tolerated since booting into Linux Mint was preserved. Again, no critical software was lost in the process, apart from Kaspersky Internet Security, which needed the Windows 10 compatible version installed, much like Bitdefender, and Epson scanning software that I found was easy to reinstall anyway. Usefully, Anquet’s Outdoor Map Navigator (again used for working with walking and cycling maps) continued to function properly after the changeover.

For the Windows 7 laptop, it was much the same story, albeit with the upgrade being delivered using Windows Update. Then, the main Windows account could be connected to my Outlook account to get everything tied up with the other machines for the first time. Before the obligatory change of background picture, the browns in the one that I was using were causing interface items to appear in red, not exactly my favourite colour for application menus and the like; now they are in blue. All the upheaval surrounding the operating system upgrade had no effect on the Dropbox or Kaspersky installations that I had in place before it all started. If there is any irritation, it is that unpinning application tiles from the Start Menu or turning off live tiles is not always as instantaneous as I would have liked, but that is all done now anyway.

While writing the above, I could not help thinking that more observations on Windows 10 may follow, but these will do for now. Microsoft had to get this upgrade process right and it does appear that they have, so credit is due to them for that. So far, I have found Windows 10 to be stable and will be seeing how things develop from here, especially when those new features arrive from time to time, as has been promised to us users. Hopefully, that will be as painless as it needs to be to ensure trust is retained.

A reappraisal of Windows 8 and 8.1 licensing

15th November 2013

With the release of Windows 8 around this time last year, I thought that the full retail version that some of us got for fresh installations on PCs, real or virtual, had become a thing of the past. In fact, it did seem that every self-respecting technology news website and magazine was saying just that. The release that you would buy from Microsoft or from mainstream computer stores was labelled as an upgrade. That made it look as if you needed the OEM or System Builder edition for those PCs that needed a new Windows installation, and that the licence that you bought was then attached to the machine from when it got installed on there.

As is usual with Microsoft, the situation is less clear-cut than that. For instance, there was some back-pedalling to allow OEM editions of Windows to be licensed for personal use on real or virtual PCs. With Windows 7 and its predecessors, it even was possible to install afresh on a PC without Windows by first installing an inactivated copy on there and then upgrading that as if it were a previous version of Windows. Of course, an actual licence for the previous version of Windows was needed for full compliance, if not for the actual installation. At times, Microsoft muddies the waters so as to keep its support costs down.

Even with Microsoft’s track record in mind, it still did surprise me when I noticed that Amazon was selling what appeared to be full versions of both Windows 8.1 and Windows 8.1 Pro. Having set up a 64-bit VirtualBox virtual machine for Windows 8.1, I got to discovering the same for software purchased from the Microsoft website. However, unlike the DVD versions, you do need an active Windows installation if you fancy a same-day installation of the downloaded software. For those without Windows on a machine, this can be as simple as downloading either the 32-bit or the 64-bit 90-day evaluation edition of Windows 8.1 Enterprise and using that as a springboard for the next steps. This need not be an actual in-situ installation, because there are options to create an ISO or USB image of the installation disk for later use.

In my case, I created a 64-bit ISO image and used that to reboot the virtual machine that had Windows 8.1 Enterprise on there before continuing with the installation. By all appearances, there seemed to be little need for a pre-existing Windows instance for it to work, so it looks as if upgrades have fallen by the wayside and only full editions of Windows 8.1 are available now. The OEM version saves money so long as you are happy to stick with just one machine, and most users probably will do that. As for the portability of the full retail version, that is not something that I have tested, and I am unsure that I will go beyond what I have done already.

My main machine has seen a change of motherboard, CPU and memory, so it could have deactivated a pre-existing Windows licence. However, I run Linux as my main operating system and, apart possibly from one surmountable hiccup, this proves surprisingly resilient in the face of such major system changes. For running Windows, I turn to virtual machines, and there were no messages about licence activation during the changeover either. Microsoft is anything but forthcoming when it comes to declaring what hardware changes deactivate a licence. Changing a virtual machine from VirtualBox to VMware or vice versa definitely does so, which is why I tend to avoid doing that. One item that is fundamental to either a virtual or a real PC is the mainboard, and I have seen suggestions that this is the critical component for Windows licence activation; it would make sense if that were the case.

However, this rule is not hard and fast either, since there appears to be room for manoeuvre should your PC break. It might be worth calling Microsoft after a motherboard replacement to see if they can help you, and I have seen that it can be. All in all, Microsoft often makes what appear to be simple rules, only to override them when faced with what happens in the real world. Is that why they can be unclear about some matters at times? Do they still hanker after how they want things to be, even when they are impossible to keep like that?

Piggybacking an Android Wi-Fi device off your Windows PC’s internet connection

16th March 2013

One of the disadvantages of my Google/Asus Nexus 7 is that it needs a Wi-Fi connection for use. Most of the time this is not a problem, since I also have a Huawei mobile Wi-Fi hub from T-Mobile and this seems to work just about anywhere in the U.K. Away from the U.K. though, it won’t work because roaming is not switched on for it, and that may be no bad thing given the fees that roaming could introduce. My HTC Desire S could deputise, but I need to watch costs with that too.

There’s also the factor of download caps and those apply both to the Huawei and to the HTC. Recently, I added Anquet‘s Outdoor Map Navigator (OMN) to my Nexus 7 through the Google Play store for a fee of £7 and that allows access to any walking maps that I have bought from Anquet. However, those are large downloads so the caps start to come into play. Frugality would help but I began to look at other possibilities that make use of a laptop’s Wi-Fi functionality.

Looking on the web, I found two options for this that work on Windows 7 (8 should be fine too): Connectify Hotspot and Virtual Router Manager. The first of these is commercial software, but there is a Lite edition for those wanting to try it out; it does not appear to be a time-limited demo, merely missing features that you would get if you paid for the Pro variant. The second option is an open-source one and is free of charge, apart from an invitation to donate to the project.

Though online tutorials show the usage of either of these to be straightforward, my experiences were not all that positive at the outset. In fact, there was something extra that I needed to do, and that is why this post has come to exist at all. That applied even after the restart that Connectify Hotspot needed as part of its installation; it runs as a system service, which is why the restart was needed. In fact, it was Virtual Router Manager that told me what the issue was, and it needed no reboot. Neither did it cause the network disconnection on the laptop that the Connectify offering did for me, which was the cause of its ejection from that system; limitations in favour of its paid edition aside, it may have the snazzier interface, but I’ll take effective simplicity any day.

Using Virtual Router Manager turns out to be simple enough. It needs a network name (also known as an SSID), a password to restrict who accesses the network and the internet connection to be shared. In my case, that was Local Area Connection on the drop-down list. With all the required information entered, I was ready to start the router using the Start Network Router button. The text on this changes to Stop Network Router when the hub is operational, or at least it should have done for me on the first time that I ran it. What I got instead was the following message:

The group or resource is not in the correct state to perform the requested operation.

The above may not say all that much, but it becomes more than ample information if you enter it into the likes of Google. Behind the scenes, Virtual Router Manager uses native Windows functionality to create a Wi-Fi hub from a PC, and it appears to be the Microsoft Virtual Wi-Fi Miniport Adapter from what I have seen. When I tried setting up an ad hoc Wi-Fi network from a laptop to the Nexus 7 using Windows’ own network set-up capability via its Control Panel, it didn’t do what I needed, so there might be something that third-party software can add. So, the interesting thing about the solution to my Virtual Router Manager problem was that it needed me to delve into the innards of Windows a little.

Firstly, there’s running Command Prompt (All Programs > Accessories) from the Start Menu with Administrator privileges. It helps here if the account with which you log into Windows is in the Administrators group, since all you have to do then is right-click on the Start Menu entry and choose the Run as administrator entry in the pop-up context menu. With a command line window now open, you then need to issue the following command:

netsh wlan set hostednetwork mode=allow ssid=[network name] key=[password] keyUsage=persistent

When that had done its thing, Virtual Router Manager worked without a hitch, though it did turn itself off after a while, and that may be no bad thing from the security standpoint. On the Android side, it was a matter of going into Settings > Wi-Fi and choosing the new network that had been created on the laptop. This sort of thing may apply to other types of tablet (dare I mention iPads?), so you could connect anything to the hub without needing to do any more on the Windows side.

For those wanting to know what’s going on behind the scenes on Windows, there’s a useful tutorial on Instructables that shows what third party software is saving you from having to do. Even if I never go down the more DIY route, I probably have saved myself having to buy a mobile Wi-Fi hub for any trips to Éire. For now, the Irish 3G dongle that I already have should be enough.
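For the record, that more manual route appears to come down to starting and stopping the hosted network yourself from an administrator Command Prompt, once the set hostednetwork command from earlier has been issued; the commands below are how I understand it to work, though I have not taken this any further myself:

netsh wlan start hostednetwork
netsh wlan stop hostednetwork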

Getting an Epson Perfection 4490 Photo scanner going with Ubuntu GNOME Remix 12.10

7th March 2013

My Epson Perfection 4490 Photo scanner has been in my possession for a while now, and it’s impossible to justify any replacement given that it works well and digital photography has taken over from film for me. Every time I install an operating system afresh, I need to reinstate it, and last year’s installation of Ubuntu GNOME Remix 12.10 only saw me do the deed recently. When I did so, it came back to me that I had never documented on here how this was done. Given that I sometimes use this place as a repository of stuff to which I need to refer again in the future, it seemed remiss of me, so here it is for you all.

Though I had XSane and Simple Scan already installed on the system, SANE wasn’t on there, so I went and added it and a few other extras using the following command:

sudo apt-get install sane sane-utils libsane-extras

Then, it was onto the Epson website for their Perfection 4490 Photo Linux drivers, since SANE’s support for this scanner seemingly remains incomplete, even though the model pre-dates my move to Linux in 2007. Three files were needed, and the following commands install them (depending on when you do this, the file names may be different, so just change them to whatever they are for you; it can be done with a single command too, but there is not enough room for that here):

sudo dpkg -i iscan-data_1.22.0-1_all.deb
sudo dpkg -i iscan_2.29.1-5~usb0.1.ltdl7_i386.deb
sudo dpkg -i iscan-plugin-gt-x750_2.1.2-1_i386.deb

With those in place, there was one other task that needed doing so that scanning could be done without resorting to running scanning software with sudo privileges. To free up access for a normal user account, I needed a HAL device information file. These normally live in /usr/share/hal/fdi/, but they get replaced with every installation, so any modifications that you make there are going to be lost. Therefore, there is no point modifying either /usr/share/hal/fdi/preprobe/10osvendor/20-libsane.fdi or /usr/share/hal/fdi/preprobe/10osvendor/20-libsane-extras.fdi, where scanner information usually is to be found.

The first task in creating an fdi file was to issue the lsusb command and look for a line corresponding to my scanner. This is the one that I got:

Bus 001 Device 004: ID 04b8:0119 Seiko Epson Corp. Perfection 4490 Photo

From this, I gleaned the manufacturer ID and model ID as 04b8 and 0119, respectively; these are needed later on. Next, I needed to create the hal/fdi/preprobe/ folder structure under /etc, since it was not already there. Then, I created epson4490photo.fdi in the bottom folder of the tree (/etc/hal/fdi/preprobe/epson4490photo.fdi) as follows:

cd /etc/hal/fdi/preprobe/ && sudo touch epson4490photo.fdi
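If the folder tree is missing, as it was for me, it can be created in one go beforehand; a command like this would do it, though I did not note down the exact one that I used at the time:

sudo mkdir -p /etc/hal/fdi/preprobe/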

Then, I edited the new file using the following command:

gksu gedit epson4490photo.fdi &

When open, I added in the following text:

<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="usb">
      <!-- Epson Perfection 4490 Photo -->
      <match key="usb.vendor_id" int="0x04b8">
        <match key="usb.product_id" int="0x0119">
          <append key="info.capabilities" type="strlist">scanner</append>
          <merge key="scanner.access_method" type="string">proprietary</merge>
        </match>
      </match>
    </match>
  </device>
</deviceinfo>

It’s all in XML, so the place to look is immediately beneath the scanner name comment. The int attributes of the two match elements immediately following the comment line are populated using the information from the lsusb command output, with 0x prefixing both the manufacturer and model identifiers. The element with a key attribute of usb.vendor_id takes the former, and that with a key attribute of usb.product_id takes the latter. With epson4490photo.fdi saved, I rebooted the machine to restart HAL, and all was as I wanted it to be, apart maybe from XSane making complaints that seemed not to be of any actual consequence. With Epson’s Image Scan! and Simple Scan on the PC, there’s no need to be bothered with those messages. Choice is good when you have it, especially when you have expended some effort to get that far.

Getting rid of a Dropbox error message on a Linux-powered PC

24th September 2012

One of my PCs has ended up becoming a testing ground for a number of Linux distributions. The list has included openSUSE, Fedora, Arch and LMDE, with Sabayon being the latest incumbent. From Arch onwards in that list though, a message has appeared on loading the desktop with every one of these whenever I have Dropbox’s client set up on there:

Unable to monitor entire Dropbox folder hierarchy. Please run “echo 100000 | sudo tee /proc/sys/fs/inotify/max_user_watches” and restart Dropbox to correct the problem.

Even applying the remedy that the message suggests won’t permanently fix the problem. For that, you need to edit /etc/sysctl.conf with superuser access and add the following line to it:

fs.inotify.max_user_watches = 100000

With that in place, you can issue the following command to fix the problem in the current session (assuming your user account is listed in /etc/sudoers):

sudo sysctl -p && dropbox stop && dropbox start
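To check that the higher limit is in force, the current value can be read back with this command:

cat /proc/sys/fs/inotify/max_user_watches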

A reboot should demonstrate that the message no longer appears. For a good while, I had ignored it, but curiosity eventually got me to find out how it could be stopped, and that led to what you find above.
