TOPIC: SUPERUSER
Upgrading from OpenMediaVault 6.x to OpenMediaVault 7.x
29th May 2024
Having an older PC to upgrade, I decided to install OpenMediaVault on there a few years ago after adding in 6 TB and 4 TB hard drives for storage, a Gigabit network card to speed up backups and a new BeQuiet! power supply to make it quieter. It has been working smoothly since then, and the release of OpenMediaVault 7.x had me wondering how to move to it.
Usefully, I enabled an SSH service for remote logins and set up an account for anything that I needed to do. This includes upgrades, taking backups of what is on my NAS drives, and even shutting down the machine when I am done with what I need to do with it.
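As an example of the sort of thing this allows, a remote shutdown needs nothing more than the following, assuming the account has the required sudo rights; the user and host names are placeholders:
ssh [your user name]@[NAS address]
sudo poweroff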
Using an SSH session, the first step was to switch to the administrator account and issue the following command to ensure that my OpenMediaVault 6.x installation was as up-to-date as it could be:
omv-update
Once that had completed what it needed to do, the next step was to do the upgrade itself with the following command:
omv-release-upgrade
With that complete, it was time to reboot the system. Afterwards, I fired up the web administration interface and spotted a kernel update, which I applied. The system was restarted again, and further updates were noticed and applied, also through the web interface. The whole thing is based on Debian 12.x, but I am not complaining as long as it quietly does exactly what I need of it. There was one slight glitch when doing an update after the changeover, and that was quickly sorted.
Later on, I ran into trouble because I had changed my broadband. Because the router address had changed, the system lost its access to the rest of the internet. The web interface also got disabled and was issuing 502 Bad Gateway errors. The solution was to execute the following command with superuser privileges:
omv-salt stage run deploy
That took quite a while to run, though. After it completed, I needed to work out what the administrator credentials were. With that done, I could log in and update the network details as needed to restore external internet access. Since then, all has been well.
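Worth knowing about in such situations is the omv-firstaid console utility; among its menu options are resetting the web control panel password and reconfiguring the network interface, so it may offer a quicker route out of a similar lockout. It needs the same superuser privileges:
omv-firstaid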
Running cron jobs using the www-data system account
22nd December 2018
When you set up your own web server or use a private server (virtual or physical), you will find that web servers run using the www-data account. That means that website files need to be accessible to that system account, if not owned by it. The latter is mandatory if you want WordPress to be able to update itself without needing FTP details.
It also means that you probably need scheduled jobs to be executed using the privileges possessed by the www-data account. For instance, I use WP-CLI to automate spam removal and updates to plugins, themes and WordPress itself. Spam removal can be done without the www-data account, but the updates need file access and cannot be completed without it. Therefore, I got interested in setting up cron jobs to run under that account, and the following command helps to address this:
sudo -u www-data crontab -e
For that to work, your own account needs to be listed in /etc/sudoers or be assigned to the sudo group in /etc/group. If either is the case, then entering your own password will open the cron file for www-data, and it can be edited as for any other account. Closing and saving the session will update cron with the new job details.
In fact, the same approach can be taken for a variety of commands where files can only be accessed using www-data. This includes copying, pasting and deleting files, as well as executing WP-CLI commands. The latter issues a striking message if you run a command using the root account, a pervasive temptation given what it allows. Any alternative to using root has to be better from a security standpoint.
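To give a flavour of what the www-data cron file might contain, here are two hypothetical entries for the WP-CLI updates mentioned above; the schedule and the WordPress installation path are made up for illustration, and wp is assumed to be on the PATH:
0 2 * * * cd [path to WordPress installation] && wp plugin update --all
30 2 * * * cd [path to WordPress installation] && wp core update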
Halting constant disk activity on a WD My Cloud NAS
6th June 2018
Recently, I noticed that the disk in my WD My Cloud NAS was active all the time, and it reminded me of another time when this happened. Then, I needed to activate the SSH service on the device and log in as root with the default password welc0me. That password was changed before doing anything else. Since the device runs on Debian Linux, that was a simple case of using the passwd command and following the prompts. One word of caution is in order: only root can be used for SSH connections to a WD My Cloud NAS, and any other user that you set up will not have these privileges.
The cause of all the activity was two services: wdmcserverd and wdphotodbmergerd. One way to halt their actions is to stop the services using these commands:
/etc/init.d/wdmcserverd stop
/etc/init.d/wdphotodbmergerd stop
The above only works until the next system restart, so these commands should make for a more persistent disabling of the culprits:
update-rc.d -f wdmcserverd remove
update-rc.d -f wdphotodbmergerd remove
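Reversing the change later is straightforward, should either service ever be wanted again; a command like this re-adds one of them to the default run levels:
update-rc.d wdmcserverd defaults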
If all else fails, removing executable privileges from the normally executable files that the services need will work, and it is a solution that I have tried successfully between system updates:
cd /etc/init.d
chmod 644 wdmcserverd wdphotodbmergerd
reboot
Between all of these, it should be possible to have your WD My Cloud NAS go into power saving mode as it should, even if turning off additional services such as DLNA may be what some need to do. Having already turned those off, I only needed to disable the photo thumbnail services that were the cause of my machine's troubles.
Killing Windows processes from the command line
26th September 2015
During my days at work, I often hear about the need to restart a server because something has gone awry with it. This makes me wonder if you can kill processes from the command line, like you do in Linux and UNIX. A recent need to reset Windows Update on a Windows 10 machine gave me enough reason to answer the question.
Because I already knew the names of the services, I had no need to look at the Services tab in the Task Manager like you otherwise would. Then, it was a matter of opening up a command line session with Administrator privileges and issuing a command like the following (replacing [service name] with the name of the service):
sc queryex [service name]
From the output of the above command, you can find the process identifier, or PID. With that information, you can execute a command like the following in the same command line session (replacing [PID] with the actual numeric value of the PID):
taskkill /f /pid [PID]
After the above, the process no longer exists and the service can be restarted. With any system, you need to find the service that is stuck before you can kill it, but that would be the subject of another posting. What I have not got around to testing is whether these commands work in PowerShell, since I used them with the legacy command line instead. Along with processes belonging to software applications (think Word, Excel, Firefox, etc.), that may be something else to try should the occasion arise.
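For anyone who wants to try, here is an untested sketch of how the same job might look in PowerShell; the cmdlets themselves are real, but the combination is unverified, and [service name] and [PID] are placeholders as before:
Get-CimInstance Win32_Service -Filter "Name='[service name]'" | Select-Object ProcessId
Stop-Process -Id [PID] -Force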
Restoring GRUB for dual booting of Linux and Windows
11th April 2015
Once you end up with Windows overwriting your master boot record (MBR), you have lost the ability to use GRUB. Therefore, it would be handy to get it back if you want to start up Linux again. Though the loss of GRUB from the MBR was a deliberate act of mine, I knew that I'd have to restore GRUB to get Linux working again. So, I have been addressing the situation with a Live DVD for the likes of Ubuntu or Linux Mint. Once one of those had loaded its copy of the distribution, issuing the following command in a terminal session gets things back again:
sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda
When there were error messages, I tried this one to see if I could get additional information:
sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda --recheck
Also, it is possible to mount a partition on the boot drive and use that in the command to restore GRUB. Here is the required combination:
sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda
Either of these will get GRUB working without a hitch, and they are far snappier than downloading Boot-Repair and using that; I was doing that for a while until a feature on triple booting appeared in an issue of Linux User & Developer that reminded me of the more readily available option. Once, there was a need to manually add an entry for Windows 7 to the GRUB menu too and, with that in place, I was able to dual boot Ubuntu and Windows using GRUB to select which one was to start. Since then, I have been able to dual boot Linux Mint and Windows 8.1, with GRUB finding the latter all by itself. Your experience may show similar variation, so it's worth bearing in mind.
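On the matter of Windows detection, it is worth noting that, once the restored system boots, the following should regenerate the menu; it calls os-prober, which usually finds a Windows installation all by itself:
sudo update-grub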
Adding Microsoft Core Fonts to Fedora 19
6th July 2013
While I have a previous posting from 2009 that discusses adding Microsoft's Core Fonts to the then current version of Fedora, it did strike me that I hadn't laid out the series of commands that were used. Instead, I referred to an external and unofficial Fedora FAQ. That's still there, yet I also felt that I was leaving things a little to chance, given how websites can disappear quite suddenly.
Even after nearly four years, it still amazes me that you cannot install Microsoft's Core Fonts in Fedora as you would on Ubuntu, Linux Mint or even Debian. Therefore, the following series of steps is as necessary now as it was then.
The first step is to add in a number of precursor applications such as wget for command line file downloading from websites, cabextract for extracting the contents of Windows CAB files, rpmbuild for creating RPM installers, and utilities for the XFS file system that chkfontpath needs:
sudo yum -y install rpm-build cabextract ttmkfdir wget xfs
Here, I have gone with terminal commands that use sudo, but you could become the superuser (root) for all of this, and there are those who believe you should. The -y switch tells yum to go ahead without prompting you for permission before it does any installations. The next step is to download the spec file for the Microsoft fonts package with wget:
sudo wget http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec
Once that is done, you need to install the chkfontpath package because the RPM for the fonts cannot be built without it:
sudo rpm -ivh http://dl.atrpms.net/all/chkfontpath
Once that is in place, you are ready to create the RPM file using this command:
sudo rpmbuild -ba msttcorefonts-2.0-1.spec
After the RPM has been created, it is time to install it:
sudo yum install --nogpgcheck ~/rpmbuild/RPMS/noarch/msttcorefonts-2.0-1.noarch.rpm
When installation has completed, the process is done. Because I used sudo, all of this happened in my own home area, so there was a need for some housekeeping afterwards. If you did it by becoming the root user, then the files would be in root's home area instead, and that's the scenario described in the online FAQ.
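As a quick check that the installation worked, fontconfig can be asked to list what it now sees; the following should return entries for one of the newly added typefaces:
fc-list | grep -i arial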
Setting up MySQL on Sabayon Linux
27th September 2012
For quite a while now, I have had offline web servers for doing a spot of tweaking and tinkering away from the gaze of web users that visit what I have on there. Therefore, one of the tests that I apply to any prospective main Linux distro is the ability to set up a web server on there. This is more straightforward for some than for others. For Ubuntu and Linux Mint, it is a matter of installing the required software and doing a small bit of configuration. My experience with Sabayon is that it needs a little more effort than this, so I am sharing it here for the installation of MySQL.
The first step is to install the software using the commands that you find below. The first pops the software onto the system, while the second completes the set-up. The --basedir option is needed with the latter because it won't find things without it. It specifies the base location on the system, which is /usr in my case.
sudo equo install dev-db/mysql
sudo /usr/bin/mysql_install_db --basedir=/usr
With the above complete, it's time to start the database server and set the password for the root user. That's what the two following commands achieve. Once your root password is set, you can go about creating databases and adding other users using the MySQL command line.
sudo /etc/init.d/mysql start
mysqladmin -u root password 'password'
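To illustrate that last point, creating a database and a dedicated user might look like the following once you are at the MySQL command line; the database name, user name and password here are placeholders:
mysql -u root -p
CREATE DATABASE exampledb;
CREATE USER 'exampleuser'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON exampledb.* TO 'exampleuser'@'localhost';
FLUSH PRIVILEGES;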
The last step is to set the database server to start every time you start your Sabayon system. The first command adds an entry for MySQL to the default run level so that this happens. The second command checks that the entry is in place before you restart your computer to confirm that it really works. This procedure is also necessary for having an Apache web server behave in the same way, so the commands are worth having, and they may even have a use for other services on your system. ProFTP is another that comes to mind, for instance.
sudo rc-update add mysql default
sudo rc-update show | grep mysql
Listing hardware information for Linux systems
3rd August 2012
Curiosity about the graphics card on my backup PC caused me to look for ways of getting this information without opening up the machine or searching for a manual. In the end, a solitary command did the job:
sudo lshw
If you are running it as root, the sudo piece can be dropped, but the result is the same. As it happened, it provided me with the information that I needed.
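Should the full listing prove overwhelming, the output can be restricted by hardware class; this variant reports only on display devices such as graphics cards:
sudo lshw -C display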
Taking the sudo command beyond Ubuntu
27th October 2010
Though some may call it introducing a security risk, being able to execute administrator commands on Ubuntu using sudo and gksu by default is handy. It's not the only Linux distribution with the facility, though, since the /etc/sudoers file is found in Debian, and I plan to have a look into Fedora. The thing that needs to be done is to add the following line to the aforementioned file (you will need to do this as root):
[your user name] ALL=(ALL) ALL
Once that is done, you are all set. Just make sure that you're using a secure password, though, and removing the sudo/gksu permissions is as simple as reversing the change.
Update on 2011-12-03: The very same can be done for both Arch Linux and Fedora; the same file locations apply too.
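One refinement worth mentioning: rather than editing /etc/sudoers directly, it is safer to use visudo, which checks the file's syntax before saving and so guards against locking yourself out (run this as root too):
visudo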
Ubuntu upgrades: do a clean installation or use Update Manager?
9th April 2009
Part of some recent "fooling" brought on by the investigation of what turned out to be a duff DVD writer was a fresh installation of Ubuntu 8.10 on my main home PC. It might have brought on a certain amount of upheaval, but it was nowhere near as severe as that following the same sort of thing with a Windows system. While a few hours was all that was needed, I found myself asking whether it is better to perform just an upgrade every time a new Ubuntu release is unleashed on the world, or to go for a complete virgin installation instead. With Ubuntu 9.04 in the offing, that question takes on a more immediate significance than it otherwise might do.
Various tricks make the whole reinstallation idea more palatable. For instance, many years of Windows usage have taught me the benefits of separating system and user files. The result is that my home directory lives on a different disk to my operating system files. Add to that the experience of being able to reuse that home drive across different Linux distros, and even swapping from one distro to another becomes feasible. From various changes to my secondary machine, I can vouch that this works for Ubuntu, Fedora and Debian; the latter is what currently powers the said PC. Though you might have to use superuser powers to attend to ownership and access issues, the portability is certainly there, and it applies to anything kept on other disks too.
Naturally, there's always the possibility of losing programs that you have had installed, but losing the clutter can be liberating too. However, assembling a script made up of one or more apt-get install commands can allow you to get many things back at a stroke. For example, I have a test web server (Apache/MySQL/PHP/Perl) set up, so this would be how I'd get everything back in place before beginning further configuration. It might be no bad idea to back up your collection of software sources, either; I have yet to add all the ones that I have been using back into Synaptic. Then there are closed source packages such as VirtualBox (yes, I know that there is an open-source edition) and Adobe Reader. After reinstating the former, all my virtual machines were available for me to use again without further ado. Restoring the latter allowed me to grab version 9.1 (probably more secure anyway), and it inveigles itself into Firefox now too, so the number of times that I need to go through the download shuffle before seeing the contents of a PDF is much reduced, though not eliminated, by the Windows-like ability to see a PDF loaded in a browser tab. Moving from software to hardware for a moment, it looks like any bespoke actions, such as activating my Epson Perfection 4490 Photo scanner, need to be repeated, but that was all that I had to do. Getting things back into order is not so bad, even if you have to allow a modicum of time for this.
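Returning to the script idea mentioned above, a minimal sketch of such a reinstallation script might look like this; the package names are only an example for an Apache/MySQL/PHP/Perl test server and reflect the Ubuntu releases of the time:
sudo apt-get update
sudo apt-get -y install apache2 mysql-server php5 libapache2-mod-php5 libapache2-mod-perl2 perl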
What I have discussed so far are what might be categorised as the common or garden aspects of a clean installation, yet I have seen some behaviours that make me wonder if the usual Ubuntu upgrade path is sufficiently complete in its refresh of your system. The counterpoint to all of this is that I may not have been looking for some of these things before now. That may apply to my noticing that DSLR support seems to be better, with my Canon and Pentax cameras both being picked up and mounted for me as soon as they are connected to a PC, the caveat being that they must themselves be powered on for this to happen. Another surprise that may be new is that the BBC iPlayer's Listen Again works without further work from the user, a very useful development. It obviously wasn't that way before I took the more invasive route. My previous tweaking might have prevented the in situ upgrade from doing its thing, but I do see the point of not upsetting people's systems with an overly aggressive update process, even if it means that some advances do not make themselves known.
So what's my answer regarding which way to go once Ubuntu Jaunty Jackalope appears? For the sake of avoiding initial disruption, I'd be inclined to go down the Update Manager route first, while reserving the right to do a fresh installation later on. All in all, I am left with the gut feeling that the jury is still out on this one.