TOPIC: SYSTEM ADMINISTRATION
Keeping a graphical eye on CPU temperature and power consumption on the Linux command line
20th March 2025
Following my main workstation upgrade in January, some extra monitoring has been needed. This follows on from the experience with building its predecessor more than three years ago.
Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like what you see below, using a combination of sensors and nvidia-smi in the following command:
watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo \"\" | tee /dev/fd/2; nvidia-smi"
Everything is done within a watch command that refreshes the display every two seconds. Then, the panels are built up by a succession of commands separated with semicolons, one for each portion of the display. The grep command is used to pick out the desired output of the sensors command that is piped to it; doing that twice gets us two lines. The next command, echo "" | tee /dev/fd/2, adds an extra line by sending an empty line to STDERR before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.
However, I also came across a more graphical way to do things using commands like turbostat or sensors along with AWK programming and ttyplot. Using the temperature output from the above and converting it for plotting needs the following:
while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"
This is done in an infinite while loop to keep things refreshing; the watch command does not work for piping output from the sensors command to both the awk and ttyplot commands in sequence and on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two decimal places and prints it, flushing the output buffer afterwards. The ttyplot command then plots those numbers on the plot seen below in the screenshot, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).
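The same trick works for other numeric sources too. As a hedged sketch, GPU temperature can be plotted the same way by swapping sensors for an nvidia-smi query (the query field names assume a reasonably recent NVIDIA driver); no awk stage is needed because the query already emits a bare number per line:

while true; do nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader,nounits; sleep 2; done | ttyplot -s 100 -t "GPU Temperature" -u "°C"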
A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:
sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | awk '{ printf("%.2f\n", $1); fflush(); }' | ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"
Handily, the turbostat command can be made to update every so often (every second in the command above), avoiding the need for any infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though turbostat itself needs superuser privileges, unlike the sensors command earlier; the awk and ttyplot stages can run as a normal user. Then, awk is used like before to process the wattage for plotting; the first field is what is being picked out here. After that, ttyplot displays the plot seen in the screenshot below with appropriate title, units and scaling. All works with output from one command acting as input to another using pipes.
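nvidia-smi has a comparable built-in loop mode, so GPU power can be plotted without a shell loop as well; a sketch along the same lines, with the field names and the 400 W axis scale being assumptions for your hardware:

nvidia-smi --query-gpu=power.draw --format=csv,noheader,nounits -l 2 | ttyplot -s 400 -t "GPU Power (watts)" -u "watts"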
All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.
Changing the Ansible Vault editor from Vi to Nano
15th August 2022
Recently, I got to experiment with Ansible after reading about the orchestration tool in a copy of Admin magazine. It came in handy for updating a few web servers that I have, as well as updating my main Linux workstation. For the former, automated entry of SSH passwords sufficed, but the same did not apply to sudo usage on my local machine. This meant that I needed to use Ansible Vault to store the administrator password, and doing so opened up a file in the Vi editor. Since I am not familiar with Vi and wanted to get things sorted quickly, I fancied using something more user-friendly like Nano.
Doing this meant adding the following line to .bashrc:
export EDITOR=nano
Saving and closing the file followed by reloading the session set me up for what was needed.
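The same effect can be had for a single invocation without touching .bashrc at all, since ansible-vault respects the EDITOR environment variable; a minimal sketch, with secrets.yml standing in for whatever vault file is being edited (an assumed name):

EDITOR=nano ansible-vault edit secrets.yml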
Running cron jobs using the www-data system account
22nd December 2018
When you set up your own web server or use a private server (virtual or physical), you will find that web servers run using the www-data account. That means that website files need to be accessible to that system account if not owned by it. The latter is mandatory if you want WordPress to be able to update itself without needing FTP details.
It also means that you probably need scheduled jobs to be executed using the privileges possessed by the www-data account. For instance, I use WP-CLI to automate spam removal and updates to plugins, themes and WordPress itself. Spam removal can be done without the www-data account, but the updates need file access and cannot be completed without this. Therefore, I got interested in setting up cron jobs to run under that account, and the following command helps to address this:
sudo -u www-data crontab -e
For that to work, your own account needs to be listed in /etc/sudoers or be assigned to the sudo group in /etc/group. If either applies, then entering your own password will open the cron file for www-data, and it can be edited as for any other account. Closing and saving the session will update cron with the new job details.
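For illustration, here is the kind of entry that might go into that crontab; a sketch only, with the wp binary location and the /var/www/html site path both being assumptions for your own setup:

0 3 * * * /usr/local/bin/wp plugin update --all --path=/var/www/html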
In fact, the same approach can be taken for a variety of commands where files can be accessed only using www-data. This includes copying, pasting and deleting files, as well as executing WP-CLI commands. The latter issues a striking warning if you run a command using the root account, a pervasive temptation given what it allows. Any alternative to the latter has to be better from a security standpoint.
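The same sudo -u pattern works for ad-hoc commands too; a hedged example, again assuming WP-CLI and a site located at /var/www/html:

sudo -u www-data wp core update --path=/var/www/html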
Sorting out sluggish start-up and shutdown times in Linux Mint 19
9th August 2018
The Linux Mint team never forces users to upgrade to the latest version of their distribution, but curiosity often provides a strong enough impulse for me to do so. When I encounter rough edges, the wisdom of leaving things unchanged becomes apparent. Nevertheless, the process brings learning opportunities, which I am sharing in this post. It also allows me to collect various useful titbits that might help others.
Again, I went with the in-situ upgrade option, though the addition of the Timeshift backup tool means that it is less frowned upon than once would have been the case. It worked well too, apart from slow start-up and shutdown times, so I set about tracking down the causes on the two machines that I have running Linux Mint. As it happens, the cause was different on each machine.
On one PC, it was networking that was holding things up. The cause was my specifying a fixed IP address in /etc/network/interfaces instead of using the Network Settings GUI tool. Resetting the configuration file back to its defaults and using the Cinnamon settings interface took away the delays. It was inspecting /var/log/boot.log that highlighted the problem, so that is worth checking if I ever encounter slow start times again.
As I mentioned earlier, the second PC had a very different problem, though it also involved a configuration file. What had happened was that /etc/initramfs-tools/conf.d/resume contained the wrong UUID for my system's swap drive, so I was seeing messages like the following:
W: initramfs-tools configuration sets RESUME=UUID=<specified UUID for swap partition>
W: but no matching swap device is available.
I: The initramfs will attempt to resume from <specified file system location>
I: (UUID=<specified UUID for swap partition>)
I: Set the RESUME variable to override this.
Correcting the file and executing the following command resolved the issue by updating the affected initramfs image for all installed kernels, speeding up PC start-up times:
sudo update-initramfs -u -k all
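To find the correct UUID to put into the resume file in the first place, the block device tools can be queried; a quick sketch using blkid (part of util-linux), which marks swap partitions with TYPE="swap":

sudo blkid | grep -i swap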
Though it was not a cause of system sluggishness, I also sorted another message that I kept seeing during kernel updates and removals on both machines. This had been there for a while and caused warning messages about my system locale not being recognised. The problem has been described elsewhere as follows: /usr/share/initramfs-tools/hooks/root_locale is expecting to see individual locale directories in /usr/lib/locale, but locale-gen is configured to generate an archive file by default. Issuing the following command sorted that:
sudo locale-gen --purge --no-archive
Following these, my new Linux Mint 19 installations have stabilised, with speedier start-up and shutdown times. That allows me to look at what is on Flathub, to see what applications are available and whether they get updated to the latest version on an ongoing basis. That may be a topic for another entry on here, but the applications that I have tried work well so far.
Halting constant disk activity on a WD My Cloud NAS
6th June 2018
Recently, I noticed that the disk in my WD My Cloud NAS was active all the time, so it reminded me of another time when this happened. Then, I needed to activate the SSH service on the device and log in as root with the password welc0me. That default password was changed before doing anything else. Since the device runs on Debian Linux, that was a simple case of using the passwd command and following the prompts. One word of caution is in order, since only root can be used for SSH connections to a WD My Cloud NAS, and any other user that you set up will not have these privileges.
The cause of all the activity was two services: wdmcserverd and wdphotodbmergerd. One way to halt their actions is to stop the services using these commands:
/etc/init.d/wdmcserverd stop
/etc/init.d/wdphotodbmergerd stop
The above act only works until the next system restart, so these commands should make for a more persistent disabling of the culprits:
update-rc.d -f wdmcserverd remove
update-rc.d -f wdphotodbmergerd remove
If all else fails, removing executable privileges from the normally executable files that the services need will work, and it is a solution that I have tried successfully between system updates:

cd /etc/init.d
chmod 644 wdmcserverd
chmod 644 wdphotodbmergerd
reboot
Between all of these, it should be possible to have your WD My Cloud NAS go into power saving mode as it should, even if turning off additional services such as DLNA may be what some need to do. Having turned off those already, I only needed to disable the photo thumbnail services that were the cause of my machine's troubles.
Reloading .bashrc within a BASH terminal session
3rd July 2016
BASH is a command-line interpreter that is commonly used by Linux and UNIX operating systems. Chances are that you will find yourself in a BASH session if you start up a terminal emulator in many of these, though there are others like KSH and ZSH too.
BASH comes with its own configuration files, and one of these is located in your own home directory: .bashrc. Among other things, it can become a place to store command shortcuts or aliases. Here is an example:
alias us='sudo apt-get update && sudo apt-get upgrade'
Such a definition needs there to be no spaces around the equals sign, and the actual command to be declared in single quotes. Doing anything other than this will not work, as I have found. Also, there are times when you want to update or add one of these and use it without shutting down a terminal emulator and restarting it.
To reload the .bashrc file to use the updates contained in there, one of the following commands can be issued:
source ~/.bashrc
. ~/.bashrc
Both will read the file and execute its contents, making those updates available so you can continue what you are doing. There appears to be a tendency for this kind of thing in the world of Linux and UNIX, because it also applies to remounting drives after a change to /etc/fstab and to restarting system services like Apache, MySQL or Nginx. The command for the former is below:
sudo mount -a
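For the latter, the exact command depends on the distribution; a sketch for a systemd-based system, with apache2 as an assumed service name (it is httpd on some distributions):

sudo systemctl restart apache2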
Often, the means for applying the sorts of in-situ changes that you make are simple ones too, and anything that avoids a system reboot has to be good, since it means fewer interruptions to your work.
Restoring GRUB for dual booting of Linux and Windows
11th April 2015
Once you end up with Windows overwriting your master boot record (MBR), you have lost the ability to use GRUB. Therefore, it would be handy to get it back if you want to start up Linux again. Though the loss of GRUB from the MBR was a deliberate act of mine, I knew that I'd have to restore GRUB to get Linux working again. So, I have been addressing the situation with a Live DVD for the likes of Ubuntu or Linux Mint. Once one of those had loaded its copy of the distribution, issuing the following command in a terminal session gets things back again:
sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda
When there were error messages, I tried this one to see if I could get additional information:
sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda --recheck
Also, it is possible to mount a partition on the boot drive and use that in the command to restore GRUB. Here is the required combination:
sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda
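Should the restored menu be missing entries afterwards, regenerating the GRUB configuration may help; a sketch, assuming it is run from the installed system itself (or from a chroot into it):

sudo update-grub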
Either of these will get GRUB working without a hitch, and they are far snappier than downloading Boot-Repair and using that; I was doing the latter for a while until a feature on triple booting in an issue of Linux User & Developer reminded me of the more readily available option. Once, there was a need to add an entry for Windows 7 to the GRUB menu manually too and, with that in place, I was able to dual boot Ubuntu and Windows, using GRUB to select which one was to start for me. Since then, I have been able to dual boot Linux Mint and Windows 8.1, with GRUB finding the latter all by itself. Your experience may show this sort of variation too, so it is worth bearing in mind.
Installing Citrix Receiver 13.0 in Ubuntu GNOME 13.10 64-bit
28th November 2013
Installing the latest version of Citrix Receiver (13.0 at the time of writing) on 64-bit Ubuntu should be as simple as downloading the required DEB package and double-clicking on the file so that Ubuntu Software Centre can work its magic. Unfortunately, the 64-bit DEB file is faulty, which means that the Ubuntu community how-to guide for Citrix still is needed. In fact, any user of Linux Mint or another distro that uses Ubuntu as its base would do well to have a look at that Ubuntu link.
For the sake of completeness, I still am going to let you in on the process that worked for me. Once the DEB file has been downloaded, the first task is to create a temporary folder where the DEB file's contents can be extracted:
mkdir ica_temp
With that in place, it then is time to do the extraction, which needs two commands: the second of these extracts the control file, while the first extracts everything else.
sudo dpkg-deb -x icaclient- ica_temp
sudo dpkg-deb --control icaclient- ica_temp/DEBIAN
It is the control file that has been the cause of all the bother because it refers to unavailable dependencies that it really doesn't need anyway. To open the file for editing, issue the following command:
sudo gedit ica_temp/DEBIAN/control
Then change line 7 (it should begin with Depends:) to: Depends: libc6-i386 (>= 2.7-1), lib32z1, nspluginwrapper. While there are other software packages in there that Ubuntu no longer supports, they are not needed anyway. With the edit made and the file saved, the next step is to build a new DEB package with the corrected control file:
dpkg -b ica_temp icaclient-modified.deb
Once you have the package, the next step is to install it using the following command:
sudo dpkg -i icaclient-modified.deb
If it fails, then you have missing dependencies, and the following command should sort these before a re-run of the above command:
sudo apt-get install libmotif4:i386 nspluginwrapper lib32z1 libc6-i386
With Citrix Receiver installed, there is one more thing that is needed before you can use it freely. This is to put Thawte security certificate files into /opt/Citrix/ICAClient/keystore/cacerts. What I had not realised until recently was that many of these already are in /usr/share/ca-certificates/mozilla, and linking to them with the following command makes them available to Citrix Receiver:
sudo ln -s /usr/share/ca-certificates/mozilla/* /opt/Citrix/ICAClient/keystore/cacerts/
Another approach is to download the Thawte certificates and extract the archive to /tmp/. From there, they can be copied to /opt/Citrix/ICAClient/keystore/cacerts, and I copied the Thawte Personal Premium certificate as follows (the quotes are needed because of the spaces in the folder and file names):

sudo cp "/tmp/Thawte Root Certificates/Thawte Personal Premium CA/Thawte Personal Premium CA.cer" /opt/Citrix/ICAClient/keystore/cacerts/
Until I found out about what was in the Mozilla folder, I simply picked out the certificate mentioned in the Citrix error message and copied it over like the above. Of course, all of this may seem like a lot of work to those who are non-tinkerers, so I have added a repaired 64-bit DEB package that incorporates all of the above and should not need any further intervention aside from installing it using GDebi, Ubuntu's Software Centre, dpkg or anything else that does what's needed.
Adding Microsoft Core Fonts to Fedora 19
6th July 2013
While I have a previous posting from 2009 that discusses adding Microsoft's Core Fonts to the then current version of Fedora, it did strike me that I hadn't laid out the series of commands that were used. Instead, I referred to an external and unofficial Fedora FAQ. That's still there, yet I also felt that I was leaving things a little to chance, given how websites can disappear quite suddenly.
Even after nearly four years, it still amazes me that you cannot install Microsoft's Core Fonts in Fedora as you would on Ubuntu, Linux Mint or even Debian. Therefore, the following series of steps is as necessary now as it was then.
The first step is to add a number of precursor applications: wget for command-line file downloading from websites, cabextract for extracting the contents of Windows CAB files, rpmbuild for creating RPM installers, ttmkfdir for creating font directory files, and the xfs X font server that chkfontpath needs:

sudo yum -y install rpm-build cabextract ttmkfdir wget xfs
Here, I have gone with terminal commands that use sudo, but you could become the superuser (root) for all of this, and there are those who believe you should. The -y switch tells yum to go ahead without prompting you for permission before it does any installations. The next step is to download the Microsoft fonts package specification with wget:
sudo wget http://corefonts.sourceforge.net/msttcorefonts-2.0-1.spec
Once that is done, you need to install the chkfontpath package, because the RPM for the fonts cannot be built without it:
sudo rpm -ivh http://dl.atrpms.net/all/chkfontpath
Once that is in place, you are ready to create the RPM file using this command:
sudo rpmbuild -ba msttcorefonts-2.0-1.spec
After the RPM has been created, it is time to install it:
sudo yum install --nogpgcheck ~/rpmbuild/RPMS/noarch/msttcorefonts-2.0-1.noarch.rpm
When installation has completed, the process is done. Because I used sudo, all of this happened in my own home area, so there was a need for some housekeeping afterwards. If you did it by becoming the root user, then the files would be in root's home area instead, and that's the scenario in the online FAQ.
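As a quick check that the fonts are visible once everything is in place, fontconfig's fc-list utility can be queried; a small sketch (fontconfig is installed by default on Fedora):

fc-list | grep -i arial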
Getting an Epson Perfection 4490 Photo scanner going with Ubuntu GNOME Remix 12.10
7th March 2013
My Epson Perfection 4490 Photo scanner has been in my possession for a while now, and it is impossible to justify any replacement, given that it works well and that digital photography has taken over from its film predecessor for me. Every time I install an operating system afresh, I need to reinstate it again; last year's installation of Ubuntu GNOME Remix 12.04 only saw me do the deed recently. When I did so, it was brought back to me that I'd never documented on here how this was done. Given that I sometimes use this place as a repository of stuff to which I need to refer again in the future, it seemed remiss of me, so here it is for you all.
Though I had XSane and SimpleScan already installed on the system, Sane wasn't on there. Hence, I went and added it and a few other extras using the following command:
sudo apt-get install sane sane-utils libsane-extras
Then, it was onto the Epson website for their Perfection 4490 Photo Linux drivers, since Sane's support for this scanner seemingly remains incomplete even though it pre-dates my move to Linux in 2007. Three files were needed, and the following commands install them (depending on when you do this, the file names may be different, so just change them to whatever they are for you):
sudo dpkg -i iscan-data_1.22.0-1_all.deb
sudo dpkg -i iscan_2.29.1-5~usb0.1.ltdl7_i386.deb
sudo dpkg -i iscan-plugin-gt-x750_2.1.2-1_i386.deb
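Before going further, it is worth checking that Sane can see the device at all; scanimage ships with sane-utils, so the following sketch should list the scanner if the drivers have taken effect:

scanimage -L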
With those in place, there was one other task that needed doing so that scanning could be done without resorting to running scanning software using sudo privileges. To free up access for a normal user account, I needed a HAL device information file. These normally are in /usr/share/hal/fdi/, but they change with every installation, so any modifications that you may make will be lost. Therefore, there is no point modifying either /usr/share/hal/fdi/preprobe/10osvendor/20-libsane.fdi or /usr/share/hal/fdi/preprobe/10osvendor/20-libsane-extras.fdi, where scanner information usually is to be found.
The first task in creating an FDI file was to issue the lsusb command and look for a line corresponding to my scanner. This is the one that I got:
Bus 001 Device 004: ID 04b8:0119 Seiko Epson Corp. Perfection 4490 Photo
From this, I gleaned the manufacturer ID and model ID as 04b8 and 0119, respectively. These are needed later on. Next, I needed to create the hal/fdi/preprobe/ folder structure under /etc, since it was not already there. Then, I created epson4490photo.fdi in the bottom folder of the tree (/etc/hal/fdi/preprobe/epson4490photo.fdi) as follows:
cd /etc/hal/fdi/preprobe/ && sudo touch epson4490photo.fdi
Then, I edited the new file using the following command:
gksu gedit epson4490photo.fdi &
With the file open, I added in the following text:
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="info.subsystem" string="usb">
      <!-- Epson Perfection 4490 Photo -->
      <match key="usb.vendor_id" int="0x04b8">
        <match key="usb.product_id" int="0x0119">
          <append key="info.capabilities" type="strlist">scanner</append>
          <merge key="scanner.access_method" type="string">proprietary</merge>
        </match>
      </match>
    </match>
  </device>
</deviceinfo>
Since it's all in XML, the place to look is immediately beneath the scanner name comment. The int attributes of the two match elements immediately following the comment line are populated using the information from the lsusb command output, with 0x prefixing both the manufacturer and model identifiers. The element with a key attribute of usb.vendor_id is the former, and that with a key attribute of usb.product_id is the latter. With epson4490photo.fdi saved, I rebooted the machine to restart HAL, and all was as I wanted it to be, apart maybe from XSane making complaints that seemed not to be of any actual consequence. With Epson's Image Scan! and Simple Scan on the PC, there's no need to be bothered with those messages. Choice is good when you have it, especially when you have expended some effort to get that far.