When a hard drive is unrecognised by the Linux hddtemp command
15th August 2021

One should not do a new PC build in the middle of a heatwave, not unless you want to be concerned about how fast the fans are spinning and how hot things are getting. Yet, that is what I did last month after putting off the task for many months.
My efforts mean that I have a system built around an AMD Ryzen 9 5950X CPU and a Gigabyte X570 Aorus Pro motherboard with 64 GB of memory, and things are settling down after the initial upheaval. That also meant some adjustments to the CPU fan profile in the BIOS for quieter running, while a Be Quiet! Dark Rock 4 cooler also helps, as does a Be Quiet! Silent Wings 3 case fan. All are components from trusted brands, though I wonder how much abuse they got during their installation and subsequent running in.
Fan noise is as non-quantitative an indicator of heat levels as touch, so more quantitative means are in order. Aside from using a thermocouple device, there are in-built sensors too. Using Linux Mint means that I have the sensors command from the lm-sensors package for checking on CPU and other temperatures, though hddtemp is what you need for checking on the same for hard drives. The latter can be used as follows:
sudo hddtemp /dev/sda /dev/sdb
This has to happen with administrator access, and a list of drives needs to be provided because the command cannot find them by itself. In my case, I have no mechanical hard drives installed in non-NAS systems, and I even got to replace a 6 TB Western Digital Green disk with an 8 TB SSD, but I got the following when I tried checking on things with hddtemp:
WARNING: Drive /dev/sda doesn't seem to have a temperature sensor.
WARNING: This doesn't mean it hasn't got one.
WARNING: If you are sure it has one, please contact me (hddtemp@guzu.net).
WARNING: See --help, --debug and --drivebase options.
/dev/sda: Samsung SSD 870 QVO 8TB: no sensor
The cause of the message for me was that there is no entry for the Samsung SSD 870 QVO 8TB in /etc/hddtemp.db, so one needed to be added there. Before that could be rectified, I had to get some additional information using smartmontools, which had to be installed using the following command:
sudo apt-get install smartmontools
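As an aside, since hddtemp cannot find drives by itself, smartmontools can list candidate devices for you; this is a sketch of mine rather than part of the original routine:

sudo smartctl --scan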
What I had to do was check the drive's SMART data output for extra information, and that was achieved using the following command:
sudo smartctl /dev/sda -a | grep -i Temp
What this does is look for the temperature information in the smartctl output using the grep command, with the output of the first command being passed to the second through a pipe. This yielded the following:
190 Airflow_Temperature_Cel 0x0032 072 050 000 Old_age Always - 28
The first number in the above (190) is the thermal sensor's attribute identifier, and that was needed in what got added to /etc/hddtemp.db. The following command added the necessary data to the aforementioned file:
echo \"Samsung SSD 870 QVO 8TB\" 190 C \"Samsung SSD 870 QVO 8TB\" | sudo tee -a /etc/hddtemp.db
Here, the output of the echo command was passed to the tee command for adding to the end of the file. In the echo command output, the first part is the name of the drive, the second is the heat sensor identifier, the third is the temperature scale (C for Celsius or F for Fahrenheit) and the last part is the label (it can be anything that you like, but I kept it the same as the name). On re-running the hddtemp command, I got output like the following, so all was as I needed it to be.
/dev/sda: Samsung SSD 870 QVO 8TB: 28°C
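Should the new entry ever need checking, inspecting the database file is a simple affair; a quick sketch:

grep "870 QVO" /etc/hddtemp.db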
Since then, temperatures may have cooled and the weather become more like what we usually get, yet I am still keeping an eye on things, especially when the system is put under load using Perl, R, Python or SAS. There may be further modifications such as changing the case or even adding water cooling, not least to have a cooler power supply unit, but nothing is being rushed as I monitor things to my satisfaction.
Changing the UUID of a VirtualBox Virtual Disk Image in Linux
11th July 2021

Recent experimentation centring around getting my hands on a test version of Windows 11 had me duplicating virtual machines and virtual disk images, though VirtualBox is still not ready for the next Windows version; it has no TPM capability at the moment. Nevertheless, I was able to get something usable after a fresh installation, albeit one that removed whatever files were on the disk image. That meant that I needed to mount the old version to get at those files again.
While renaming the disk image file partially helped, what I really needed to do was change the UUID of one of the disk images, so that VirtualBox would not report a collision between two images sharing the same identifier. A command like the following was used to accomplish this:
VBoxManage internalcommands sethduuid [Virtual Disk Image Name].vdi
Because I was doing this on Linux Mint, I could call VBoxManage without needing to tell the system where it was, as would be the case on Windows. Otherwise, it is the sethduuid portion that changes the UUID as required. Another way around this is to clone the VDI file using the following command, though I had not realised that at the time:
VBoxManage clonevdi [old virtual disk image].vdi [new virtual disk image].vdi
It appears that there can be more than one way to do things in VirtualBox at times, so the second method will be kept on file for future reference.
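Incidentally, the UUID of a disk image can be confirmed before and after either operation using another VBoxManage command; this is a sketch reusing the placeholder name from above:

VBoxManage showmediuminfo disk [Virtual Disk Image Name].vdi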
Exploring online training options for computing and data science skills
18th April 2021

Recently, I shared my thoughts on learning new computing languages by oneself using books, online research and personal practice. As successful as that can be, there remains a place for getting some actual instruction as well. Maybe that is why so many turn to YouTube, where there is a multitude of video channels offering such possibilities without cost. What I have also discovered is that this is complemented by a host of other providers whose services attract a fee, and a few of those will be mentioned later in this post. Paying for online courses does mean that you get the benefit of curation and an added assurance of quality in what appears to be a growing market.
The variation in quality can dog the YouTube approach, and it also can be tricky to find something good, even if the platform does suggest new videos based on what you have been watching. Much of what is found there does take the form of webinars from the likes of the Why R? Foundation, Posit or the NHSR Community. These can be useful, and there are shorter videos from such providers as the Association of Computing Machinery or SAS Users. These do help more if you already have some knowledge about the topic area being discussed, so they may not make the best starting points for someone who is starting from scratch.
Of course, working your way through a good book will help, and it is something that I have been known to do, but supplementing this with one or more video courses really adds to the experience and I have done a few of these on LinkedIn. That part of the professional platform came from the acquisition of Lynda.com and the topic areas range from soft skills like time management through to computing skills courses with R, SAS and Python seeing coverage among the data science portfolio. Even O'Reilly has ventured into the area in an expansion from the book publishing activities for which so many of us know the organisation.
The available online instructor community does not stop at the above, since there are others like Degreed, Baeldung, Udacity, Programiz, Udemy, Business Science and Datanovia. Some of these tend towards online education provision that feels more like an online university course, and those are numerous as well, as you will find through KDNuggets. That site earns income from advertising to pay for featured blog posts and newsletters; it also organises regular webinars and was my first port of call when I became curious about the world of data science during the autumn of 2017.
My point of approach into the world of online training has been as a freelance information professional needing to keep up to date with a rapidly changing field. The mix of content that is free of charge and that which attracts a fee is one that can work, with both kinds complementing each other while possessing their own unique advantages and disadvantages. The need to continually expand skills and knowledge never goes away, so it is well worth spending some time working out what you are after, since you need to be sure that any training always adds to your own knowledge and skill level.
Expanding the coding toolkit: Adding R and Python in a changing landscape
10th April 2021

Over the years, I have taught myself a number of computing languages, with some coming in useful for professional work while others came in handy for website development and maintenance. The collection has grown to include HTML, CSS, XML, Perl, PHP and UNIX shell scripting. The ongoing pandemic allowed me to add two more to the repertoire: R and Python.
My interest in these arose from my work as an information professional concerned with standardisation and automation of statistical results delivery. To date, the main focus has been on clinical study data, but ongoing changes in the life sciences sector could mean that I may need to look further afield, so having extra knowledge never hurts. Though I have been a SAS programmer for more than twenty years, its predominance in the clinical research field is not what it was, which causes me to rethink things.
As it happens, I would like to continue working with SAS since it does so much, and thoughts of leaving it after me bring sadness. It also helps to know what the alternatives might be and to temper some management hopes about any newcomers, especially regarding the amount of code being produced and the quality of the graphs being created. Use cases need to be assessed dispassionately, even when emotions loom behind the scenes.
Since both R and Python bring large scripting ecosystems with active communities, the attraction of their adoption makes a good deal of sense. SAS is comparable in the scale of its own ecosystem, though there are considerable differences, and the platform is catching up when it comes to data science. While the aforementioned open-source languages may have had a head start, it appears that others are not standing still either. It is a time to have wider awareness, and online conference attendance helps with that.
The breadth of what is available for any programming language more than stymies any attempt to create a truly all-encompassing starting point, and I have abandoned thoughts of doing anything like that for R. Similarly, I will not even try such a thing for Python. Consequently, my sharing of anything learned will take the form of discrete postings from time to time, especially given how easy it is to collect numerous website links for sharing.
The learning has been facilitated by ongoing pandemic restrictions, though things are opening up a little now. The pandemic also has given us public data that can be used for practice, since much can be gained from having one's own project instead of completing exercises from a book. Having an interesting data set with which to work is a must, and COVID-19 data contain a certain self-interest as well, while one remains mindful of the suffering and loss of life that has been happening since the pandemic first took hold.
Using .htaccess to control hotlinking
10th October 2020

There are times when blogs cease to exist and the only place to find the content is on the Wayback Machine. Even then, it is in danger of being lost completely. One such example is the subject of this post.
Though this website makes use of the facilities of Cloudflare for various functions that include the blocking of image hotlinking, the same outcome can be achieved using .htaccess files on Apache web servers. Nginx does not process .htaccess files at all, so equivalent rules belong in its server configuration instead; a sketch of that appears at the end of this post. Even on Apache, some frown upon the .htaccess approach and prefer changes to the main configuration files. In any case, the lines that need adding to .htaccess are listed below, while the web address needs to include your own domain in place of the dummy example provided:
RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain.com(/)?.*$ [NC]
RewriteRule .*\.(gif|jpe?g|png|bmp)$ - [F,NC]
The first line activates the mod_rewrite engine, which may already have been done elsewhere; for any of this to work, the module must be enabled in your Apache configuration, and you need the permissions required to make such changes. The next two lines examine the HTTP referrer string: requests with an empty referrer are left alone, while those coming from anywhere other than your own web domain fall through to the final rule. To include additional domains, copy the third line and change the web address as needed; any new lines should be placed before the final line, which specifies the file extensions that get blocked for other web addresses.
Another variant of the previous code involves changing the last line to display a default image instead of blocking outright. That may not reduce the bandwidth usage as much, but it does show others what is happening. One extra condition is needed to exclude the replacement image itself, since the rewritten request would otherwise match the rule again and cause a loop:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain.com(/)?.*$ [NC]
RewriteCond %{REQUEST_URI} !^/images/image\.gif$
RewriteRule \.(gif|jpe?g|png|bmp)$ /images/image.gif [L,NC]
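Since Nginx got a mention earlier, here is a minimal sketch of the equivalent arrangement for that server, assuming the same dummy domain and using the valid_referers directive; it goes in the server configuration rather than any .htaccess file:

location ~* \.(gif|jpe?g|png|bmp)$ {
    # Allow empty referrers and requests from the site's own pages
    valid_referers none blocked yourdomain.com www.yourdomain.com;
    # Refuse anything else
    if ($invalid_referer) {
        return 403;
    }
}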
Getting rid of the Windows Resizing message from a Manjaro VirtualBox guest
27th July 2020

Like Fedora, Manjaro installs a package for VirtualBox Guest Additions when you install the Linux distro in a VirtualBox virtual machine. However, it does have certain expectations when doing this. On many systems, and my own is one of these, Linux guests get the VMSVGA virtual graphics controller, while Windows guests are allowed to use the VBoxSVGA one. It is the latter that Manjaro expects, so a message like the following appears when the desktop environment has loaded:
Windows Resizing
Set your VirtualBox Graphics Controller to enable windows resizing
After ensuring that gcc, make, perl and kernel headers are installed, I usually install VirtualBox Guest Additions myself from the included ISO image, and so I did the same with Manjaro. Doing that and restarting the virtual machine got me extra functionality like screen resizing and being able to copy and paste between the VM and elsewhere after choosing the Bidirectional setting in the menus under Devices > Shared Clipboard.
That still left the unwanted message popping up on startup. To get rid of it, I just needed to remove /etc/xdg/autostart/mhwd-vmsvga-alert.desktop. While it could have been deleted, I moved it somewhere else instead, and a restart proved that the message was gone as needed. Now everything is working as I wanted.
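For anyone wanting to do likewise, a move like the following sketch does the job without destroying anything; the backup name is an arbitrary choice of mine:

sudo mv /etc/xdg/autostart/mhwd-vmsvga-alert.desktop ~/mhwd-vmsvga-alert.desktop.bak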
Resolving VirtualBox Shared Folders problems in Fedora 32 through a correct Guest Additions installation
26th July 2020

While some Linux distros like Fedora install VirtualBox drivers at installation time, I prefer to install the VirtualBox Guest Additions themselves. Before doing this, it is best to remove the virtualbox-guest-additions package from Fedora to avoid conflicts.
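A command along these lines should do the removal, though this is a sketch, so it is worth checking the exact package name on your release:

sudo dnf remove virtualbox-guest-additions

After that, execute the following command to ensure that all prerequisites for the VirtualBox Guest Additions are in place before mounting the VirtualBox Guest Additions ISO image and installing from there: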
sudo dnf -y install gcc automake make kernel-headers dkms bzip2 libxcrypt-compat kernel-devel perl
During the installation, you may encounter a message like the following:
ValueError: File context for /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf already defined
This is generated by SELinux, so the following commands need to be executed before repeating the installation of VirtualBox Guest Additions:
sudo semanage fcontext -d /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf
sudo restorecon /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf
Without the above step to fix the preceding error message, I had an issue with the mounting of Shared Folders whereby the mount point was set up, but no folder contents were displayed. This happened even when my user account was added to the vboxsf group, and the SELinux context issue proved to be the cause.
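As a check afterwards, the SELinux label on the offending file can be listed; this sketch reuses the <VERSION> placeholder from above:

ls -Z /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf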
Restoring the menu bar on GNOME Terminal in a GNOME Shell session
25th July 2020

By default, a GNOME Terminal instance does not display a menu bar, and that applies not only in GNOME Shell but also in the Cinnamon desktop environment. In the latter, it is easy enough to display the menu bar using the context menu produced by right-clicking in the window, before going to Edit > Preferences and ticking the box for Show menubar by default in new terminals in the General section. After closing the Preferences dialogue, every new GNOME Terminal session will show the menu bar.
Unfortunately, it is not so easy in GNOME Shell, though the context menu route does allow you to unhide the menu bar on a temporary basis. That is because the requisite tick box is missing from the dialogue box displayed after navigating to Edit > Preferences in the menus. To address this, you need to execute the following command in a terminal session:
gsettings set org.gnome.Terminal.Legacy.Settings headerbar false
This change permanently adds the menu bar and also restores the previously missing tick box, which can be used thereafter as needed. Although GNOME Shell has a minimalist design in some respects, making this function so difficult to access seems excessive.
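Should you want to see the current value before or after making the change, gsettings can report it too; a quick sketch:

gsettings get org.gnome.Terminal.Legacy.Settings headerbar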
Removing obsolete libraries from Flatpak
1st February 2020

Along with various pieces of software, Flatpak also installs the KDE and GNOME libraries needed to support them. However, it does not always remove obsolete versions of those libraries whenever software gets updated. One result is that messages regarding obsolete versions of GNOME may be issued, and this has been known to cause confusion, because there is the GNOME instance that is part of a Linux distro like Ubuntu, while using Flatpak adds another for its software packages to use. My use of Linux Mint may lessen the chances of such a misunderstanding.
Thankfully, executing a single command will remove any obsolete Flatpak libraries so the messages no longer appear and there then is no need to touch your actual Linux installation. This then is the command that sorted it for me:
flatpak uninstall --unused && sudo flatpak repair
The first part, which removes any unused libraries, is run as a normal user, so the absence of sudo there is not an error. Administrative privileges are needed for the second section, which does any repairs that are required. It might be better if Flatpak did all this for you as part of the update command, but that is not how the thing works. At least there is a quick way to address this state of affairs, and there might be some good reasons for having things work as they do.
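For the curious, the runtimes that Flatpak has on the system can be listed as well; a small sketch that is separate from the fix itself:

flatpak list --runtime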
One way to fix slow CyberGhost VPN connections on Windows 10
31st January 2020

Due to a need to access websites with country blocking, I decided to give CyberGhost a go, and it will also come in handy when connecting devices to other Wi-Fi networks. What I got is the three-year subscription package, and all went well on the first day of use. However, things became unusable on the second day, and a reboot did not sort it.
Since the problem seemed to affect a phone running Android too, I even got to suspect my router and broadband provider. Even terminating the subscription came to mind, but it did not come to that. Instead, I did a bit more research and tried changing the maximum transmission unit (MTU) for the connection to 1300, as suggested in a CyberGhost help article. Because changes made through the Control Panel kept resetting to 1500 on my Windows 10 machine, I then turned to a command line-based solution.
To accomplish that, I started PowerShell in administrator mode from the context menu produced by right-clicking on the Start Menu icon on the taskbar. Then, I entered the following command to see what connections I had and what the MTU settings were:
netsh interface ipv4 show subinterfaces
From looking through the Settings and Control Panel applications, I had already worked out which network interface belonged to the CyberGhost connection. Seeing that the MTU setting was 1500, I then issued a command like the following to change it to 1300:
netsh interface ipv4 set subinterface "<name of ethernet interface>" mtu=1300 store=persistent
Here, <name of ethernet interface> gets replaced by the name of your connection and the string is quoted to avoid spaces in the name causing problems with executing the command. Once that second command had been run, the first one was issued again and the output checked to ensure that the MTU setting was as expected.
While this was done when the VPN connection was inactive, it may work also with an active connection. After making the change, I again reconnected to the VPN and all has been as expected since then, and I found a better connection for my Android phone too.
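Given that PowerShell was already open, its native cmdlets offer an alternative to netsh for the same check and change; this is a sketch, with the interface name kept as a placeholder:

# List interfaces with their current MTU values
Get-NetIPInterface | Select-Object InterfaceAlias, NlMtu

# Set the MTU for the named interface to 1300
Set-NetIPInterface -InterfaceAlias "<name of ethernet interface>" -NlMtuBytes 1300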