Technology Tales

Adventures & experiences in contemporary technology

Contents not displaying for Shared Folders on a Fedora 32 guest instance in VirtualBox

26th July 2020

While some Linux distros like Fedora install VirtualBox drivers at installation time, I prefer to install the VirtualBox Guest Additions themselves. Before doing this, it is best to remove the virtualbox-guest-additions package from Fedora to avoid conflicts. After that, execute the following command to ensure that all prerequisites for the VirtualBox Guest Additions are in place prior to mounting the VirtualBox Guest Additions ISO image and installing from there:

sudo dnf -y install gcc automake make kernel-headers dkms bzip2 libxcrypt-compat kernel-devel perl
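With the prerequisites in place, the remaining steps go something like this once the Guest Additions CD image has been inserted using the Devices menu in VirtualBox; the mount point below is merely my own choice rather than anything required:

sudo mkdir -p /mnt/vboxga
sudo mount /dev/cdrom /mnt/vboxga
sudo sh /mnt/vboxga/VBoxLinuxAdditions.run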

During the installation, you may encounter a message like the following:

ValueError: File context for /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf already defined

This is generated by SELinux so the following commands need executing before the VirtualBox Guest Additions installation is repeated:

sudo semanage fcontext -d /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf
sudo restorecon /opt/VBoxGuestAdditions-<VERSION>/other/mount.vboxsf

Without the above step to fix the preceding error message, I had an issue with the mounting of Shared Folders whereby the mount point was set up but no folder contents were displayed. This happened even though my user account had been added to the vboxsf group, and the SELinux context issue proved to be the cause.
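For completeness, adding an account to the vboxsf group and checking a share by mounting it manually can be done along the following lines; the share name and mount point here are examples of my own rather than anything fixed:

sudo usermod -aG vboxsf $USER
sudo mkdir -p /mnt/share
sudo mount -t vboxsf -o uid=$(id -u),gid=$(id -g) SHARE_NAME /mnt/share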

Installing Perl modules using CPAN on Linux Mint 19.2

28th September 2019

My online travel photo gallery is a self-coded set of PHP scripts that read data from tables in a MySQL database. These tables are built from input XML files using a Perl script that itself creates and executes an SQL script. The Perl script also does some image processing using GraphicsMagick commands to resize images and to add copyright information and image framing. Because this processed one image at a time sequentially, it was taking several minutes to complete and only partly used the capacity of the PC that I used.

This led me to look at adding parallel processing, and that is what brought me to the Parallel::ForkManager Perl module. An alternative approach might have been to add new images in such a way as not to need the full run involving hundreds of image files, but that will take more work and I fancied having a look at parallelising things anyway.

If it was not there already, the first step would have been to install build-essential so that the compilers and make tool needed by the cpan command to build modules are in place. The following command accomplishes this:

sudo apt-get install build-essential

Once that is there, the cpan command needs to be run and some questions answered to get things going. The first question to answer is whether you want setup to be as automated as possible and the default answer of yes worked for me. The next question to answer regards the approach that cpan takes when installing modules and I chose sudo here (local::lib is the default value and manual is another option). After this, cpan drops into its own command shell. Here, I issued two more commands to continue the basic setup by updating CPAN.pm to the latest version and adding Bundle::CPAN to optimise the module further:

install CPAN
install Bundle::CPAN

Continuing the last of these may need extra intervention to confirm the suggested default of exit at one point in its operation, and it takes a little time to complete. It is after this that Parallel::ForkManager can be installed using the following command:

install Parallel::ForkManager

That completed quickly and the cpan shell was exited using its exit command, after which the new module was available for scripting. The actual use of this module is something that I hope to describe in another post, so I am ending this one here; the same process is just as applicable to setting up cpan and adding any other Perl CPAN module.

Getting Eclipse to start without incompatibility errors on Linux Mint 19.1

12th June 2019

Recent curiosity about Java programming and Groovy scripting got me trying to start up the Eclipse IDE that I had installed on my main machine. What I got instead of a successful application startup was a message that included the following:

!MESSAGE Exception launching the Eclipse Platform:
!STACK
java.lang.ClassNotFoundException: org.eclipse.core.runtime.adaptor.EclipseStarter
at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:466)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:566)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:499)
at org.eclipse.equinox.launcher.Main.invokeFramework(Main.java:626)
at org.eclipse.equinox.launcher.Main.basicRun(Main.java:584)
at org.eclipse.equinox.launcher.Main.run(Main.java:1438)
at org.eclipse.equinox.launcher.Main.main(Main.java:1414)

The cause was a mismatch between Eclipse and the installed version of Java that it needed in order to run. After all, the software itself is written in the Java language and the version installed from the usual software repositories was too old to work with Java 11. The solution turned out to be installing a newer version as a Snap (Ubuntu’s answer to Flatpak). The following command did the needful since Snapd already was running on my machine:

sudo snap install eclipse --classic

The only part of the command that warrants extra comment is the --classic switch, since that is needed for a tool like Eclipse that has to access the host file system. On execution, the software was downloaded from Snapcraft and then installed within its own bundle of dependencies. The latter adds a certain detachment from the underlying Linux installation and ensures that no messages appear because of incompatibilities like the one near the start of this post.
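Should a quick check be needed before or after the switch, the following commands confirm the Java release that is in use and start the newly installed IDE; they are offered only as a sanity check rather than part of the fix:

java -version
snap run eclipse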

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation being applied to my web presences guided by the results of GTMetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is the added sense of control, so Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot. That avoids the expense of using an SSL certificate provided through my shared hosting provider and a successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too and should mean that I have fewer calls on technical support since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first was making the MySQL instance accept calls from outside the server so that I could migrate data there from elsewhere; I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the following line that it contained and restarted the database service afterwards using another command to stop the appearance of any error 111 messages:

bind-address = 127.0.0.1
service mysql restart
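Opening up the bind address is only part of the job because MySQL also needs an account that is allowed to connect from another host. Something along the lines of the following may be required as well; the user name, database name and password here are placeholders of my own invention:

mysql -u root -p -e "CREATE USER 'webuser'@'%' IDENTIFIED BY 'CHANGE_ME';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON websitedb.* TO 'webuser'@'%'; FLUSH PRIVILEGES;"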

After that, things worked as required and I moved onto another matter: uploading the requisite files. That meant installing an FTP server so I chose proftpd since I knew that well from previous tinkering. Once that was in place, file transfer commenced.

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also instated some Apache modules like mod-rewrite using the a2enmod command, restarting Apache each time I enabled another module.
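In case it is useful, the pair of steps for mod_rewrite looks like this:

sudo a2enmod rewrite
sudo service apache2 restart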

Then, I discovered that Textpattern needed php7.2-xml installed, so the following command was executed to do this:

apt install php7.2-xml

Then, the following line was uncommented in the correct php.ini configuration file, which I found using the same method as that described already for the my.cnf configuration, and that was followed by yet another Apache restart:

extension=php_xmlrpc.dll

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on Certbot to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The general .htaccess only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all these achieve is to redirect insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating along with any forms so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle regarding the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix even if it probably is not necessary:

define('FS_METHOD', 'direct');
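The ownership change itself amounts to something like the following; the document root and the www-data account are assumptions based on a standard Apache setup on Ubuntu rather than details from my own configuration:

sudo chown -R www-data:www-data /var/www/html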

There also was the non-operation of WP Cron and that was addressed using WP-CLI and a script from Bjorn Johansen. To make double sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP-Cron behaviour:

define('DISABLE_WP_CRON', true);
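The replacement for WP-Cron can then be a crontab entry that calls WP-CLI, roughly along these lines; the path and the five-minute interval are assumptions rather than anything prescribed by the script that I followed:

*/5 * * * * cd /var/www/html && wp cron event run --due-now >/dev/null 2>&1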

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigations are needed, but what I have now is faster than what was there before, so the effort has been rewarded already.
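For the record, those steps amounted to something like the following; the host name used here is a placeholder of my own:

sudo apt install sendmail
sudo hostnamectl set-hostname vps.example.com
sudo reboot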

Quickly changing between virtual desktops in Windows 10

12th October 2018

One of the benefits of running Linux is the availability of virtual desktops, and installing VirtuaWin was the only way to get the same functionality in Windows before the launch of Windows 10. For reasons known to Microsoft, they decided against the same sort of implementation as seen in Linux or UNIX. Instead, they put the virtual desktop functionality a click away and rather hide it from most users unless they know what clicking on the Task View button allows. The approach also made switching between desktops slower with a mouse. However, there are keyboard shortcuts that address this once multiple virtual desktops exist.

Using WIN+CTRL+LEFT or WIN+CTRL+RIGHT does this easily once you have mastered the action. Depending on your keyboard setup, WIN is the Windows, Super or Command key while CTRL is the Control key. Then, LEFT is the left arrow key and RIGHT is the right arrow key. For machines with smaller screens where multi-tasking causes clutter, virtual desktops are a godsend for organising how you work and having quick key combinations for switching between them adds to their utility.

Installing Firefox Developer Edition in Linux Mint

22nd April 2018

Having moved beyond the slow response and larger memory footprint of Firefox ESR, I am using Firefox Developer Edition in its place even if it means living without a status bar at the bottom of the window. Hopefully, someone will create an equivalent of the old add-on bar extensions that worked before the release of Firefox Quantum.

Firefox Developer Edition may be pre-release software with some extras for web developers, like being able to drill into an HTML element and see its properties, but I am finding it stable enough for everyday use. It is speedy too, which helps, and it has its own profile so it can co-exist on the same machine as regular releases of Firefox like its ESR and Quantum variants.

Installation takes a little added effort though and there are various options available. My chosen method involved Ubuntu Make. Installing this involves setting up a new PPA as the first step and the following commands added the software to my system:

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make

With the above completed, it was simple to install Firefox Developer edition using the following command:

umake web firefox-dev

Where things got a bit more complicated was getting entries added to the Cinnamon Menu and Docky. The former was sorted using the cinnamon-menu-editor command but the latter needed some tinkering with my firefox-developer.desktop file found in .local/share/applications/ within my user area to get the right icon shown. Discovering this took me into .gconf/apps/docky-2/Docky/Interface/DockPreferences/%gconf.xml where I found the location of the firefox-developer.desktop that needed changing. Once this was completed, there was nothing else to do from the operating system side.
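For anyone wanting a starting point, a pared-down firefox-developer.desktop can look like the following; the paths are assumptions based on where Ubuntu Make tends to place things within a user area, so they need checking against an actual installation:

[Desktop Entry]
Name=Firefox Developer Edition
Exec=/home/username/.local/share/umake/web/firefox-dev/firefox %u
Icon=/home/username/.local/share/umake/web/firefox-dev/browser/chrome/icons/default/default128.png
Type=Application
Categories=Network;WebBrowser;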

Within Firefox itself, I opted to turn off warnings about password logins on non-https websites by going to about:config using the address bar, then looking for security.insecure_field_warning.contextual.enabled and changing its value from True to False. Some may decry this but there are some local websites on my machine that need attention at times. Otherwise, Firefox is installed with user access so I can update it as if it were a Windows or MacOS application and that is useful given that there are frequent new releases. All is going as I want it so far.

A look at Google’s Pixel C

26th December 2016

Since my last thoughts on trips away without a laptop, I have come by Google’s Pixel C. It is a 10″ tablet so it may not raise hackles on an aircraft like the 12.9″ screen of the large Apple iPad Pro might. The one that I have tried comes with 64 GB of storage space and its companion keyboard cover (there is a folio version). Together, they can be bought for £448, a saving of £150 on the full price.

Google Pixel C

The Pixel C keyboard cover uses strong magnets to hold the tablet onto it, and that does mean some extra effort when changing between the various modes. These include covering the tablet screen as well as piggybacking onto it with the screen side showing, or attaching it in such a way that allows typing. The latter usefully allows you to vary the screen angle as you see fit instead of having to stick with whatever is selected for you by a manufacturer. Unlike the physical connection offered by an iPad Pro, Bluetooth is the means offered by the Pixel C and it works just as well from my experiences so far. Because of the smaller size, the keyboard feels a little cramped in comparison with a full-size one or even the one that comes with a 12.9″ iPad Pro. The keys also are of the Scrabble-tile variety, though they work well otherwise.

The tablet itself is impressively fast compared to a HTC One A9 phone or even a Google Nexus 9 and that became very clear when it came to installing or updating apps. The speed is just as well since an upgrade to Android 7 (Nougat) was needed on the one that I tried. You can turn on adaptive brightness too, which is a bonus. Audio quality is nowhere near as good as a 12.9″ iPad Pro but that of the screen easily is good enough for assessing photos stored on a WD My Passport Wireless portable hard drive using the WD My Cloud app.

All in all, it may offer that bit more flexibility for overseas trips compared to the bigger iPad Pro, so I am tempted to bring one with me instead. The possibility of seeing newly captured photos in slideshow mode is a big selling point, and it functions well for tasks like writing emails or blog posts; this post itself started life on there. Otherwise, this is a well made device.

Restoring GRUB for dual booting of Linux and Windows

11th April 2015

Once you end up with Windows overwriting your master boot record (MBR), you have lost the ability to use GRUB, so it would be handy to get it back if you want to start up Linux again. Though the loss of GRUB from the MBR was a deliberate act of mine, I knew that I’d have to restore GRUB to get Linux working again. So, I have been addressing the situation with a Live DVD for the likes of Ubuntu or Linux Mint. Once one of those had loaded its copy of the distribution, issuing the following command in a terminal session gets things back again:

sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda

When there were error messages, I tried this one to see if I could get more information:

sudo grub-install --root-directory=/media/0d104aff-ec8c-44c8-b811-92b993823444 /dev/sda --recheck

Also, it is possible to mount a partition on the boot drive and use that in the command to restore GRUB. Here is the required combination:

sudo mount /dev/sda1 /mnt
sudo grub-install --root-directory=/mnt /dev/sda

Either of these will get GRUB working without a hitch, and they are far snappier than downloading Boot-Repair and using that; I was doing the latter for a while until a feature on triple booting appeared in an issue of Linux User & Developer that reminded me of the more readily available option. Once, there was a need to manually add an entry for Windows 7 to the GRUB menu too and, with that instated, I was able to dual-boot Ubuntu and Windows using GRUB to select which one was to start for me. Since then, I have been able to dual boot Linux Mint and Windows 8.1 with GRUB finding the latter all by itself, so your experience may show this variation too; it is worth bearing in mind.
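For reference, a manual Windows entry can go into /etc/grub.d/40_custom like the sketch below; the partition used is an assumption for a BIOS machine with Windows on the first partition of the first disk, so it needs adjusting to suit:

menuentry "Windows 7" {
    insmod ntfs
    set root=(hd0,1)
    chainloader +1
}

Saving that file and running sudo update-grub afterwards regenerates the menu with the new entry included.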

ERROR: Can’t find the archive-keyring

10th April 2014

When I recently did my usual system update for the stable version of Ubuntu GNOME, there were some updates pertaining to apt and the process failed when I executed the following command:

sudo apt-get upgrade

Usefully, some messages were issued and here’s a flavour:

Setting up apt (0.9.9.1~ubuntu3.1) …
ERROR: Can’t find the archive-keyring
Is the ubuntu-keyring package installed?
dpkg: error processing apt (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
apt
E: Sub-process /usr/bin/dpkg returned an error code (1)

Some searching on the web revealed that the problem was that there were no files in /usr/share/keyrings when there should have been; I had not removed them myself, so I have no idea how they disappeared. Various remedies were tried, and any that needed software installed were non-starters because apt was disabled by the lack of keyring files. The workaround that restored things for me was to take a copy of the files in /usr/share/keyrings from an Ubuntu GNOME 14.04 installation in a VirtualBox VM and copy them into the same location in its Ubuntu GNOME 13.10 host. For those without such resources, I have packaged them in a zip file below. Other remedies like Y PPA also were suggested where I was reading, but that software package needed installing beforehand, so it was of little use to me when the likes of Synaptic were disabled. If there are other remedies that do not involve an operating system re-installation, I would like to know about them, as well as possible causes for the file loss in the first place and how to avoid these.

Ubuntu Keyrings
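For what it is worth, the manual restoration amounted to something like the following, assuming that the donor files already have been copied out of the virtual machine into a folder such as ~/keyring-backup (a name of my own choosing); the second command re-runs the failed configuration step once the files are back in place:

sudo cp ~/keyring-backup/*.gpg /usr/share/keyrings/
sudo dpkg --configure -a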

A fallback method of installing Nightingale in Linux

3rd December 2013

When I upgraded to Ubuntu GNOME 13.10 and went for the 64-bit variant, I tried a previously tried and tested approach for installing Nightingale that used a PPA only for it not to work. At that point, the repository had not caught up with the latest Ubuntu release (it has by the time of writing) and other pre-compiled packages would not work either. However, there was one further possibility left and that was downloading a copy of the source code and compiling that. My previous experiences of doing that kind of thing have not been universally positive so it was not my first choice but I gave it a go anyway.

In order to get the source code, I first needed to install Git so I could take a copy from the version controlled repository, and the following command added the tool along with the other build prerequisites:

sudo apt-get install git autoconf g++ libgtk2.0-dev libdbus-glib-1-dev libtag1-dev libgstreamer-plugins-base0.10-dev zip unzip

With that lot installed, it was time to checkout a copy of the latest source code and I went with the following:

git clone https://github.com/nightingale-media-player/nightingale-hacking.git

The next step was to go into the nightingale-hacking sub-folder and issue the following command:

./build.sh

That should produce a sub-directory named nightingale that contains the compiled executable files. If this exists, it can be copied into /opt. If not, then create a folder named nightingale under /opt and copy the files from ~/nightingale-hacking/compiled/dist into that location. Because Ubuntu GNOME 13.10 comes with GNOME Shell 3.8, the next step took a little fiddling before it was sorted: adding an icon to the application menu or dashboard. This involved adding a file called nightingale.desktop in /usr/share/applications/ with the following contents:

[Desktop Entry]
Name=Nightingale
Comment=Play music
TryExec=/opt/nightingale/nightingale
Exec=/opt/nightingale/nightingale
Icon=/usr/share/pixmaps/nightingale.xpm
Type=Application
X-GNOME-DocPath=nightingale/index.html
X-GNOME-Bugzilla-Bugzilla=Nightingale
X-GNOME-Bugzilla-Product=nightingale
X-GNOME-Bugzilla-Component=BugBuddyBugs
X-GNOME-Bugzilla-Version=1.1.2
Categories=GNOME;Audio;Music;Player;AudioVideo;
StartupNotify=true
OnlyShowIn=GNOME;Unity;
Keywords=Run;
Actions=New
X-Ubuntu-Gettext-Domain=nightingale

[Desktop Action New]
Name=Nightingale
Exec=/opt/nightingale/nightingale
OnlyShowIn=Unity

It was created from a copy of another *.desktop file, and the categories in there together with the link to the icon were as important as the title, taking a little tinkering before all was in place. Also, you may find that /opt/nightingale/chrome/icons/default/default.xpm needs to become /usr/share/pixmaps/nightingale.xpm using the cp command before your new menu entry gains an icon to go with it. While the steps that I describe here worked for me, there is more information on the Nightingale wiki if you need it.
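For the record, the copying described above can be done like this, assuming that the build output really did land in ~/nightingale-hacking/compiled/dist:

sudo mkdir -p /opt/nightingale
sudo cp -r ~/nightingale-hacking/compiled/dist/* /opt/nightingale/
sudo cp /opt/nightingale/chrome/icons/default/default.xpm /usr/share/pixmaps/nightingale.xpm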
