Technology Tales

Adventures & experiences in contemporary technology

Ensuring that Flatpak remains up to date on Linux Mint 19.2

25th October 2019

The Flatpak concept offers a useful way of getting the latest version of software like LibreOffice or GIMP on Linux machines because distribution repositories are managed conservatively when it comes to the versions of included software. Ubuntu has Snaps, which are similar in concept. Both options bundle dependencies with the packaged software so that it can use later versions of system libraries than may be available with a particular distribution.

However, even Flatpak depends on what is available through the repositories for a distribution, as I found when a software update needed a newer version of the tool than the distribution supplied. The solution was to add a PPA using the following command and agreeing to the prompts that arise (answering Y, in other words):

sudo add-apt-repository ppa:alexlarsson/flatpak

With the new PPA instated, the usual apt commands were used to update the Flatpak package and continue with the required updates. Since then, all has gone smoothly as expected.
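
For reference, the usual apt commands in question would have been along these lines; this is a reconstruction rather than a transcript of exactly what was run:

sudo apt-get update
sudo apt-get upgrade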

Installing Perl modules using CPAN on Linux Mint 19.2

28th September 2019

My online travel photo gallery is a self-coded set of PHP scripts that read data from tables in a MySQL database. These tables are built from input XML files using a Perl script that itself creates and executes an SQL script. The Perl script also does some image processing using GraphicsMagick commands to resize images and to add copyright information and image framing. Because it processed one image at a time sequentially, it was taking several minutes to complete and only partly used the capacity of the PC on which it ran.

This led me to look at adding parallel processing, and that is what brought me to the Parallel::ForkManager Perl module. An alternative approach might have been to add new images in such a way as not to need a full run involving hundreds of image files, but that would take more work and I fancied having a look at parallelising things anyway.

If it was not there already, the first act would have been to install build-essential so that cpan has the compiler and make tools that it needs when building modules. The following command accomplishes this:

sudo apt-get install build-essential

Once that is there, the cpan command needs to be run and some questions answered to get things going. The first question to answer is whether you want setup to be as automated as possible, and the default answer of yes worked for me. The next question regards the approach that cpan takes when installing modules, and I chose sudo here (local::lib is the default value and manual is another option). After this, cpan drops into its own command shell. Here, I issued two more commands to continue the basic setup by updating CPAN.pm to the latest version and adding Bundle::CPAN to optimise the tool further:

make install
install Bundle::CPAN

Completing the last of these may need extra intervention to confirm the suggested default of exit at one point in its operation, and it takes a little time to run. It is after this that Parallel::ForkManager can be installed using the following command:

install Parallel::ForkManager

That completed quickly and the cpan shell was exited using its exit command, leaving the new module available to my scripts. The actual use of this module is something that I hope to describe in another post, so I am ending this one here; the same process is just as applicable to setting up cpan and adding any other Perl CPAN module.
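
As a quick check that the module really is available to scripts, its version can be printed from the command line; the one-liner below is merely a suggestion rather than part of the original setup:

perl -MParallel::ForkManager -e 'print $Parallel::ForkManager::VERSION, "\n";'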

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation being applied to my web presences guided by the results of GTMetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is the added sense of control, so Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot. That avoids the expense of using an SSL certificate provided through my shared hosting provider and a successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system, and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too and should mean that I have fewer calls on technical support, since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first to fix was to make the MySQL instance accept calls from outside the server so that I could migrate data there from elsewhere and I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the following line and then restarted the database service using the command shown after it, which stopped the error 111 messages from appearing:

bind-address = 127.0.0.1
service mysql restart
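
Depending on how the database accounts are set up, a user that is permitted to connect from another host may also be needed; the account and database names below are hypothetical placeholders rather than anything I actually used:

mysql -u root -p -e "CREATE USER 'remote_user'@'%' IDENTIFIED BY 'a-strong-password';"
mysql -u root -p -e "GRANT ALL PRIVILEGES ON example_db.* TO 'remote_user'@'%'; FLUSH PRIVILEGES;"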

After that, things worked as required and I moved onto another matter: uploading the requisite files. That meant installing an FTP server so I chose proftpd since I knew that well from previous tinkering. Once that was in place, file transfer commenced.
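
The installation itself should amount to no more than a single command along these lines:

apt install proftpd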

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also instated some Apache modules like mod_rewrite using the a2enmod command, restarting Apache each time I enabled another module.
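
As an example, enabling mod_rewrite and restarting Apache afterwards looks like this:

a2enmod rewrite
service apache2 restart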

Then, I discovered that Textpattern needed php7.2-xml installed, so the following command was executed to do this:

apt install php7.2-xml

Next, the following line was uncommented in the correct php.ini configuration file, which I found using the same method as described already for the my.cnf configuration, and that was followed by yet another Apache restart:

extension=php_xmlrpc.dll

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on Certbot to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file, and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The main .htaccess only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all these achieve is to redirect insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating along with any forms so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle regarding the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix even if it probably is not necessary:

define('FS_METHOD', 'direct');
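
The ownership change itself is a one-liner; the path below assumes Apache's default web root and the usual www-data account, so both should be adjusted to match the actual installation:

chown -R www-data:www-data /var/www/html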

There also was the non-operation of WP Cron and that was addressed using WP-CLI and a script from Bjorn Johansen. To make double sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP-Cron behaviour:

define('DISABLE_WP_CRON', true);
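
The replacement scheduling then falls to a system cron job that calls WP-CLI; a sketch of such a crontab entry, with a hypothetical path to the WordPress installation, might look like this:

*/5 * * * * cd /var/www/html && wp cron event run --due-now >/dev/null 2>&1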

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigations are needed, but what I have now is faster than what was there before, so the effort has been rewarded already.

Installing Firefox Developer Edition in Linux Mint

22nd April 2018

Having moved beyond the slow response and larger memory footprint of Firefox ESR, I am using Firefox Developer Edition in its place even if it means living without a status bar at the bottom of the window. Hopefully, someone will create an equivalent of the old add-on bar extensions that worked before the release of Firefox Quantum.

Firefox Developer Edition may be pre-release software with some extras for web developers, like being able to drill into an HTML element and see its properties, but I am finding it stable enough for everyday use. It is speedy too, which helps, and it has its own profile, so it can co-exist on the same machine as regular releases of Firefox like its ESR and Quantum variants.

Installation takes a little added effort though and there are various options available. My chosen method involved Ubuntu Make. Installing this involves setting up a new PPA as the first step and the following commands added the software to my system:

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make

With the above completed, it was simple to install Firefox Developer edition using the following command:

umake web firefox-dev

Where things got a bit more complicated was getting entries added to the Cinnamon Menu and Docky. The former was sorted using the cinnamon-menu-editor command but the latter needed some tinkering with my firefox-developer.desktop file found in .local/share/applications/ within my user area to get the right icon shown. Discovering this took me into .gconf/apps/docky-2/Docky/Interface/DockPreferences/%gconf.xml where I found the location of the firefox-developer.desktop that needed changing. Once this was completed, there was nothing else to do from the operating system side.
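
For illustration, a minimal firefox-developer.desktop might look like the following; the Exec and Icon paths are examples based on Ubuntu Make's default install location rather than a copy of my own file:

[Desktop Entry]
Type=Application
Name=Firefox Developer Edition
Exec=/home/[username]/.local/share/umake/web/firefox-dev/firefox %u
Icon=/home/[username]/.local/share/umake/web/firefox-dev/browser/chrome/icons/default/default128.png
Categories=Network;WebBrowser;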

Within Firefox itself, I opted to turn off warnings about password logins on non-HTTPS websites by going to about:config using the address bar, then looking for security.insecure_field_warning.contextual.enabled and changing its value from True to False. Some may decry this, but there are some local websites on my machine that need attention at times. Otherwise, Firefox is installed with user access so I can update it as if it were a Windows or macOS application, and that is useful given that there are frequent new releases. All is going as I want it so far.

Upgrading avahi-dnsconfd on Ubuntu

18th April 2018

This is how I got around a problem that occurred when I was updating a virtualised Ubuntu 16.04 instance that I have. My usual way to do this is using apt-get or apt from the command line, and the process halted because a pre-removal script for the upgrade of avahi-dnsconfd failed. The cause was its not disabling the avahi daemon beforehand, so I needed to execute the following command before repeating the operation:

sudo systemctl disable avahi-daemon
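
For context, the operation being repeated was just the usual upgrade from the command line, along the lines of:

sudo apt-get upgrade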

Once the upgrade had completed, it was time to re-enable the service using the following command:

sudo systemctl enable avahi-daemon

Ideally, this would have completed without such manual intervention, and there is a bug report for the unexpected behaviour. Hopefully, it will be sorted soon, but these steps will fix things for now.

Trying out a new way to upgrade Linux Mint in situ while going from 17.3 to 18.1

19th March 2017

There was a time when the only recommended way to upgrade Linux Mint from one version to another was to do a fresh installation with back-ups of data and a list of the installed applications created from a special tool.

Even so, it never stopped me doing my own style of in situ upgrade though some might see that as a risky option. More often than not, that actually worked without causing major problems in a time when Linux Mint releases were more tightly tied to Ubuntu’s own six-monthly cycle.

In recent years, Linux Mint’s releases have kept in line with Ubuntu’s Long Term Support (LTS) editions instead. That means that any major change comes only every two years with minor releases in between those. The latter are delivered through Linux Mint’s Update Manager so the process is a simple one to implement. Still, upgrades are not forced on you so it is left to your discretion as to when you need to upgrade since all main and interim versions get the same extended level of support. In fact, the recommendation is not to upgrade at all unless something is broken on your own installation.

For a number of reasons, I stuck with that advice by staying on Linux Mint 17.3 on my main machine instead of upgrading to Linux Mint 18. The fact that I broke things on another machine using an older method of upgrading provided even more encouragement.

However, I subsequently discovered another means of upgrading between major versions of Linux Mint that had some endorsement from the project. There still are warnings about testing a live DVD version of Linux Mint on your PC first and backing up your data beforehand. Another task is ensuring that you are upgrading from a fully up-to-date Linux Mint 17.3 installation.

When you are ready, you can install mintupgrade using the following command:

sudo apt-get install mintupgrade

When that is installed, there is a sequence of tasks that you need to do. The first of these is to simulate an upgrade to test for the appearance of untoward messages and resolve them. Repeating the check until all is well is the recommendation. The command is as follows:

mintupgrade check

Once you are happy that the system is ready, the next step is to download the updated packages so they are on your machine ahead of their installation. Only then should you begin the upgrade process. The two commands that you need to execute are below:

mintupgrade download
mintupgrade upgrade

Once these have completed, you can restart your system. In my case, the whole process worked well with only my PHP installation needing attention. A clash between different versions of the scripting interpreter was addressed by removing the older one, since PHP 7 is best kept for the sake of testing. Beyond that, a reinstallation of VMware Player and the move from version 18 to version 18.1, there was hardly anything more to do and there was next to no real disruption. That is just as well since I depend heavily on my main PC these days. The backup option of a full installation would have left me clearing things up for a few days afterwards since I use a bespoke selection of software.

Batch conversion of DNG files to other file types with the Linux command line

8th June 2016

At the time of writing, Google Drive is unable to accept DNG files, the Adobe file type for RAW images from digital cameras. The uploads themselves work fine but the additional processing at the end that I believe is needed for Google Photos appears to be failing. Because of this, I thought of other possibilities like uploading them to Dropbox or enclosing them in ZIP archives instead; of these, it is the first that I have been doing and with nothing but success so far. Another idea is to convert the files into an image format that Google Drive can handle and TIFF came to mind because it keeps all the detail from the original image. In contrast, JPEG files lose some information because of the nature of the compression.

Handily, a one line command does the conversion for all files in a directory once you have all the required software installed:

find -type f | grep -i "DNG" | parallel mogrify -format tiff {}

The find and grep commands are standard, with the first getting you a list of all the files in the current directory and sending (piping) these to the grep command so that the list only retains the names of all DNG files. The last part uses two commands for which I found installation was needed on my Linux Mint machine. The parallel package is the first of these and distributes the heavy workload across all the cores in your processor; this command will add it to your system:

sudo apt-get install parallel

The mogrify command is part of the ImageMagick suite along with others like convert and this is how you add that to your system:

sudo apt-get install imagemagick

In the command at the top, the parallel command works through all the files in the list provided to it and feeds them to mogrify for conversion. Without the use of parallel, the basic command is like this:

mogrify -format tiff *.DNG

In both cases, the -format switch specifies the output file type, with tiff triggering the creation of TIFF files. The *.DNG portion itself captures all DNG files in a directory, while {} does this in the main command at the top of this post. If you wanted JPEG ones, you would replace tiff with jpg. Should you ever need them, a full list of the file types that are supported is produced using the identify command (also part of ImageMagick) as follows:

identify -list format
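
For instance, the JPEG variant of the main command at the top of this post would be as follows:

find -type f | grep -i "DNG" | parallel mogrify -format jpg {}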

ERROR: Can’t find the archive-keyring

10th April 2014

When I recently did my usual system update for the stable version of Ubuntu GNOME, there were some updates pertaining to apt and the process failed when I executed the following command:

sudo apt-get upgrade

Usefully, some messages were issued and here’s a flavour:

Setting up apt (0.9.9.1~ubuntu3.1) …
ERROR: Can’t find the archive-keyring
Is the ubuntu-keyring package installed?
dpkg: error processing apt (--configure):
subprocess installed post-installation script returned error exit status 1
Errors were encountered while processing:
apt
E: Sub-process /usr/bin/dpkg returned an error code (1)

Some searching on the web revealed that the problem was that there were no files in /usr/share/keyrings when there should have been, and I had not removed them myself, so I have no idea how they disappeared. Various remedies were tried, and any that needed software installed were non-starters because apt was disabled by the lack of keyring files. The workaround that restored things for me was to take a copy of the files in /usr/share/keyrings from an Ubuntu GNOME 14.04 installation in a VirtualBox VM and copy them into the same location on its Ubuntu GNOME 13.10 host. For those without such resources, I have packaged them in a zip file below. Other remedies, like Y PPA Manager, also were suggested in what I was reading, but that software package needed installing beforehand, so it was of little use to me when the likes of Synaptic were disabled. If there are other remedies that do not involve an operating system re-installation, I would like to know about them, as well as possible causes for the file loss in the first place and how to avoid these.

Ubuntu Keyrings
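
For anyone attempting the same workaround, the copy itself amounts to placing the recovered files back into the keyring directory; the source path below is just a placeholder for wherever the files end up after extraction from the VM or the zip file:

sudo cp /path/to/recovered-keyrings/*.gpg /usr/share/keyrings/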

Installing Nightingale music player on Ubuntu 13.04

25th June 2013

Ever since the Songbird project concentrated its efforts on supporting only Windows and OS X, the Firefox-based music player has been absent from a Linux user’s world. However, the project is open source and a fork called Nightingale now fulfils the same needs. Intriguingly, it too is available for Windows and OS X users, so I am left wondering why that overlap has happened. That said, Songbird also is available as a web app and as an app on both Android and iOS, while Nightingale sticks to being a desktop application.

To add it to Ubuntu, you need to set up a new repository. That can be done using the Software Centre but issuing a command in a terminal can be so much quicker and cleaner so here it is:

sudo add-apt-repository ppa:nightingaleteam/nightingale-release

Apart from entering your password, there will be a prompt to continue by pressing the return key or to cancel with CTRL + C. For our purposes, it is the first action that's needed and, once that has done the needful, you can execute the following command:

sudo apt-get update && sudo apt-get install nightingale

This is in two parts: the first updates the repositories on your system and the second actually installs the software. When that is complete, you are ready to run Nightingale and, with the repository in place, staying up to date is no chore either. In fact, using the above commands brings another advantage: they should work in any Ubuntu derivative such as Linux Mint.

Using a variant of Debian’s Iceweasel that keeps pace with Firefox

5th February 2013

Left to its own devices, Debian will leave you with an ever-ageing re-branded version of Firefox that was installed at the same time as the rest of the operating system. From what I have found, the main cause of this was Mozilla's wanting to retain control of its branding and trademarks in a manner not in keeping with Debian's Free Software rules. This didn't affect just Firefox but also Thunderbird, Sunbird and Seamonkey, with Debian's equivalents for these being IceDove, IceOwl and IceApe, respectively.

While you can download a tarball of Firefox from the web and use that, it'd be nice to get a variant that updates through Debian's normal apt-get channels. In fact, IceWeasel does get updated whenever there is a new release of Firefox, even if these updates never find their way into the usual repositories. While I have been known to take advantage of the more frozen state of Debian compared with other Linux distributions, I don't mind getting IceWeasel updated so that it isn't a security worry.

The first step in so doing is to add the following lines to /etc/apt/sources.list using root access (using sudo, gksu or su to assume root privileges) since the file normally cannot be edited by normal users:

deb http://backports.debian.org/debian-backports squeeze-backports main
deb http://mozilla.debian.net/ squeeze-backports iceweasel-release

With the file updated and saved, the next step is to update the repositories on your machine using the following command:

sudo apt-get update

With the above complete, it is time to overwrite the existing IceWeasel installation with the latest one, using an apt-get command that specifies the squeeze-backports repository as its source via the -t switch. While IceWeasel is installed from the iceweasel-release squeeze-backports repository, there are dependencies that need to be satisfied, and these come from the main squeeze-backports one. The actual command used is below:

sudo apt-get install -t squeeze-backports iceweasel

While that was all that I needed to do to get IceWeasel 18.0.1 in place, some may need the pkg-mozilla-archive-keyring package installed too. For those needing more information than what's here, there's always the Debian Mozilla team.
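
Should that apply, the package is added in the usual way:

sudo apt-get install pkg-mozilla-archive-keyring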

  • All the views that you find expressed on here in postings and articles are mine alone and not those of any organisation with which I have any association, through work or otherwise. As regards editorial policy, whatever appears here is entirely of my own choice and not that of any other person or organisation.

  • Please note that everything you find here is copyrighted material. The content may be available to read without charge and without advertising but it is not to be reproduced without attribution. As it happens, a number of the images are sourced from stock libraries like iStockPhoto so they certainly are not for abstraction.

  • With regards to any comments left on the site, I expect them to be civil in tone of voice and reserve the right to reject any that are either inappropriate or irrelevant. Comment review is subject to automated processing as well as manual inspection but whatever is said is the sole responsibility of the individual contributor.