Technology Tales

Adventures & experiences in contemporary technology

Using .htaccess to control hotlinking

10th October 2020

There are times when blogs cease to exist and the only place to find the content is on the Wayback Machine. Even then, it is in danger of being lost completely. One such example is the subject of this post.

Though this website makes use of the facilities of Cloudflare for various functions, including the blocking of image hotlinking, the same outcome can be achieved using .htaccess files on Apache web servers. Nginx does not process .htaccess files at all, so the equivalent there needs to go into its own configuration instead; a sketch of that appears at the end of this post. In any case, the lines that need adding to .htaccess are listed below, though the web address needs to include your own domain in place of the dummy example provided:

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain\.com(/)?.*$ [NC]
RewriteRule \.(gif|jpe?g|png|bmp)$ - [F,NC]

The first line turns on the mod_rewrite engine, something that may be in place already. Of course, the module needs enabling in your Apache configuration for this to work, and overrides from .htaccess files must be permitted as well, which means changing the Apache configuration files. The next pair of lines examines the HTTP referer string: the first of them lets through requests with no referer at all, while the second allows images to be served only when the request comes from your own web domain and not others. To allow more domains, copy that second condition and change the web address accordingly. Any new lines need to precede the last line, which defines the file extensions that are blocked for other web addresses.

RewriteEngine on
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^http://(www\.)?yourdomain\.com(/)?.*$ [NC]
RewriteCond %{REQUEST_URI} !^/images/image\.gif$
RewriteRule \.(gif|jpe?g|png|bmp)$ /images/image.gif [L,NC]

Another variant of the previous code involves changing the last line to display a default image showing others what is happening. That may not reduce bandwidth usage as much as complete blocking, but it has the advantage of telling the offending site's visitors what has happened. The extra RewriteCond line excludes the replacement image itself from the rule; without it, the rewrite would loop endlessly.
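As promised above, here is a minimal sketch of the Nginx equivalent using the valid_referers directive; the dummy domain again stands in for your own and the block would go inside the relevant server configuration:

location ~* \.(gif|jpe?g|png|bmp)$ {
    valid_referers none blocked yourdomain.com www.yourdomain.com;
    if ($invalid_referer) {
        return 403;
    }
}

Here, none permits requests with no referer at all, while blocked caters for referers stripped by firewalls or proxies, mirroring the behaviour of the Apache rules above.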

One way to fix slow CyberGhost VPN connections on Windows 10

31st January 2020

Due to a need to access websites with country blocking, I have decided to give CyberGhost a go, and it also will come in handy when connecting devices to other Wi-Fi networks. What I have got is the three-year subscription package and all went well on the first day of use. However, things became unusable on the second, and a reboot did not sort it.

The problem seemed to affect a phone running Android too, and I even got to suspecting my router and broadband provider. Even terminating the subscription came to mind, but it did not come to that. Instead, I did a bit more research and tried changing the maximum transmission unit (MTU) for the connection to 1300, as suggested in a CyberGhost help article. Making the change through the Control Panel did not stick, with the value resetting to 1500 on my Windows 10 machine, so I turned to a command line based solution.

To do that, I started PowerShell in administrator mode from the context menu produced by right clicking on the Start Menu icon on the taskbar. Then, I entered the following command to see what connections I had and what the MTU settings were:

netsh interface ipv4 show subinterfaces

From looking through the Settings and Control Panel applications, I already had worked out what network interface belonged to the CyberGhost connection. Seeing that the MTU setting was 1500, I then issued a command like the following to change that to 1300.

netsh interface ipv4 set subinterface "<name of ethernet interface>" mtu=1300 store=persistent

Here, <name of ethernet interface> gets replaced by the name of your connection, and the string is quoted to avoid spaces in the name causing problems with executing the command. Once that second command had been run, the first one was issued again and the output checked to ensure that the MTU setting was as expected.
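As a worked example, if the interface were listed as Ethernet 2 (an illustrative name rather than anything from my own system), the command would read:

netsh interface ipv4 set subinterface "Ethernet 2" mtu=1300 store=persistent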

This was done while the VPN connection was inactive, but it may also work with an active one. After making the change, I again reconnected to the VPN and all has been as expected since then, with the Android phone gaining a better connection too.

Adding a new domain or subdomain to an SSL certificate using Certbot

11th June 2019

On checking the Site Health page of a WordPress blog, I saw errors that pointed to a problem with its SSL set up. The www subdomain was not included in the site’s certificate and was causing PHP errors as a result, though they had no major effect on what visitors saw. Still, it was best to get rid of them, which meant updating the certificate. Execution of a command like the following did the job:

sudo certbot --expand -d example.com,www.example.com

Using a Let’s Encrypt certificate meant that I could use the certbot command, since that already was installed on the server. The --expand and -d switches ensured that the listed domains were added to the certificate to sort out the observed problem. In the above, dummy domain names are used; these were replaced by the real ones to produce the desired effect and make things as they should have been.
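Afterwards, the certificates under Certbot’s management can be inspected to confirm that the new name is included; the following command lists them along with their domains and expiry dates:

sudo certbot certificates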

Running cron jobs using the www-data system account

22nd December 2018

When you set up your own web server or use a private server (virtual or physical), you will find that web servers run using the www-data account. That means that website files need to be accessible to that system account if not owned by it. The latter is mandatory if you want WordPress to be able to update itself without needing FTP details.

It also means that you probably need scheduled jobs to be executed using the privileges possessed by the www-data account. For instance, I use WP-CLI to automate spam removal and updates to plugins, themes and WordPress itself. Spam removal can be done without the www-data account, but the updates need file access and cannot be completed without it. Therefore, I got interested in setting up cron jobs to run under that account and the following command helps to address this:

sudo -u www-data crontab -e

For that to work, your own account needs to be listed in /etc/sudoers or be assigned to the sudo group in /etc/group. If either applies, then entering your own password will open the cron file for www-data and it can be edited as for any other account. Saving and closing the session will update cron with the new job details.
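As an illustration, a nightly plugin update using WP-CLI could be scheduled with an entry like the one below; the WP-CLI location and the WordPress installation path are illustrative assumptions rather than anything prescriptive:

0 3 * * * /usr/local/bin/wp --path=/var/www/html plugin update --all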

In fact, the same approach can be taken for a variety of commands where files can only be accessed using www-data. This includes copying, pasting and deleting files, as well as executing WP-CLI commands. The latter issues a striking message if you run a command using the root account, a pervasive temptation given what it allows. Any alternative to that has to be better from a security standpoint.

Moving a website from shared hosting to a virtual private server

24th November 2018

This year has seen some optimisation being applied to my web presences guided by the results of GTMetrix scans. It was then that I realised how slow things were, so server loads were reduced. Anything that slowed response times, such as WordPress plugins, got removed. Usage of Matomo also was curtailed in favour of Google Analytics while HTML, CSS and JS minification followed. What had yet to happen was a search for a faster server. Now, another website has been moved onto a virtual private server (VPS) to see how that would go.

Speed was not the only consideration since security was a factor too. After all, a VPS is more locked away from other users than a folder on a shared server. There also is the added sense of control, so Let’s Encrypt SSL certificates can be added using the Electronic Frontier Foundation’s Certbot. That avoids the expense of using an SSL certificate provided through my shared hosting provider and a successful transition for my travel website may mean that this one undergoes the same move.

For the VPS, I chose Ubuntu 18.04 as its operating system and it came with the LAMP stack already in place. Having offline development websites, the mix of Apache, MySQL and PHP is more familiar to me than anything using Nginx or Python. It also means that .htaccess files become more useful than they were on my previous Nginx-based platform. Having full access to the operating system by means of SSH helps too and should mean that I have fewer calls on technical support, since I can do more for myself. Any extra tinkering should not affect others either, since this type of setup is well known to me and having an offline counterpart means that anything riskier is tried there beforehand.

Naturally, there were niggles to overcome with the move. The first was making the MySQL instance accept calls from outside the server so that I could migrate data there from elsewhere, and I even got my shared hosting setup to start using the new database to see what performance boost it might give. To make all this happen, I first found the location of the relevant my.cnf configuration file using the following command:

find / -name my.cnf

Once I had the right file, I commented out the bind-address line shown below and afterwards restarted the database service with the second command to stop the appearance of any error 111 (connection refused) messages:

bind-address = 127.0.0.1
service mysql restart

After that, things worked as required and I moved onto another matter: uploading the requisite files. That meant installing an FTP server so I chose proftpd since I knew that well from previous tinkering. Once that was in place, file transfer commenced.

When that was done, I could do some testing to see if I had an active web server that loaded the website. Along the way, I also enabled some Apache modules like mod_rewrite using the a2enmod command, restarting Apache each time I enabled another module.
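For mod_rewrite, that amounted to a pair of commands like the following:

sudo a2enmod rewrite
sudo service apache2 restart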

Then, I discovered that Textpattern needed php7.2-xml installed, so the following command was executed to do this:

sudo apt install php7.2-xml

Then, the following line was uncommented in the correct php.ini configuration file, found using the same method as that described already for my.cnf, and that was followed by yet another Apache restart. The line appears here in its Linux form; php_xmlrpc.dll is the name used in Windows builds:

extension=xmlrpc.so

Addressing the above issues yielded enough success for me to change the IP address in my Cloudflare dashboard so it pointed at the VPS and not the shared server. The changeover happened seamlessly without having to await DNS updates as once would have been the case. It had the added advantage of making both WordPress and Textpattern work fully.

With everything working to my satisfaction, I then followed the instructions on Certbot to set up my new Let’s Encrypt SSL certificate. Aside from a tweak to a configuration file and another Apache restart, the process was more automated than I had expected so I was ready to embark on some fine-tuning to embed the new security arrangements. That meant updating .htaccess files and Textpattern has its own, so the following addition was needed there:

RewriteCond %{HTTPS} !=on
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

This complemented what was already in the main .htaccess file and WordPress allows you to include http(s) in the address it uses, so that was another task completed. The general .htaccess only needed the following lines to be added:

RewriteCond %{SERVER_PORT} 80
RewriteRule ^(.*)$ https://www.assortedexplorations.com/$1 [R,L]

What all these achieve is to redirect insecure connections to secure ones for every visitor to the website. After that, internal hyperlinks without https needed updating along with any forms so that a padlock sign could be shown for all pages.

With the main work completed, it was time to sort out a lingering niggle regarding the appearance of an FTP login page every time a WordPress installation or update was requested. The main solution was to make the web server account the owner of the files and directories, but the following line was added to wp-config.php as part of the fix even if it probably is not necessary:

define('FS_METHOD', 'direct');

There also was the non-operation of WP Cron and that was addressed using WP-CLI and a script from Bjorn Johansen. To make double sure of its effectiveness, the following was added to wp-config.php to turn off the usual WP-Cron behaviour:

define('DISABLE_WP_CRON', true);
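For reference, the replacement scheduled job can be as simple as a WP-CLI invocation from cron; the five-minute interval and the paths below are illustrative assumptions rather than the exact contents of the script mentioned above:

*/5 * * * * /usr/local/bin/wp --path=/var/www/html cron event run --due-now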

Intriguingly, WP-CLI offers a long list of possible commands that are worth investigating. A few have been examined but more await attention.

Before tackling those, I still need to get my new VPS to send emails. So far, sendmail has been installed, the hostname changed from localhost and the server restarted. More investigations are needed, but what I have now is faster than what was there before, so the effort has been rewarded already.

Improving a website contact form

23rd April 2018

On another website, I have had a contact form, but it was missing some functionality. For instance, it stored the input in files on a web server instead of emailing it. That was fixed more easily than expected using the PHP mail function. Even so, it remains useful to survey the corresponding documentation on the w3schools website.
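To give a flavour of the fix, here is a minimal sketch using the mail function; the field names, addresses and subject line are all illustrative assumptions rather than what the actual form uses:

<?php
// Collect and sanitise the form input; the field names are assumptions.
$name = filter_input(INPUT_POST, 'name', FILTER_SANITIZE_FULL_SPECIAL_CHARS);
$email = filter_input(INPUT_POST, 'email', FILTER_VALIDATE_EMAIL);
$message = filter_input(INPUT_POST, 'message', FILTER_SANITIZE_FULL_SPECIAL_CHARS);

// Only send the email when every field has passed the checks above.
if ($name && $email && $message) {
    $headers = "From: webmaster@example.com\r\nReply-To: " . $email;
    mail('recipient@example.com', 'Contact form message from ' . $name, $message, $headers);
}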

The other changes affected the way the form looked to a visitor. There was a reset button and that was removed on finding that such things are out of favour these days. Thinking again, there hardly was any need for it anyway.

Newer additions that came with HTML5 had their place too. Including user hints using the placeholder attribute should add some user friendliness, although I have avoided experimenting with browser-powered input validation for now. The required attribute has its uses for telling a visitor that they have forgotten something, but I need to check how that is handled in CSS more thoroughly before I go with it, since there are new :required, :optional, :valid and :invalid pseudo-classes that can be used to help.
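By way of illustration, both attributes can sit on a single field like this, with the placeholder text being an arbitrary example:

<input type="email" name="email" placeholder="you@example.com" required>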

It seems that there is much more to learn about setting up forms since I last checked. This is perhaps a hint that a few books need reading as part of catching up with how things are done these days. There also is something new to learn.

Installing Firefox Developer Edition in Linux Mint

22nd April 2018

Having moved beyond the slow response and larger memory footprint of Firefox ESR, I am using Firefox Developer Edition in its place even if it means living without a status bar at the bottom of the window. Hopefully, someone will create an equivalent of the old add-on bar extensions that worked before the release of Firefox Quantum.

Firefox Developer Edition may be pre-release software with some extras for web developers, like being able to drill into an HTML element and see its properties, but I am finding it stable enough for everyday use. It is speedy too, which helps, and it has its own profile so it can co-exist on the same machine as regular releases of Firefox like its ESR and Quantum variants.

Installation takes a little added effort though and there are various options available. My chosen method involved Ubuntu Make. Installing this involves setting up a new PPA as the first step and the following commands added the software to my system:

sudo add-apt-repository ppa:ubuntu-desktop/ubuntu-make
sudo apt-get update
sudo apt-get install ubuntu-make

With the above completed, it was simple to install Firefox Developer Edition using the following command:

umake web firefox-dev

Where things got a bit more complicated was getting entries added to the Cinnamon Menu and Docky. The former was sorted using the cinnamon-menu-editor command, but the latter needed some tinkering with the firefox-developer.desktop file found in .local/share/applications/ within my user area to get the right icon shown. Discovering this took me into .gconf/apps/docky-2/Docky/Interface/DockPreferences/%gconf.xml, where I found the location of the firefox-developer.desktop file that needed changing. Once this was completed, there was nothing else to do from the operating system side.
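For anyone facing the same thing, the relevant parts of such a .desktop file look something like the following; the Exec and Icon paths are assumptions based on where Ubuntu Make places its installations and need checking against your own system:

[Desktop Entry]
Type=Application
Name=Firefox Developer Edition
Exec=/home/username/.local/share/umake/web/firefox-dev/firefox %u
Icon=/home/username/.local/share/umake/web/firefox-dev/browser/chrome/icons/default/default128.png
Categories=Network;WebBrowser;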

Within Firefox itself, I opted to turn off warnings about password logins on non-HTTPS websites by going to about:config using the address bar, looking for security.insecure_field_warning.contextual.enabled and changing its value from true to false. Some may decry this, but there are some local websites on my machine that need attention at times. Otherwise, Firefox is installed with user access, so I can update it as if it were a Windows or macOS application, which is useful given the frequent new releases. All is going as I want it so far.

Killing a hanging SSH session

20th April 2018

My web hosting provider offers SSH access that I often use for such things as updating Matomo and Drupal together with more intensive file moving than an FTP session can support. However, I have found in recent months that I no longer can exit cleanly from such sessions using the exit command.

Because this produces a locked terminal session, I was keen to find an alternative to shutting down the terminal application before starting it again. Handily, there is a keyboard shortcut that does just what I need.

It varies a little according to the keyboard that you have. Essentially, it combines the carriage return key with ones for the tilde (~) and period (.) characters. The tilde may need to be produced by combining the shift and backtick keys on some keyboard layouts, but that is not needed on mine. So far, I have found that the <CR>+~+. combination does what I need until SSH sessions start exiting as expected.
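As an aside, OpenSSH supports a number of other escape sequences beyond this one; typing the following after a fresh carriage return displays the list:

~?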

Updating Piwik using the Linux Command Line

28th November 2016

Because updating Piwik using its web interface has proved temperamental, I have decided to update the self-hosted analytics application in an SSH session. The production web servers that I use are hosted on Linux systems, so any commands apply to the Linux or UNIX command line only. What is needed for Windows servers may differ.

The first step is to download the required ZIP file with this command:

wget https://builds.piwik.org/piwik.zip

Once the download is complete, the contents of the ZIP archive are extracted into a new subfolder. This is a process that I carry out in a separate folder to the one where the website files are kept, before copying everything from the extraction folder into the web area. Here is the unzip command; the -o switch turns on overwriting of any previously existing files:

unzip -o piwik.zip
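Copying the extracted files into the web server area then needs a command like the one below, where the bracketed path is a placeholder in the same style as those further down:

cp -pr piwik/* [path to piwik]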

With the required folder in the web server area updated, the next step is to do the actual system update, which includes any updates to the Piwik database that you are using. There are two commands that you can use once you have specified the location of your Piwik installation; the second is needed when the first cannot find where the PHP executable is stored. My systems needed something more specific than these because both PHP 5.6 and PHP 7.0 are installed; looking in /usr/bin was enough to find what I needed to execute in place of php below. Otherwise, the command was the same.

./[path to piwik]/console core:update

php [path to piwik]/console core:update

While the upgrade is ongoing, it prompts you to permit it to continue before it goes and makes changes to the database. This did not take long on my systems, but that depends on how much data there is. Once the process has completed, you can delete any extraneous files using the rm command.

Overriding replacement of double or triple hyphenation in WordPress

7th June 2016

On here, I have posts with example commands that include double hyphens, and they have been displayed merged together, something that resulted in a comment posted by a visitor to this part of the web. All the while, I had been blaming the fonts that I was using, only for it to turn out to be the fault of WordPress itself.

Changing multiple dashes to something else has long been a feature of Word’s autocorrect, but I never expected to see WordPress aping that behaviour, and it has been doing so for a few years now. The culprit is wptexturize, and that is not one to disable outright, for it does many other useful things.

What happens is that the wptexturize filter changes ‘--’ (double hyphens) to ‘–’ (&#8211; in web entity encoding) and ‘---’ (triple hyphens) to ‘—’ (&#8212; in web entity encoding). The solution is to add another filter to the content that changes these back to the way they were, and the following code does this:

add_filter( 'the_content', 'mh_un_en_dash', 50 );
function mh_un_en_dash( $content ) {
    $content = str_replace( '&#8211;', '--', $content );
    $content = str_replace( '&#8212;', '---', $content );
    return $content;
}

The first line of the segment adds in the new filter that uses the function defined below it. The third and fourth lines do the required substitution before the function returns the post content for display in the web page. The whole code block can be used to create a plugin or placed in the theme’s functions.php file. Either way, things appear without the substitution confusing your readers. It makes me wonder if a bug report has been created for this, because the behaviour looks odd to me.

  • All the views that you find expressed on here in postings and articles are mine alone and not those of any organisation with which I have any association, through work or otherwise. As regards editorial policy, whatever appears here is entirely of my own choice and not that of any other person or organisation.

  • Please note that everything you find here is copyrighted material. The content may be available to read without charge and without advertising but it is not to be reproduced without attribution. As it happens, a number of the images are sourced from stock libraries like iStockPhoto so they certainly are not for abstraction.

  • With regards to any comments left on the site, I expect them to be civil in tone of voice and reserve the right to reject any that are either inappropriate or irrelevant. Comment review is subject to automated processing as well as manual inspection but whatever is said is the sole responsibility of the individual contributor.