WordPress URL management with canonical tags and permalink simplification
29th March 2025
Recently, I have been going through the content here, rewriting things where necessary. In the early days, there were some posts following diary and announcement styles that I now avoid. Some have now been moved to a more appropriate home, while others have been removed.
While this piece might fall into the announcement category, I am going to mix things up a little too. After some prevarication, I have removed dates from the addresses of entries like this one after seeing some duplication. Defining a canonical URL in the page header like this does help:
<link rel="canonical" href="https://technologytales.com/an-alternate-approach-to-setting-up-a-local-git-repository-with-a-remote-github-connection/">
However, it becomes tricky when you have both zero-filled and non-zero-filled dates going into URLs. Using the following in a .htaccess file to redirect the latter to the former is one workaround:
RewriteRule ^([0-9]{4})/([1-9])/([0-9]{1,2})/(.*)$ /$1/0$2/$3/$4 [R=301,L]
RewriteRule ^([0-9]{4})/(0[1-9]|1[0-2])/([1-9])/(.*)$ /$1/$2/0$3/$4 [R=301,L]
The first of these lines zero-fills the month component, while the second zero-fills the day component. Here, [0-9]{4} looks for a four-digit year. Then, [1-9] picks up the non-zero-filled components that need zero-prefixing, and the replacements are 0$2 or 0$3 as needed.
Naturally, this needs URL rewriting to be turned on for it to work, which it is in my case. Since my set-up is on Apache, the mod_rewrite module needs to be active too, and your configuration needs to allow its operation; a sketch of enabling all that appears at the end of this post. With dates removed from WordPress permalinks, I needed to add the following line to redirect old addresses to new ones for the sake of search engine optimisation:
RedirectMatch 301 ^/([0-9]{4})/([0-9]{2})/([0-9]{2})/(.*)$ /$4
Here, [0-9]{4} picks up the four-digit year, while [0-9]{2} finds the two-digit month and day. The (.*) is the rest of the URL, which is retained as signalled by the /$4 at the end. That redirects things nicely, without my needing a line for every post on the website. Another refinement was to remove query strings from every page a visitor would see:
RewriteCond %{REQUEST_URI} !(^/wp-admin/|^/wp-login\.php$) [NC]
RewriteCond %{QUERY_STRING} .
RewriteCond %{QUERY_STRING} !(&preview=true) [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
This still allows the back end and login screens to work as before, along with post previews during the writing stage. One final note is that I am not using the default login address for the sake of added security, though that does not need to be mentioned anywhere in the .htaccess file anyway.
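As promised above, here is a minimal sketch of turning on URL rewriting for a Debian-style Apache installation; the a2enmod helper, the apache2 service name and the directory path are assumptions about the hosting environment rather than a record of my own set-up:
# Enable the rewrite module and restart Apache (Debian/Ubuntu layout assumed)
sudo a2enmod rewrite
sudo systemctl restart apache2
# The matching <Directory> block in the virtual host also needs to permit
# .htaccess overrides, for example:
#   <Directory /var/www/html>
#       AllowOverride All
#   </Directory>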
Claude Projects: Reusing your favourite AI prompts
28th March 2025
Some of the things that I do with Anthropic Claude end up getting repeated. Generating titles for pieces of text or rewriting text to make it read better are activities that happen a lot. Others include generating single-word previews for a piece or creating a summary.
Python or R scripts come in handy for summarisation, either for a social media post or for introduction into other content. In fact, this is how I work much of the time. Nevertheless, I found another option: using Projects in the Claude web interface.
These allow you to store a prompt that you reuse a lot in the Project Knowledge panel; you need to supply a title and a description too. Once that is completed, you just add your text in there for the AI to do the rest. Title generation and text rewriting are already set up like this, and keywords could follow. It is a great way to reuse and refine prompts that you use a lot.
An alternate approach to setting up a local Git repository with a remote GitHub connection
24th March 2025
For some reason, I ended up with two versions of this at the draft stage, forcing me to compare and contrast before merging them together to produce what you find here. The inspiration was something that I encountered a while ago: getting a local repository set up in a perhaps unconventional manner.
The simpler way of working would be to set up a repo on GitHub and clone it to the local machine, yet other needs can be the cause of doing things differently. In contrast, this scheme starts with initialising the local directory first, using the following command after creating it with some content and navigating there:
git init
This marks the directory as a Git repository by creating a hidden .git directory, allowing you to track changes. Because security measures often require verification of directories when executing Git commands, it is best to configure a safe directory with the following command to avoid any issues:
git config --global --add safe.directory [path to directory]
In the above, replace the path with your specific project directory. This ensures that Git recognises your directory as safe and authorised for operations, avoiding any warning messages whenever you work in there.
With that completed, it is time to add files to the staging area, which is where you review and choose the changes to be committed to the repository. Here are two commands that show different ways of accomplishing this:
git add README.md
git add .
The first command stages the README.md file, preparing it for the next commit, while the second stages all files in the directory (the . refers to the current directory, so everything within it gets included).
Once your files are staged, you are ready to commit them. A commit is essentially a snapshot of your changes, marking a specific point in the project's history. The following command will commit your staged changes:
git commit -m "first commit"
The -m flag allows you to add a descriptive message for context; here, it is "first commit". This message helps with understanding the purpose of the commit when reviewing project history.
Before pushing your local files online, you will need to create an empty repository on GitHub using the GitHub website if you do not have one already. While still on the GitHub website, click on the Code button and copy the URL shown under the HTTPS tab. This takes the form https://github.com/username/repository.git and is required for running the next command in your local terminal session:
git remote add origin https://github.com/username/repository.git
This command establishes a remote connection under the alias origin. By default, Git sets the branch name to 'master', though recent convention prefers 'main'. To rename your branch, execute:
git branch -M main
This command will rename your current branch to 'main', aligning it with modern version control standards. Finally, you must push your changes from the local repository to the remote repository on GitHub, using the following command:
git push -u origin main
The -u flag sets the upstream reference, meaning future push and pull operations will default to this remote branch. This last step completes the process of setting up a local repository, linking it to a remote one on GitHub, staging changes, committing these and pushing them to the remote repository.
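As a closing check, which was not part of the original write-up, commands like these can confirm that the remote, the history and the working tree are all as expected:
git remote -v        # list the configured remotes and their URLs
git log --oneline    # show the commit history in condensed form
git status           # confirm that the working tree is clean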
Enhancing focus and wellbeing by eliminating digital distractions while browsing the web
23rd March 2025
Such is the state of the world at the moment that I ration my news intake for the sake of my mental wellbeing. That also includes the content that websites present to me. Last November, I was none too pleased to see Perplexity showing me something unwanted on its home page. However, there appeared to be no way to turn this off, in contrast to the default page shown in a new browser tab. At first, I decided to tolerate the intrusion, only for the practice to develop over time.
Then, I happened upon uBlock Origin after finding that it can block unwanted parts of web pages. While it was a bit hit-and-miss to get things going on the Perplexity website, it did the job after some trial and error. Things can change, which means the blocking may need refinement; even so, I can handle that. YouTube then became another place where I needed to block distractions, such as previews of other videos appearing during a webinar.
Now, uBlock Origin has become the only ad blocker that I still use with Firefox. Others like Ghostery broke websites with their cookie blocking, especially that of the UK Met Office; the Ryanair one was another casualty, and it fell foul of Pi-hole too. Thus, those were left behind in favour of a single-blocker approach. Though some websites may complain, anything that cuts out distractions has to help productivity and emotional wellbeing.
VirtualBox memory allocation error: Solving Linux Mint host issues after LLM usage
22nd March 2025
It happened to me today when I tried starting up Windows virtual machines in VirtualBox on my main Linux Mint workstation, acting as the host, after a long period of disuse. They failed to start, only for these messages to appear:
Out of memory condition when allocating memory with low physical backing. (VERR_NO_LOW_MEMORY).
Result Code: NS_ERROR_FAILURE (0x80004005)
Component: ConsoleWrap
Interface: IConsole {6ac83d89-6ee7-4e33-8ae6-b257b2e81be8}
Since the messages are cryptic in the circumstances, I had to seek out their meaning. The system has plenty of memory, so a simple shortage of it could not be the cause. Various suggestions came my way, like installing the VirtualBox Extension Pack or reinstalling VirtualBox Extensions in the affected VM. The first had no effect, while the second was impossible.
However, there was one more suggestion: fragmentation of memory, much like file fragmentation on a disk drive. Thus, I opted for a reboot, which sorted things out, making it look as if that had been the problem. If it comes up again, I might try compacting the memory with the following command, leaving it for a while to complete because of the temporary system slowdown that it can cause:
echo 1 > /proc/sys/vm/compact_memory
Because there had been some on-machine usage of an LLM, I now reckon that caused the malaise. These can be as heavy on memory as they are on processors, so fragmentation can result. That is yet another likely lesson learned from experimenting with this much-hyped technology.
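Should the problem return, one rough way to gauge memory fragmentation before and after compaction is to look at /proc/buddyinfo, which counts the free blocks of physical memory at each size; this is only a quick check rather than anything definitive:
# Columns further to the right represent larger contiguous blocks; plenty of
# small blocks but few large ones hints at fragmentation.
cat /proc/buddyinfo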
How to make Firefox vertical scrollbars more visible on Windows 11
21st March 2025
While some articles on the web have an estimated reading time added to them, the vertical scrollbar of a web browser can also act as a hint of the length of a piece. Unfortunately, scrollbars are being made less conspicuous for the sake of aesthetics and at the expense of utility. Since Firefox is the browser that I use most of the time, addressing the matter there became a priority for me. Here, then, is how you configure things on Windows 11.
The first step is to open a new tab before entering about:config in the URL bar and pressing the return key on your keyboard. If doing this for the first time, you will meet a warning screen that you can disable. Agreeing to the warning conveys you to the next screen, where you can enter the string "scrollbar" and use the enter key to bring up a swathe of settings.
There are two that you need to set to false by double-clicking on the pre-existing value of true: widget.windows.overlay-scrollbars.enabled and widget.non-native-theme.win.scrollbar.use-system-size. There is one more setting that you need to tweak: widget.non-native-theme.scrollbar.size.override should have a value greater than its default of zero. Using a value of ten did what I wanted once I restarted Firefox. After that, I have things as I want them to be, though you may want to refine the width setting for your needs.
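For anyone who would rather not click through about:config, the same preferences can go into a user.js file in the Firefox profile directory. This is only a sketch of what that might contain, with the width value of ten being my own choice rather than anything prescribed:
// Stop scrollbars being overlaid or sized by the system on Windows
user_pref("widget.windows.overlay-scrollbars.enabled", false);
user_pref("widget.non-native-theme.win.scrollbar.use-system-size", false);
// Width of the drawn scrollbar; any value above zero takes effect
user_pref("widget.non-native-theme.scrollbar.size.override", 10);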
Keeping a graphical eye on CPU temperature and power consumption on the Linux command line
20th March 2025
Following my main workstation upgrade in January, some extra monitoring has been needed. This follows on from the experience with building its predecessor more than three years ago.
Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like what you see below, using a combination of sensors and nvidia-smi in the following command:
watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo \"\" | tee /dev/fd/2; nvidia-smi"
Everything is done within a watch command that refreshes the display every two seconds. The panels are then built up by a succession of commands separated with semicolons, one for each portion of the display. The grep command picks out the desired output of the sensors command that is piped to it; doing that twice gets us two lines. The next command, echo "" | tee /dev/fd/2, adds an extra blank line by writing it to both standard output and standard error before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.
However, I also came across a more graphical way to do things using commands like turbostat or sensors, along with AWK programming and ttyplot. Converting the temperature output from the above for plotting needs the following:
while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"
This is done in an infinite while loop to keep things refreshing; the watch command does not work for piping output from the sensors command through both the awk and ttyplot commands in sequence on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two decimal places and prints it, flushing the output buffer afterwards. The ttyplot command then plots those numbers, as seen in the screenshot below, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).
A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:
sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | sudo awk '{ printf("%.2f\n", $1); fflush(); }' | sudo ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"
Handily, the turbostat command can be made to update every so often (every second in the command above), avoiding the need for an infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though everything needs superuser privileges here, unlike the sensors command earlier. Then, awk is used as before to process the wattage for plotting; the first field is what is being picked out this time. After that, ttyplot displays the plot seen in the screenshot below with an appropriate title, units and scaling. It all works with the output of one command acting as input to another through pipes.
All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.
Avoiding repeated token requests by installing the Git credential helper on Linux Mint
19th March 2025
On a new machine, I found Git asking for the same access token repeatedly. Since this is a long string, that is far from convenient and it does not take long to become irritating. Thus, I sought a way to make things more streamlined. My initial attempt produced the following message:
git: 'credential-libsecret' is not a git command
The main cause of the above was the absence from my system of the libsecret credential helper, which is crucial for managing credentials securely in a keyring. The solution was to install the required packages from the command line:
sudo apt install libsecret-1-0 libsecret-1-dev
Following installation, the next step was to navigate to the appropriate directory and execute the make command to compile the files within the directory, transforming them into an executable credential helper:
cd /usr/share/doc/git/contrib/credential/libsecret; sudo make
With the credential helper fully built, Git needed to be configured to use it by executing the following:
git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
Since one error message is enough for any new activity, it made sense to confirm that the credential helper resided in the correct location. That was accomplished by issuing this command:
ls -l /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
All was well in my case, saving the need to reinstall Git or repeat the manual compilation of the credential helper. When all was done, I was ready to automate things further.
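One more check, beyond what was needed at the time, is to ask Git which helper it has been told to use, confirming that the configuration took hold:
# Print the configured credential helper
git config --global --get credential.helper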
Dealing with this Python error message on Windows: UnicodeEncodeError: 'charmap' codec can't encode characters in position 56-57: character maps to <undefined>
14th March 2025
Recently, I got caught out by the above message when summarising some text using Python and OpenAI's API while working within VS Code. There was no problem on Linux or macOS, but it was triggered on the Windows command line from within VS Code. Unlike the Julia or R REPLs, everything in Python gets executed in the console like this:
& "C:/Program Files/Python313/python.exe" script.py
The Windows command line shell operates with cp1252 character encoding, and that was tripping up code like the following:
with open("out.txt", "w") as file:
file.write(new_text)
The cure was to specify the encoding of the output text as utf-8:
with open("out.txt", "w", encoding='utf-8') as file:
file.write(new_text)
After that, all was well and text was written to a file just as on the other operating systems. One other thing to note is that the use of backslashes in file paths is another gotcha. Adding an r before the opening quote makes the string a raw one, so the backslashes are not treated as escape characters; using double backslashes or forward slashes are other options.
with open(r"c:\temp\out.txt", "w", encoding='utf-8') as file:
file.write(new_text)
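Another route, which I did not need in the end but which should work with reasonably recent Python versions, is to switch on Python's UTF-8 mode before running the script; this makes UTF-8 the default encoding for open() and the console streams without touching the code:
# PowerShell: enable UTF-8 mode for this session, then run the script as before
$env:PYTHONUTF8 = "1"
& "C:/Program Files/Python313/python.exe" script.py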
Finding human balance in an age of AI code generation
12th March 2025
Recently, I was asked how I felt about AI. Given that the other person was not an enthusiast, I picked on something that happened to me not so long ago. It involved both Perplexity and Google Gemini when I was trying to debug something: both produced too much code. The experience almost inspired a LinkedIn post, only for some of the thinking to go online here for now; a spot of brainstorming using an LLM sounds like a useful exercise anyway.
Going back to the original question, it happened during a meeting about potential freelance work. Thus, I tapped into experiences with code generators over several decades. The first one involved a metadata-driven tool that I developed; users reported that there was too much imperfect code to debug with the added complexity that dealing with clinical study data brings. That challenge resurfaced with another bespoke tool that someone else developed, and I opted to make things simpler: produce some boilerplate code and let users take things from there. Later, someone else again decided to have another go, seemingly with more success.
It is even more challenging when you are insufficiently familiar with the code that is being produced. That happened to me with shell scripting code from Google Gemini that was peppered with some Awk code. There was no alternative but to learn a bit more about the language from Tutorials Point and seek out an online book elsewhere. That did get me up to speed, and I will return to these when I am in need again.
Then, there was the time when I was trying to get a Julia script to deal with Google Drive needing permissions to be set. This set Google Gemini off adding more and more error-checking code with try/catch blocks. Since I did not have the issue at that point, I opted to halt and wait for its recurrence. When it did recur, I opted for a simpler approach, especially with the gdrive CLI tool starting up a web server to complete the process of reactivation. While there are times when shell scripting is better than Julia for these things, I added extra robustness and user-friendliness anyway.
During that second task, I was using VS Code with the GitHub Copilot plugin. There is a need to be careful, yet that can save time when it adds suggestions for you to include or reject. The latter may apply when it adds conditional logic that needs more checking, while simple code outputting useful text to the console can be approved. While that certainly is how I approach things for now, it brings up an increasingly relevant question for me.
How do we deal with all this code production? In an environment with myriads of unit tests and a great deal of automation, there may be more capacity for handling the output than mere human inspection and review, which can overwhelm the limitations of a human context window. A quick search revealed that there are automated tools for just this purpose, possibly with their own learning curves; otherwise, manual working could be a better option in some cases.
After all, we need to do our own thinking too. That was brought home to me during the Julia script editing. To find a solution, I had to step away from the LLM output and think creatively about something simpler. There was a tension between the two needs during the exercise, which highlighted how important it is to learn not to be distracted by all the new technology. Being an introvert, I need that solo space, only now I have to step away from technology to get it, even though technology was a refuge in the first place.
Anyone with a programming hobby has to limit all this input to avoid being overwhelmed; learning a programming language could involve stripping AI extensions out of a code editor, for instance. LLM output has its place, yet it has to arrive at a human scale too. That perhaps is the genius of a chat interface, and we now have agentic AI as well. It is as if the technology curve never slackens, at least not until the current boom ends, possibly when things break because they go too far beyond us. All this acceleration is fine until we need to catch up with what is happening.