Technology Tales

Adventures in consumer and enterprise technology

The critical differences between Generative AI, AI Agents, and Agentic Systems

9th April 2025

The distinctions between three key artificial intelligence concepts can be explained without technical jargon. Here are the descriptions:

  • Generative AI functions as a responsive assistant that creates content when prompted but lacks initiative, memory or goals. Examples include ChatGPT, Claude and GitHub Copilot.
  • AI Agents represent a step forward, actively completing tasks by planning, using tools, interacting with APIs and working through processes independently with minimal supervision, similar to a junior colleague.
  • Agentic AI represents the most sophisticated approach, possessing goals and memory while adapting to changing circumstances; it operates as a thinking system rather than a simple chatbot, capable of collaboration, self-improvement and autonomous operation.

This evolution marks a significant shift from building applications to designing autonomous workflows, with various frameworks currently being developed in this rapidly advancing field.

Add Canonical Tags to WordPress without plugins

31st March 2025

When content is duplicated, search engines cannot tell which version is the real one unless you tell them. That is where canonical tags come in handy. By default, WordPress appears to add these for posts and pages, which makes sense. However, you can add them for other places too. While a plugin can do this for you, adding some code to your theme's functions.php file also does the job. This is how it could look:

function add_canonical_link() {
    global $post;

    // Check if we're on a single post/page
    if (is_singular()) {
        $canonical_url = get_permalink($post->ID);
    } 
    // For the homepage
    elseif (is_home() || is_front_page()) {
        $canonical_url = home_url('/');
    }
    // For category archives
    elseif (is_category()) {
        $canonical_url = get_category_link(get_query_var('cat'));
    }
    // For tag archives
    elseif (is_tag()) {
        $canonical_url = get_tag_link(get_query_var('tag_id'));
    }
    // For other archive pages
    elseif (is_archive()) {
        $canonical_url = get_permalink();
    }
    // Fallback for other pages
    else {
        $canonical_url = get_permalink();
    }

    // Output the canonical link
    echo '<link rel="canonical" href="' . esc_url($canonical_url) . '">' . "\n";
}

// Hook the function to wp_head
add_action('wp_head', 'add_canonical_link');

// Remove default canonical link
remove_action('wp_head', 'rel_canonical');

The first part defines a function that determines the canonical URL and creates the tag to be added. With that completed, the penultimate piece of code hooks it into the wp_head part of the web page, while the last line removes the default link to avoid any duplication of output.
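For illustration, once the code is in place, a post's head section should then contain a tag like the one below; the address here is a made-up example:

<link rel="canonical" href="https://example.com/sample-post/">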

WordPress URL management with canonical tags and permalink simplification

29th March 2025

Recently, I have been going through the content, rewriting things where necessary. In the early days, there were some posts following diary and announcement styles that I now avoid. Some have now been moved to a more appropriate place, while others have been removed.

While this piece might fall into the announcement category, I am going to mix things up too. After some prevarication, I have removed dates from the addresses of entries like this one after seeing some duplication. Defining canonical URLs in the page header like this does help:

<link rel="canonical" href="https://technologytales.com/an-alternate-approach-to-setting-up-a-local-git-repository-with-a-remote-github-connection/">

However, it becomes tricky when you have both zero-filled and non-zero-filled dates going into URLs. Using the following in a .htaccess file redirects the latter to the former, which is a workaround:

RewriteRule ^([0-9]{4})/([1-9])/([0-9]{1,2})/(.*)$ /$1/0$2/$3/$4 [R=301,L]
RewriteRule ^([0-9]{4})/(0[1-9]|1[0-2])/([1-9])/(.*)$ /$1/$2/0$3/$4 [R=301,L]

The first of these lines zero-fills the month component, while the second zero-fills the day component. Here, [0-9]{4} looks for a four-digit year. Then, [1-9] picks up the non-zero-filled components that need zero-prefixing. The replacements are 0$2 or 0$3 as needed.
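As a worked example with a made-up address, the two rules act in succession, first zero-filling the month and then the day, each via its own redirect:

/2023/4/7/sample-post/ -> /2023/04/7/sample-post/ -> /2023/04/07/sample-post/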

Naturally, this needs URL rewriting to be turned on for it to work. Since my set-up is on Apache, the mod_rewrite module needs to be enabled, and your configuration needs to allow its operation. With dates removed from WordPress permalinks, I needed to add the following line to redirect old addresses to new ones for the sake of search engine optimisation:

RedirectMatch 301 ^/([0-9]{4})/([0-9]{2})/([0-9]{2})/(.*)$ /$4
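As an illustration with a made-up address, that single line maps an old URL like so:

/2023/04/07/sample-post/ -> /sample-post/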

Here, [0-9]{4} picks up the four-digit year, while [0-9]{2} finds the two-digit month and day. Then, (.*) matches the rest of the URL, which is retained as signalled by /$4 at the end. That redirects things nicely, without my having to have a line for every post on the website. Another refinement was to remove query strings from every page a visitor would see:

RewriteCond %{REQUEST_URI} !(^/wp-admin/|^/wp-login\.php$) [NC]
RewriteCond %{QUERY_STRING} .
RewriteCond %{QUERY_STRING} !(&preview=true) [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]
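Again with a made-up address, a tracking parameter would be stripped like so, while admin screens and post previews pass through untouched:

/sample-post/?utm_source=newsletter -> /sample-post/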

This still allows the back end and login screens to work as before, along with post previews during the writing stage. One final note is that I am not using the default login address for the sake of added security, yet that does not need to be mentioned anywhere in the .htaccess file anyway.

Claude Projects: Reusing your favourite AI prompts

28th March 2025

Some things that I do with Anthropic Claude, I end up repeating. Generating titles for pieces of text or rewriting text to make it read better are activities that happen a lot. Others would include generating single-word previews for a piece or creating a summary.

Python or R scripts come in handy for summarisation, either for a social media post or for introduction into other content. In fact, this is how I work much of the time. Nevertheless, I found another option: using Projects in the Claude web interface.

These allow you to store a prompt that you reuse a lot in the Project Knowledge panel; you need to supply a title and a description too. Once that is completed, you just add your text for the AI to do the rest. Title generation and text rewriting are already set up like this, and keywords could follow. It is a great way to reuse and refine prompts that you use a lot.

An alternate approach to setting up a local Git repository with a remote GitHub connection

24th March 2025

For some reason, I ended up with two versions of this at the draft stage, forcing me to compare and contrast before merging them together to produce what you find here. The inspiration was something that I encountered a while ago: getting a local repository set up in a perhaps unconventional manner.

The simpler way of working would be to set up a repo on GitHub and clone it to the local machine, yet other needs can cause things to be done differently. In contrast, this scheme starts with initialising the local directory first, using the following command after creating it with some content and navigating there:

git init

This marks the directory as a Git repository by creating a hidden .git directory, allowing you to track changes. Because security measures often require verification of directories when executing Git commands, it is best to configure a safe directory with the following command to avoid any issues:

git config --global --add safe.directory [path to directory]

In the above, replace the path with your specific project directory. This ensures that Git recognises your directory as safe and authorised for operations, avoiding any messages whenever you work in there.
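If you want to confirm what has been registered, the stored entries can be listed with this command:

git config --global --get-all safe.directory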

With that completed, it is time to add files to the staging area, which is where you review and choose the changes to be committed to the repository. Here are two commands that show different ways of accomplishing this:

git add README.md

git add .

The first command stages the README.md file, preparing it for the next commit, while the second stages all files in the directory (the . refers to the current directory, so everything within it gets included).

Once your files are staged, you are ready to commit them. A commit is essentially a snapshot of your changes, marking a specific point in the project's history. The following command will commit your staged changes:

git commit -m "first commit"

The -m flag allows you to add a descriptive message for context; here, it is "first commit." This message helps with understanding the purpose of the commit when reviewing project history.
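Should you wish to review that history later, a compact listing of commits can be displayed with the following command:

git log --oneline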

Before pushing your local files online, you will need to create an empty repository on GitHub using the GitHub website if you do not have one already. While still on the GitHub website, click on the Code button and copy the URL shown under the HTTPS tab that is displayed. This takes the form https://github.com/username/repository.git and is required for running the next command in your local terminal session:

git remote add origin https://github.com/username/repository.git

This command establishes a remote connection under the alias origin. By default, Git sets the branch name to 'master'. However, recent conventions prefer using 'main'. To rename your branch, execute:

git branch -M main

This command will rename your current branch to 'main', aligning it with modern version control standards. Finally, you must push your changes from the local repository to the remote repository on GitHub, using the following command:

git push -u origin main

The -u flag sets the upstream reference, meaning future push and pull operations will default to this remote branch. This last step completes the process of setting up a local repository, linking it to a remote one on GitHub, staging any changes, committing these and pushing them to the remote repository.
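As a recap, here is the whole sequence in one place, with the placeholders standing in for your own details:

git init
git config --global --add safe.directory [path to directory]
git add .
git commit -m "first commit"
git remote add origin https://github.com/username/repository.git
git branch -M main
git push -u origin main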

Enhancing focus and wellbeing by eliminating digital distractions while browsing the web

23rd March 2025

Such is the state of the world at the moment that I ration my news intake for the sake of my mental wellbeing. That also includes the content that websites present to me. Last November, I was none too pleased to see Perplexity showing me something unwanted on its home page. However, there appeared to be no way to turn this off, in contrast to the default page shown in a new browser tab. So, I decided to tolerate the intrusion, only for the practice to develop over time.

Then, I happened upon uBlock Origin, finding that it can block unwanted parts of web pages. While it was a bit hit-and-miss to get things going on the Perplexity website, it did the job after some trial and error. Things can change, which means the blocking may need refinement; even so, I can handle that. YouTube became another place where I needed to block distractions, such as previews of other videos appearing during a webinar.

Now, uBlock Origin has become the only ad blocker that I still use with Firefox. Others like Ghostery broke websites, especially that of the UK Met Office with its cookie blocking; the Ryanair one was another casualty, and it fell foul of Pi-hole too. Thus, they were abandoned in favour of a single-tool approach. Though some websites may complain, anything that cuts out distractions has to help productivity and emotional wellbeing.
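For anyone who prefers to set these things up by hand, uBlock Origin's My filters pane accepts cosmetic filters of the form domain##selector; the selector below is a hypothetical one, since the real class names vary and change over time:

www.perplexity.ai##.discover-feed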

VirtualBox memory allocation error: Solving Linux Mint host issues after LLM usage

22nd March 2025

It happened to me today when I tried starting up Windows virtual machines in VirtualBox on my main Linux Mint workstation, acting as the host, after a long spell of not using them. They failed to start, and these messages appeared:

Out of memory condition when allocating memory with low physical backing. (VERR_NO_LOW_MEMORY).

Result Code:
NS_ERROR_FAILURE (0x80004005)
Component:
ConsoleWrap
Interface:
IConsole {6ac83d89-6ee7-4e33-8ae6-b257b2e81be8}

Since the messages are cryptic in the circumstances, I had to seek out their meaning. The system has plenty of memory, so it could not be that. Various suggestions came my way, like installing the VirtualBox Extension Pack or reinstalling the VirtualBox Guest Additions in the affected VM. The first had no effect, while the second was impossible.

However, there was one more suggestion: fragmentation of memory, much like file fragmentation on a disk drive. Thus, I opted for a reboot, which sorted things out, making it look as if that were the problem. If it comes up again, I might try compacting the memory with the following command (run with root privileges), leaving it for a while to complete given any temporary system slowdown:

echo 1 > /proc/sys/vm/compact_memory
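Should I want to check for fragmentation first, /proc/buddyinfo offers a view of it; each column counts free memory blocks of increasing size, so rows with little or nothing in the rightmost columns suggest that large contiguous allocations could fail:

cat /proc/buddyinfo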

Because there had been some on-machine usage of an LLM, I now reckon that caused the malaise. These can be as heavy on memory as they are on processors, so fragmentation can result. That is yet another likely lesson learned from experimenting with this much-hyped technology.

How to make Firefox vertical scrollbars more visible on Windows 11

21st March 2025

While some articles on the web have an estimated reading time added to them, the vertical scrollbar of a web browser can also act as a hint of the length of a piece. Unfortunately, scrollbars are being made less conspicuous for the sake of aesthetics and at the expense of utility. Since Firefox is the browser that I use most of the time, addressing the matter there became a priority for me. Here, then, is how you configure things on Windows 11.

The first step is to open a new tab before entering about:config in the URL bar and pressing the return key on your keyboard. If doing this for the first time, you will meet a warning screen that you can disable. Agreeing to the warning conveys you to the next screen, where you can enter the string "scrollbar" and use the enter key to bring up a swathe of settings.

There are two that you need to set to false by double-clicking on the pre-existing value of true: widget.windows.overlay-scrollbars.enabled and widget.non-native-theme.win.scrollbar.use-system-size. There is one more setting that you need to tweak: widget.non-native-theme.scrollbar.size.override should be given a value greater than its default of zero. Using a value of ten did what I wanted once I restarted Firefox. After that, I had things as I wanted them to be, though you may want to refine the width setting for your needs.
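For what it is worth, the same preferences can also be set in a user.js file in your Firefox profile folder, which applies them at every start-up; this is just a sketch of the three settings mentioned above:

user_pref("widget.windows.overlay-scrollbars.enabled", false);
user_pref("widget.non-native-theme.win.scrollbar.use-system-size", false);
user_pref("widget.non-native-theme.scrollbar.size.override", 10);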

Keeping a graphical eye on CPU temperature and power consumption on the Linux command line

20th March 2025

Following my main workstation upgrade in January, some extra monitoring has been needed. This follows on from the experience with building its predecessor more than three years ago.

Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like the one you see below, using a combination of sensors and nvidia-smi in the following command:

watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo '' | tee /dev/fd/2; nvidia-smi"

Everything is done within a watch command that refreshes the display every two seconds. Then, the panels are built up by a succession of commands separated with semicolons, one for each portion of the display. The grep command is used to pick out the desired output of the sensors command that is piped to it; doing that for each sensor name builds up the temperature lines. The next command, echo '' | tee /dev/fd/2, adds a blank line by writing an empty string to both STDOUT and STDERR, before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.

However, I also came across a more graphical way to do things, using commands like turbostat or sensors along with AWK programming and ttyplot. Taking the temperature output from the above and converting it for plotting needs the following:

while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"

This is done in an infinite while loop to keep things refreshing; the watch command does not work for piping output from the sensors command to both the awk and ttyplot commands in sequence and on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two decimal places and prints it, flushing the output buffer afterwards. The ttyplot command then plots those numbers, as seen in the screenshot below, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).
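On the same theme, and assuming an NVIDIA GPU, a similar plot of GPU temperature can be attempted by querying nvidia-smi in loop mode; stdbuf is added as a precaution against pipe buffering delaying the display:

stdbuf -oL nvidia-smi --query-gpu=temperature.gpu --format=csv,noheader --loop=2 | ttyplot -s 100 -t "GPU Temperature" -u "°C"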

A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:

sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | sudo awk '{ printf("%.2f\n", $1); fflush(); }' | sudo ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"

Handily, turbostat can be made to update every so often (every second in the command above), avoiding the need for any infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though everything needs to run with superuser privileges, unlike the sensors command earlier. Then, awk is used as before to process the wattage for plotting; the first field is what is being picked out here. After that, ttyplot displays the plot seen in the screenshot below with an appropriate title, units and scaling. All works with output from one command acting as input to another using pipes.

All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.

Avoiding repeated token requests by installing the Git credential helper on Linux Mint

19th March 2025

On a new machine, I found myself being asked for the same access token repeatedly. Since this is a long string, that is inconvenient and does not take long to become irritating. Thus, I sought a way to make things more streamlined. My initial attempt produced the following message:

git: 'credential-libsecret' is not a git command

The main cause of the above was the absence from my system of the libsecret credential helper, which is crucial for managing credentials securely in a keyring. The solution was to install the required packages from the command line:

sudo apt install libsecret-1-0 libsecret-1-dev

Following installation, the next step was to navigate to the appropriate directory and execute the make command to compile the files within the directory, transforming them into an executable credential helper:

cd /usr/share/doc/git/contrib/credential/libsecret; sudo make

With the credential helper fully built, Git needed to be configured to use it by executing the following:

git config --global credential.helper /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret
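At this point, a quick check that the setting has been stored does no harm:

git config --global --get credential.helper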

Since one error message is enough for any new activity, it made sense to confirm that the credential helper resided in the correct location. That was accomplished by issuing this command:

ls -l /usr/share/doc/git/contrib/credential/libsecret/git-credential-libsecret

All was well in my case, saving the need to reinstall Git or repeat the manual compilation of the credential helper. When all was done, I was ready to automate things further.
