Technology Tales

Adventures in consumer and enterprise technology


Secure email services: Protecting your digital communications

21st August 2025

In an era where digital privacy faces increasing threats from corporate surveillance, government oversight and cyberattacks, traditional email services often fall short of protecting sensitive communications. Major providers frequently scan messages for advertising purposes and store data in ways that leave users vulnerable to breaches and unauthorised access.

The following three email services represent a different approach, one that prioritises user privacy through robust encryption, transparent practices and a genuine commitment to data protection. Each offers end-to-end encryption that ensures only intended recipients can read messages, while employing various technical and legal safeguards to keep user data secure from third-party access.

From Belgium's court-order requirements to Switzerland's strong privacy laws and Germany's open-source transparency, these services demonstrate how geography, technology and philosophy combine to create truly private communication platforms that put users back in control of their digital correspondence.

Mailfence

Launched in 2013 by ContactOffice Group and operated from Brussels, Belgium, this encrypted email service provides OpenPGP-based end-to-end encryption and digital signature capabilities, with servers held under Belgian privacy protection laws that require court approval for any access request. The platform generates private keys within the browser and encrypts them using AES-256 with user passphrases, ensuring the service provider cannot access user encryption keys; standard security protocols including SPF, DKIM, DMARC, TLS and two-factor authentication are supported alongside anti-spam filtering. Beyond secure email functionality that works with POP, IMAP, SMTP and Exchange ActiveSync protocols, the service integrates calendar management with CalDAV support, contact organisation with various import and export formats, document storage with online editing through WebDAV access, and group collaboration tools for sharing files and calendars.

Proton

Founded in 2014 by CERN scientists and now majority-owned by the non-profit Proton Foundation, this Swiss technology company has built a comprehensive suite of privacy-focused services that serve over 100 million users worldwide. The company's flagship service, Proton Mail, offers end-to-end encryption for secure email communication, ensuring that only senders and recipients can read messages whilst the company itself cannot access the content.

The ecosystem has expanded to include Proton VPN for secure internet browsing, Proton Drive for encrypted cloud storage, Proton Calendar for scheduling, Proton Pass as an open-source password manager and recently Lumo, a privacy-centred AI chatbot. All services operate under Swiss privacy laws and employ zero-access encryption technology. The company's mission centres on protecting user privacy against both authoritarian surveillance and big technology companies' data collection practices, offering an alternative to advertisement-supported services that typically monetise user information.

Tuta

Formerly known as Tutanota, this German-developed encrypted email service operates under a freemium model and was founded by Tutao GmbH in 2011, officially rebranding to Tuta in November 2023. The platform provides automatic end-to-end encryption for emails, subject lines, attachments and calendars using hybrid encryption including AES-256 and RSA-2048, whilst newer accounts benefit from post-quantum cryptography through the TutaCrypt protocol featuring algorithms like X25519 and Kyber-1024.

Registration requires no phone number and the service claims no IP logging, with private and public keys generated locally, and private keys encrypted by user passwords before storage. The open-source platform is accessible via webmail with applications for Android, iOS, Windows, macOS and Linux, and includes an encrypted calendar, contact storage, search functionality and two-factor authentication support. The service has received recognition for its strong encryption and privacy focus but has faced challenges including a significant drop in Google search visibility following Digital Markets Act implementation.

How complexity can blind you

17th August 2025

Visitors may not have noticed it, but I was having a lot of trouble with this website. Intermittent slowdowns beset any attempt to add new content or perform any other administration. This was not happening on any other web portal that I had, even one sharing the same publishing software.

Even so, WordPress did get the blame at first, at least when deactivating plugins had no effect. Then, it was the turn of the web server, resulting in a move to something more powerful and my leaving Apache for Nginx at the same time. Redis caching was another suspect, especially when things got in a twist on the new instance. As if that were not enough, MySQL came in for some scrutiny too.

Finally, another suspect emerged: Cloudflare. Either some settings got mangled or something else was happening, but cutting out that intermediary was enough to make things fly again. Now, I use bunny.net for DNS duties instead, and the simplification has helped enormously; the previous layering was no help with debugging. With a bit of care, I might add some other tools behind the scenes while taking things slowly to avoid confusion in the future.

An alternate approach to setting up a local Git repository with a remote GitHub connection

24th March 2025

For some reason, I ended up with two versions of this at the draft stage, forcing me to compare and contrast before merging them to produce what you find here. The inspiration was something that I encountered a while ago: getting a local repository set up in a perhaps unconventional manner.

The simpler way of working would be to set up a repo on GitHub and clone it to the local machine, yet other needs can call for doing things differently. In contrast, this scheme starts with initialising the local directory first, using the following command after creating it with some content and navigating there:

git init

This marks the directory as a Git repository by creating a hidden .git directory, allowing changes to be tracked. Because Git's security measures check directory ownership before executing commands, it is best to configure a safe directory with the following command to avoid any issues:

git config --global --add safe.directory [path to directory]

In the above, replace the path with your specific project directory. This ensures that Git recognises your directory as safe and authorised for operations, avoiding any ownership warnings whenever you work in there.

With that completed, it is time to add files to the staging area, which serves as an area where you can review and choose changes to be committed to the repository. Here are two commands that show different ways of accomplishing this:

git add README.md

git add .

The first command stages the README.md file, preparing it for the next commit, while the second stages all files in the directory (the . refers to the current directory, so everything within it gets included).

Once your files are staged, you are ready to commit them. A commit is essentially a snapshot of your changes, marking a specific point in the project's history. The following command will commit your staged changes:

git commit -m "first commit"

The -m flag allows you to add a descriptive message for context; here, it is "first commit." This message helps with understanding the purpose of the commit when reviewing project history.

Before pushing your local files online, you will need to create an empty repository on GitHub using the GitHub website if you do not have one already. While still on the GitHub website, click on the Code button and copy the URL shown under the HTTPS tab. This takes the form https://github.com/username/repository.git and is required for running the next command in your local terminal session:

git remote add origin https://github.com/username/repository.git

This command establishes a remote connection under the alias origin. By default, Git sets the branch name to 'master'. However, recent conventions prefer using 'main'. To rename your branch, execute:

git branch -M main

This command will rename your current branch to 'main', aligning it with modern version control standards. Finally, you must push your changes from the local repository to the remote repository on GitHub, using the following command:

git push -u origin main

The -u flag sets the upstream reference, meaning future push and pull operations will default to this remote branch. This last step completes the process of setting up a local repository, linking it to a remote one on GitHub, staging any changes, committing these and pushing them to the remote repository.
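With the upstream reference in place, day-to-day synchronisation needs nothing more than the bare commands, since Git already knows which remote branch to use:

git push

git pull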

Keeping a graphical eye on CPU temperature and power consumption on the Linux command line

20th March 2025

Following my main workstation upgrade in January, some extra monitoring has been needed, building on the experience of assembling its predecessor more than three years ago.

Being able to do this in a terminal session keeps things lightweight, and I have done that with text displays like what you see below using a combination of sensors and nvidia-smi in the following command:

watch -n 2 "sensors | grep -i 'k10'; sensors | grep -i 'tdie'; sensors | grep -i 'tctl'; echo "" | tee /dev/fd/2; nvidia-smi"

Everything is done within a watch command that refreshes the display every two seconds. Then, the panels are built up by a succession of commands separated with semicolons, one for each portion of the display. The grep command picks out the desired lines from the output of the sensors command that is piped to it; repeating that for each pattern builds up the temperature lines. The next command, echo "" | tee /dev/fd/2, adds a blank line by echoing an empty string and duplicating it to STDERR before the output of nvidia-smi is displayed. The result can be seen in the screenshot below.
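If the nested quoting above looks fragile, the same panels can be built with the outer string in single quotes and a single grep alternation; this is only a sketch of an alternative rather than a replacement:

watch -n 2 'sensors | grep -iE "k10|tdie|tctl"; echo; nvidia-smi'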

However, I also came across a more graphical way to do things using commands like turbostat or sensors along with AWK programming and ttyplot. Taking the temperature output from the above and converting it for plotting needs the following:

while true; do sensors | grep -i 'tctl' | awk '{ printf("%.2f\n", $2); fflush(); }'; sleep 2; done | ttyplot -s 100 -t "CPU Temperature (Tctl)" -u "°C"

This is done in an infinite while loop to keep things refreshing; the watch command does not work for piping output from the sensors command through both the awk and ttyplot commands on a repeating, periodic basis. The awk command takes the second field from the input text, formats it to two decimal places and prints it, flushing the output buffer afterwards. The ttyplot command then plots those numbers on the plot seen in the screenshot below, with a y-axis scaled to a maximum of 100 (-s), units of °C (-u) and a title of CPU Temperature (Tctl) (-t).

A similar thing can be done for the CPU wattage, which is how I learned of the graphical display possibilities in the first place. The command follows:

sudo turbostat --Summary --quiet --show PkgWatt --interval 1 | sudo awk '{ printf("%.2f\n", $1); fflush(); }' | sudo ttyplot -s 200 -t "Turbostat - CPU Power (watts)" -u "watts"

Handily, turbostat can be made to update every so often (every second in the command above), avoiding the need for any infinite while loop. Since only a summary is needed for the wattage, all other output can be suppressed, though everything needs to work using superuser privileges, unlike the sensors command earlier. Then, awk is used as before to process the wattage for plotting; the first field is what is being picked out here. After that, ttyplot displays the plot seen in the screenshot below with appropriate title, units and scaling. All works with output from one command acting as input to another using pipes.

All of this offers a lightweight way to keep an eye on system load, with the top command showing the impact of different processes if required. While there are graphical tools for some things, command line possibilities cannot be overlooked either.

Finding human balance in an age of AI code generation

12th March 2025

Recently, I was asked about how I felt about AI. Given that the other person was not an enthusiast, I picked on something that happened to me, not so long ago. It involved both Perplexity and Google Gemini when I was trying to debug something: both produced too much code. The experience almost inspired a LinkedIn post, only for some of the thinking to go online here for now. A spot of brainstorming using an LLM sounds like a useful exercise.

Going back to the original question, it happened during a meeting about potential freelance work. Thus, I tapped into experiences with code generators over several decades. The first one involved a metadata-driven tool that I developed; users reported that there was too much imperfect code to debug with the added complexity that dealing with clinical study data brings. That challenge resurfaced with another bespoke tool that someone else developed, and I opted to make things simpler: produce some boilerplate code and let users take things from there. Later, someone else again decided to have another go, seemingly with more success.

It is even more challenging when you are insufficiently familiar with the code that is being produced. That happened to me with shell scripting code from Google Gemini that was peppered with some Awk code. There was no alternative but to learn a bit more about the language from Tutorials Point and seek out an online book elsewhere. That did get me up to speed, and I will return to these when I am in need again.

Then, there was the time when I was trying to get a Julia script to deal with Google Drive needing permissions to be set. This set Google Gemini off adding more and more error-checking code with try/catch blocks. Since I did not have the issue at that point, I opted to halt and wait for its recurrence. When it did recur, I opted for a simpler approach, especially with the gdrive CLI tool starting up a web server for completing the process of reactivation. While there are times when shell scripting is better than Julia for these things, I added extra robustness and user-friendliness anyway.

During that second task, I was using VS Code with the GitHub Copilot plugin. There is a need to be careful, yet that can save time when it adds suggestions for you to include or reject. The latter may apply when it adds conditional logic that needs more checking, while simple code outputting useful text to the console can be approved. While that certainly is how I approach things for now, it brings up an increasingly relevant question for me.

How do we deal with all this code production? In an environment with myriad unit tests and a great deal of automation, there may be more capacity for handling the output than mere human inspection and review, which soon exceeds what a human reviewer can hold in mind. A quick search revealed that there are automated tools for just this purpose, possibly with their own learning curves; otherwise, manual working could be a better option in some cases.

After all, we need to do our own thinking too. That was brought home to me during the Julia script editing. To come up with a solution, I had to step away from LLM output and think creatively about something simpler. There was a tension between the two needs during the exercise, which highlighted how important it is not to be distracted by all the new technology. Being an introvert, I need that solo space, only to find myself stepping away from technology to get it, when technology was a refuge in the first place.

Anyone with a programming hobby has to limit all this input to avoid being overwhelmed; learning a programming language could involve stripping out AI extensions from a code editor, for instance. LLM output has its place, yet it has to be at a human scale too. That perhaps is the genius of a chat interface, and we now have Agentic AI too. It is as if the technology curve never slackens, at least not until the current boom ends, possibly when things break because they go too far beyond us. All this acceleration is fine until we need to catch up with what is happening.

Getting Pylance to recognise locally installed packages in VSCode running on Linux Mint

17th December 2024

When using VSCode on Linux Mint, I noticed that Pylance was not finding Python packages installed into my user area, which is my usual way of working. Thus, such packages were highlighted as missing when they were already there.

The solution was to navigate to File > Preferences > Settings and click on the Open Settings (JSON) icon in the top right-hand corner of the app to add something like the following snippet in there.

"python.analysis.extraPaths": [
"/home/[user]/.local/bin"
]

Once you have substituted your own username (and checked that the path matches where packages actually land on your system), saving the file is the next step. Then, VSCode needs a restart to pick up the new location. After that, the false positives get eliminated.
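Should you want to confirm where pip places user-level packages on your system, the following command prints the user site-packages directory, and its output can go into the setting above:

python3 -m site --user-site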

What to do when the externally-managed environment error appears while using pip to install Python packages on Linux Mint 22

16th December 2024

After upgrading to Linux Mint 22, the following message appeared when attempting to install Python packages using the pip command:

error: externally-managed-environment

× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.

If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.

If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.

See /usr/share/doc/python3.12/README.venv for more information.

note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.

This will frustrate anyone following how-tos on the web, so users will need to know about it. On something like Linux Mint, the repositories may not be as up to date as PyPI, so picking up the very latest version has its advantages. Thus, I initially used the unrecommended --break-system-packages switch to get things going as before, since doing so never broke anything previously. While that way of working feels like overkill in some ways, using pipx probably is the way forward as long as things work as I want them to.

There is wisdom in using virtual environments too, especially when AI models are involved. For most of what I get to do, that may be getting too elaborate. Then, deleting or renaming the /usr/lib/python3.12/EXTERNALLY-MANAGED marker file is tempting if that gets around things, as retrograde as that probably is. After all, I never broke anything before this message started to appear, possibly since my interests are data related.
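For reference, the recommended routes look roughly like this, where xyz stands in for whatever package you want and the virtual environment path is only an example:

pipx install xyz

python3 -m venv ~/.venvs/tools
~/.venvs/tools/bin/pip install xyz

The override that I reached for at first is simply the following, with the caveats given in the message above:

pip install --break-system-packages xyz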

Clearing the Julia REPL

23rd September 2024

During development, there are times when you need to clear the Julia REPL. It can become so laden with content that debugging your code gets difficult. One way to accomplish this is to issue the CTRL + L keyboard shortcut while focus is within the REPL; you need to click on it first. Another is to issue the following in the REPL itself:

print("\033c")

Here, \033 is the escape character written in octal; it introduces terminal control sequences. The c that follows resets the terminal to its initial state. Printing this sequence is what does the clearance, and variations can be used to clear other kinds of console screens too. That makes it a more generic solution.

Dropping to an underlying shell using the ; character is another possibility. Then, you can use the clear or cls commands as needed; the latter is for Windows systems.

One last option is to define a Julia function for doing this:

function clear_console()
    run(`clear`)  # or `cls` for Windows
end

Calling the clear_console function then clears the screen programmatically, allowing for greater automation. The run function is the one that sends the command in backticks to the underlying shell for execution. Even using that run call on its own would work.
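Should the function need to work on Windows as well, a variation like the following sketch can branch on the operating system; cls is a cmd built-in, so it has to be invoked through cmd rather than run directly:

function clear_console()
    if Sys.iswindows()
        run(`cmd /c cls`)  # cls is a cmd built-in, so go through cmd
    else
        run(`clear`)
    end
end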

Avoiding errors caused by missing Julia packages when running code on different computers

15th September 2024

As part of an ongoing move to multi-location working, I am sharing scripts and other artefacts via GitHub. This includes Julia programs that I have. That has led me to realise that a bit of added automation would help iron out any package dependencies that arise. Setting up things as projects could help, yet that feels a little too much effort for what I have. Thus, I have gone for adding extra code to check on and install any missing packages instead of having failures.

For adding those extra packages, I instate the Pkg package as follows:

import Pkg

While it is a bit hackish, I then declare a single array that lists the packages to be checked:

pkglits = ["HTTP", "JSON3", "DataFrames", "Dates", "XLSX"]

After that, there is a function that uses a try/catch construct to find whether a package exists or not, using the built-in @eval macro to attempt a using declaration:

tryusing(pkgsym) = try
    @eval using $pkgsym
    return true
catch e
    return false
end

The above function is called in a loop that both tests the existence of a package and, if missing, installs it:

for i in 1:length(pkglits)
    rslt = tryusing(Symbol(pkglits[i]))
    if rslt == false
        Pkg.add(pkglits[i])
    end
end

Once that has completed, using the following line to instate the packages required by later processing becomes error-free, which is what I sought:

using HTTP, JSON3, DataFrames, Dates, XLSX

Version control of large files on GitHub

27th August 2024

When you try pushing large files to a GitHub repository, you may find that you breach its 100 MB limit. When you do, you either need to buy a data pack or exclude the file from being tracked. In my case, I decided that the monthly fee for 50 GB was not overly onerous, so I added that. Excluding such files using the .gitignore functionality makes a lot of sense, too.

If you decide to proceed as I did, you will need to install git-lfs. Since that may vary by operating system, I am leaving it to you to look up those details on the Git LFS website. Activating it for your user account needs the following:

git lfs install

Following that, you need to flag the file or type of file using a command like the following:

git lfs track "[file path with name or search pattern]"

Executing the above adds the file path, including the file name or the search pattern (normal operating system wildcards like * work here), to a file named .gitattributes in the root of the repository folder hierarchy; an example of the line that git lfs track writes appears after the next command. If that file does not exist already, it will get created the first time that this is done. It will also need to be added to the repository using git add like any other file. A general command like the following will do that anyway, since it covers everything in the relevant folder:

git add .
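For illustration, each tracked pattern ends up in .gitattributes as a line like the following, with *.zip serving only as an example pattern:

*.zip filter=lfs diff=lfs merge=lfs -text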

After making a commit, the next step is to push the contents to GitHub. At this stage, the large file or files will be recognised and sent to large file storage, with only a text pointer left in the main repository. Everything else will be handled as normal.

While on this subject, I need to add a few words of warning. Pushing a large file to GitHub without doing things up front will cause the operation to fail. That may make the transition over to large file storage all the more tricky, since things will be out of order. Moving everything to a temporary folder and cloning the repository afresh was how I got out of this impasse when it happened to me. Then, I could get the large file handling set up before getting going again. It is better to sort things like this out at the start, rather than attempting to remedy matters part way through the process.
