Technology Tales

Adventures in consumer and enterprise technology

Reactivating Touch ID on an iMac when the options are greyed out in System Settings

23rd October 2024

Recently, when the battery in my iMac keyboard ran out of charge, I merely connected it to the all-in-one system using the supplied cable. However, a software upgrade required a system restart, after which the ability to unlock the iMac using Touch ID was lost.

When I went to Touch ID & Password within the System Settings app, I found all the options greyed out, preventing me from restoring things that way. The result was that I needed to disconnect the cable, turn off the keyboard and then turn it back on again. That was enough to restore Touch ID usage; the settings were not only activated but also turned on for me. It is a little lesson in how different things can be for a new Mac user.

What to do when Tuta Mail issues this message when logging into an account on macOS: Could not access Secret Storage

24th September 2024

Two things changed before Tuta Mail stopped working as before: modifying Keychain Access settings and upgrading macOS from Sonoma to Sequoia. Either could have been the cause, or neither of them; the first was the more likely culprit.

Whatever the cause, the result was the same: logging into Tuta Mail yielded an error like this: Could not access Secret Storage. The solution is essentially a two-step process: remove the app and delete its settings folder. Reinstallation then follows.

In Finder, go to Applications and move Tuta Mail to the Bin before clearing it from there. That uninstalls the app.

The next step needs you to show hidden files and folders using the Command + Shift + . shortcut. Then, go to your home folder (this may need the Command + Shift + H shortcut). Open up the Library folder and find the folder called Application Support. Enter that and find the subfolder named tutanota-desktop. That needs to go to the Bin too before being expunged from there. Doing that provides the clean slate for restoration to commence. After this, using the Command + Shift + . shortcut again hides the normally hidden files and folders once more.

Nothing is resolved until /Users/[username]/Library/Application Support/tutanota-desktop has been removed. Using the rm command from the command line interface will remove it faster, though Finder may be easier for many users.
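For anyone comfortable with the Terminal, a command along these lines does the same job; it assumes the settings folder is in its default location, and the quotes matter because of the space in Application Support:

rm -rf "$HOME/Library/Application Support/tutanota-desktop"

After that, reinstalling the app provides the same clean slate.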

Clearing the Julia REPL

23rd September 2024

During development, there are times when you need to clear the Julia REPL. It can become so laden with output that debugging your code becomes hard. One way to accomplish this is to issue the CTRL + L keyboard shortcut while focus is within the REPL; you need to click on it first. Another is to issue the following in the REPL itself:

print("\033c")

Here, \033 is the escape character in octal notation; it introduces terminal control sequences. The c character that follows tells the terminal to reset to its initial state. Printing this sequence is what clears the screen, and variations can be used to clear other kinds of console screens too. That makes it a more generic solution.

Dropping to an underlying shell using the ; character is another possibility. Then, you can use the clear or cls commands as needed; the latter is for Windows systems.

One last option is to define a Julia function for doing this:

function clear_console()
    # Run the platform-appropriate command: cls via cmd on Windows, clear elsewhere
    Sys.iswindows() ? run(`cmd /c cls`) : run(`clear`)
end

Calling the clear_console function then clears the screen programmatically, allowing for greater automation. The run function is the one that executes the command enclosed in backticks on the underlying system. Even issuing that line alone in the REPL should work too.

Little helpers

22nd September 2024

This could have been a piece that appeared on my outdoors blog until I had second thoughts. One reason why I might have put it there is that I am making more use of Perplexity for searching the web and gaining more value from its output. However, that is proving more useful in writing what you find on here. Knowing the sources for a dynamically generated article adds confidence when fact checking, and it is remarkable what comes up that you would not find quickly with Google. There is added value with this one.

A better candidate would have been Anthropic's Claude. That has come in handy when writing trip reports. Being able to use a stub to prototype a blog entry really has its uses. The reality is that everything gets rewritten before anything gets published; these tools are never so good as to feature everything that you want to mention, even if they do a good job of mimicking your writing tone and style. Nevertheless, being able to work with the content beyond doing a brain dump from one's memory is an undeniable advance.

There are occasions when using Bing's access to OpenAI through Copilot helps with producing images. In reality, I do have an extensive personal library of images, so those should suffice in many cases. However, curiosity about the technology overrides the effort that photo processing requires.

While there may be some level of controversy surrounding the use of AI tools in content creation, using such tooling for proofing content should not raise too much ire. Grammarly comes up a lot, though it is LanguageTool that I use, to avoid excessive intrusion into my writing style. That style has changed to comply with rules that had previously passed me by unnoticed, yet there are other suggestions that need to be turned off. Configuring the proofing tools in other ways might be better, so that is something to explore; otherwise, we could end up with too much standardisation of writing, and there needs to be room for human creativity at all times.

All of these are just a sample of what is available. Just checking in with The Rundown AI will reveal that there is an onslaught of innovation right now. Hype is also a problem, yet we need to learn to use these tools. The changeover is comparable to the explosive increase in the availability of personal computing a generation ago. That brought its own share of challenges (some kept up with the curve while others did not) until everything settled down, and it will be the same with what is happening now.

Avoiding errors caused by missing Julia packages when running code on different computers

15th September 2024

As part of an ongoing move to multi-location working, I am sharing scripts and other artefacts via GitHub, including the Julia programs that I have. That has led me to realise that a bit of added automation would help iron out any missing package dependencies that arise. Setting things up as projects could help, yet that feels like a little too much effort for what I have. Thus, I have gone for adding extra code that checks for and installs any missing packages, instead of having failures.

For adding those extra packages, I first load the Pkg package as follows:

import Pkg

While it is a bit hackish, I then declare a single array that lists the packages to be checked:

pkglits = ["HTTP", "JSON3", "DataFrames", "Dates", "XLSX"]

After that, there is a function that uses a try/catch construct to find whether a package can be loaded or not, using the built-in @eval macro to attempt a using declaration:

tryusing(pkgsym) = try
    @eval using $pkgsym
    return true
catch e
    return false
end

The above function is called in a loop that both tests the existence of a package and, if missing, installs it:

for i in 1:length(pkglits)
    rslt = tryusing(Symbol(pkglits[i]))
    if rslt == false
        Pkg.add(pkglits[i])
    end
end

Once that has completed, using the following line to load the packages required by later processing becomes error-free, which is what I sought:

using HTTP, JSON3, DataFrames, Dates, XLSX

Saving yourself a reboot: remounting any overlooked volumes in Linux

14th September 2024

Recently, I got things a little out of order when starting up my main Linux system after an absence. Usually, I start up my NAS first so that the volumes get mounted when I start my Linux machine. However, it happened that I near enough started them together. Thus, my workstation completed its startup without having the NAS volumes mounted. A reboot would have sorted this, but there was another way: issuing the command that you see below:

sudo mount -a

This looked in my /etc/fstab file and mounted anything that was missing, as long as the noauto option was not set. Because this was executed after the NAS had completed its own boot process, its volumes were now mounted on my system and fully available for what I needed to do next. If I had wanted to see what had been mounted, then I needed to issue the following command instead:

sudo mount -av

In addition to the -a switch that triggers the mounting of missing volumes, there is now a -v (verbose) one that tells you what has happened. Needless to say, all this happens only if your /etc/fstab file is set up properly. If you are adding a new volume, and I was not, it does no harm to mount it manually before updating the configuration file; that should catch any errors first.
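As a sketch of what that could look like for an NFS share (the server name, export path and mount point here are hypothetical), the manual mount and the matching /etc/fstab entry might be:

sudo mount -t nfs nas.local:/volume1/shared /mnt/nas

nas.local:/volume1/shared /mnt/nas nfs defaults 0 0

Once the manual mount works, adding the fstab line means the same thing happens at boot, and mount -a will pick it up thereafter.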

What to do when a GPG signature becomes invalid for a package repository on Linux Mint

12th September 2024

During a package update on my main Linux system, I encountered the following kind of error message:

An error occurred during the signature verification. The repository is not updated and the previous index files will be used. GPG error: https://cli.github.com/packages stable InRelease: The following signatures were invalid: EXPKEYSIG <GPG Key> GitHub CLI

The message indicated a problem with the GPG signature verification for the GitHub CLI repository. The cause was that the signing key for the repository had expired (hence EXPKEYSIG), making the signature invalid and preventing the package manager from updating the repository's index files. The first step then was to remove the invalid GPG key using the following command:

sudo apt-key del <GPG Key>

With the invalid GPG key removed, the next step is to add the new GPG key for the GitHub CLI repository by issuing the following command:

curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo tee /usr/share/keyrings/githubcli-archive-keyring.gpg > /dev/null
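Should the repository definition itself ever need to reference that keyring, an entry along the lines of GitHub's own installation instructions would sit in /etc/apt/sources.list.d/github-cli.list; it is included here for reference only, and the amd64 architecture is an assumption for a 64-bit Intel/AMD system:

deb [arch=amd64 signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main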

Once I had the new GPG key, I was able to use my usual system update process without any problem. The error message was gone, and updates and upgrades proceeded as intended.

Getting rsync to resolve symbolic links

11th September 2024

Dropbox changed its handling of symbolic links in 2019: links within a Dropbox file hierarchy got fixed in place, while links leading outside the Dropbox area stopped working altogether. Thankfully, the rsync utility found in many Linux and UNIX settings does not have to behave like that, as long as you call it correctly.

By default, rsync either skips symbolic links or, with the usual archive option, recreates them as links rather than following them; that is roughly what Dropbox does now. To get rsync to resolve the links, treating them as shortcuts to either a single file or, more likely, a folder containing more than one file, it needs the -L switch or option in the command. When that is present, the linked file or files get synchronised, which honours the point of having these links in the first place: allowing more flexibility with folder structures and avoiding any duplication of files and folders.
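As an illustration (the paths here are hypothetical), a call that follows the links during synchronisation might look like this:

rsync -avL ~/Documents/projects/ /media/backup/projects/

Here, -a preserves permissions and timestamps, -v reports what is being transferred, and -L copies the files or folders that the links point to rather than the links themselves.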

Unzipping more than one file at a time in Linux and macOS

10th September 2024

Wanting to extract three zip archives in one go sounded to me like a task for shell scripting. They had come from Google Drive and contained different splits of the files that I needed: raw images from a camera. However, I found a more succinct method than the line of code that you see below (it is intended for the BASH shell):

for z in *.zip; do unzip "$z"; done

That loops through each file that matches a glob string. All I needed was something like this:

unzip '*.zip'

Before embarking on a search, I had got close, but I had not quoted the wildcard string; without the quoting, it was not working for me. To be sure that I was not extracting more than I needed, I made the wildcard string more specific for my case.
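For instance, a quoted pattern along these lines (the file name stem is made up for illustration) keeps the extraction to the intended archives:

unzip 'drive-download-*.zip'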

Once the extraction was complete, I moved the files into a Lightroom Classic repository for working on them later. All this happened on an iMac, but the extraction itself should work on any UNIX-based operating system, so long as the shell supports it.

A way to survey hours of daylight for locations of interest

9th September 2024

A few years back, I needed to get sunrise and sunset information for a location in Ireland. This was to help me plan visits to a rural location with a bus service going nearby, and I did not want to be waiting on the side of the road in the dark on my return journey. It ended up being a project that I undertook using the Julia programming language.

This had other uses too: one was the planning of trips to North America. This was how I learned that evenings in San Francisco were not as long as their counterparts in Ireland. Later, it had its uses in assessing the feasibility of seeing other parts of the Pacific Northwest during the month of August. Other matters meant that such designs never came to anything.

The Sunrise Sunset API was used to get the times for the start and end of daylight. That meant looping through the days of the year to get the information, but I needed to get the latitude and longitude information from elsewhere to fuel that process. While Google Maps has its uses with this, it is a manual and rather fiddly process. Sparing use of Nominatim's API, the geocoding service that comes from OpenStreetMap, is what helped with increasing the amount of automation and user-friendliness.

Accessing the API using Julia's HTTP package got me the data in JSON format, which I then converted into atomic vectors and tabular data. The end product is an Excel spreadsheet with all the times in UTC. A next step would be to use the solar noon information to convert things to the correct time zone. It can be done manually in Excel and its kind, but some more automation would make things smoother.
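To give a flavour of the kind of call involved, here is a minimal sketch rather than the original script; the function name is my own, the coordinates are merely those of Dublin, and only a single date is queried instead of a whole year:

import HTTP, JSON3

function daylight(lat, lng, date)
    # Build the request for one day's sunrise and sunset times (returned in UTC)
    url = "https://api.sunrise-sunset.org/json?lat=$lat&lng=$lng&date=$date"
    resp = HTTP.get(url)              # fetch the JSON payload from the API
    data = JSON3.read(resp.body)      # parse it into an object
    return (sunrise = data.results.sunrise, sunset = data.results.sunset)
end

println(daylight(53.3498, -6.2603, "2024-06-21"))

Looping such a call over the days of the year and the locations of interest, then collecting the results into tabular form for export to Excel, is broadly how the spreadsheet described above comes together.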
