Not so fast: When tasks using AI may take more time and attention than you expect
29th November 2025

If you believed all the hype that surrounds AI, you might think that all of us would be out of work before we knew it. The truth is that the new technology is not that miraculous, at least going by some experiences that I have been having. Firstly, there are deficiencies; secondly, there are new things that need doing, as well as things becoming possible for the first time.
PowerShell Scripting
One pertained to spinning up PowerShell scripts for doing code reviews of SAS programs submitted by a vendor to a client of mine. While all worked well for simple cases, I found that more complex tasks, like finding the datasets used in code and comparing them against what is listed in the program headers, became too complicated; getting things in order probably needed a week of my time, which was time that I did not have.
Picking out macro calls from code and comparing them against lists in the headers was more successful because the code situations were less variable. Other tasks turned out really handy, though, even if I would benefit from AI teaching me how to write PowerShell scripts by myself. That would give me more scope to critique the code being produced. Starting simple and progressing one step at a time would embed PowerShell commands more soundly in my memory.
Article Writing
It is all too tempting to get AI to write articles on subjects of your choosing for website content production. What sounds like a labour-saving way to go can demand more attention than some realise. Sometimes, writing it all by yourself might be the better approach, and it is the one that I am using for this piece.
My workflow often involves these steps when AI is involved: assembly of the source material, conversion of the source material into an article by one AI, fact checking of the resulting text by another AI, and restructuring by that second AI with added links for those wanting to find out more. While the human share of content production is reduced, the need for human oversight, along with fact and link checking, means that the time is used in other ways.
In short, it is best not to rush this, as I found when assembling two articles on Canadian rail travel. You also need to watch how much content is being processed, because volume can both overwhelm human bandwidth and undermine human engagement. This is about more than proofreading what is produced; you need to know something about the subject yourself too.
Image Production
While AI can do well with producing some images, it will struggle with others because of gaps in its training. An example is when I asked for an image of cyclists placing bicycles on a bus before boarding it. None of the generated images worked, meaning that a trip to a stock library was in order.
While some can specify everything in a prompt at one sitting, I work more iteratively, which probably adds time to any task, especially image generation. It proves that there still is a place for stock libraries, and for having your own personal library as well. We need to remain the orchestrators in all of this, and a lack of personal talent can remain a limitation.
System Administration
While this may not be something that I do professionally, keeping an eye on the worlds of DevOps and DevSecOps means that I am seeing the presence of AI adding work of its own. This shows no sign of lessening, suggesting that work is changing dramatically rather than reducing, especially when you bring agentic AI into the equation.
It feels much like the advent of personal computing, which produced a similar seismic shift in the workplace in more innocent times. This time around, nefarious actors are misusing AI, a not unexpected if ominous trend, adding to the security woes that have beset computing for a few decades now.
A Human in the Loop?
At a recent conference, much was being made of keeping humanity in the loop when it came to using AI. There is a catch, though: how do we have engaged humans in the loop? After all, creating computer code allows one to get into flow and remain engaged, possibly overriding any feelings of fatigue. This is what needs replicating, hardly an experience reported with automation in other professions.
The use of AI is a developing field, bringing new challenges as well as solving old problems. That also means upskilling on a grand scale, something that happened over time with personal and business computing. While it looks as if the process could be faster this time around, it is too early to know where this revolution is going to take us. That may be enough to keep us engaged.
Launching SAS Analytics Pro on Viya with automated Docker image clean-up
28th November 2025

For my freelancing, I have a licensed version of SAS Analytics Pro running in a Docker container on my main Linux workstation. Every time there is a new release, a new Docker image is made available, which means that a few of them can accumulate on your system. Aside from taking up disk space that could have other uses, this also makes it tricky to automate the startup of the associated Docker container. Avoiding this means pruning the Docker images available on the system, something that also needs automation.
To make things clearer, let me work through the launch script that I use; this is called by the script that checks for and then downloads any new image, should that be needed. First up is the shebang, which uses the -e switch to exit the script in the event of an error. That puts a stop to potentially destructive outcomes from later commands executing without the input that they need.
#!/bin/bash -e
Next comes the command to shut down the existing container. Should a new image be instated, a running container would keep the old image in use, preventing its removal. Also, performing the rest of the steps with an already running container would result in errors anyway.
if docker container ls -a --format '{{.Names}}' | grep -q '^sas-analytics-pro$'; then
docker container stop sas-analytics-pro
fi
After that comes the step to find the latest image. Once, I did this by looping through image ages in days, weeks and months, hardly an elegant or robust approach. What follows is something altogether more effective.
# Find latest SAS Analytics Pro image
IMAGE=$(docker image ls --format '{{.Repository}}:{{.Tag}} {{.CreatedAt}}' \
| grep 'sas-analytics-pro' \
| sort -k2,3r \
| head -n 1 \
| awk '{print $1}')
echo "Chosen image: $IMAGE"
Since there is quite a lot happening above, let us unpack the actions. The first part lists all Docker images, formatting each line to show the image name (repository:tag) followed by its creation timestamp: docker image ls --format '{{.Repository}}:{{.Tag}} {{.CreatedAt}}'. The next piece picks out all the images that are for SAS Analytics Pro: grep 'sas-analytics-pro'. The crucial step, sort -k2,3r, comes next; this sorts the results by the second and third fields (the creation date and time) in reverse order, so the newest images appear first. With that done, it is time to pick out the most recent image using head -n 1. To extract just the image name, you need awk '{print $1}'. All of this is wrapped within IMAGE=$(...) to assign the result to a variable, which is then printed to the console using an echo statement.
With the image selected, you can then spin up the container once you specify the other parameters to use and allow some sleep time afterwards before proceeding to the clean-up steps:
run_args="
-e SASLOCKDOWN=0
--name=sas-analytics-pro
--rm
--detach
--hostname sas-analytics-pro
--env RUN_MODE=developer
--env SASLICENSEFILE=[Path to SAS licence file]
--publish 8080:80
--volume ${PWD}/sasinside:/sasinside
--volume ${PWD}/sasdemo:/data2
--volume [location of SAS files on the system]:/data
--cap-add AUDIT_WRITE
--cap-add SYS_ADMIN
--publish 8222:22
"
if ! docker run -u root ${run_args} "$IMAGE" "$@" > /dev/null 2>&1; then
echo "Failed to run the image."
exit 1
fi
sleep 5
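In passing, a fixed sleep is a blunt instrument. Were something more robust needed, the container logs could be polled until the startup password appears; the loop below is a sketch of that idea rather than part of my actual script:

# Poll the container logs for up to a minute instead of relying on a fixed sleep
for i in {1..12}; do
    if docker logs sas-analytics-pro 2>&1 | grep -q "Password="; then
        echo "Container is ready."
        break
    fi
    sleep 5
done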
With the new container in action, the subsequent step is to find the older images and delete those. Again, the docker image command is invoked, with its output fed to a selection command for SAS Analytics Pro images. Once the current image has been removed from the listing by the grep -v command, the list of images to be deleted is assigned to the IMAGES_TO_REMOVE variable.
IMAGES_TO_REMOVE=$(docker image ls --format '{{.Repository}}:{{.Tag}}' \
| grep 'sas-analytics-pro' \
| grep -v "^$IMAGE$")
echo "Will remove older images:"
echo "$IMAGES_TO_REMOVE"
After that has happened, iterating through the list of images using a for loop will remove them one at a time using the docker image rm command:
for OLD in $IMAGES_TO_REMOVE; do
echo "Removing $OLD"
docker image rm "$OLD" || echo "Could not remove $OLD"
done
All this concludes the operation of spinning up a new SAS Analytics Pro Docker container while also removing any superseded Docker images. One last step is to capture the password to use for logging into the SAS Studio interface that is available at localhost:8080 or whatever address and port is being used to serve the application:
docker logs sas-analytics-pro 2>&1 | grep "Password=" > pw.txt
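Should only the password value be wanted, rather than the whole log line, a little post-processing can strip the prefix. This sketch assumes the line takes the form Password=value:

docker logs sas-analytics-pro 2>&1 | grep "Password=" | sed 's/.*Password=//' > pw.txt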
Folding updating and housekeeping into the same activity as spinning up the Docker container means that I need not think of doing anything else. The time taken by the extra activities repays the effort by always having the latest version running in a tidy environment. It also saves having to remember to do all of this myself, which is what would be needed without automation.
Blocking unwanted interface elements in ChatGPT with uBlock Origin
27th November 2025

This time last year, I was a regular user of Perplexity. Unfortunately, it began to live up to its name when news items started appearing on its previously clean home page. When ChatGPT and Anthropic Claude gained the ability to search the web, one after another, there was little need to use Perplexity any longer. Before that happened, I began to use uBlock Origin to block the offending panels that I found so intrusive.
However, I retain an enduring intolerance of intrusions into clean interfaces on public GenAI tools. Thus, when ChatGPT started to offer inspiration for using it in a dropdown panel below the text box, I began to look for ways to block that too. It is not as if I need ideas from others anyway; quite enough come up from my daily computing.
While disabling memory may help, I sought another way to turn off the dropdown panel, only to find that there was none. That left uBlock Origin as my means of control. Unfortunately, OpenAI does not make it easy to block the offending insertion; Perplexity was very simple: right-click on the item and navigate to uBlock Origin > Block element... on the context menu that appears. Making the same selection on the ChatGPT interface was unavailable because of how they structure things.
Ironically, I started to pursue the matter using the ChatGPT tool itself. All of this was on Firefox, so I could explore the code by right-clicking on the page and selecting Inspect from the context menu that appeared. Just viewing the source code was not an option either; obfuscation on the OpenAI end saw to that: they appear to use JavaScript to convert indecipherable symbols into code that a browser can render. There was some toing and froing before I got as far as a workable solution.
This needed me to get into the uBlock Origin dashboard by selecting its icon on the toolbar (while I have it pinned there, you may need to click on the Extensions button in the same place as an additional step before the ones that I describe here) and then clicking on the gears icon in the bottom right of the panel that appears. Once into the uBlock Origin interface, go to the My filters tab and add the following code in there:
chatgpt.com##ul.divide-token-border-light.flex-col.divide-y > li.w-full
The first part (before the ## separator) is the hostname, which may be chatgpt.openai.com for you. The rest is a CSS selector that picks out the ideas panel while leaving the prompt text and hyperlink in place. That sufficed for me; a generic item is not as intrusive as anything built from your history or any other source of information. Naturally, the interface may change again, which might mean that I need to revisit the filter, but this works for now. We all learn as we go.
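For anyone new to cosmetic filters like this, the general pattern is a hostname, the ## separator and then a CSS selector. The examples below use made-up class and attribute names purely to illustrate the syntax:

example.com##.promo-panel
example.com##div[data-testid="suggestions"]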
Using the LIKE operator in PROC SQL WHERE clauses in SAS
26th November 2025

Recently, I was working in SAS and decided to try picking out datasets and variables from its dictionary tables, eventually retrieving the maximum length of variables of a given type for assigning the length of a new variable. This could have been done using a long-established technique:
proc sql;
select distinct memname into :dsns separated by '#'
from dictionary.tables
where lowcase(libname) = 'work'
and index(lowcase(memname), "r_") = 1
and index(lowcase(memname), "visit") = 0;
quit;
The result is that it creates a macro variable containing a delimited list of work datasets whose names begin with r_ and do not contain the string visit. As well as using the index function to find the position of one string within another, I have seen the count function used for similar purposes, albeit without the placement specificity. Since the =: operator, which looks for a search string at the start of a larger one, does not work in SQL (it is fine in the data step), you cannot do something like this:
proc print data = sashelp.vmember noobs;
where lowcase(libname) = 'work'
and lowcase(memname) =: "r_";
run;
While the contains operator works similarly to the count function when it comes to search text positioning, yet another option is the like operator, as shown in the example below:
proc sql;
select distinct memname into :dsns separated by '#'
from dictionary.tables
where lowcase(libname) = 'work'
and lowcase(memname) like 'r\_%' escape '\'
and lowcase(memname) not like '%visit%';
quit;
Here, % and _ are placeholder characters, with the first matching zero or more characters and the second matching exactly one character. Thus, the underscore in r_ needs escaping to be matched literally (otherwise, it would look for the letter r at the start of a string followed by any single character), and a backslash character (\) covers that duty. To ensure that it does what you want, adding escape '\' after the expression tells SAS which escape character is in use.
Another thing to watch is that the percent (%) character needs a form of escaping from the SAS Macro language processor, and placing the search term in single quotes attends to that. That means that %visit% does not cause any errors when you are looking for visit within a dataset name and using negation (the not operator) to exclude matches from the search results. Incidentally, '%visit' would be the pattern to use if you wanted to find visit only at the end of a name.
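Returning to the task mentioned at the outset, a similar query against dictionary.columns can retrieve the maximum variable length for reuse. The following is a minimal sketch that assumes character variables in work datasets with names beginning with r_ are what is wanted:

proc sql noprint;
    /* Find the longest character variable across the matching datasets */
    select max(length) into :maxlen trimmed
    from dictionary.columns
    where lowcase(libname) = 'work'
        and lowcase(memname) like 'r\_%' escape '\'
        and type = 'char';
quit;

%put Maximum length found: &maxlen;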
Should you wish to play around with the above to see what happens for your own learning, try using something like this to give you a few test datasets:
data r_test r_visit visit;
set sashelp.class;
run;
Otherwise, feel free to add your own test cases to cement the ideas even further. All too often, we look something up, deploy it and then forget about it, especially when AI is involved. Nevertheless, the fastest way to write code can be to use what is embedded in your memory.
Installing PowerShell on Linux Mint for some cross-platform testing
25th November 2025

Given how well shell scripting works on Linux and my familiarity with it, the need to install PowerShell on a Linux system may seem surprising. However, this was part of some testing that I wanted to do on a machine that I controlled before moving the code to a client's system. The first step was to ensure that any prerequisites were in place:
sudo apt update
sudo apt install -y wget apt-transport-https software-properties-common
After that, the next moves were to download and install the required package for instating Microsoft repository details:
wget -q https://packages.microsoft.com/config/ubuntu/24.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
Then, I could install PowerShell itself:
sudo apt update
sudo apt install -y powershell
When it was in place, issuing the following command started up the extra shell for what I needed to do:
pwsh
During my investigations, I found that my local version of PowerShell was not the same as the one on the client's system, meaning that my code was not as portable as I might have expected. Nevertheless, it is good to have this for future reference, and it shows how interoperable Microsoft has needed to become.
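Given the mismatch that I found, checking the installed version early would have been wise. Within a pwsh session, the built-in $PSVersionTable variable reports this:

# Show the full version details, including edition and platform
$PSVersionTable

# Or just the version number itself
$PSVersionTable.PSVersion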
Restoring photo dates with ExifTool after an Olympus camera loses its settings following a complete battery discharge
24th November 2025

Here is the story behind what I am sharing here. My Olympus OM-D E-M10 II had been left aside for long enough to allow its battery to discharge fully. That also had the effect of causing it to lose its date and time settings. Then, I recharged the battery and went about using the camera without checking those settings. The result was a set of photos with a capture date and time of 1970-01-01T00:00 (midnight on New Year's Day in 1970!).
This was noticed when I loaded them onto my computer for appraisal with Lightroom. Thankfully, this had not gone on for too long, so I could work out the dates on which the images had been made. Thus, I could use ExifTool to update the capture dates while leaving the times alone. A command like the following will accomplish this while overwriting the images (the originals were retained elsewhere).
exiftool -overwrite_original \
-DateTimeOriginal='2025:06:02 ${DateTimeOriginal;s/^.* //}' \
-CreateDate='2025:06:02 ${CreateDate;s/^.* //}' \
-ModifyDate='2025:06:02 ${ModifyDate;s/^.* //}' \
*.ORF
The above command updates the original date, the capture date and the modified date. In practice, I only set two of these, leaving aside the modified date. Omitting the -overwrite_original switch would cause the creation of backup files, should that be what you require. Some think that specifying the *.ORF wildcard search is not desirable, preferring the following instead:
exiftool -overwrite_original \
-DateTimeOriginal='2025:06:02 ${DateTimeOriginal;s/^.* //}' \
-ext orf .
It is the -ext switch that picks up the ORF extension, while . refers to the folder in which you are located in your shell session; you can give your own path in place of the dot if that is what is needed. Also, using -ext orf -ext dng will allow you to work with more than one file type at a time, a handy thing when more than one is found in the same directory, not that I organise my files like that.
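Before and after making changes like these, it is worth confirming what the date tags actually hold. The following check uses standard ExifTool switches: -time:all selects the time-related tags, -a shows duplicated tags, -G1 shows each tag's group and -s uses short tag names:

exiftool -time:all -a -G1 -s *.ORF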
With the date metadata fixed, removing the affected photos from Lightroom and reimporting them brought in the altered metadata. In the future, I will pay more attention to the Y/M/D display on the camera when it starts up, now that I realise what it is trying to tell me. Fixing it involves a trip to the settings using the Menu button on the back of the camera. Once in there, navigating to the spanner icon and then to the clock one gets you to the time settings, where you can adjust things as needed. Pressing OK commits the setting to memory, and you are then ready to go.
While on the subject of settings, the Info button is where you can set the levels to appear in the image display (on the viewfinder in my case); somehow I managed to lose these until I recalled how to get them back. Next on the list is another button that needs care, on the top of the camera near the shutter release, with a magnifying glass icon: this is the electronic zoom that has caught me out in the past. Naturally, the other exposure settings dials need care too, so it is never a good idea to rush the operation of a modern digital camera. Keeping batteries charged will help as well, especially in avoiding the predicament whose resolution led to the writing of this piece.
Adding visual appeal to bash command line scripts with colour variables on Linux
23rd November 2025

While I was updating some scripts to improve their functionality, I made some unexpected discoveries. One involved adding some colour to the output, and a second will come up later. The colours can be defined as values of variables, as you can see below:
# Colours
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # no colour
In all cases, \033 is the escape character, [ is the control sequence introducer and m closes the sequence for colour definitions like the ones that we have here. A numeric value of 0 resets things to the default, which is how it is used in the no colour (NC) case above, ensuring that the colouration does not overflow beyond the intended text. Otherwise, 31 specifies red, 32 specifies green and 33 specifies yellow, giving options to use later in the code. All of this is in line with the ANSI standard.
This is how these colour variables get used:
echo -e "\n${YELLOW}$(printf '*%.0s' {1..40}) All done $(printf '*%.0s' {1..40})${NC}\n"
The above is an example with yellow text, produced using the ${YELLOW} segment after the newline sequence (\n) that is activated by the -e switch passed to the echo command. The colouring is turned off by the ${NC} portion at the end of the text, again before a terminating newline sequence. One extra addition here is the part that outputs forty asterisks: $(printf '*%.0s' {1..40}). The %.0s conversion consumes each of the forty arguments while printing nothing, so the leading asterisk in the format string is emitted once per argument; a plain printf '*' {1..40} would produce only a single asterisk, since a format string without conversions never consumes its arguments.
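To avoid repeating the escape sequences, the variables can be folded into small logging helpers. The sketch below is my own illustration rather than part of the scripts being described, and it assumes the colour variables above have been defined:

# Reusable logging helpers built on the colour variables defined earlier
info()  { echo -e "${GREEN}[INFO]${NC} $*"; }
warn()  { echo -e "${YELLOW}[WARN]${NC} $*"; }
error() { echo -e "${RED}[ERROR]${NC} $*" >&2; }

info "Backup completed"
warn "Disk space is running low"
error "Could not reach the server"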
Latest developments in the AI landscape: Consolidation, implementation and governance
22nd November 2025

Artificial intelligence is moving through another moment of consolidation and capability gain. New ways to connect models to everyday tools now sit alongside aggressive platform plays from the largest providers, a steady cadence of model upgrades, and a more defined conversation about risk and regulation. For companies trying to turn all this into practical value, the story is becoming less about chasing the latest benchmark and more about choosing a platform, building the right connective tissue, and governing data use with care. The coming year looks set to reward those who simplify the user experience, embed AI directly into work and adopt proportionate controls rather than blanket bans.
I. Market Structure and Competitive Dynamics
Platform Consolidation and Lock-In
Enterprise AI appears to be settling into a two-platform market. Analysts describe a landscape defined more by integration and distribution than raw model capability, evoking the cloud computing wars. On one side sit Microsoft and OpenAI, on the other Google and Gemini. Recent signals include the pricing of Gemini 3 Pro at around two dollars per million tokens, which undercuts much of the market, Alphabet's share price strength, and large enterprise deals for Gemini integrated with Google's wider software suite. Google is also promoting Antigravity, an agent-first development environment with browser control, asynchronous execution and multi-agent support, an attempt to replicate the pull of VS Code within an AI-native toolchain.
The implication for buyers is higher switching costs over time. Few expect true multi-cloud parity for AI, and regional splits will remain. Guidance from industry commentators is to prioritise integration across the existing estate rather than incremental model wins, since platform choices now look like decade-long commitments. Events lined up for next year are already pointing to that platform view.
Enterprise Infrastructure Alignment
A wider shift in software development is also taking shape. Forecasts for 2026 emphasise parallel, multi-agent systems where a planning agent orchestrates a set of execution agents, and harnesses tune themselves as they learn from context. There is growing adoption of a mix-of-models approach in which expensive frontier models handle planning, and cheaper models do the bulk of execution, bringing near-frontier quality for less money and with lower latency. Team structures are changing as a result, with more value placed on people who combine product sense with engineering craft and less on narrow specialisms.
ServiceNow and Microsoft have announced a partnership to coordinate AI agents across organisations with tighter oversight and governance, an attempt to avoid the sprawl that plagued earlier automation waves. Nvidia has previewed Apollo, a set of open AI physics models intended to bring real-time fidelity to simulations used in science and industry. Albania has appointed an AI minister, which has kicked off debate about how governments should manage and oversee their own AI use. CIOs are being urged to lead on agentic AI as systems become capable of automating end-to-end workflows rather than single steps.
New companies and partnerships signal where capital and talent are heading. Jeff Bezos has returned to co-lead Project Prometheus, a start-up with $6.2 billion raised and a team of about one hundred hires from major labs, focused on AI for engineering and manufacturing in the physical world, an aim that aligns with Blue Origin interests. Vik Bajaj is named as co-CEO.
Deals underline platform consolidation. Microsoft and Nvidia are investing up to $5 billion and $10 billion respectively (totalling $15 billion) in Anthropic, whilst Anthropic has committed $30 billion in Azure capacity purchases with plans to co-design chips with Nvidia.
Commercial Model Evolution
Events and product launches continue at pace. xAI has released Grok 4.1 with an emphasis on creativity and emotional intelligence while cutting hallucinations. On the tooling front, tutorials explain how ChatGPT's desktop app can record meetings for later summarisation. In a separate interview, DeepMind's Demis Hassabis set out how Gemini 3 edges out competitors in many reasoning and multimodal benchmarks, slightly trails Claude Sonnet 4.5 in coding, and is being positioned for foundations in healthcare and education though not as a medical-grade system. Google is encouraging developers towards Antigravity for agentic workflows.
Industry leaders are also sketching commercial models that assume more agentic behaviour, with Microsoft's Satya Nadella promising a "positive-sum" vision for AI while hinting at per-agent pricing and wider access to OpenAI IP under Microsoft's arrangements.
II. Technical Implementation and Capability
Practical Connectivity Over Capability
A growing number of organisations are starting with connectors that allow a model to read and write across systems such as Gmail, Notion, calendars, CRMs, and Slack. Delivered via the Model Context Protocol, these links pull the relevant context into a single chat, so users spend less time switching windows and more time deciding what to do. Typical gains are in hours saved each week, lower error rates, and quicker responses. With a few prompts, an assistant can draft executive email summaries, populate a Notion database with leads from scattered sources, or propose CRM follow-ups while showing its working.
The cleanest path is phased: enable one connector using OAuth, trial it in read-only mode, then add simple routines for briefs, meeting preparation or weekly reports before switching on write access with a "show changes before saving" step. Enterprise controls matter here. Connectors inherit user permissions via OAuth 2.0, process data in memory, and vendors point to SOC 2, GDPR and CCPA compliance alongside allow and block lists, policy management, and audit logs. Many governance teams prefer to begin read-only and require approvals for writes.
There are limits to note, including API rate caps, sync delays, context window constraints and timeouts for long workflows. Connectors are poor fits for classified data, large bulk operations or transactions that cannot tolerate latency. Some industry observers regard Claude's current MCP implementation, particularly on desktop, as the most capable of the group. Playbooks for a 30-day rollout are beginning to circulate, as are practitioner workshops introducing go-to-market teams to these patterns.
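To make the connector idea concrete, the fragment below sketches how an MCP server can be registered in Claude Desktop's claude_desktop_config.json; the server name, package and token are hypothetical placeholders rather than any real integration:

{
  "mcpServers": {
    "crm-connector": {
      "command": "npx",
      "args": ["-y", "example-crm-mcp-server"],
      "env": {
        "CRM_API_TOKEN": "token-goes-here"
      }
    }
  }
}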
Agentic Orchestration Entering Production
Practical comparisons suggest the surrounding tooling can matter more than the raw model for building production-ready software. One report set a 15-point specification across several environments and found that Claude Code produced all features end-to-end. The same spec built with Gemini 3 inside Antigravity delivered two thirds of the features, while Sonnet 4.5 in Antigravity delivered a little more than half, with omissions around batching, progress indicators and robust error handling.
Security remains a live issue. One newsletter reports that Anthropic said state-backed Chinese hackers misused Claude to autonomously support a large cyberattack, which has intensified calls for governance. The background hum continues, from a jump in voice AI adoption to a German ruling on lyric copyright involving OpenAI, new video guidance steps in Gemini, and an experimental "world model" called Marble. Tools such as Yorph are receiving attention for building agentic data pipelines as teams look to productionise these patterns.
Tooling Maturity Defining Outcomes
In engineering practice, Google's Code Wiki brings code-aware documentation that stays in sync with repositories using Gemini, supported by diagrams and interactive chat. GitLab's latest survey suggests AI increases code creation but also pushes up demand for skilled engineers alongside compliance and human oversight. In operations, Chronosphere has added AI remediation guidance to cut observability noise and speed root-cause analysis while performance testing is shifting towards predictive, continuous assurance rather than episodic tests.
Vertical Capability Gains
While the platform picture firms up, model and product updates continue at pace. Google has drawn attention with a striking upgrade to image generation, based on Gemini 3. The system produces 4K outputs with crisp text across multiple languages and fonts, can use up to 14 reference images, preserves identity, and taps Google Search to ground data for accurate infographics.
Separately, OpenAI has broadened ChatGPT Group Chats to as many as 20 people across all pricing tiers, with privacy protections that keep group content out of a user's personal memory. Consumer advocates have used the moment to call out the risks of AI toys, citing safety, privacy and developmental concerns, even as news continues to flow from research and product teams, from the release of OLMo 3 to mobile features from Perplexity and a partnership between Stability and Warner Music Group.
Anthropic has answered with Claude Opus 4.5, which it says is the first model to break the 80 percent mark on SWE-Bench Verified while improving tool use and reasoning. Opus 4.5 is designed to orchestrate its smaller Haiku models and arrives with a price cut of roughly two thirds compared to the 4.1 release. Product changes include unlimited chat length, a Claude Code desktop app, and integrations that reach across Chrome and Excel.
OpenAI's additions have a more consumer flavour, with a Shopping Research feature in ChatGPT that produces personalised product guidance using a GPT-5 mini variant and plans for an Instant Checkout flow. In government, a new US executive order has launched the "Genesis Mission" under the Department of Energy, aiming to fuse AI capabilities across 17 national labs for advances in fields such as biotechnology and energy.
Coding tools are evolving too. OpenAI has previewed GPT-5.1-Codex-Max, which supports long-running sessions by compacting conversational history to preserve context while reducing overhead. The company reports 30 percent fewer tokens and faster performance over sessions that can run for more than a day. The tool is already available in the Codex CLI and IDE, with an API promised.
Infrastructure news out of the Middle East points to large-scale investment, with Saudi HUMAIN announcing data centre plans including xAI's first international facility alongside chips from Nvidia and AWS, and a nationwide rollout of Grok. In computer vision, Meta has released SAM 3 and SAM 3D as open-source projects, extending segmentation and enabling single-photo 3D reconstruction, while other product rollouts continue from GPT-5.1 Pro availability to fresh funding for audio generation and a marketing tie-up between Adobe and Semrush.
On the image side, observers have noted syntax-aware code and text generation alongside moderation that appears looser than some rivals. A playful "refrigerator magnet" prompt reportedly revealed a portion of the system prompt, a reminder that prompt injection is not just a developer concern.
Video is another area where capabilities are translating into business impact. Sora 2 can generate cinematic, multi-shot videos with consistent characters from text or images, which lets teams accelerate marketing content, broaden A/B testing and cut the need for studios on many projects. Access paths now span web, mobile, desktop apps and an API, and the market has already produced third-party platforms that promise exports without watermarks.
Teams experimenting with Sora are being advised to measure success by outcomes such as conversion rates, lower support loads or improved lead quality rather than just aesthetic fidelity. Implementation advice favours clear intent, structured prompts and iterative variation, with more advanced workflows assembling multi-shot storyboards, using match cuts to maintain rhythm, controlling lighting for continuity and anchoring character consistency across scenes.
III. Governance, Risk and Regulation
Governance as a Product Requirement
Amid all this activity, data risk has become a central theme for AI leaders. One governance specialist has consolidated common problem patterns into the PROTECT framework, which offers a way to map and mitigate the most material risks.
The first concern is the use of public AI tools for work content, which raises the chance of leakage or unwanted training on proprietary data. The recommended answer combines user guidance, approved internal alternatives, and technical or legal controls such as data scanning and blocking.
A second pressure point is rogue internal projects that bypass review, create compliance blind spots and build up technical debt. Proportionate oversight is key, calibrated to data sensitivity and paired with streamlined governance, so teams are not incentivised to route around it.
Third-party vendors can be opportunistic with data, so due diligence and contractual clauses need to prevent cross-customer training and make expectations clear with templates and guidance.
Technical attacks are another strand, from prompt injection to data exfiltration or the misuse of agents. Layered defences help here, including input validation, prompt sanitisation, output filtering, monitoring, red-teaming, and strict limits on access and privilege.
Embedded assistants and meeting bots come with permission risks when they operate over shared drives and channels, and agentic systems can amplify exposure if left unchecked, so the advice is to enforce least-privilege access, start on low-risk data, and keep robust audit trails.
Compliance risks span privacy laws such as GDPR with their demands for a lawful basis, IP and copyright constraints, contractual obligations, and the AI Act's emphasis on data quality. Legal and compliance checks need to be embedded at data sourcing, model training and deployment, backed by targeted training.
Finally, cross-border restrictions matter. Transfers should be mapped across systems and sub-processors, with checks for Data Privacy Framework certification, standard contractual clauses where needed, and transfer impact assessments that take account of both GDPR and newer rules such as the US Bulk Data Transfer Rule.
Regulatory Pragmatism
Regulators are not standing still, either. In the European Union, the Commission has proposed amendments to the AI Act through a Digital Omnibus package as the trilogue process rolls on. Six changes are in focus:
- High-risk timelines would be tied to the approval of standards, with a backstop of December 2027 for Annex III systems and August 2028 for Annex I products if delays continue, though the original August 2026 date still holds otherwise.
- Transparency rules on AI-detectable outputs under Article 50(2) would be delayed to February 2027 for systems placed on the market before August 2026, with no delay for newer systems.
- The plan removes the need to register Annex III systems in the public database where providers have documented under Article 6(3) that a system is not high risk.
- AI literacy would shift from a mandatory organisation-wide requirement to encouragement, except where oversight of high-risk systems demands it.
- There is also a move to centralise supervision by the AI Office for systems built on general-purpose models by the same provider, and for very large online platforms and search engines, which is intended to reduce fragmentation across member states.
- Finally, proportionality measures would define Small Mid-Cap companies and extend simplified obligations and penalty caps that currently apply to SMEs.
If adopted, the package would grant more time and reduce administrative load in some areas, at the expense of certainty and public transparency.
IV. Strategic Implications
The picture that emerges is one of pragmatic integration. Connectors make it feasible to keep work inside a single chat while drawing on the systems people already use. Platform choices are converging, so it makes sense to optimise for the suite that fits the current stack and to plan for switching costs that accumulate over time.
Agentic orchestration is moving from slides to code, but teams will get further by focusing on reliable tooling, clear governance and value measures that match business goals. Regulation is edging towards more flexible timelines and centralised oversight in places, which may lower administrative load without removing the need for discipline.
The sensible posture is measured experimentation: start with read-only access to lower-risk data, design routines that remove drudgery, introduce write operations with approvals, and monitor what is actually changing. The tools are improving quickly, yet the organisations that benefit most will be those that match innovation with proportionate controls and make thoughtful choices now that will hold their shape for the decade ahead.
Keyboard remapping on macOS with Karabiner-Elements for cross-platform work
20th November 2025

This is something that I have been planning to share for a while; working across macOS, Linux and Windows poses a challenge to muscle memory when it comes to keyboard shortcuts. Since the macOS set-up varies from the others, it was macOS that I set out to harmonise with the rest. Though the result is not full compatibility, it is close enough for my needs.
The need led me to install Karabiner-Elements and Karabiner-EventViewer. The latter has its uses for identifying which key is which on a keyboard, which happens to be essential when you are not using a Mac keyboard. While it is not needed all the time, the tool is a godsend when doing key mappings.
Karabiner-Elements is what holds the key mappings, and it needs to run all the time for them to be active. Some mappings are simple and others are complex; it helps that the website is laden with examples of the latter. Maybe that is also how an LLM can advise on setting things up. Before we come to the ones that I use, here are the simple mappings that are active on my Mac Mini:
left_command → left_control
left_control → left_command
This swaps the left-hand Command and Control keys while leaving their right-hand counterparts alone, preserving the original functionality for some cases while changing it for the keys that I use the most. However, I now find that I need to use the Command key in the Terminal instead of the Control counterpart that I used before the change, a counterintuitive situation that I overlook given how often the swap is needed in other places like remote Linux and Windows sessions.
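For reference, these simple mappings are stored in ~/.config/karabiner/karabiner.json under the active profile. The fragment below is indicative of how the swap appears there, though the Karabiner-Elements GUI writes this for you, so treat the exact structure as a sketch rather than gospel:

{
  "simple_modifications": [
    {
      "from": { "key_code": "left_command" },
      "to": [{ "key_code": "left_control" }]
    },
    {
      "from": { "key_code": "left_control" },
      "to": [{ "key_code": "left_command" }]
    }
  ]
}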
grave_accent_and_tilde → non_us_backslash
non_us_backslash → non_us_pound
non_us_pound → grave_accent_and_tilde
It took a while to get this three-way switch figured out, and it is a bit fiddly too. All the effort was in the name of getting backslash and hash (pound in the US) keys the right way around for me, especially in those remote desktop sessions. What made the thing really tricky was the need to deal with Shift key behaviour, which necessitated the following script:
{
"description": "Map grave/tilde key to # and ~ (forced behaviour, detects Shift)",
"manipulators": [
{
"conditions": [
{
"name": "shift_held",
"type": "variable_if",
"value": 1
}
],
"from": {
"key_code": "grave_accent_and_tilde",
"modifiers": { "optional": ["any"] }
},
"to": [{ "shell_command": "osascript -e 'tell application \"System Events\" to keystroke \"~\"'" }],
"type": "basic"
},
{
"conditions": [
{
"name": "shift_held",
"type": "variable_unless",
"value": 1
}
],
"from": {
"key_code": "grave_accent_and_tilde",
"modifiers": { "optional": ["any"] }
},
"to": [
{
"key_code": "3",
"modifiers": ["option"]
}
],
"type": "basic"
},
{
"from": { "key_code": "left_shift" },
"to": [
{
"set_variable": {
"name": "shift_held",
"value": 1
}
},
{ "key_code": "left_shift" }
],
"to_after_key_up": [
{
"set_variable": {
"name": "shift_held",
"value": 0
}
}
],
"type": "basic"
},
{
"from": { "key_code": "right_shift" },
"to": [
{
"set_variable": {
"name": "shift_held",
"value": 1
}
},
{ "key_code": "right_shift" }
],
"to_after_key_up": [
{
"set_variable": {
"name": "shift_held",
"value": 0
}
}
],
"type": "basic"
}
]
}
Here, I resorted to AI to help get this put in place. Even then, there was a good deal of toing and froing before the setup worked well. After that, it was time to get the quote (") and at (@) symbols assigned to where I was used to having them on a British English keyboard:
{
"description": "Swap @ and \" keys (Shift+2 and Shift+quote)",
"manipulators": [
{
"from": {
"key_code": "2",
"modifiers": {
"mandatory": ["shift"],
"optional": ["any"]
}
},
"to": [
{
"key_code": "quote",
"modifiers": ["shift"]
}
],
"type": "basic"
},
{
"from": {
"key_code": "quote",
"modifiers": {
"mandatory": ["shift"],
"optional": ["any"]
}
},
"to": [
{
"key_code": "2",
"modifiers": ["shift"]
}
],
"type": "basic"
}
]
}
The above possibly was one of the first changes that I made, and it took less time than some of those that came after it. There was another at the end that was simpler again: neutralising the Caps Lock key. That one came up while I was perusing the Karabiner-Elements website, so here it is:
{
"manipulators": [
{
"description": "Change caps_lock to command+control+option+shift.",
"from": {
"key_code": "caps_lock",
"modifiers": { "optional": ["any"] }
},
"to": [
{
"key_code": "left_shift",
"modifiers": ["left_command", "left_control", "left_option"]
}
],
"type": "basic"
}
]
}
That was the simplest of the lot to deploy, being a simple copy and paste effort. It also halted mishaps when butter-fingered actions on the keyboard activated capitals when I did not need them. While there are occasions when the facility would have its uses, I have not noticed its absence since putting this in place.
At the end of all the tinkering, I now have a set-up that works well for me. While possible enhancements may include changing the cursor positioning and corresponding highlighting behaviours, I am happy to leave these aside for now. Compatibility with British and Irish keyboards, together with smoother working in remote sessions, was what I sought, and I largely have that. Thus, I have no complaints so far.
Ansible automation for Linux Mint updates with repository failover handling
7th November 2025

Recently, I had a Microsoft repository outage disrupt an Ansible playbook-mediated upgrade process for my main Linux workstation. Thus, I ended up creating a failover for this situation, and the first step in the playbook was to define the affected repo:
vars:
microsoft_repo_url: "https://packages.microsoft.com/repos/code/dists/stable/InRelease"
The next move was to start defining tasks, with the first testing the repo to pick up any lack of responsiveness and flag that for subsequent operations.
tasks:
- name: Check Microsoft repository availability
uri:
url: "{{ microsoft_repo_url }}"
method: HEAD
return_content: no
timeout: 10
register: microsoft_repo_check
failed_when: false
- name: Set flag to skip Microsoft updates if unreachable
set_fact:
skip_microsoft_repos: "{{ microsoft_repo_check.status is not defined or microsoft_repo_check.status != 200 }}"
In the event of a failure, the next task was to disable the repo to allow other processing to take place. This was accomplished by temporarily renaming the relevant files under /etc/apt/sources.list.d/.
- name: Temporarily disable Microsoft repositories
become: true
shell: |
for file in /etc/apt/sources.list.d/microsoft*.list; do
[ -f "$file" ] && mv "$file" "${file}.disabled"
done
for file in /etc/apt/sources.list.d/vscode*.list; do
[ -f "$file" ] && mv "$file" "${file}.disabled"
done
when: skip_microsoft_repos | default(false)
changed_when: false
With that completed, the rest of the update actions could be performed near enough as usual.
- name: Update APT cache (retry up to 5 times)
apt:
update_cache: yes
register: apt_update_result
retries: 5
delay: 10
until: apt_update_result is succeeded
- name: Perform normal upgrade
apt:
upgrade: yes
register: apt_upgrade_result
retries: 3
delay: 10
until: apt_upgrade_result is succeeded
- name: Perform dist-upgrade with autoremove and autoclean
apt:
upgrade: dist
autoremove: yes
autoclean: yes
register: apt_dist_result
retries: 3
delay: 10
until: apt_dist_result is succeeded
After those, another renaming operation restores the earlier filenames to what they were.
- name: Re-enable Microsoft repositories
become: true
shell: |
for file in /etc/apt/sources.list.d/*.disabled; do
base="$(basename "$file" .disabled)"
if [[ "$base" == microsoft* || "$base" == vscode* || "$base" == edge* ]]; then
mv "$file" "/etc/apt/sources.list.d/$base"
fi
done
when: skip_microsoft_repos | default(false)
changed_when: false
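A refinement that I have yet to need would be to guarantee the re-enabling step even when an upgrade task fails partway through; Ansible's block construct with an always section covers that. The following is an indicative sketch that abbreviates the tasks already shown:

  - name: Upgrade with guaranteed repository restoration
    block:
      - name: Update APT cache
        apt:
          update_cache: yes
    always:
      - name: Re-enable Microsoft repositories
        become: true
        shell: |
          for file in /etc/apt/sources.list.d/*.disabled; do
            mv "$file" "/etc/apt/sources.list.d/$(basename "$file" .disabled)"
          done
        when: skip_microsoft_repos | default(false)
        changed_when: false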
Needless to say, this disabling only happens in the event of the availability check failing. Otherwise, the steps are skipped and everything else completes as it should. While there is a case for extending the repository disabling actions to other third-party repos as well, that is something that I will leave aside for now. Even this shows just how much can be done using Ansible playbooks and how much automation can be achieved. As it happens, I even get Flatpaks updated in much the same way:
- name: Ensure Flatpak is installed
apt:
name: flatpak
state: present
update_cache: yes
cache_valid_time: 3600
- name: Update Flatpak remotes
command: flatpak update --appstream -y
register: flatpak_appstream
changed_when: "'Now at' in flatpak_appstream.stdout"
failed_when: flatpak_appstream.rc != 0
- name: Update all Flatpak applications
command: flatpak update -y
register: flatpak_result
changed_when: "'Now at' in flatpak_result.stdout"
failed_when: flatpak_result.rc != 0
- name: Remove unused Flatpak applications
command: flatpak uninstall --unused -y
register: flatpak_cleanup
changed_when: "'Nothing' not in flatpak_cleanup.stdout"
failed_when: flatpak_cleanup.rc != 0
- name: Repair Flatpak installations
command: flatpak repair
register: flatpak_repair
changed_when: flatpak_repair.stdout is search('Repaired|Fixing')
failed_when: flatpak_repair.rc != 0
The ability to call system commands, as you see in the above sequence, is an added bonus, though getting the response detection completely sorted remains an outstanding task. All this has only scratched the surface of what is possible.