Technology Tales

Notes drawn from experiences in consumer and enterprise technology

AI & Data Science Jottings

19:46, 26th February 2026

Agent teams enable the coordination of multiple Claude Code instances that work together on shared tasks through centralised management and inter-agent messaging. This experimental feature, disabled by default, allows one session to act as a team lead that assigns work and synthesises results whilst teammates operate independently in their own context windows and communicate directly with each other. The approach proves most effective for parallel exploration tasks such as research, debugging with competing hypotheses, cross-layer coordination and developing new modules where teammates can operate independently without excessive coordination overhead. Unlike subagents that only report back to their creator, agent teams share a task list and allow direct interaction with individual teammates, though this comes at significantly higher token costs since each teammate runs as a separate Claude instance. Teams can operate in two display modes, either within the main terminal or across split panes using tmux or iTerm2, with teammates claiming tasks through file locking to prevent conflicts. The lead manages team creation, task assignment and delegation based on natural language instructions, whilst teammates can work in read-only plan mode, requiring approval before implementation begins. Current limitations include the inability to resume sessions with in-process teammates, occasional lag in task status updates, slow shutdown behaviour and the requirement that only the lead can manage team structure, with no support for nested teams or leadership transfer.
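The task-claiming mechanism mentioned above, where teammates use file locking to avoid grabbing the same task, is a general pattern worth sketching. The snippet below is an illustration of atomic lock-file claiming in that spirit, not Claude Code's actual format: the task names and directory layout are invented.

```python
# Illustrative sketch: claiming a task by atomically creating its lock file.
# Only one process can ever succeed, so no two workers take the same task.
import os
import tempfile

def claim_task(lock_dir: str, task_id: str) -> bool:
    """Atomically claim a task; returns False if another worker got it first."""
    path = os.path.join(lock_dir, f"{task_id}.lock")
    try:
        # O_CREAT | O_EXCL fails with FileExistsError if the lock file
        # already exists, making the claim atomic at the filesystem level.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.close(fd)
        return True
    except FileExistsError:
        return False

lock_dir = tempfile.mkdtemp()
print(claim_task(lock_dir, "task-1"))  # first claimer wins: True
print(claim_task(lock_dir, "task-1"))  # second claimer loses: False
```

The same create-exclusive trick works across any shared filesystem, which is why it suits loosely coordinated workers that only share a directory.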

19:26, 26th February 2026

SAS Viya Workbench, a cloud-based development environment, offers users the ability to write code in SAS, Python, or R through familiar tools such as Visual Studio Code or Jupyter Notebook. It provides instant access to advanced analytics capabilities, supports both SAS 9 and Viya procedures and integrates seamlessly with existing codebases. Designed for developers and data scientists, the platform prioritises efficiency, allowing teams to focus on experimentation and model development without the overhead of managing environments or ensuring consistency across platforms. Its split-plane architecture enhances data security, while the self-terminating cloud-native design ensures scalability and stability. The solution caters to a wide range of industries, from finance to healthcare, and distinguishes itself from broader platforms like SAS Viya by offering a lightweight, single-user workspace tailored for individual developers. Available on AWS and Microsoft Azure, it supports data connections to various sources and is accessible through private offers on cloud marketplaces. Users seeking to expand computational resources or streamline collaboration can explore its features as part of their analytical workflow.

19:20, 26th February 2026

Current AI systems, particularly large language models, face persistent issues such as hallucinations and unreliable performance despite significant investment in scaling models. These systems rely on statistical pattern recognition from vast datasets, which limits their ability to understand abstract rules or apply knowledge to novel situations, leading to errors in reasoning or logic. Scaling models has proven inefficient, costly and ethically problematic, with diminishing returns on reliability. An alternative approach, neurosymbolic AI, integrates neural networks with symbolic reasoning to extract and apply abstract rules, enhancing reliability, efficiency and explainability. This method allows systems to generalise beyond training data, reduce computational demands and support verifiable decision-making, addressing critical gaps in current AI. By combining the flexibility of learning with the precision of logical reasoning, neurosymbolic AI offers a more robust framework for developing trustworthy and adaptable systems, marking a potential shift in the evolution of artificial intelligence.
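The combination described above, a statistical learner proposing answers and a symbolic layer enforcing hard rules, can be illustrated with a toy sketch. Everything here is invented for demonstration: the proposal scores stand in for a neural model and the knowledge base holds the symbolic constraints.

```python
# Toy neurosymbolic loop: a "neural" component proposes scored answers and a
# symbolic rule layer vetoes any proposal that contradicts known facts.

def neural_propose(question: str) -> dict:
    # Stand-in for a learned model: candidate answers with confidence scores.
    return {"penguin can fly": 0.62, "penguin cannot fly": 0.38}

# Symbolic knowledge base of hard facts.
KNOWLEDGE = {("penguin", "is_a"): "bird", ("penguin", "can_fly"): False}

def symbolic_check(claim: str) -> bool:
    # Reject any claim that contradicts the knowledge base.
    if claim == "penguin can fly":
        return KNOWLEDGE[("penguin", "can_fly")] is True
    return True

def answer(question: str):
    # The highest-scoring proposal that survives the symbolic filter wins.
    for claim, _ in sorted(neural_propose(question).items(),
                           key=lambda kv: -kv[1]):
        if symbolic_check(claim):
            return claim
    return None

print(answer("Can a penguin fly?"))  # -> penguin cannot fly
```

Here the statistically favoured but wrong answer is filtered out by a rule, which is the reliability and verifiability gain the approach claims.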

19:13, 26th February 2026

Cybercriminals are increasingly exploiting generative artificial intelligence to enhance the sophistication and efficiency of their attacks, though the technology has thus far primarily improved productivity rather than creating entirely new attack methods. Attackers are leveraging generative AI to craft more convincing phishing emails with personalised content drawn from social media and other sources, develop malware with detailed code documentation suggesting AI assistance, and accelerate vulnerability discovery and exploitation, reducing the time from disclosure to exploit from 47 days to just 18 days according to one study. More concerning developments include the emergence of AI-orchestrated espionage campaigns that automate approximately 80 per cent of attack activities, the creation of unregulated large language models such as WormGPT and FraudGPT built without safety guardrails, and the theft of cloud credentials to hijack costly LLM resources for criminal purposes. Attackers are also employing deepfakes for social engineering through voice and video impersonation, using generative AI to create fraudulent advertising campaigns that impersonate legitimate brands, poisoning AI model memories with malicious data, and compromising AI infrastructure through supply chain attacks on servers and dependencies. Whilst these tools lower barriers to entry for less skilled criminals and enable faster execution of traditional attack methods, security experts note that AI-generated attacks still face fundamental limitations and have yet to produce completely novel exploit techniques, suggesting that defensive applications of AI combined with robust identity management and anomaly detection remain effective countermeasures against these evolving threats.
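The anomaly detection cited above as a countermeasure often amounts to flagging behaviour far outside a historical baseline. A minimal z-score sketch follows; the event counts and threshold are invented for illustration.

```python
# Toy anomaly detector: flag values more than z_threshold standard
# deviations from the historical mean.
import statistics

def anomalies(history, new_values, z_threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)  # sample standard deviation
    return [v for v in new_values if abs(v - mean) / stdev > z_threshold]

# e.g. daily API call counts per credential; a hijacked key spikes.
history = [100, 104, 98, 102, 99, 101, 103, 97]
print(anomalies(history, [101, 180, 99]))  # -> [180]
```

Real systems baseline many signals at once, but the principle of comparing new activity against learned norms is the same.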

19:09, 26th February 2026

Anthropic's introduction of Claude Code Security, an AI-driven tool that scans code for vulnerabilities and suggests patches, has sparked significant discussion within the cybersecurity community. While the feature is currently in a limited research preview and requires human oversight for final approval, it joins a growing list of AI-powered security initiatives by companies such as Amazon, Microsoft, Google and OpenAI, each developing tools to identify and address software flaws. These systems leverage large language models to detect vulnerabilities at scale, though their grasp of context remains limited and they rely on human judgment to validate fixes. Industry experts acknowledge the potential benefits of such tools in improving code quality and security, but also highlight ongoing challenges, including the need for transparency in performance metrics, the risk of false positives and the necessity of human involvement to ensure accuracy and mitigate potential oversights.

18:36, 18th February 2026

The notion that AI will decimate the job market for developers is greatly exaggerated. In fact, AI represents a platform shift that's changing what it looks like to build software and ushering in a period of enormous demand for ambitious, innovative and highly specialised code. This demand is driven by the imagination engine of the human mind, which constantly comes up with better ways of doing things. Each imagined future requires software to become reality, leading to new jobs and new approaches to existing ones. The changing nature of development work means that developers are shifting from writing every line of code by hand to orchestrating AI agents that generate code. New roles are emerging, such as AI orchestrators, prompt engineers and human-AI collaboration architects, which require an in-depth understanding of both traditional computer science fundamentals and how to work effectively with AI tools.

14:53, 29th January 2026

Claude has been integrated into Microsoft Excel as a beta feature available to Pro, Max, Team and Enterprise subscribers. The AI assistant can analyse entire workbooks, including complex formulas and dependencies across multiple tabs, whilst providing explanations with specific cell references for verification. Users can test different scenarios by updating assumptions throughout their models without disrupting existing formulas, with all changes clearly highlighted and explained. The tool can identify and help resolve common spreadsheet errors such as reference errors, value errors and circular dependencies by tracing them back to their origin. Additionally, Claude can generate draft financial models based on user requirements or populate existing templates with new data whilst preserving all formulas and structural elements.
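Tracing a circular dependency back to its origin, one of the error types mentioned above, is classic cycle detection over the cell dependency graph. The sketch below is a generic illustration, not Claude's implementation, and the formula graph is invented.

```python
# Minimal circular-reference tracer: model cells as a dependency graph and
# depth-first search for a back edge, returning the offending cycle path.

def find_cycle(deps: dict):
    """Return a list of cells forming a cycle, or None if the graph is acyclic."""
    WHITE, GREY, BLACK = 0, 1, 2       # unvisited / in progress / done
    colour = {cell: WHITE for cell in deps}
    path = []

    def visit(cell):
        colour[cell] = GREY
        path.append(cell)
        for dep in deps.get(cell, []):
            if colour.get(dep, WHITE) == GREY:      # back edge: cycle found
                return path[path.index(dep):] + [dep]
            if colour.get(dep, WHITE) == WHITE:
                found = visit(dep)
                if found:
                    return found
        path.pop()
        colour[cell] = BLACK
        return None

    for cell in deps:
        if colour[cell] == WHITE:
            found = visit(cell)
            if found:
                return found
    return None

# A1 depends on B1, B1 on C1 and C1 back on A1: a circular reference.
deps = {"A1": ["B1"], "B1": ["C1"], "C1": ["A1"], "D1": ["A1"]}
print(find_cycle(deps))  # -> ['A1', 'B1', 'C1', 'A1']
```

Reporting the full path, rather than just "circular reference detected", is what makes the error actionable for the user.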

10:50, 26th January 2026

Following user feedback that Claude Code was being applied to non-coding tasks, Anthropic has introduced Cowork, a simplified version designed for general productivity work rather than software development. Available initially as a research preview for Claude Max subscribers on macOS, Cowork allows users to grant Claude access to specific folders on their computers where it can read, edit and create files autonomously. The system can handle tasks such as reorganising downloads, generating spreadsheets from screenshots or drafting reports from notes, operating with greater independence than standard conversational interactions by making plans and executing them whilst keeping users informed of progress. Users can enhance Cowork's capabilities through existing connectors and newly added skills for document creation, and can combine it with Claude in Chrome for browser-based tasks.

Whilst users maintain control by selecting which folders Claude can access and receive prompts before significant actions occur, the system carries risks including potential file deletion through misinterpreted instructions and vulnerability to prompt injection attacks where malicious content might alter Claude's behaviour. The company plans to expand availability to other subscription tiers, add cross-device synchronisation and Windows support, and continue developing safety features based on feedback from this early release.

14:07, 22nd January 2026

After open weight language models made it cheaper and more practical to run capable systems outside proprietary platforms, many teams have found that hosting them locally still demands extreme hardware, so attention has shifted to specialist API providers that charge by tokens and remove most of the infrastructure burden. The piece compares several providers using benchmark and live performance observations, focusing on speed, latency, cost, accuracy and reliability: Cerebras stands out for very high throughput on large models; Fireworks AI and Groq emphasise very low latency suited to interactive and real-time agent use; Together.ai aims for broadly strong, steady production performance on conventional GPU infrastructure; and Clarifai targets enterprise needs with hybrid deployment control and cost management. A lower cost option, DeepInfra, is presented as suitable for batch or non-critical workloads, with the trade-off of weaker reliability than the leading services.
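The two headline metrics in such comparisons, time to first token (latency) and tokens per second (throughput), can be measured over any streaming token iterator. The sketch below uses a simulated stream; a real provider client's streaming generator could be passed in instead.

```python
# Measure time-to-first-token and sustained token throughput over any
# iterator of streamed tokens.
import time

def measure_stream(token_iter):
    start = time.perf_counter()
    first_token_at = None
    count = 0
    for _ in token_iter:
        if first_token_at is None:
            first_token_at = time.perf_counter()
        count += 1
    total = time.perf_counter() - start
    return {
        "ttft_s": first_token_at - start,              # time to first token
        "tokens_per_s": count / total if total else 0.0,
        "tokens": count,
    }

def fake_stream(n=50, delay=0.001):
    # Stand-in for a provider's streaming response.
    for i in range(n):
        time.sleep(delay)   # simulates network and generation time
        yield f"tok{i}"

stats = measure_stream(fake_stream())
print(stats["tokens"])  # 50 tokens counted
```

Running the same harness against each provider's stream, with identical prompts, is roughly how like-for-like latency and throughput figures are produced.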

10:42, 15th January 2026

As software systems grow more complex and delivery cycles shorten, requirements engineering is under pressure to stay rigorous while moving faster, and a recent article argues that artificial intelligence is increasingly being used to support that shift. It outlines how AI can help teams capture and refine requirements earlier through meeting transcription and summarisation, automatic drafting of user stories and acceptance criteria, clustering and sentiment analysis to surface disagreement and themes, and live translation to improve collaboration in distributed teams. It also describes tools that generate diagrams, detect duplicates, suggest tests, support traceability and predict change impacts, alongside general purpose assistants that help analysts brainstorm, rephrase and review specifications. Alongside these potential gains in efficiency and consistency, it stresses the need for careful governance around bias, privacy and transparency, with human oversight, clear accountability and compliance measures such as GDPR remaining essential.
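Duplicate detection, one of the tool capabilities listed above, can be sketched with simple bag-of-words cosine similarity; production tools use richer embeddings, and the requirements and threshold here are purely illustrative.

```python
# Flag likely duplicate requirements via bag-of-words cosine similarity.
import math
from collections import Counter

def cosine(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

reqs = [
    "the user shall be able to reset their password by email",
    "users must be able to reset passwords via email",
    "the system shall log all failed login attempts",
]

threshold = 0.5  # illustrative cut-off
pairs = [(i, j)
         for i in range(len(reqs)) for j in range(i + 1, len(reqs))
         if cosine(reqs[i], reqs[j]) > threshold]
print(pairs)  # likely duplicates: [(0, 1)]
```

Flagged pairs would then go to an analyst for review rather than being merged automatically, in line with the human-oversight point the article makes.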
