Technology Tales

Notes drawn from experiences in consumer and enterprise technology

AI & Data Science Jottings

12:55, 20th March 2026

The AI Act is undergoing several key revisions aimed at refining its regulatory framework. One major change involves expanding the scope of when sensitive personal data can be processed for bias detection and correction, extending beyond high-risk AI systems to other AI models and deployers. This adjustment lowers the threshold for data usage from "strictly necessary" to "necessary," though the Council has proposed narrower conditions compared to the Commission’s plan. Another significant shift is the centralisation of oversight for AI systems built on general-purpose AI models. The European AI Office would gain exclusive authority over such systems when developed by the same provider, though exceptions remain for sectors like law enforcement, financial services and critical infrastructure, which would retain national regulatory control.

The Act also introduces proportionality measures for small mid-cap enterprises, aligning their penalty caps with those of SMEs, and this includes simplified technical documentation requirements for high-risk AI system providers within that category. The Council has tempered some of the Commission's proposals, particularly in areas involving national security and critical services. Taken together, these changes balance tighter oversight in high-risk domains against flexibility for smaller organisations, while also addressing concerns around data privacy and regulatory overlap.

20:23, 4th March 2026

KDnuggets is a long-established online platform focused on data science, machine learning, artificial intelligence and analytics, founded in 1997 by Gregory Piatetsky-Shapiro. It provides a range of content including articles, tutorials and industry insights, curated by a team of editors and contributors with expertise in various technical and academic areas. The site has received recognition from multiple organisations for its influence and contributions to the field, and it maintains a substantial audience through email subscriptions and social media.

20:22, 4th March 2026

Kaggle hosts a community of over 30 million users, including data scientists, researchers and AI developers, who engage in evaluating and advancing machine learning through competitions, collaborative projects and open-source resources. The platform provides access to extensive datasets, pre-trained models and benchmarking tools to assess model performance, alongside a range of learning materials and courses to develop practical skills. It supports initiatives such as crowdsourced evaluations, research benchmarks and industry-specific challenges, fostering innovation in areas like natural language processing, computer vision and enterprise workflows. Kaggle also facilitates knowledge sharing through public notebooks, solution write-ups and discussions, enabling users to explore techniques, share insights and participate in events that test the capabilities of AI systems in real-world scenarios.

09:58, 28th February 2026

SAS Curiosity

A collection of projects and initiatives explores how data, artificial intelligence and analytics are being applied across a range of social and environmental challenges. Examples include efforts to protect endangered right whales and sea turtles using AI and digital twins, the use of analytics to lift families out of poverty and to reduce food waste in cheese production, research into how people around the world spend their time and the implications for health, and work to make AI education more accessible to historically underserved countries. Further examples, such as a medical student's city travel initiative and an AI-powered batting laboratory, show how individual passion projects demonstrate the broader potential for data-driven innovation in everyday life.

19:46, 26th February 2026

Agent teams enable the coordination of multiple Claude Code instances that work together on shared tasks through centralised management and inter-agent messaging. This experimental feature, disabled by default, allows one session to act as a team lead that assigns work and synthesises results whilst teammates operate independently in their own context windows and communicate directly with each other.

The approach proves most effective for parallel exploration tasks such as research, debugging with competing hypotheses, cross-layer coordination and developing new modules, where teammates can operate independently without excessive coordination overhead. Unlike subagents, which only report back to their creator, agent teams share a task list and allow direct interaction with individual teammates, though this comes at significantly higher token costs since each teammate runs as a separate Claude instance.

Teams can operate in two display modes, either within the main terminal or across split panes using tmux or iTerm2, with teammates claiming tasks through file locking to prevent conflicts. The lead manages team creation, task assignment and delegation based on natural language instructions, whilst teammates can work in read-only plan mode, requiring approval before implementation begins. Current limitations include the inability to resume sessions with in-process teammates, occasional lag in task status updates, slow shutdown behaviour and the requirement that only the lead can manage team structure, with no support for nested teams or leadership transfer.

19:26, 26th February 2026

SAS Viya Workbench, a cloud-based development environment, offers users the ability to write code in SAS, Python, or R through familiar tools such as Visual Studio Code or Jupyter Notebook. It provides instant access to advanced analytics capabilities, supports both SAS 9 and Viya procedures and integrates seamlessly with existing codebases. Designed for developers and data scientists, the platform prioritises efficiency, allowing teams to focus on experimentation and model development without the overhead of managing environments or ensuring consistency across platforms. Its split-plane architecture enhances data security, while the self-terminating cloud-native design ensures scalability and stability. The solution caters to a wide range of industries, from finance to healthcare, and distinguishes itself from broader platforms like SAS Viya by offering a lightweight, single-user workspace tailored for individual developers. Available on AWS and Microsoft Azure, it supports data connections to various sources and is accessible through private offers on cloud marketplaces. Users seeking to expand computational resources or streamline collaboration can explore its features as part of their analytical workflow.

19:20, 26th February 2026

Current AI systems, particularly large language models, face persistent issues such as hallucinations and unreliable performance despite significant investment in scaling models. These systems rely on statistical pattern recognition from vast datasets, which limits their ability to understand abstract rules or apply knowledge to novel situations, leading to errors in reasoning or logic. Scaling models has proven inefficient, costly and ethically problematic, with diminishing returns on reliability. An alternative approach, neurosymbolic AI, integrates neural networks with symbolic reasoning to extract and apply abstract rules, enhancing reliability, efficiency and explainability. This method allows systems to generalise beyond training data, reduce computational demands and support verifiable decision-making, addressing critical gaps in current AI. By combining the flexibility of learning with the precision of logical reasoning, neurosymbolic AI offers a more robust framework for developing trustworthy and adaptable systems, marking a potential shift in the evolution of artificial intelligence.

19:13, 26th February 2026

Cybercriminals are increasingly exploiting generative artificial intelligence to enhance the sophistication and efficiency of their attacks, though the technology has thus far primarily improved productivity rather than creating entirely new attack methods. Attackers are leveraging generative AI to craft more convincing phishing emails with personalised content drawn from social media and other sources, develop malware with detailed code documentation suggesting AI assistance, and accelerate vulnerability discovery and exploitation, reducing the time from disclosure to exploit from 47 days to just 18 days according to one study. More concerning developments include the emergence of AI-orchestrated espionage campaigns that automate approximately 80 per cent of attack activities, the creation of unregulated large language models such as WormGPT and FraudGPT built without safety guardrails, and the theft of cloud credentials to hijack costly LLM resources for criminal purposes.

Attackers are also employing deepfakes for social engineering through voice and video impersonation, using generative AI to create fraudulent advertising campaigns that impersonate legitimate brands, poisoning AI model memories with malicious data, and compromising AI infrastructure through supply chain attacks on servers and dependencies. Whilst these tools lower barriers to entry for less skilled criminals and enable faster execution of traditional attack methods, security experts note that AI-generated attacks still face fundamental limitations and have yet to produce completely novel exploit techniques, suggesting that defensive applications of AI combined with robust identity management and anomaly detection remain effective countermeasures against these evolving threats.
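One of the countermeasures mentioned above, anomaly detection, can be sketched in its simplest statistical form. This is a toy illustration with made-up numbers, not a production detector; real systems draw on many more signals such as identity, geolocation and timing patterns:

```python
# Toy anomaly detector: flag observations whose z-score against the
# sample mean exceeds a threshold k. The data below are hypothetical
# hourly login counts; the burst of 95 is the planted anomaly.
from statistics import mean, stdev

def flag_anomalies(values, k=2.0):
    mu, sigma = mean(values), stdev(values)
    return [i for i, v in enumerate(values)
            if sigma > 0 and abs(v - mu) / sigma > k]

logins = [12, 11, 13, 12, 10, 11, 95, 12]
print(flag_anomalies(logins))  # prints [6], the index of the burst
```

Even this crude baseline shows why the technique helps against AI-accelerated attacks: speed and volume are precisely what make malicious activity stand out from a quiet statistical baseline.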

19:09, 26th February 2026

Anthropic's introduction of Claude Code Security, an AI-driven tool that scans code for vulnerabilities and suggests patches, has sparked significant discussion within the cybersecurity community. While the feature is currently in a limited research preview and requires human oversight for final approval, it joins a growing list of AI-powered security initiatives by companies such as Amazon, Microsoft, Google and OpenAI, each developing tools to identify and address software flaws. These systems leverage large language models to detect vulnerabilities at scale, though they still depend on human judgment to validate proposed fixes. Industry experts acknowledge the potential benefits of such tools in improving code quality and security, but also highlight ongoing challenges, including the need for transparency in performance metrics, the risk of false positives and the necessity of human involvement to ensure accuracy and mitigate potential oversights.

10:31, 23rd February 2026

PumasAI is a pharmaceutical technology company based in Dover, Delaware, that develops data analytics tools designed to support drug development and healthcare delivery. Its flagship software, Pumas, has been used in over 26 successful regulatory submissions, and the company claims to have saved its 60-plus clients a combined total of one billion dollars. The firm has recently released version 2.8 of its platform, which includes a new feature called PumasAide, and has been recognised at the Biotechnology Awards for its contributions to the pharmaceutical industry. Alongside its software products, the company offers consulting services aimed at helping clients navigate the regulatory approval process, as well as complimentary access to its modelling tools for those engaged in non-commercial research and education.
