Technology Tales

Adventures in consumer and enterprise technology

AI infrastructure under pressure: Outages, power demands and the race for resilience

1st November 2025

The past few weeks brought a clear message from across the AI landscape: adoption is racing ahead, while the underlying infrastructure is working hard to keep up. A pair of major cloud outages in October offered a stark stress test, exposing just how deeply AI has become woven into daily services.

At the same time, there were significant shifts in hardware strategy, a wave of new tools for developers and creators and a changing playbook for how information is found online. There is progress on resilience and efficiency, yet the system is still bending under demand. Understanding where it held, where it creaked and where it is being reinforced sets the scene for what comes next.

Infrastructure Stress and Outages

The outages dominated early discussion. An AWS incident that lasted around 15 hours and disrupted more than a thousand services was followed nine days later by a global Azure failure. Each cascaded across the systems that depend on it, illustrating how AI now amplifies the consequences of platform problems.

This was less about a single point of failure and more about the growing blast radius when connected services falter. The effect on productivity was visible too: a separate 10-hour ChatGPT outage showed how quickly downtime in core AI tools now translates into lost work time.

Power Demand and Grid Strain

Behind the headlines sits a larger story about electricity, grids and planning. Data centres accounted for roughly 4% of US electricity use in 2024, about 183 TWh, and the International Energy Agency projects global data centre consumption of around 945 TWh by 2030, with AI as a principal driver.

The averages conceal stark local effects. Wholesale prices near dense clusters have spiked by as much as 267% at times, household bills are rising by about $16–$18 per month in affected areas, and capacity prices in the PJM market jumped from $28.92 to $329.17 per megawatt-day. The US grid faces an upgrade bill of about $720 billion by 2030, yet permitting and build timelines are long, creating a bottleneck just as demand accelerates.

Technical Grid Issues

Technical realities on the grid add another layer of challenge. Fast load swings from AI clusters, harmonic distortions and degraded power quality are no longer theoretical concerns. A Virginia incident in which 60 data centres disconnected simultaneously did not trigger a collapse but did reveal the fragility introduced by concentrated high-performance compute.

Security and New Failure Modes

Security risks are evolving in parallel. Agentic systems that can plan, reason and call tools open new failure modes. AI-enabled spear phishing appears to be 350% more effective than traditional attempts and could be 50 times more profitable, a worrying backdrop when outages already have a clear link to lost productivity.

Security considerations now reach into the tools people use to access AI as well. New AI browsers attract attention, and with that comes scrutiny. OpenAI's Atlas and Perplexity's Comet launched with promising features, yet researchers flagged critical issues.

Comet is vulnerable to "CometJacking", a malicious URL hijack that enables data theft, while Atlas suffered a cross-site request forgery weakness that allowed persistent code injection into ChatGPT memory. Both products have been noted for assertive data collection.

Caution and good hygiene are prudent until the fixes and policies settle. It is a reminder that the convenience of integrating models directly into browsing comes with a new attack surface.

Efficiency and Mitigation Strategies

Industry responses are gathering pace. Efficiency remains the first lever. Hyperscalers now report power usage effectiveness around 1.08 to 1.09, compared with more typical figures of 1.5 to 1.6. Direct chip cooling can cut energy needs by up to 40%.
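
For context, power usage effectiveness is simply total facility energy divided by the energy the IT equipment itself consumes, so those small decimals translate into large absolute savings at hyperscale. A rough illustration in Python, using a hypothetical 100 MW IT load:

    # PUE = total facility energy / IT equipment energy.
    # Compare a hyperscale figure with a more typical one for the
    # same hypothetical 100 MW of IT load.
    it_load_mw = 100

    for pue in (1.09, 1.55):
        total_mw = it_load_mw * pue
        overhead_mw = total_mw - it_load_mw
        print(f"PUE {pue}: {total_mw:.0f} MW drawn, "
              f"{overhead_mw:.0f} MW on cooling and power delivery")

At the same IT load, the typical facility spends roughly six times as much energy on overhead as the hyperscale one.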

Grid-interactive operations and more work at the edge offer ways to smooth demand and reduce concentration risk, while new power partnerships hint at longer-term change. Microsoft's agreement with Constellation on nuclear power is one example of how compute providers are thinking beyond incremental efficiency gains.

An emerging pattern is becoming visible through these efforts. Proactive regional planning and rapid efficiency improvements could allow computational output to grow by an order of magnitude, while power use merely doubles. More distributed architectures are being explored to reduce the hazard of over-concentration.

A realistic outlook sets data centres at around 3% of global electricity use by 2030, which is notable but still smaller than anticipated growth from electric vehicles or air conditioning. If the $720 billion in grid investment materialises, it could add around 120 GW of capacity by 2030, as much as half of which would be absorbed by data centres. The resilience gap is real, but it appears to be narrowing, provided the sector moves quickly to apply lessons from each failure.

Regional and Policy Responses

Regional policies are starting to encourage resilience too. Oregon's POWER Act asks operators to contribute to grid robustness, Singapore's tight focus on efficiency has delivered around a 30% power reduction even as capacity expands, and a moratorium in Dublin has pushed growth into more distributed build-outs. At the US federal level, the Department of Homeland Security updated frameworks after a 2024 watchdog warning, with AI risk programmes now in place for 15 of the 16 critical infrastructure sectors.

Hardware Competition and Strategy

Competition is sharpening. Anthropic deepened its partnership with Google Cloud to train on TPUs, a move that challenges Nvidia's dominance and signals a broader rebalancing in AI hardware. Nvidia's chief executive has acknowledged TPUs as robust competition.

Another fresh entry came from Extropic, which unveiled thermodynamic sampling units, a probabilistic chip design that claims up to 10,000-fold lower energy use than GPUs for AI workloads. Development kits are shipping and a Z-1 chip is planned for next year, yet as with any radical architecture, proof at scale will take time.

Nvidia, meanwhile, presented an ambitious outlook, targeting $500 billion in chip revenue by 2026 through its Blackwell and Rubin lines. The US Department of Energy plans seven supercomputers comprising more than 100,000 Blackwell GPUs and the company announced partnerships spanning pharmaceuticals, industrials and consumer platforms.

A $1 billion investment in Nokia hints at the importance of AI-centric networks. New open-source models and datasets accompanied the announcements, and the company's share price surged to a record.

Corporate Restructuring

Corporate strategy and hardware choices also entered a new phase. OpenAI completed its restructuring into a public benefit corporation, with a rebranded OpenAI Foundation holding around $130 billion in equity and allocating $25 billion to health and AI resilience. Microsoft's stake now sits at about 27% and is worth roughly $135 billion, with technology rights retained through 2032. Both parties have scope to work with other partners. OpenAI committed around $250 billion to Azure yet retains the ability to use other compute providers. An independent panel will verify claims of artificial general intelligence, an unusual governance step that will be watched closely.

Search and Discovery Evolution

Away from infrastructure, the way audiences find and trust information is shifting. Search is moving from the old aim of ranking for clicks to answer engine optimisation, where the goal is to be quoted by systems such as ChatGPT, Claude or Perplexity.

The numbers explain why. Google handled more than five trillion queries in 2024, while generative platforms now process around 37.5 million prompt-like searches per day. Google's AI Overviews, which surface summary answers above organic results, have reshaped click behaviour.

Independent analyses report top-ranking pages seeing click-through rates fall by roughly a third where Overviews appear, with some keywords faring worse, and a Pew study finds overall clicks on such results dropping from 15% to 8%. Zero-click searches rose from around 56% to 69% between May 2024 and May 2025.

Chegg's non-subscriber traffic fell by 49% in this period, part of an ongoing dispute with Google. Google counters that total engagement in covered queries has risen by about 10%. Whichever way one reads the data, the direction is clear: visibility is less about rank position and more about being cited by a summarising engine.

In practice, that means structuring content so that a model can parse, trust and attribute it. Clear Q&A-style sections with direct answers, followed by context and cited evidence, help models extract usable statements. Schema markup for FAQs and how-to content improves machine readability.
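
As a concrete sketch, FAQ markup of this kind is normally expressed as schema.org JSON-LD embedded in the page inside a script tag of type "application/ld+json"; the question and answer here are invented placeholders:

    {
      "@context": "https://schema.org",
      "@type": "FAQPage",
      "mainEntity": [{
        "@type": "Question",
        "name": "What is answer engine optimisation?",
        "acceptedAnswer": {
          "@type": "Answer",
          "text": "Structuring content so that AI systems can parse, trust and cite it."
        }
      }]
    }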

Measuring success also changes. Traditional analytics rarely show when an LLM quotes a source, so teams are turning to tools that track citations in AI outputs and tying those to conversion quality, branded search volume and more in-depth engagement with pricing or documentation. It is not a replacement for SEO so much as a layer that reinforces it in an AI-first environment.

Developer Tools and Agentic Workflows

On the tools front, developers saw an acceleration in agent-centred workflows. Cursor launched its first in-house coding model, Composer, which aims for near-frontier quality while generating code around four times faster, often in under 30 seconds.

The broader Cursor 2.0 update added multi-agent capabilities, with as many as eight assistants able to work in parallel, alongside an in-editor browser for testing and voice controls. The direction of travel is away from single-shot completions and towards orchestration and review. Tutorials are following suit, demonstrating how to scaffold tasks such as a Next.js to-do application using planning files, parallel agent tasks and quick integration, with voice prompts in the loop.

Open-source and enterprise ecosystems continue to expand. GitHub introduced Agent HQ for coordinating coding agents, Google released Pomelli to generate marketing campaigns and IBM's Granite 4.0 Nano models brought compact on-device options in the 350 million to 1.5 billion parameter range.

FlowithOS reported strong scores on agentic web tasks, while Mozilla announced an open speech dataset initiative, and Kilo Code, Hailuo 2.3 and other projects broadened choice across coding and video. Grammarly rebranded as Superhuman, adding "Superhuman Go" agents to speed up writing tasks.

Creative Tools and Partnerships

Creative workflows are evolving quickly, too. Adobe used its MAX event to add AI assistants to Photoshop and Express, previewed an agent called Project Moonlight, and upgraded Firefly with conversational "Prompt to Edit" controls, custom image models and new video features including soundtracks and voiceovers. Partnerships mean Gemini, Veo and Imagen will sit inside Adobe tools, and Premiere's editing capabilities now extend to YouTube Shorts.

Figma acquired Weavy and rebranded it as Figma Weave for richer creative collaboration, and Canva unveiled its own foundation "Design Model" alongside a Creative Operating System meant to produce fully editable, AI-generated designs. New Canva features take in a revised video suite, forms, data connectors, email design, a 3D generator and an ad creation and performance tool called Grow, while Affinity is relaunching as a free, integrated professional app. Other entrants are trying to blend model strengths: one agent was trailed with Sora 2 clip stitching, Veo 3.1 visuals and multimodel blending for faster design output.

Music rights and AI found a new footing. Universal Music Group settled a lawsuit with Udio, the AI music generator, and the two will form a joint venture to launch a licensed platform in 2026. Artists who opt in will be paid both for training models on their catalogues and for remixes. Udio disabled song downloads following the deal, which annoyed some users, and UMG also announced a "responsible AI" alliance with Stability AI to build tools for artists. These arrangements suggest a path towards sanctioned use of style and catalogue, with compensation built in from the start.

Research and Introspection

Research and science updates added depth. Anthropic reported that its Claude system shows limited introspection, detecting planted concepts only about 20% of the time, separating injected "thoughts" from text and modulating its internal focus. That highlights both the promise and limits of transparency techniques, and the potential for models to conceal or fail to surface certain internal states.

UC Berkeley researchers demonstrated an AI-driven load balancing algorithm with around 30% efficiency improvements, a result that could ripple through cloud performance. IBM ran quantum algorithms on AMD FPGAs, pointing to progress in hybrid quantum-classical systems.

OpenAI launched Atlas, its AI-integrated web browser, as a challenger to incumbents; Perplexity released a natural-language patent search; and OpenAI's Aardvark, a GPT-5-based security agent, entered private beta.

Anthropic opened a Tokyo office and signed a cooperation pact with Japan's AI Safety Institute. Tether released QVAC Genesis I, a large open STEM dataset of more than one million data points, along with a local workbench app aimed at making development more private and less dependent on big platforms.

Age Restrictions and Policy

Meanwhile, policy considerations are reaching consumer platforms. Character AI will restrict users under 18 from open-ended chatbot conversations from late November, replacing them with creative tools and adding behaviour-based age detection, a response to pressure and proposals such as the GUARD Act.

Takeaways

Put together, the picture is one of rapid interdependence and swift correction. The infrastructure is not breaking, but it is being stretched, and recent failures have usefully mapped the weak points. If the sector continues to learn quickly from its own missteps, the resilience gap will continue to narrow, and the next round of outages will be less disruptive than the last.

Investment is flowing into grids and cooling, policy is nudging towards resilience, and compute providers are hedging hardware bets by searching for efficiency and supply assurance. On the application layer, agents are becoming a primary interface for work, creative tools are converging around editability and control, and discovery is shifting towards being quoted by machines rather than clicked by humans.

Security lapses at the interface are a reminder that novelty often arrives before maturity. The most likely path from here is uneven but forward: data centre power may rise, yet efficiency and distribution can blunt the impact; answer engines may compress clicks, yet they can send higher-intent visitors to clear, well-structured sources; hardware competition may fragment the stack, yet it can also reduce concentration risk.

An Overview of MCP Servers in Visual Studio Code

29th August 2025

Agent mode in Visual Studio Code now supports an expanding ecosystem of Model Context Protocol servers that equip the editor’s built-in assistant with practical tools. By installing these servers, an agent can connect to databases, invoke APIs and perform automated or specialised operations without leaving the development environment. The result is a more capable workspace where routine tasks are streamlined, and complex ones are broken into more manageable steps. The catalogue spans developer tooling, productivity services, data and analytics, business platforms, and cloud or infrastructure management. If something you rely on is not yet present, there is a route to suggest further additions. Guidance on using MCP tools in agent mode is available in the documentation, and the Command Palette, opened with Ctrl+Shift+P, remains the entry point for many workflows.
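
As an illustrative sketch, wiring up a server generally amounts to a short JSON entry; in recent VS Code releases this lives in a workspace file such as .vscode/mcp.json, though the exact location and fields may vary by version. The GitHub entry below uses the publicly documented remote endpoint:

    {
      "servers": {
        "github": {
          "type": "http",
          "url": "https://api.githubcopilot.com/mcp/"
        }
      }
    }

Once the server is configured, its tools appear in agent mode's tool picker and can be invoked from conversation.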

The servers in the developer tools category concentrate on everyday software tasks. GitHub integration brings repositories, issues and pull requests into reach through a secure API, so that code review and project coordination can continue without switching context. For teams who use design files as a source of truth, Figma support extracts UI content and can generate code from designs, with the caveat that full functionality requires the latest desktop app version. Browser automation is covered by Playwright from Microsoft, which drives tests and data collection using accessibility trees to interact with the page, a technique that often results in more resilient scripts. The attention to quality and reliability continues with Sentry, where an agent can retrieve and analyse application errors or performance issues directly from Sentry projects to speed up triage and resolution.

The breadth of developer capability extends to machine learning and code understanding. Hugging Face integration provides access to models, datasets and Spaces on the Hugging Face Hub, which is useful for prototyping, evaluation or integrating inference into tools. For source exploration beyond a single repository, DeepWiki by Kevin Kern offers querying and information extraction from GitHub repositories indexed on that service. Converting documents is handled by MarkItDown from Microsoft, which takes common files like PDF, Word, Excel, images or audio and outputs Markdown, unifying content for notes, documentation or review. Finding accurate technical guidance is eased by Microsoft Docs, a Microsoft-provided server that searches Microsoft Learn, Azure documentation and other official technical resources. Complementing this is Context7 from Upstash, which returns up-to-date, version-specific documentation and code examples from any library or framework, an approach that addresses the common problem of answers drifting out of date as software evolves.

Visual assets and code health have their own role. ImageSorcery by Sunrise Apps performs local image processing tasks, including object detection, OCR, editing and other transformations, a capability that supports anything from quick asset tweaks to automated checks in a content pipeline. Codacy completes the developer picture with comprehensive code quality and security analysis. It covers static application security testing, secrets detection, dependency scanning, infrastructure as code security and automated code review, which helps teams maintain standards while moving quickly.

Productivity services focus on planning, tracking and knowledge capture. Notion’s server allows viewing, searching, creating and updating pages and databases, meaning an agent can assemble notes or checklists as it progresses. Linear integration brings the ability to create, update and track issues in Linear’s project management platform, reflecting a growing preference for lightweight, developer-centred planning. Asana support provides task and project management together with comments, allowing multi-team coordination. Atlassian’s server connects to Jira and Confluence for issue tracking and documentation, which suits organisations that rely on established workflows for governance and audit trails. Monday.com adds another project management option, with management of boards, items, users, teams and workspace operations. These capabilities sit alongside automation from Zapier, which can create workflows and execute tasks across more than 30,000 connected apps to remove repetitive steps and bind systems together when native integrations are limited.

Two Model Context Protocol utilities add cognitive structure to how the agent works. Sequential Thinking helps break down complex tasks into manageable steps with transparent tracking, so progress is visible and revisable. Memory provides long-lived context across sessions, allowing an agent to store and retrieve relevant information rather than relying on a single interaction. Together, they address the practicalities of working on multi-stage tasks where recalling decisions, constraints or partial results is as important as executing the next action. Used with the productivity servers, these tools underpin a systematic approach to projects that span hours or days.

The data and analytics group is comprehensive, stretching from lightweight local analysis to cloud-scale services. DuckDB by Kentaro Tanaka enables querying and analysis of DuckDB databases both locally and in the cloud, which suits ad hoc exploration as well as embedded analytics in applications. Neon by neondatabase labs provides access to Postgres with the notable addition of natural language operations for managing and querying databases, which lowers the barrier to occasional administrative tasks. Prisma Postgres from Prisma brings schema management, query execution, migrations and data modelling to the agent, supporting teams who already use Prisma’s ORM in their applications. MongoDB integration supports database operations and management, with the ability to execute queries, manage collections, build aggregation pipelines and perform document operations, allowing front-end and back-end tasks to be coordinated through a single interface.
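
To make the DuckDB case concrete, the queries such a server issues on the agent's behalf are ordinary SQL over local files; a minimal Python sketch, with an illustrative CSV file name:

    import duckdb  # pip install duckdb

    # Query a local CSV directly; DuckDB infers the schema on the fly.
    result = duckdb.sql("""
        SELECT status, COUNT(*) AS hits
        FROM read_csv_auto('access_log.csv')
        GROUP BY status
        ORDER BY hits DESC
    """)
    print(result)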

Observability and product insight are also represented. PostHog offers analytics access for creating annotations and retrieving product usage insights so that changes can be correlated with user behaviour. Microsoft Clarity provides analytics data including heatmaps, session recordings and other user behaviour insights that complement quantitative metrics and highlight usability issues. Web data collection has two strong options. Apify connects the agent with Apify’s Actor ecosystem to extract data from websites and automate broader workflows built on that platform. Firecrawl by Mendable focuses on extracting data from websites using web scraping, crawling and search with structured data extraction, a combination that suits building datasets or feeding search indexes. These tools bridge real-world usage and the development cycle, keeping decision-making grounded in how software is experienced.

The business services category addresses payments, customer engagement and web presence. Stripe integration allows the creation of customers, management of subscriptions and generation of payment links through Stripe APIs, which is often enough to pilot monetisation or administer accounts. PayPal provides the ability to create invoices, process payments and access transaction data, ensuring another widely used channel can be managed without bespoke scripts. Square rounds out payment options with facilities to process payments and manage customers across its API ecosystem. Intercom support brings access to customer conversations and support tickets for data analysis, allowing an agent to summarise themes, surface follow-ups or route issues to the right place. For building and running sites, Wix integration helps with creating and managing sites that include e-commerce, bookings and payment features, while Webflow enables creating and managing websites, collections and content through Webflow’s APIs. Together, these options cover a spectrum of online business needs, from storefronts to content-led marketing.

Cloud and infrastructure operations are often the backbone of modern projects, and the MCP catalogue reflects this. Convex provides access to backend databases and functions for real-time data operations, making it possible to work with stateful server logic directly from agent mode. Azure integration supports management of Azure resources, database queries and access to Azure services so that provisioning, configuration and diagnostics can be performed in context. Azure DevOps extends this to project and release processes with management of projects, work items, repositories, builds, releases and test plans, providing an end-to-end view for teams invested in Microsoft’s tooling. Terraform from HashiCorp introduces infrastructure as code management, including plan, apply and destroy operations, state management and resource inspection. This combination makes it feasible to review and adjust infrastructure, coordinate deployments and correlate changes with code or issue history without switching tools.

These servers are designed to be installed like other VS Code components, visible from the MCP section and accessible in agent mode once configured. Many entries provide a direct route to installation, so setup friction is limited. Some include specific requirements, such as Figma’s need for the latest desktop application, and all operate within the Model Context Protocol so that the agent can call tools predictably. The documentation explains usage patterns for each category, from parameterising database queries to invoking external APIs, and clarifies how capabilities appear inside agent conversations. This is useful for understanding the scope of what an agent can do, as well as for setting boundaries in shared environments.
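
That predictability comes from the protocol itself: MCP requests are JSON-RPC 2.0 messages, so a tool invocation from the agent has the same basic shape regardless of which server handles it. A minimal sketch, with an invented tool name and arguments:

    {
      "jsonrpc": "2.0",
      "id": 42,
      "method": "tools/call",
      "params": {
        "name": "execute_query",
        "arguments": { "sql": "SELECT COUNT(*) FROM orders" }
      }
    }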

In day-to-day use, the value comes from combining servers to match a workflow. A developer investigating a production incident might consult Sentry for errors, query Microsoft Docs for guidance, pull related issues from GitHub and draft changes to documentation with MarkItDown after analysing logs held in DuckDB. A product manager could retrieve usage insights from PostHog, review session recordings in Microsoft Clarity, create follow-up tasks in Linear and brief customer support by summarising Intercom conversations, all while keeping a running Memory of key decisions. A data practitioner might gather inputs from Firecrawl or Apify, store intermediates in MongoDB, perform local analysis in DuckDB and publish a report to Notion, building a repeatable chain with Zapier where steps can be automated. In infrastructure scenarios, Terraform changes can be planned and applied while Azure resources are inspected, with release coordination handled through Azure DevOps and updates documented in Confluence via the Atlassian server.

Security and quality concerns are woven through these flows. Codacy can evaluate code for vulnerabilities or antipatterns as changes are proposed, surfacing SAST findings, secrets detection problems or dependency risks before they progress. Stripe, PayPal and Square centralise payment operations to a few well-audited APIs rather than bespoke integrations, which reduces surface area and simplifies auditing. For content and data ingestion, ImageSorcery ensures that image transformations occur locally and MarkItDown produces traceable Markdown outputs from disparate file types, keeping artefacts consistent for reviews or archives. Sequential Thinking helps structure longer tasks, and Memory preserves context so that actions are explainable after the fact, which is helpful for compliance as well as everyday collaboration.

Discoverability and learning resources sit close to the tools themselves. The Visual Studio Code website’s navigation surfaces areas such as Docs, Updates, Blog, API, Extensions, MCP, FAQ and Dev Days, while the Download path remains clear for new installations. The MCP area groups servers by capability and links to documentation that explains how agent mode calls each tool. Outside the product, the project’s presence on GitHub provides a route to raise issues or follow changes. Community activity continues on channels including X, LinkedIn, Bluesky and Reddit, and there are broadcast updates through the VS Code Insiders Podcast, TikTok and YouTube. These outlets provide context for new server additions, changes to the protocol and examples of how teams are putting the pieces together, which can be as useful as the tools themselves when establishing good practices.

It is worth noting that the catalogue is curated but open to expansion. If there is an MCP server that you expect to see, there is a path to suggest it, so gaps can be addressed over time. This flows from the protocol’s design, which encourages clean interfaces to external systems, and from the way agent mode surfaces capabilities. The cumulative effect is that the assistant inside VS Code becomes a practical co-worker that can search documentation, change infrastructure, file issues, analyse data, process payments or summarise customer conversations, all using the same set of controls and the same context. The common protocol keeps these interactions predictable, so adding a new server feels familiar even when the underlying service is new.

As the ecosystem grows, the connection between development work and operations becomes tighter, and the assistant’s job is less about answering questions in isolation than orchestrating tools on the developer’s behalf. The MCP servers outlined here provide a foundation for that shift. They encapsulate the services that many teams already rely on and present them inside agent mode so that work can continue where the code lives. For those getting started, the documentation explains how to enable the tools, the Command Palette offers quick access, and the community channels provide a steady stream of examples and updates. The result is a VS Code experience that is better equipped for modern workflows, with MCP servers supplying the functionality that turns agent mode into a practical extension of everyday work.
