The enterprise AI landscape took a significant step forward this week as Infosys, one of the world’s largest IT services firms, announced a strategic collaboration with Anthropic to build and deploy AI solutions tailored for complex, heavily regulated industries. The partnership, revealed on 17 February 2026, signals a growing recognition that the real challenge of enterprise AI is not building powerful models but making them work reliably in industries where the stakes are highest.
From Demos to Deployment
The collaboration pairs Anthropic’s Claude models, including Claude Code, with Infosys Topaz, the company’s AI-first suite of services and platforms. Together, the two companies plan to develop AI agents capable of handling multi-step, domain-specific tasks across telecommunications, financial services, manufacturing, and software development.
Anthropic CEO Dario Amodei framed the partnership around a problem that has dogged enterprise AI adoption for years: the gap between what works in a controlled environment and what works in a real business. Infosys, with its deep operational knowledge across industries like telecom and financial services, brings the kind of domain expertise needed to close that gap. The company’s developers are already using Claude Code internally, building firsthand experience that will feed directly into client projects.
Infosys CEO Salil Parekh struck a broader tone, describing the collaboration as part of a shift in how entire industries operate. In his view, the goal is not incremental efficiency but a fundamental reimagining of enterprise operating models, making organisations more intelligent, resilient, and responsible through AI.
The Rise of Agentic AI
At the heart of the partnership is a bet on agentic AI: systems designed not just to answer questions or generate text, but to independently carry out complex, multi-step processes. Consider an AI agent that does not just flag a suspicious transaction but investigates it, cross-references compliance requirements, drafts a report, and routes it for review. Or one that does not just suggest a code fix but writes, tests, and debugs it end to end.
Using tools like the Claude Agent SDK, the two companies plan to build agents that can work persistently across long workflows rather than handling isolated, one-off interactions. This represents a meaningful evolution from the chatbot-era AI that most enterprises have deployed so far, moving towards systems that function more like autonomous digital workers embedded in business processes.
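The pattern described above, an agent that persists state across a multi-step workflow and dispatches work to tools, can be sketched in miniature. The sketch below is illustrative only: the tool names, the compliance scenario, and the control loop are hypothetical and are not the Claude Agent SDK's actual API.

```python
# Minimal, illustrative agent loop: a plan of steps is dispatched to
# registered "tools", and each step's result carries forward into the
# next -- the persistent, multi-step pattern the partnership targets.
# Everything here is a hypothetical sketch, not the real SDK.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    tools: dict[str, Callable[[dict], dict]]
    memory: list[dict] = field(default_factory=list)  # persists across steps

    def run(self, plan: list[tuple[str, dict]]) -> list[dict]:
        for tool_name, args in plan:
            # Each step sees the accumulated context from earlier steps,
            # rather than being an isolated, one-off interaction.
            result = self.tools[tool_name]({**args, "context": list(self.memory)})
            self.memory.append({"tool": tool_name, "result": result})
        return self.memory

# Hypothetical compliance tools, mirroring the article's example of
# investigating a flagged transaction end to end.
def investigate(args: dict) -> dict:
    return {"risk": "high" if args["amount"] > 10_000 else "low"}

def draft_report(args: dict) -> dict:
    risk = args["context"][-1]["result"]["risk"]
    return {"report": f"Transaction flagged: risk={risk}", "route_to": "reviewer"}

agent = Agent(tools={"investigate": investigate, "draft_report": draft_report})
trace = agent.run([("investigate", {"amount": 25_000}), ("draft_report", {})])
```

The point of the sketch is the shape, not the logic: the agent, rather than the caller, threads context from one step into the next, which is what distinguishes a long-running workflow agent from a chatbot answering isolated prompts.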
Industry-Specific Ambitions
The collaboration will launch first in telecommunications, an industry defined by operational complexity, legacy infrastructure, and heavy regulation. A dedicated Anthropic Centre of Excellence will focus on building AI agents for network operations, customer lifecycle management, and service delivery.
From there, the partnership will expand into financial services, where AI agents will target risk assessment, compliance reporting, and personalised customer interactions. In manufacturing and engineering, the focus shifts to accelerating product design and simulation, helping engineers compress R&D timelines by testing more iterations before committing to production. And in software development, teams will use Claude Code to move faster from design through deployment, with Infosys already piloting this approach within its own engineering organisation.
Why It Matters
This collaboration reflects a broader trend in enterprise AI: the convergence of frontier model capabilities with deep industry knowledge. As AI models have grown more capable, the bottleneck has shifted from raw intelligence to practical deployment: understanding regulatory constraints, integrating with legacy systems, and building trust with stakeholders who need to know exactly how and why an AI reached a particular decision.
For Anthropic, the partnership extends its reach into enterprise markets where safety, governance, and transparency are not just nice-to-haves but regulatory requirements. For Infosys, it offers a way to differentiate its AI practice with access to frontier models and agentic tooling that can be deployed at scale across its global client base.
The real test, as always, will be in execution. Building AI agents that can reliably operate in regulated environments, where errors carry real consequences, demands a level of rigour that goes well beyond what most organisations have achieved so far. But if the collaboration delivers on its ambitions, it could set a template for how large enterprises bring AI from the lab into the heart of their operations.