
25 Integration Insights from 2025: The Critical Infrastructure for AI

Sean Matthews
12 min read

Last updated: January 10, 2026

For all of AI's promise, none of it works without integrations. Data integrations, MCP Apps, and MCP UI are the critical path for AI to deliver real value.

Left Hook

TL;DR

The big story: AI needs integrations to work. MCP standardized how AI uses tools. And billions flowed into connectivity infrastructure.

For product owners: If your product doesn't have MCP support by mid-2026, you're invisible to AI assistants, and increasingly, to the humans who use them.

For enterprises: Your integration infrastructure determines what AI can actually do for your organization.

For integration builders: AI will discover patterns and harden them into deterministic code, creating an explosion of micro-integrations. Complex, mission-critical integrations remain irreplaceable.

The counter-hype reality: 95% of enterprise AI pilots fail to deliver ROI. The vibe-coded apps built by prompting Cursor without understanding the underlying systems? They're already breaking. 2026 is cleanup year.


Why Integrations Are the Critical Path for AI

Insight #1: AI can't deliver without connectivity.

Every agentic workflow, every AI assistant that "takes action," every automation that spans multiple systems: they all depend on integrations. The promise of AI is that it can orchestrate work across your tools. The reality is that orchestration requires connections.

This shows up in three layers:

  • Data integrations remain foundational, syncing information between systems
  • MCP Apps let AI assistants use external tools through a standard protocol
  • MCP UI (emerging) brings interactive experiences directly into chat interfaces, controlled by external developers

When someone asks "why can't Claude book a meeting and update Salesforce?", the answer is almost always an integration gap.
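The MCP layer in the middle is concrete: an MCP server advertises tools as named operations with JSON Schema inputs, and the assistant invokes them by name. A rough sketch of that shape in plain Python; the tool name, fields, and validator here are illustrative, not taken from any real server:

```python
# Sketch of an MCP-style tool definition: a name, a description the
# model reads, and a JSON Schema describing the expected arguments.
book_meeting_tool = {
    "name": "book_meeting",
    "description": "Book a calendar meeting and log it to the CRM.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "attendee": {"type": "string"},
            "duration_minutes": {"type": "integer"},
        },
        "required": ["attendee", "duration_minutes"],
    },
}

def validate_call(tool: dict, args: dict) -> list[str]:
    """Minimal argument check against the tool's schema (required keys only)."""
    schema = tool["inputSchema"]
    missing = [k for k in schema.get("required", []) if k not in args]
    return [f"missing argument: {k}" for k in missing]

# A well-formed call passes; a partial one is rejected before any side effect.
print(validate_call(book_meeting_tool,
                    {"attendee": "sam@example.com", "duration_minutes": 30}))
print(validate_call(book_meeting_tool, {"attendee": "sam@example.com"}))
```

When a call like this fails, that failure is the "integration gap" the user experiences as "Claude can't do that."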


What Everyone Got Wrong

Insight #2: The "vibe-coded integration" hangover arrived early.

In Y Combinator's Winter 2025 batch, roughly 25% of startups reported codebases that were 95% AI-generated. Impressive, until those integrations hit production.

People built integrations by prompting Cursor and Claude without deeply understanding the APIs. Some worked great. Some created time bombs. 40% of AI-generated code contains vulnerabilities. And a vibe-coded payment gateway approved $2M in fraudulent transactions due to inadequate input validation.
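That payment-gateway failure is, at root, a missing-validation bug. A hedged sketch of the kind of check that was absent; the field names, currency whitelist, and limit are invented for illustration:

```python
from decimal import Decimal, InvalidOperation

APPROVED_CURRENCIES = {"USD", "EUR", "GBP"}   # illustrative whitelist
MAX_AMOUNT = Decimal("10000.00")              # illustrative per-transaction ceiling

def validate_transaction(amount: str, currency: str) -> tuple[bool, str]:
    """Reject malformed, negative, or out-of-policy transactions
    before they ever reach the payment processor."""
    try:
        value = Decimal(amount)
    except InvalidOperation:
        return False, "amount is not a number"
    if value <= 0:
        return False, "amount must be positive"
    if value > MAX_AMOUNT:
        return False, "amount exceeds per-transaction limit"
    if currency not in APPROVED_CURRENCIES:
        return False, "unsupported currency"
    return True, "ok"

print(validate_transaction("49.99", "USD"))   # (True, 'ok')
print(validate_transaction("-500", "USD"))    # (False, 'amount must be positive')
```

Nothing here is sophisticated. That's the point: the failures weren't exotic, they were basic checks that never got written.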

The counter-hype take: Running a business is 100x different from generating code. Distribution, marketing, and sales still require human judgment. AI amplifies. Make sure you're amplifying something good.

Insight #3: Enterprise AI adoption had a rough reality check.

MIT's "GenAI Divide" study found that 95% of generative AI pilots fail to produce measurable revenue or cost savings. The 5% that succeed? They're extracting millions in value.

The difference isn't the AI model quality. It's the learning gap (for both tools and organizations). And often, it's an integration problem as much as an AI problem.

Insight #4: "AI-native" became the new "mobile-first," but most companies bolted it on.

Remember when every startup needed a mobile strategy? Now it's AI strategy. The difference between AI-native (built around AI from day one) and bolted-on AI is architectural, not cosmetic.

Only 16% of enterprise deployments qualify as true agents (systems where an LLM plans, executes, observes, and adapts). Most are fixed-sequence workflows wrapped around a single model call.


What Actually Matters

Insight #5: MCP went from spec to universal standard.

Anthropic's Model Context Protocol became the way AI assistants use external tools. Then in December, MCP was donated to the Linux Foundation's Agentic AI Foundation with OpenAI, Google, Microsoft, and AWS as platinum members.

97 million monthly SDK downloads. 10,000+ active servers. The "USB-C for AI."

MCP as USB-C for AI - a universal connector standard

And on December 17th, OpenAI opened the ChatGPT App Directory to developer submissions. Real distribution for MCP-based integrations, finally. 800 million weekly active users can now discover your app.

Insight #6: Connectivity infrastructure attracted billions.

The money tells the story across data infrastructure, automation, and integration:

| Deal | Value | Category |
| --- | --- | --- |
| Salesforce + Informatica | $8B | Data infrastructure & pipelines |
| ServiceNow + Moveworks | $2.85B | AI automation + enterprise service |
| Workday + Pipedream | Undisclosed | 3,000+ connectors to HR/finance |
| n8n funding | $180M | Workflow automation (valued at $2.5B) |
| Clay funding | $100M | GTM data enrichment (valued at $3.1B) |
The Informatica deal is data infrastructure, not integration in the traditional SaaS-connector sense, but it underscores the same theme: connectivity is critical infrastructure, and the market is pricing it in.

Insight #7: The 10x value bar became explicit.

An integration needs to be 10x better than the alternative to justify existing. With AI handling manual tasks, that bar got harder to clear.

But here's the counter-hype: AI agents can't replace everything. Complex, mission-critical integrations become MORE valuable precisely because agents can't handle them reliably.


Why AI Can't Replace Integrations (The Counter-Hype Reality)

Insight #8: LLM limitations are fundamental, not fixable.

Research shows that hallucination is inevitable in LLMs. Not a bug to be patched, but a structural consequence of how these systems operate.

"Gemini 2.0 broke new benchmarks around 0.8-0.9% hallucinations... But I think we'll be saturating around 0.5%. There are many fields where that 0.5% is not acceptable."

For mission-critical data flows, you need deterministic behavior. LLM-only architecture won't solve hallucination, drift, or context poisoning.

Insight #9: Agents are real, useful, and not magic.

IBM's analysis confirms what practitioners already knew: very few enterprise agents make it past the pilot stage into production. To reach production, developers compromise and build simpler agents to achieve reliability.

Treat agents like any other tool: test thoroughly, fail gracefully, keep humans in the loop for anything that matters.

Insight #10: Deterministic orchestration is winning.

Stack Overflow's engineering blog advocates for deterministic orchestration: "Granting LLMs full control over tool selection and execution introduces significant risks of hallucination and unpredictability."

Real integrations provide what agents can't:

  • Predictability - Runs the same way every time
  • Cost at scale - Fractions of a cent vs. expensive LLM calls
  • Control & auditability - Compliance teams need real answers
  • Reliability - Either works or throws predictable errors
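The contrast in that list can be made concrete. In a deterministic orchestration, the pipeline, not the model, decides the order of operations; the LLM (stubbed below) is confined to one narrow step. All step names and data are illustrative:

```python
def fetch_invoice(invoice_id: str) -> dict:
    """Deterministic step: same input, same output, no model involved."""
    return {"id": invoice_id, "total": 120.0, "customer": "Acme"}

def summarize(invoice: dict) -> str:
    """The only LLM-shaped step; stubbed here so the pipeline stays testable."""
    return f"Invoice {invoice['id']} for {invoice['customer']}: ${invoice['total']:.2f}"

def post_to_ledger(invoice: dict) -> str:
    """Deterministic step: either succeeds or raises a predictable error."""
    if invoice["total"] <= 0:
        raise ValueError("invoice total must be positive")
    return f"posted {invoice['id']}"

def run_pipeline(invoice_id: str) -> list[str]:
    """The orchestrator, not the model, fixes the sequence of operations."""
    invoice = fetch_invoice(invoice_id)
    return [summarize(invoice), post_to_ledger(invoice)]

print(run_pipeline("INV-42"))
```

Every run produces the same audit trail, costs fractions of a cent, and fails with a typed error rather than a plausible-sounding hallucination.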

What's Coming

Insight #11: The "desire path" pattern emerged.

Desire paths in a courtyard - people create paths before they get paved

Like architects who wait to see where walking paths form before paving sidewalks, smart teams started observing what they repeatedly ask AI to do, then hardening those patterns into deterministic integrations.

Here's the economic reality: having an LLM handle the same small task repeatedly is wasteful. Every time you ask, it starts from scratch. The core engine doesn't learn or harden your workflow pathways.

The smarter pattern: let AI discover the patterns, help write the script or workflow, then invoke that hardened integration efficiently. AI for discovery, deterministic code for execution.
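Under illustrative assumptions, the "observe, then pave" loop can be as simple as counting repeated requests and promoting frequent ones to deterministic handlers. The threshold and task names below are invented:

```python
from collections import Counter

request_log: Counter = Counter()
hardened: dict[str, str] = {}  # task -> name of the paved deterministic handler
PAVE_AFTER = 3  # illustrative threshold: pave a path walked three times

def handle(task: str) -> str:
    """Route paved tasks to cheap deterministic code; everything else
    takes the expensive exploratory route (an LLM call in a real system)."""
    if task in hardened:
        return f"deterministic:{hardened[task]}"
    request_log[task] += 1
    if request_log[task] >= PAVE_AFTER:
        hardened[task] = f"{task}_script"  # in practice: AI helps write this
    return "exploratory:llm"

for _ in range(3):
    handle("weekly_report")       # three exploratory walks pave the path
print(handle("weekly_report"))    # subsequent calls hit the hardened handler
```

The real version replaces the counter with request logging and the string stub with generated, reviewed code, but the economics are the same: discovery is paid for once, execution is cheap forever after.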

Insight #12: An explosion of micro-integrations is coming.

AI will keep paving these small deterministic code pieces ("tools," "scripts," "actions"), creating an explosion of micro-integrations. Over time, these piece together into a dynamic but predictable codebase.

From the user's perspective, the system feels intelligent. Under the hood, it's increasingly a library of hardened integrations that AI orchestrates rather than reinvents.
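The "library of hardened integrations" maps naturally onto a tool registry: each micro-integration is a small deterministic function, and the AI layer's job shrinks to choosing which name to invoke. A sketch with invented tool names:

```python
from typing import Callable

REGISTRY: dict[str, Callable[[str], str]] = {}

def micro_integration(name: str):
    """Register a small deterministic tool under a stable name."""
    def register(fn: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[name] = fn
        return fn
    return register

@micro_integration("sync_contact")
def sync_contact(payload: str) -> str:
    return f"contact synced: {payload}"

@micro_integration("file_ticket")
def file_ticket(payload: str) -> str:
    return f"ticket filed: {payload}"

def orchestrate(tool_name: str, payload: str) -> str:
    """The AI picks a name; execution is a plain, auditable function call."""
    if tool_name not in REGISTRY:
        raise KeyError(f"unknown tool: {tool_name}")
    return REGISTRY[tool_name](payload)

print(orchestrate("sync_contact", "jane@example.com"))
```

New micro-integrations join the library by registration, not redeployment, which is what lets the codebase grow dynamically while each individual piece stays predictable.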

Insight #13: Build vs. buy got a third option.

Build, buy, or let an AI agent handle it dynamically. But now there's nuance: the agent might help you build the deterministic version after discovering the pattern.

Insight #14: Security concerns caught up to MCP enthusiasm.

Authentication, rate limiting, prompt injection through tool responses: the "wait, should AI have access to this?" conversations finally happened. The Agentic AI Foundation has stated that establishing safe, transparent practices for agentic interactions will be its primary focus heading into 2026.

Insight #15: Integration documentation became AI training data.

Your API docs aren't just for human developers anymore. LLMs read them. Write accordingly.

Insight #16: "Integration surgeon" started feeling like a real role.

Not a full-stack dev, not a no-code builder, but someone who specializes in connecting systems. The niche is solidifying.

Insight #17: The embedded integration category stayed fragmented.

Paragon, Merge, integration.app, Cyclr, Tray Embedded, Workato, Pandium... over 40 vendors now compete in the space. The category didn't consolidate. Everyone's still fighting for position.


What We Heard in Conversations

Insight #18: "We've been meaning to build those integrations..."

Still the most common refrain. Integrations keep slipping for other priorities. The backlog grows.

Insight #19: "Are we doing this right?"

More strategic conversations than ever. Not "build us a connector" but "how should we think about integration strategy?"

Insight #20: "What should we do about MCP?"

Went from zero mentions to every conversation by Q4. With the Linux Foundation formalization, this question is now urgent.

Insight #21: HubSpot's ecosystem became the template.

Date-based API versioning, mature provider program, clear marketplace structure. Other platforms started copying their playbook.

Insight #22: Zapier made its move.

Canvas, the Utopian Labs acquisition: they're clearly seeing the agentic writing on the wall. Not just trigger-action anymore.


The Infrastructure Reality

Insight #23: AI infrastructure spending went parabolic.

Microsoft, Google, Amazon, Meta: everyone announced massive data center investments. When companies invest at this scale, they need returns. That demands AI adoption across every product and workflow.

Insight #24: The big players made their bets explicit.

Apple shipped Apple Intelligence. Microsoft put Copilot everywhere. Google reorganized around AI. The majors stopped hedging. If you build on any of these platforms, AI capabilities become table stakes.

Insight #25: Shadow AI became the norm.

MIT found that while only 40% of companies have official LLM subscriptions, 90% of workers use personal AI tools for job tasks. These shadow systems often outperform corporate tools.


The Takeaway

Integrations are the critical infrastructure that makes AI work. Not just data syncing (though that remains foundational), but the full stack:

  • Data integrations move information between systems
  • MCP Apps let AI assistants use external tools
  • MCP UI (emerging) brings interactive experiences into chat interfaces

AI is the catalyst driving investment. MCP is the standard enabling it. And the vibe-coded hangover? That's the cleanup opportunity.

Complex integrations that AI can't handle dynamically become MORE valuable. Mission-critical, high-volume, audit-required, deeply embedded integrations: these get more important, not less.

If you're staring at an integration backlog, a vibe-coded mess that needs professional attention, or an MCP strategy that doesn't exist yet, that's exactly what we do.

Which insight hit hardest for you? Let us know in the chat. And if someone on your team needs this context, send it their way.


Read next: 26 Things to Watch for 2026 and The 3 Layers Shaping Integration

Integration Strategy · MCP · AI Agents · Enterprise · 2025 Trends

Need Integration Expertise?

From Zapier apps to custom integrations, we've been doing this since 2012.

Book Discovery Call