The rush to implement AI agents within organizations reminds me of the early days of cloud migration: plenty of enthusiasm, but critical foundations often overlooked. As we learned at dunnhumby while scaling data platforms, capabilities without connections create frustration, not value.
Here’s the reality: no matter how brilliant your AI agent is, it’s only as good as the data it can access and process. And there’s the rub.
For an AI agent to perform well, it needs quality data. But even before we talk about data quality, let's tackle the more fundamental challenge: data accessibility.
For your agent to be effective, it must be able to do four things (there's a rough interface sketch after this list):
- Know where relevant data lives across your data warehouses, lakes, SaaS platforms, and proprietary systems
- Negotiate access permissions through proper authentication and authorization
- Connect to these systems via APIs, database connections, or other methods
- Understand data structure and semantics across these disparate sources
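To make that concrete, here's a minimal sketch of the kind of connector interface an agent could be handed. Every name in it is hypothetical; the point is that discovery, authorization, connection, and semantics are explicit, separate concerns rather than something bolted on per use case:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field


@dataclass
class DataSource:
    """A catalog entry: where a dataset lives and what its fields mean."""
    name: str                  # e.g. "orders" (hypothetical)
    system: str                # e.g. "warehouse", "crm"
    location: str              # table path, API endpoint, etc.
    schema: dict[str, str] = field(default_factory=dict)     # column -> type
    semantics: dict[str, str] = field(default_factory=dict)  # column -> business meaning


class AgentDataConnector(ABC):
    """One connector per backend system; the agent only ever sees this interface."""

    @abstractmethod
    def discover(self, need: str) -> list[DataSource]:
        """Find datasets relevant to a stated need (the 'know where data lives' bullet)."""

    @abstractmethod
    def authorize(self, source: DataSource, principal: str) -> bool:
        """Check the agent's identity against the source's permissions."""

    @abstractmethod
    def fetch(self, source: DataSource, limit: int = 100) -> list[dict]:
        """Pull records over the system's native API, driver, or connection."""
```

Note where the fourth bullet lives: in the `schema` and `semantics` fields. Without those, the agent can fetch bytes but can't reason about what they mean.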
The problem? No industry-accepted protocols exist for how AI agents should discover and access enterprise data. Each organization is essentially building its own custom bridges between its AI agents and its data systems.
Organizations succeeding with AI agents today are investing as much in data accessibility infrastructure as in the agents themselves. They're building middleware that serves as a data translator between their agents and their enterprise systems.
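What does a "data translator" look like in practice? At its simplest, a set of per-source mappings onto one canonical shape, so the agent never sees raw backend records. A toy sketch, with entirely made-up field names:

```python
# A toy "data translator": each backend returns records in its own shape;
# the middleware maps them onto one canonical schema the agent understands.
# All field names below are illustrative, not from any real system.

CANONICAL_FIELDS = ("customer_id", "email", "lifetime_value")


def from_crm(record: dict) -> dict:
    """CRM-style record -> canonical shape (mapping is hypothetical)."""
    return {
        "customer_id": record["Id"],
        "email": record["Email"],
        "lifetime_value": float(record["LTV__c"]),
    }


def from_warehouse(row: dict) -> dict:
    """Warehouse row -> canonical shape (mapping is hypothetical)."""
    return {
        "customer_id": row["cust_id"],
        "email": row["email_addr"],
        "lifetime_value": row["ltv_gbp"],
    }


# The agent always consumes the canonical shape, never the raw ones:
print(from_crm({"Id": "003xx", "Email": "a@b.com", "LTV__c": "1250.0"}))
print(from_warehouse({"cust_id": "003xx", "email_addr": "a@b.com", "ltv_gbp": 1250.0}))
```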
Until a standard wins broad adoption (thanks, Anthropic, for the Model Context Protocol, and good luck!), this custom integration work will remain necessary. But the investment pays dividends across every agent use case you'll deploy.
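For what it's worth, exposing data this way over MCP is already pleasantly small. A minimal sketch using the official MCP Python SDK; the catalog contents and tool name here are invented, only the `FastMCP` scaffolding is real:

```python
# Requires the official MCP Python SDK: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("enterprise-data")  # server name is illustrative


@mcp.tool()
def list_datasets(keyword: str) -> list[str]:
    """Return catalog entries matching a keyword (stubbed for this sketch)."""
    catalog = {"orders": "warehouse.sales.orders", "customers": "crm.accounts"}
    return [path for name, path in catalog.items() if keyword.lower() in name]


if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for any MCP-capable agent client
```

Any MCP-capable agent can then discover and call `list_datasets` without bespoke glue, which is exactly the promise: build the bridge once, reuse it across every agent.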
Remember: a smart agent without data access is like hiring a brilliant consultant and then giving them no information about your business.
Has your organization solved the data accessibility challenge for AI agents? What approaches are working?