A one-person cybersecurity firm just hacked McKinsey's internal AI platform in two hours.
CodeWall's AI agent accessed 46.5 million chat messages, 57,000 user accounts, and the system prompts that govern how the platform behaves. McKinsey patched the vulnerability quickly and says no client data was compromised.
But the breach isn't the story. The decision that created the exposure is.
McKinsey built Lilli, its internal AI platform, in-house. It handles strategy planning, data analysis, and client presentations: 25,000 AI agents serving 40,000 employees. AI consulting is 40% of the firm's revenue.
They chose to build instead of buying enterprise AI from well-funded vendors that invest heavily in security, red-teaming, and infrastructure. Vendors that patch vulnerabilities across thousands of customers at once.
When you build your own, you own all of that: the security posture, the patching cadence, the feature roadmap. And you're doing it with a team that has a hundred other priorities.
I see this pattern a lot. Engineering teams build internal AI tools because they can. Knowledge management systems, internal search engines, custom chatbots.
"Can" isn't "should."
The question isn't whether your team is smart enough. It probably is. The question is whether maintaining, securing, and updating the tool is the best use of its time when tested alternatives exist and ship improvements weekly.
There are real cases for building custom AI. Proprietary workflows that don't exist in any product. Domain-specific models trained on data no vendor has. Differentiation that depends on owning the stack.
But internal search? Knowledge management? Company chatbots? Those are solved categories. You're competing with companies that have orders of magnitude more capital dedicated to exactly those problems.
Start with what's available. Prove the limitation. Then build.
The complexity has to be earned. McKinsey's Lilli suggests it wasn't.