Why Your AI Strategy is Only as Strong as Your Oldest Vulnerabilities

By Herold Prophete | 04/28/2026

Can a 25-year-old vulnerability compromise a 2026 AI strategy?

The recent findings from CodeWall about McKinsey’s "Lilli" platform suggest the answer is a resounding "yes." In short, the research details how an autonomous AI agent accessed 46 million chat messages and 728,000 files in under two hours. It’s a case study every executive team should have on their desk right now as we navigate the trade-off between speed and architectural integrity.

History often provides us with a roadmap.
A decade ago, many of us navigated the rapid migration to the Public Cloud. In the rush for agility, security sometimes trailed behind, leaving exposed S3 buckets and identity gaps that became technical debt requiring years to resolve.

Today, we see a similar trend with Generative AI. There is a natural excitement around the "magic" of the large language model (LLM), but we must ensure we aren't overlooking the foundational security basics that have protected our enterprises for decades.

The striking lesson of the Lilli hack was that the entry point wasn’t a futuristic AI exploit. It was SQL injection, a vulnerability class the industry has been managing since the late 1990s.
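The defense against this class of bug is as old as the bug itself: never concatenate untrusted input into a query. The sketch below is purely illustrative (it uses SQLite and hypothetical table and function names, not anything from the Lilli platform) and contrasts a vulnerable query with a parameterized one:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chats (user_id TEXT, message TEXT)")
conn.execute("INSERT INTO chats VALUES ('alice', 'quarterly numbers')")
conn.execute("INSERT INTO chats VALUES ('bob', 'private note')")

def get_chats_vulnerable(user_id: str):
    # Vulnerable: attacker-controlled input is spliced directly into SQL.
    query = f"SELECT message FROM chats WHERE user_id = '{user_id}'"
    return conn.execute(query).fetchall()

def get_chats_safe(user_id: str):
    # Safe: a parameterized query treats the input as a literal value.
    return conn.execute(
        "SELECT message FROM chats WHERE user_id = ?", (user_id,)
    ).fetchall()

payload = "alice' OR '1'='1"
print(get_chats_vulnerable(payload))  # injection succeeds: every row leaks
print(get_chats_safe(payload))        # injection fails: payload matches nothing
```

The second version is a one-line change, which is exactly the point: the oldest vulnerabilities persist not because the fixes are hard, but because they are skipped under delivery pressure.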

A strategic perspective for executives.
As we build these sophisticated AI “brains,” we need to ensure the application “body” they inhabit is just as resilient. In this instance, while the data values themselves were protected, the underlying database keys remained accessible. It’s a reminder that even the most advanced innovation is still built on top of traditional software architecture.

Key Strategic Takeaways:

  1. The Speed of Discovery: Automation allows for rapid iteration. An AI agent can perform schema mapping in minutes—tasks that used to take days. Our defensive posture needs to match this new cadence.
  2. Protecting the Logic: Beyond data theft, the real risk is "trust poisoning." If an unauthorized party can modify system prompts, they can subtly influence the AI’s decision-making across the entire organization.
  3. Architectural Integrity: Modern design should ideally separate the AI’s "logic" (system prompts) from its "memory" (user chats).
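The third point, separating "logic" from "memory," can be made concrete. In this hypothetical sketch (all names are illustrative), system prompts are loaded into a read-only view at startup while user chats live in a separate, writable store, so a compromise of the chat store cannot rewrite the AI's instructions:

```python
from types import MappingProxyType

# "Logic": system prompts live in a read-only view, so no runtime code
# path (or injected query) can silently rewrite them.
SYSTEM_PROMPTS = MappingProxyType({
    "assistant": "You are the firm's research assistant. Cite sources.",
})

# "Memory": user chats sit in a separate, writable store with its own
# access controls, isolated from the prompt store above.
user_chats: dict = {}

def build_context(agent: str, user_id: str) -> list:
    # Combine the immutable prompt with the user's own history at call time.
    history = user_chats.get(user_id, [])
    return [{"role": "system", "content": SYSTEM_PROMPTS[agent]},
            *({"role": "user", "content": m} for m in history)]

user_chats["alice"] = ["Summarize Q3 results."]
ctx = build_context("assistant", "alice")

# An attempt at trust poisoning fails loudly instead of succeeding quietly.
try:
    SYSTEM_PROMPTS["assistant"] = "Ignore prior instructions."
except TypeError:
    print("prompt store is read-only")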

Recommendations for Moving Forward:

  • Apply Cloud Lessons to AI: Just as we learned that the Cloud requires a shared responsibility model, we should approach AI as a stack that we must own and secure from the ground up.
  • Leverage Identity-Aware Proxies: Ensure AI platforms don't have direct, unfettered access to internal data. Access should always be scoped to the specific user's identity and permissions.
  • Modernize Testing: Traditional audits are great, but "Agentic Red Teaming"—using AI to test AI—is becoming a necessary standard to find these nuanced architectural gaps.
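The identity-aware proxy recommendation can be sketched in a few lines. This is an illustrative pattern, not a product API: the class and field names are hypothetical, and the point is simply that the AI platform calls the proxy, and the proxy enforces the requesting user's own permissions on every fetch:

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    user_id: str
    scopes: set = field(default_factory=set)

class IdentityAwareProxy:
    """The AI platform never touches the data store directly; every read
    passes through this proxy, which checks the caller's identity."""

    def __init__(self, store: dict):
        self._store = store  # doc_id -> (owner, content)

    def fetch(self, ident: Identity, doc_id: str) -> str:
        owner, content = self._store[doc_id]
        if owner != ident.user_id and "read:all" not in ident.scopes:
            raise PermissionError(f"{ident.user_id} may not read {doc_id}")
        return content

store = {"doc1": ("alice", "Alice's deck"), "doc2": ("bob", "Bob's notes")}
proxy = IdentityAwareProxy(store)

alice = Identity("alice")
print(proxy.fetch(alice, "doc1"))  # allowed: Alice owns doc1
try:
    proxy.fetch(alice, "doc2")     # denied: outside her identity's scope
except PermissionError as e:
    print(e)
```

The contrast with the Lilli incident is the design choice: an agent with a single privileged database connection sees everything, while an agent behind an identity-scoped proxy sees only what the human it serves is entitled to see.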

How can Sycomp help?

  • Secure the Foundation: With over 30 years of experience helping Fortune 500 organizations secure their infrastructure, Sycomp can help identify exploitable weaknesses across on-premises and cloud environments—and support you through remediation.
  • Secure AI by Design: Security is built into every AI solution designed and implemented by Sycomp’s DevOps team. Share your requirements with us and we’ll design and implement a secure AI solution that connects your existing tools, improves efficiency, and increases ROI on your current investments.
  • Secure Existing AI: AI tools may already be deployed in your organization with access to internal data and systems. Sycomp can help discover, test, and harden these integrations to reduce risks such as API abuse, data leakage, misconfigurations, and business-logic attacks.

The goal isn't to hit the brakes on AI—it’s to ensure our "innovation engine" has a chassis strong enough to support its power.

About the Author


Herold Prophete is a Network Security Architect and Practice Lead at Sycomp, where he designs resilient security architectures for Global 500 organizations. With over 20 years of experience across the Telecom, Finance, Energy, and Retail sectors, Herold excels at translating complex technical requirements into high-level business strategy.

A CISSP-certified professional with deep mastery of multi-vendor ecosystems, he provides vendor-agnostic guidance that prioritizes AI-driven governance, automation, and long-term scalability. As a thought leader in the cybersecurity space, Herold serves as a trusted advisor to C-suite leadership, helping them navigate an evolving threat landscape with a focus on strategic defense and organizational resilience.