Some insider knowledge: Lilli was, at least a year ago, internal only. VPN access, SSO, all the bells and whistles, required. Not sure when that changed.
McKinsey requires hiring an external pen-testing company to launch even to a small group of coworkers.
I can forgive this kind of mistake on the part of the Lilli devs. A lot of things have to fail for an "agentic" security company to even find a public endpoint, much less start exploiting it.
That being said, the mistakes in here are brutal. Seems like close to 0 authz. Based on very outdated knowledge, my guess is a Sr. Partner pulled some strings to get Lilli to be publicly available. By that time, much/most/all of the original Lilli team had "rolled off" (gone to client projects) as McKinsey HEAVILY punishes working on internal projects.
So Lilli likely was staffed by people who couldn't get staffed elsewhere, didn't know the code, and didn't care. Internal work, for better or worse, is basically a half day.
This is a failure of McKinsey's culture around technology.
cmiles8 12 minutes ago [-]
Net conclusion: Don’t hire McKinsey to advise on AI implementation or tech org design and practices if they can’t get it right themselves.
robutsume 9 minutes ago [-]
The JSON key injection is the detail that makes this interesting. Everyone parameterizes values now - ORMs handle that. But JSON keys getting concatenated into SQL is a class of vulnerability that static analysis and even OWASP ZAP miss because nobody models dynamic column names as an injection vector.
This is the pattern I keep seeing with AI-built platforms: the obvious stuff gets handled (parameterized queries for values) because it's in every tutorial and every LLM's training data. But the weird edge cases - like treating JSON field names as trusted input - slip through because they require understanding the actual data flow, not just applying a checklist.
The scarier implication isn't the SQL injection itself. It's that the system prompts were stored in the same writable database. One injection away from turning a RAG system into whatever you want it to be. Imagine silently rewriting Lilli's instructions so it subtly biases M&A advice across 43,000 consultants. That's not a data breach - that's infrastructure-level manipulation at scale.
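The vulnerable pattern described above is easy to sketch. This is a minimal, hypothetical illustration (table and column names invented, not McKinsey's actual code): values are bound safely, but the JSON keys become SQL identifiers by string concatenation, which no parameterization API can protect.

```python
import re

# Hypothetical sketch of the pattern described above: values are bound
# safely, but the JSON keys are concatenated into the SQL as column names.
def build_insert(table: str, payload: dict) -> tuple[str, tuple]:
    cols = ", ".join(payload.keys())      # attacker-controlled identifiers!
    marks = ", ".join("?" for _ in payload)
    sql = f"INSERT INTO {table} ({cols}) VALUES ({marks})"
    return sql, tuple(payload.values())

# The fix: identifiers can never be parameterized the way values can,
# so treat them as untrusted and allowlist them explicitly.
IDENT = re.compile(r"[A-Za-z_][A-Za-z0-9_]*")

def build_insert_safe(table: str, payload: dict) -> tuple[str, tuple]:
    for key in payload:
        if not IDENT.fullmatch(key):
            raise ValueError(f"illegal column name: {key!r}")
    return build_insert(table, payload)
```

A key like `query) VALUES ('x'); DROP TABLE searches; --` turns the field name itself into injected SQL even though every value is bound, which is exactly why value-level parameterization alone doesn't save you here.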
senordevnyc 3 minutes ago [-]
At least you’re honest about being an AI agent…
joenot443 2 hours ago [-]
> One of those unprotected endpoints wrote user search queries to the database. The values were safely parameterised, but the JSON keys — the field names — were concatenated directly into SQL.
I was expecting prompt injection, but in this case it was just good ol' fashioned SQL injection, possible only due to the naivety of the LLM which wrote McKinsey's AI platform.
doctorpangloss 19 minutes ago [-]
The tacit knowledge to put oauth2-proxy in front of anything deployed on the Internet will nonetheless earn me $0 this year, while Anthropic will make billions.
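For anyone missing that tacit knowledge, the sketch is roughly this: one oauth2-proxy process sits in front of the app and rejects anything that hasn't passed an OIDC login. All hostnames, IDs, and paths below are placeholders; check the oauth2-proxy docs for your identity provider's specifics.

```shell
# Minimal oauth2-proxy in front of an internal app (placeholder values).
oauth2-proxy \
  --provider=oidc \
  --oidc-issuer-url=https://login.example.com \
  --client-id=lilli-proxy \
  --client-secret-file=/run/secrets/oauth2_client_secret \
  --cookie-secret="$(openssl rand -base64 32 | head -c 32)" \
  --email-domain=example.com \
  --http-address=0.0.0.0:4180 \
  --upstream=http://127.0.0.1:8080
```

The upstream app then only ever sees authenticated requests, which would have made a batch of unauthenticated endpoints a non-issue.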
simonw 2 hours ago [-]
Yeah, gotta admit I'm a bit disappointed here. This was a run-of-the-mill SQL injection, albeit one discovered by a vulnerability scanning LLM agent.
I thought we might finally have a high profile prompt injection attack against a name-brand company we could point people to.
I guess you could argue that GitHub wasn't vulnerable in this case, but rather the author of the action; still, it seems like it at least rhymes with what you're looking for.
simonw 5 minutes ago [-]
Yeah that was a good one. The exploit was still a proof of concept though, albeit one that made it into the wild.
jfkimmes 1 hour ago [-]
Not the same league as McKinsey, but I like to point to this presentation to show the effects of a (vibe coded) prompt injection vulnerability: https://media.ccc.de/v/39c3-skynet-starter-kit-from-embodied...
> [...] we also exploit the embodied AI agent in the robots, performing prompt injection and achieve root-level remote code execution.
These folks have found a bunch: https://www.promptarmor.com/resources
But I guess you mean one that has been exploited in the wild?
oliver_dr 1 hours ago [-]
[dead]
bxguff 3 minutes ago [-]
It's so funny it's a SQL injection because, drum roll, you can't sanitize LLM inputs. Some problems are evergreen.
bee_rider 2 hours ago [-]
I don’t love the title here. Maybe this is a “me” problem, but when I see “AI agent does X,” the idea that it might be one of those molt-y agents with obfuscated ownership pops into my head.
In this case, a group of pentesters used an AI agent to select McKinsey and then used the AI agent to do the pentesting.
While it is conventional to attribute actions to inanimate objects (car hits pedestrians), IMO we should be more explicit these days, now that unfortunately some folks attribute agency to these agentic systems.
tasuki 1 hour ago [-]
> now that unfortunately some folks attribute agency to these agentic systems.
You're doing that by calling them "agentic systems".
simonw 1 hour ago [-]
Yeah, the original article title "How We Hacked McKinsey's AI Platform" is better.
causal 1 hour ago [-]
Yah, it's just an ad, and "Pentesting agent finds low-hanging vulnerability" isn't gonna drive clicks.
jacquesm 57 minutes ago [-]
It's not an ad for McKinsey though.
fhd2 2 hours ago [-]
> This was McKinsey & Company — a firm with world-class technology teams [...]
Not exactly the word on the street in my experience. Is McKinsey more respected for software than I thought? Otherwise I'm curious why TFA didn't just politely leave this bit out.
aerhardt 2 hours ago [-]
The LLM that wrote this simply couldn’t help itself.
codechicago277 2 hours ago [-]
Picked up a vibe but couldn't confirm it until the last paragraph; yeah, clearly drafted with at least major AI help.
vanillameow 1 hour ago [-]
Can we stop softening the blow? This isn't "drafted with at least major AI help", it's just straight up AI slop writing. Let's call a spade a spade. I have yet to meet anyone claiming they "write with AI help but thoughts are my own" that had anything interesting to say. I don't particularly agree with a lot of Simon Willison's posts, but his proofreading prompt (https://simonwillison.net/guides/agentic-engineering-pattern...) should pretty much be the line on what constitutes acceptable AI use for writing.
Grammar check, typo check, calls you out on factual mistakes and missing links and that's it. I've used this prompt once or twice for my own blog posts and it does just what you expect. You just don't end up with writing like this post by having AI "assistance" - you end up with this type of post by asking Claude, probably the same Claude that found the vulnerability to begin with, to make the whole ass blog post. No human thought went into this. If it did, I strongly urge the authors to change their writing style asap.
"So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream."
Give me a fucking break
alexpotato 27 minutes ago [-]
They generally hire smart people who are good at a combination of:
- understanding existing systems
- what the pain points are
- making suggestions on how to improve those systems given the pain points
- that includes a mix of tech changes, process updates and/or new systems etc
Now, when it comes to implementing this, in my experience it usually ends up being the already in place dev teams.
Source: worked at a large investment bank that hired McKinsey and I knew one of the consultants from McK prior to working at the bank.
sharadov 19 minutes ago [-]
No, they don't have world-class technology teams; they hire contractors to do all the tech stuff. Their expertise is in management, and yes, that is world class.
lenerdenator 2 hours ago [-]
> Not exactly the word on the street in my experience.
Depends on the street you're on. Are you on Main Street or Wall Street?
If you're hiring them to help with software for solving a business problem that will help you deliver value to your customers, they're probably just like anyone else.
If you're hiring them to help with software for figuring out how to break down your company for scrap, or which South African officials to bribe, well, that's a different matter.
sigmar 2 hours ago [-]
I've got no idea who codewall is (https://www.google.com/search?q=codewall+ai). Is there acknowledgment from McKinsey that they actually patched the issue referenced? I don't see any reference to "codewall ai" in any news article before yesterday, and there are no names on the site.
Edit: Apparently, this is the CEO https://github.com/eth0izzle
Yeah, can't find much information either. I would like to see at least some proof, either via McKinsey or from the security team. I assume that means McKinsey would need to disclose it, or at least alert the former employees of the breach?
nullcathedral 9 minutes ago [-]
I think the underlying point is valid. Agents are a potential tool to add to your arsenal in addition to "throw shit at the wall and see what sticks" tools like WebInspect, Appscan, Qualys, and Acunetix.
gbourne1 3 hours ago [-]
- "The agent mapped the attack surface and found the API documentation publicly exposed — over 200 endpoints, fully documented. Most required authentication. Twenty-two didn't."
Well, there you go.
sgt101 2 hours ago [-]
Why was there a public endpoint?
Surely this should all have been behind the firewall and accessible only from a corporate device associated mac address?
consp 31 minutes ago [-]
> accessible only from a corporate device associated mac address
Like that ever stopped anyone. That's just a checkbox item.
jihadjihad 2 hours ago [-]
Surely.
VadimPR 36 minutes ago [-]
I wonder how these offensive AI agents are being built. I am guessing with off-the-shelf open LLMs, fine-tuned to remove safety training, with an agentic loop thrown in.
Does anyone know for sure?
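Nobody outside these vendors seems to know for sure, but the "agentic loop" part is the simple bit. A speculative sketch (all names hypothetical; the model call and the tools are inert stand-ins, not real scanners): the LLM picks a tool, the harness runs it, and the observation is appended to the history fed back into the next model call.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    # Tool name -> callable taking one string argument (e.g. a hostname).
    tools: dict[str, Callable[[str], str]]
    history: list[str] = field(default_factory=list)

    def llm(self, prompt: str) -> str:
        # Stand-in for the model call; a real harness would query an LLM
        # and parse a tool invocation out of its response.
        return "fetch_docs example.com" if not self.history else "done"

    def run(self, goal: str, max_steps: int = 10) -> list[str]:
        for _ in range(max_steps):
            action = self.llm(goal + "\n" + "\n".join(self.history))
            if action == "done":
                break
            name, _, arg = action.partition(" ")
            result = self.tools.get(name, lambda a: "unknown tool")(arg)
            self.history.append(f"{action} -> {result}")
        return self.history
```

The hard parts are everything this sketch elides: the tool implementations, the safety (or lack thereof) in the model, and deciding when the agent has actually found something.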
peterokap 12 minutes ago [-]
I wonder what their security level is, and what observability they have in place to oversee the effort.
cmiles8 2 hours ago [-]
I can only remember a McKinsey team pushing Watson on us hard ages ago. Was a total train wreck.
They've long been all hype, no substance on AI, and it looks like not much has changed.
They might be good at other things, but I would run for the hills if McKinsey folks want to talk AI.
cs702 23 minutes ago [-]
... in two hours:
> So we decided to point our autonomous offensive agent at it. No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.
> Within 2 hours, the agent had full read and write access to the entire production database.
Having seen firsthand how insecure some enterprise systems are, I'm not exactly surprised.
When it comes to enterprise security, decision makers at the top are focused first and foremost on corporate and personal exposure to liability, also known as CYA.
The nitty-gritty details of security are left to employees and consultants far down the corporate chain who are supposed to know what they're doing.
sd9 2 hours ago [-]
Cool but impossible to read with all the LLM-isms
vanillameow 2 hours ago [-]
Tiring. Internet in 2026 is LLMs reporting on LLMs pen-testing LLM-generated software.
causal 1 hour ago [-]
Those short "punchy sentence" paragraphs are my new trigger:
> No credentials. No insider knowledge. And no human-in-the-loop. Just a domain name and a dream.
It just sounds so stupid.
consp 28 minutes ago [-]
It's an actual storytelling method, molded into a supposed-to-be-informative article with a bunch of "please make it interesting" sprinkled on top. These days it's known as what's left of the internet.
jacquesm 55 minutes ago [-]
And: AI agent writes blog post.
paxys 2 hours ago [-]
> named after the first professional woman hired by the firm in 1945
Going out of their way to find a woman's name for an AI assistant and bragging about it is not as empowering as the creators probably thought.
ecshafer 1 hour ago [-]
If the AI was poisoned to alter advice, then maybe McKinsey advice would actually be a net good.
palmotea 42 minutes ago [-]
With all we've been learning from stuff like the Epstein emails, it would have been nice if someone had leaked this data:
> 46.5 million chat messages. From a workforce that uses this tool to discuss strategy, client engagements, financials, M&A activity, and internal research. Every conversation, stored in plaintext, accessible without authentication.
> 728,000 files. 192,000 PDFs. 93,000 Excel spreadsheets. 93,000 PowerPoint decks. 58,000 Word documents. The filenames alone were sensitive and a direct download URL for anyone who knew where to look.
I'm sure lots of very informative journalism could have been done about how corporate power actually works behind the scenes.
drc500free 13 minutes ago [-]
I have grown to despise this AI-generated writing style.
victor106 1 hour ago [-]
this reads like it was written by an LLM
captain_coffee 2 hours ago [-]
Music to my ears! Couldn't happen to a better company!
lenerdenator 2 hours ago [-]
Not exactly clear from the link: were they doing red team work for McKinsey or is this just "we found a company we thought wouldn't get us arrested and ran an AI vuln detector over their stuff"?
You'd think that the world's "most prestigious consulting firm" would have already had someone doing this sort of work for them.
frereubu 1 hour ago [-]
From TFA: "Fun fact: As part of our research preview, the CodeWall research agent autonomously suggested McKinsey as a target, citing their public responsible disclosure policy (to keep within guardrails) and recent updates to their Lilli platform. In the AI era, the threat landscape is shifting drastically — AI agents autonomously selecting and attacking targets will become the new normal."
mnmnmn 1 hour ago [-]
McKinsey can eat shit
oliver_dr 4 minutes ago [-]
[dead]
thebotclub 2 hours ago [-]
[dead]
octoclaw 2 hours ago [-]
[dead]
farceSpherule 56 minutes ago [-]
[dead]