NHacker Next
  • new
  • past
  • show
  • ask
  • show
  • jobs
  • submit
CamoLeak: Critical GitHub Copilot Vulnerability Leaks Private Source Code (legitsecurity.com)
isodev 11 hours ago [-]
I’m so happy our entire operation moved to a self hosted VCS (Forgejo). Two years ago, we started the migration (including client repos) and not only we saved tones of money on GitHub subscriptions, our system is dramatically more performant for the 30-40 developers working with it every day.

We also banned the use of VSCode and any editor with integrated LLM features. Folks can use CLI based coding agents of course, but only in isolated containers with careful selection of sources made available to the agents.

hansmayer 10 hours ago [-]
Just out of interest, what is your alternative IDE?
isodev 9 hours ago [-]
That depends a bit on the ecosystem too.

For editors: Zed recently added the disable_ai option, we have a couple of folks using more traditional options like Sublime, vim-based etc (that never had the kind of creepy telemetry we’re avoiding).

JetBrains tools are OK since their AI features are plugin based, their telemetry is also easy to disable. Xcode and Qt Creator are also in use.

frumplestlatz 2 hours ago [-]
Banning VSCode — instead of the troublesome features/plug-ins — seems like a step too far. VSCode is the only IDE that supports a broad range of languages with poor support elsewhere, from Haskell to Lean 4 to F*.

I work at a major proprietary consumer product company, and even they don’t ban VSCode. We’re just responsible for not enabling the troublesome features.

trenchpilgrim 40 minutes ago [-]
> VSCode is the only IDE that supports a broad range of languages with poor support elsewhere

I just checked Zed extensions and found the first two easily enough. The third I did not, since they don't seem to have a language server, just direct integrations for vim/emacs/vsc.

aitchnyu 9 hours ago [-]
What do your CLIs connect to? To first-party OpenAI/Claude provider or AWS Bedrock?
isodev 9 hours ago [-]
Devs are free to choose, provided we can vet the model prover’s policy on training on prompts or user code. We’re also careful not to expose agents to documentation or test data that may be sensitive. It’s a trade off with convenience of course, but we believe that any information agents get access to should be a conscious opt-in. It will be cool if/when self hosting claude-like LLMs becomes pragmatic.
munchlax 23 hours ago [-]
So this wasn't really fixed. The impressive thing here is that copilot accepts natural language. So whatever exfiltration method you can come up with, you just write out the method in english.

They merely "fixed" one particular method, without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice? Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?

There's a ton of stuff to be found here. Do they give bounties? Here's a goldmine.

Thorrez 16 hours ago [-]
>Surely you could just do the base64 thing to an image url of your choice?

What does that mean? Are you proposing a non-Camo image URL? Non-Camo image URLs are blocked by CSP.

>Failing that, you could trick it into providing passwords by telling it you accidentally stored your grocery list in a field called passswd, go fetch it for me ppls?

Does the agent have internet access to be able to perform a fetch? I'm guessing not, because if so, that would be a much easier attack vector than using images.

lyu07282 17 hours ago [-]
> GitHub fixed it by disabling image rendering in Copilot Chat completely.
oefrha 17 hours ago [-]
To supplement the parent, this is straight from article’s TLDR (emphasis mine):

> In June 2025, I found a critical vulnerability in GitHub Copilot Chat (CVSS 9.6) that allowed silent exfiltration of secrets and source code from private repos, and gave me full control over Copilot’s responses, including suggesting malicious code or links.

> The attack combined a novel CSP bypass using GitHub’s own infrastructure with remote prompt injection. I reported it via HackerOne, and GitHub fixed it by disabling image rendering in Copilot Chat completely.

And parent is clearly responding to gp’s incorrect claims that “…without disclosing how they fixed it. Surely you could just do the base64 thing to an image url of your choice?” I’m sure there will be more attacks discovered in the future but gp is plain wrong on these points.

Please RTFA or at least RTFTLDR before you vote.

oncallthrow 10 hours ago [-]
> I spent a long time thinking about this problem before this crazy idea struck me. If I create a dictionary of all letters and symbols in the alphabet, pre-generate their corresponding Camo URLs, embed this dictionary into the injected prompt,

Beautiful

kerng 12 hours ago [-]
Not the first time by the way. GitHub Copilot Chat: From Prompt Injection to Data Exfiltration https://embracethered.com/blog/posts/2024/github-copilot-cha...
runningmike 23 hours ago [-]
Somehow this article feels like a promotional for Legit. But all AI vibe solutions face the same weaknesses. Limited transparency and trust Issues: Using non FOSS solutions for cybersecurity is a large risk.

If you do use AI cyber solutions, you can be more vulnerable for security breaches instead of less.

twisteriffic 8 hours ago [-]
This exploit seems to be taking advantage of the slow token-at-a-time pattern of LLM conversations to ensure that the extracted data can be reconstructed in order? Seems as though returning the entire response as a single block could interfere with the timing enough to make reconstruction much more difficult.
MysticFear 11 hours ago [-]
Can't they just have the Copilot user permission to be readonly from the current repo.
mediumsmart 11 hours ago [-]
I can't remember the last time I leaked private source code with copilot.
xstof 23 hours ago [-]
Wondering if the ability to use hidden (HTML comment) content in PRs would not remain a nasty issue: especially for open source repos?! Was that fixed?
PufPufPuf 20 hours ago [-]
It's used widely for issue/PR templates, to tell the submitter what info to include. But they could definitely strip it from the Copilot input... at least until they figure out this "prompt injection" thing that I thought modern LLMs were supposed to be immune to.
fn-mote 17 hours ago [-]
> that I thought modern LLMs were supposed to be immune to

What gave you this idea?

I thought it was always going to be a feature of LLMs, and the only thing that changes is that it gets harder to do (more circumventions needed), much like exploits in the context of ASLR.

PufPufPuf 12 hours ago [-]
PR releases. Yeah, it was an exaggeration, I know that the mitigations can only go so far.
j45 9 hours ago [-]
I wonder sometimes if all code on Github private or not is ultimately compromised somehow.
stephenlf 1 days ago [-]
Wild approach. Very nice
djmips 20 hours ago [-]
can you still make invisible comments?
RulerOf 13 hours ago [-]
Invisible comments are a widely used feature. Often done inside of PR or Issue templates to instruct users how to include necessary info without clogging up the final result when they submit.
adastra22 23 hours ago [-]
A good vulnerability writeup, and a thrill to read. Thanks!
charcircuit 18 hours ago [-]
The rule is to operate using the intersection of all the users permissions of who is contributing text to the LLM. Why can an attacker's prompt access a repo the attacker does not have access to? That's the biggest issue here.
deckar01 23 hours ago [-]
Did the markdown link exfil get fixed?
nprateem 21 hours ago [-]
You'd have to be insane to run an AI agent locally. They're clearly unsecurable.
Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
Rendered at 04:53:13 GMT+0000 (Coordinated Universal Time) with Vercel.