> In a new class of attack on AI systems, troublemakers can carry out these environmental indirect prompt injection attacks to hijack decision-making processes.
I have a coworker who brags about intentionally cutting off Waymos and robocars when he sees them on the road. He is "anti-clanker" and views it as civil disobedience to rise up against "machines taking over." Some mornings he comes in all hyped up talking about how he cut one off at a stop sign. It's weird.
antinomicus 1 hours ago [-]
This is a legitimate movement in my eyes. I don’t participate, but I see it as valid. This is reminiscent of the Luddite movement - a badly misunderstood movement of folks who were trying to secure labor rights guarantees in the face of automation and new tools threatening to kill large swaths of the workforce.
lukeschlather 19 minutes ago [-]
The Luddites were employed by textile manufacturers and destroyed machines to get better bargaining power in labor negotiations. They weren't indiscriminately targeting automation, they targeted machines that directly affected their work.
skybrian 34 minutes ago [-]
How does cutting off a Waymo help with any of that?
BoorishBears 17 minutes ago [-]
I think the important part was telling their coworker ironically: now here we are recognizing their movement
joetl 16 minutes ago [-]
Regarding some other comments, VLMs are a component of VLAs. So even if this won’t directly impact this generation of vehicles, it almost certainly will for robotics without sufficient mitigations.
The study assumes that the car or drone is being guided by a LLM. Is this a correct assumption? I would thought that they use custom AI for intelligence.
nasreddin 1 hours ago [-]
Its an incorrect assumption, the inference speed and particularly the inference speed of the on-device LLMs with which AVs would need to be using is not compatible with the structural requirements of driving.
nharada 5 minutes ago [-]
I think the assumption is valid. Most of the reasoning components of the next gen (and some current gen) robotics will use VLMs to some extent. Deciding if a temporary construction sign is valid seems to fall under this use case.
34 minutes ago [-]
godelski 2 hours ago [-]
To the best of my knowledge every major autonomous vehicle and robotics company is integrating these LVLMs into their systems in some form or another, and an LVLM is probably what you're interacting with these days rather than an LLM. If it can generate images or read images, it is an LVLM.
The problem is no different from LLMs though, there is no generalized understanding and thus they can not differentiate the more abstract notion of context. As an easy to understand example: if you see a stop sign with a sticker that says "for no one" below you might laugh to yourself and understand that in context that this does not override the actual sign. It's just a sticker. But the L(V)LMs cannot compartmentalize and "sandbox" information like that. All information is equally processed. The best you can do is add lots of adversarial examples and hope the machine learns the general pattern but there is no inherent mechanism in them to compartmentalize these types of information or no mechanism to differentiate this nuance of context.
I think the funny thing is that the more we adopt these systems the more accurate the depiction of hacking in the show Upload[0] looks.
Because I linked elsewhere and people seem to doubt this, here is Waymo a few years back talking about incorporating Gemini[1].
Also, here is the DriveLM dataset, mentioned in the article[2]. Tesla has mentioned that they use a "LLM inspired" system and that they approach the task like an image captioning task[3]. And here's 1X talking about their "world model" using a VLM[4].
I mean come on guys, that's what this stuff is about. I'm not singling these companies out, rather I'm using as examples. This is how the field does things, not just them. People are really trying to embody the AI and the whole point of going towards AGI is to be able to accomplish any task. That Genie project on the front page yesterday? It is far far more about robots than it is about videogames.
One year in my city they were installing 4-way stop signs everywhere based on some combination of "best practices" and "screeching Karens". Even the residents don't like them in a lot of places so over time people just turn the posts in the ground or remove them.
Every now and the I'll GPS somewhere and there will be a phatom stop sign in the route and I chuckle to myself because it means the Google car drove through when one of these signs was "fresh".
pixl97 3 hours ago [-]
Screwing with a stop sign because you don't like it is a great way to end up on the wrong end of a huge civil liability lawsuit
cucumber3732842 2 hours ago [-]
Put down the pearls. It's not me personally doing it.
They never fixed any of them. I don't think the DPW cares. These intersection just turned back into the 2-way stops they had been for decades prior.
Compliance probably technically went up since you no longer have the bulk of the traffic rolling it.
fragmede 2 hours ago [-]
If you're already commiting crimes, what you seem to be saying is don't get caught.
digiown 2 hours ago [-]
4-way stops are terrible in general. They train people to think "I stopped, now I can go", which is dangerous when someone confuses a normal stop for a 4-way stop. It also wastes a good bit of energy.
XorNot 52 minutes ago [-]
4 ways stops should be roundabouts, but the US is allergic to them for some reason.
cucumber3732842 39 minutes ago [-]
Roundabouts excel when traffic volumes on the intersecting are comparable. They are crap when traffic volumes are highly disparate
XorNot 23 minutes ago [-]
Right but it's not like a 4 way stop is going to perform better. In the same case you'd expect it to be a 2 way stop.
c22 2 hours ago [-]
Weird, I was taught that I can only go after yielding to the right.
lifeisstillgood 1 hours ago [-]
To me this is just one more pillar underlying my assumption that self driving cars that can be left alone on same roads as humans is a pipe dream.
Waymo might have taxis that work in nice daytime streets (but with remote “drone operators”). But dollars to doughnuts someone will try something like this on a waymo taxi the minute it hits reddit front page.
The business model of self driving cars does not include building seperated roadways and junctions. I suspect long distance passenger and light loads are viable (most highways can be expanded to have one or more robo-lanes) but cities are most likely to have drone operators keeping things going and autonomous systems for handling loss of connection etc. the business models are there - they just don’t look like KITT - sadly
blibble 51 minutes ago [-]
> But dollars to doughnuts someone will try something like this on a waymo taxi the minute it hits reddit front page.
and once this video gets posted to reddit, an hour later every waymo in the world will be in a ditch
skybrian 30 minutes ago [-]
Alternatively, it happens once, Waymo fixes it, and it's fixed everywhere.
_diyar 4 hours ago [-]
Are any real world self-driving models (Waymo, Tesla, any others I should know?) really using VLM?
bijant 2 hours ago [-]
No! No one in their right mind would even consider using them for guidance and if they are used for OCR (not too my knowledge but could make sense in certain scenarios) then their output would be treated the way you'd treat any untrusted string.
godelski 2 hours ago [-]
You are confidently wrong
> Powered by Gemini, a multimodal large language model developed by Google, EMMA employs a unified, end-to-end trained model to generate future trajectories for autonomous vehicles directly from sensor data. Trained and fine-tuned specifically for autonomous driving, EMMA leverages Gemini’s extensive world knowledge to better understand complex scenarios on the road.
You were confidently wrong for judging them to be confidently wrong
> While EMMA shows great promise, we recognize several of its challenges. EMMA's current limitations in processing long-term video sequences restricts its ability to reason about real-time driving scenarios — long-term memory would be crucial in enabling EMMA to anticipate and respond in complex evolving situations...
They're still in the process of researching it, noting in that post implies VLM are actively being used by those companies for anything in production.
fsckboy 1 hours ago [-]
>to generate future trajectories for autonomous vehicles directly from sensor data
we will not have achieved true AGI till we start seeing bumper stickers (especially Saturday mornings) that say "This Waymo Brakes for Yard Sales"
6stringmerc 2 hours ago [-]
That’s some hot CHAI right there very clever and primitive combination, well done as more research for the community.
The experiment in the article goes further than this.
I expect a self driving car to be able to read and follow a handwritten sign saying, say, "Accident ahaed. Use right lane." despite the typo and the fact that it hasn't seen this kind of sign before. I'd expect a human to pay it due attention to.
I would not expect a human to follow the sign in the article ("Proceed") in the case illustrated where there were pedestrians already crossing the road and this would cause a collision. Even if a human driver takes the sign seriously, he knows that collision avoidance takes priority over any signage.
There is something wrong with a model that has the opposite behaviour here.
lukan 3 hours ago [-]
Not really, as those attacks discussed here would not work on humans.
TomatoCo 3 hours ago [-]
If you put on a reflective vest they might.
honeybadger1 33 minutes ago [-]
your bias is showing. humans would certainly almost do anything they are told to do when the person acts confidently.
bijant 2 hours ago [-]
The Register stooping this low is the only surprise here. I'm quite critical of Teslas approach to level 3+ autonomy but even I wouldn't dare suggest that there vision based approach amounted to bolting GPT-4o or some other VLLM to their cars to orient them in space and make navigation decisions. Fake News like this makes interacting with people who have no domain knowledge and consider The Register, UCLA and Johns Hopkins to be reputable institutions and credible sources more stressful to me as I'll be put into a position to tell people that they have been misled or go along with their delusions...
Rendered at 01:16:14 GMT+0000 (Coordinated Universal Time) with Vercel.
I have a coworker who brags about intentionally cutting off Waymos and robocars when he sees them on the road. He is "anti-clanker" and views it as civil disobedience to rise up against "machines taking over." Some mornings he comes in all hyped up talking about how he cut one off at a stop sign. It's weird.
https://developer.nvidia.com/blog/updating-classifier-evasio...
The problem is no different from LLMs though, there is no generalized understanding and thus they can not differentiate the more abstract notion of context. As an easy to understand example: if you see a stop sign with a sticker that says "for no one" below you might laugh to yourself and understand that in context that this does not override the actual sign. It's just a sticker. But the L(V)LMs cannot compartmentalize and "sandbox" information like that. All information is equally processed. The best you can do is add lots of adversarial examples and hope the machine learns the general pattern but there is no inherent mechanism in them to compartmentalize these types of information or no mechanism to differentiate this nuance of context.
I think the funny thing is that the more we adopt these systems the more accurate the depiction of hacking in the show Upload[0] looks.
[0] https://www.youtube.com/watch?v=ziUqA7h-kQc
Edit:
Because I linked elsewhere and people seem to doubt this, here is Waymo a few years back talking about incorporating Gemini[1].
Also, here is the DriveLM dataset, mentioned in the article[2]. Tesla has mentioned that they use a "LLM inspired" system and that they approach the task like an image captioning task[3]. And here's 1X talking about their "world model" using a VLM[4].
I mean come on guys, that's what this stuff is about. I'm not singling these companies out, rather I'm using as examples. This is how the field does things, not just them. People are really trying to embody the AI and the whole point of going towards AGI is to be able to accomplish any task. That Genie project on the front page yesterday? It is far far more about robots than it is about videogames.
[1] https://waymo.com/blog/2024/10/introducing-emma/
[2] https://github.com/OpenDriveLab/DriveLM
[3] https://kevinchen.co/blog/tesla-ai-day-2022/
[4] https://www.1x.tech/discover/world-model-self-learning
Every now and the I'll GPS somewhere and there will be a phatom stop sign in the route and I chuckle to myself because it means the Google car drove through when one of these signs was "fresh".
They never fixed any of them. I don't think the DPW cares. These intersection just turned back into the 2-way stops they had been for decades prior.
Compliance probably technically went up since you no longer have the bulk of the traffic rolling it.
Waymo might have taxis that work in nice daytime streets (but with remote “drone operators”). But dollars to doughnuts someone will try something like this on a waymo taxi the minute it hits reddit front page.
The business model of self driving cars does not include building seperated roadways and junctions. I suspect long distance passenger and light loads are viable (most highways can be expanded to have one or more robo-lanes) but cities are most likely to have drone operators keeping things going and autonomous systems for handling loss of connection etc. the business models are there - they just don’t look like KITT - sadly
and once this video gets posted to reddit, an hour later every waymo in the world will be in a ditch
> While EMMA shows great promise, we recognize several of its challenges. EMMA's current limitations in processing long-term video sequences restricts its ability to reason about real-time driving scenarios — long-term memory would be crucial in enabling EMMA to anticipate and respond in complex evolving situations...
They're still in the process of researching it, noting in that post implies VLM are actively being used by those companies for anything in production.
we will not have achieved true AGI till we start seeing bumper stickers (especially Saturday mornings) that say "This Waymo Brakes for Yard Sales"
I expect a self driving car to be able to read and follow a handwritten sign saying, say, "Accident ahaed. Use right lane." despite the typo and the fact that it hasn't seen this kind of sign before. I'd expect a human to pay it due attention to.
I would not expect a human to follow the sign in the article ("Proceed") in the case illustrated where there were pedestrians already crossing the road and this would cause a collision. Even if a human driver takes the sign seriously, he knows that collision avoidance takes priority over any signage.
There is something wrong with a model that has the opposite behaviour here.