NHacker Next
Can LLMs model real-world systems in TLA+? (sigops.org)
simplegeek 30 minutes ago [-]
I feel LLMs are indeed getting better at writing models. But, in my experience, they struggle to come up with correct safety and liveness properties unless you work closely with them. Of the two, they struggle most with correct liveness properties.

Also, for some problems I observe that models produced by LLMs often cause state-space explosion. For simpler models they can fix this when you guide them, though.
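To illustrate both points (my own toy example, not from the thread): in TLA+, safety properties are typically checked as invariants, liveness needs temporal operators plus a fairness assumption, and an unbounded state space needs a constraint before TLC can finish:

```tla
---- MODULE CounterProps ----
EXTENDS Naturals
VARIABLE x

Init == x = 0
Next == x' = x + 1
Spec == Init /\ [][Next]_x /\ WF_x(Next)

\* Safety: "nothing bad ever happens" -- checked as an invariant.
TypeOK == x \in Nat

\* Liveness: "something good eventually happens" -- needs the
\* weak-fairness conjunct WF_x(Next) above, or TLC refutes it
\* with a behavior that stutters forever.
EventuallyTen == <>(x >= 10)

\* Without a state constraint like this, TLC's state space is
\* infinite -- the "explosion" problem in miniature.
StateConstraint == x <= 20
====
```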

I’m sure LLMs will get even better.

That said, I take a slightly different approach. Lamport said, “If you're thinking without writing, you only think you're thinking.” Taking that advice, I always try to write the first draft by hand, and once I have the final shape in place I turn to an LLM for further exploration and experimentation if I need to.

iFire 3 hours ago [-]
I don't use TLA+ to model real-world systems anymore. Claude is able to model systems in Lean 4, and the binary executable can handle real input, or I can directly generate C/Rust from proofs over numeric types that have ring structure (integers, rationals, bits).

https://github.com/lambdaclass/truth_research_zk

dmos62 2 minutes ago [-]
Do you find Lean 4 sufficient for highly async systems?
thomasahle 54 minutes ago [-]
I'm currently choosing the right formalism for a big hardware project.

I'm considering SVA, TLA+, and Lean, with the first being more domain-specific and the last more general.

Do you think we'll move towards "Lean for everything" or do domain specific formalisms still make sense?

tmaly 6 hours ago [-]
I remember NVIDIA sponsored a TLA+ challenge last year: https://foundation.tlapl.us/challenge/index.html
uptodatenews 6 hours ago [-]
Whoa, didn't even know about that. Cool!
tombert 4 hours ago [-]
Claude has certainly been getting better with TLA+. It's not perfect yet, but for laughs I got it to model the rules of Monopoly last night [1]. I haven't done any exhaustive checking on it yet, but it certainly looks passable.

It's pretty impressive how good it's gotten at this, in a relatively short amount of time no less. I still usually write my specs by hand, but who knows how much longer I'll be doing that.

[1] https://pdfhost.io/v/KU2j37YKrP_Monopoly

ofrzeta 3 hours ago [-]
It looks quite complicated and I have no idea what it is doing. Obviously, since I don't know TLA+. But what about someone who does? It still seems hard to be sure it is valid. And that's for a relatively simple game.
comex 29 minutes ago [-]
Well, for one thing:

> Decline to buy: property stays with bank (auction abstracted out)

Ignoring an entire game mechanic is really stretching the definition of “abstracted out”…

Also, at the bottom it defines a “Liveness: someone eventually wins” property which I believe cannot be proven. Monopoly doesn’t have any rules forcing the game to end eventually. There is only a probabilistic guarantee, and even that only applies if the players are trying to win; if the players are conspiring to prevent the game from ending then they’re unlikely to fail.
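In TLA+ terms (a sketch with hypothetical names, not taken from the linked spec), the claim would look something like this:

```tla
\* A property like "someone eventually wins" (names hypothetical):
SomeoneWins == <>(\E p \in Players : winner = p)

\* Without a fairness conjunct such as WF_vars(Next), an infinite
\* stuttering behavior already refutes it. And even with fairness,
\* Monopoly's rules permit infinite games, so TLC should report a
\* lasso-shaped counterexample rather than prove the property.
```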

_doctor_love 31 minutes ago [-]
There is a nice guide to TLA+ from Hillel Wayne here: https://learntla.com/

PlusCal is recommended there as the gentler on-ramp when first learning TLA+.
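For a flavor of that on-ramp (my own minimal toy, in the spirit of the examples learntla.com opens with): PlusCal is written inside a TLA+ comment and mechanically translated into TLA+.

```tla
---- MODULE HelloPlusCal ----
EXTENDS Naturals

(* --algorithm increment
variable x = 0;
begin
  Inc:
    while x < 3 do
      x := x + 1;
    end while;
end algorithm; *)

\* The translator inserts the generated TLA+ here; TLC then checks
\* invariants such as x <= 3 against it.
====
```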

atomicnature 3 hours ago [-]
Just a question to people who may know better than me about this.

I thought the whole point of writing out TLA+ is that you develop a better idea of what you want by putting it into a formal language?

I get that an LLM can assist with expressing what we want in a formal language, but if one automates all of this, there is no human intent/design anymore.

If the LLM generates both the design (TLA+) and writes an arbitrary program that satisfies said design -- what exactly have we proved?

What assurance do humans get if the human neither knows nor can specify what they want?

majormajor 2 hours ago [-]
An LLM-generated TLA+ model can be verified for certain things in a way that LLM-generated code can't. It's infamously hard to exhaustively unit-test concurrency.

Whether or not you're modeling the right things or verifying the right things, of course... that's always left as an exercise for the user. ;)

(How to prove the implementation code is guaranteed to match the spec is a trick I haven't seen generalized yet, either.)

dgacmu 6 hours ago [-]
This post reads like an accidental advertisement for approaches like Verus [1], which couple the implementation and the verification so you can't end up with a model that diverges from the actual implementation. I'm personally much more optimistic about the Verus approach, but I freely admit that's my builder bias speaking.

[1] https://github.com/verus-lang/verus

pzoln 3 hours ago [-]
Sorry if this is a very naive question, but what if you give an LLM just the source code (maybe even with names like Raft and etcd obfuscated) and ask it to create a TLA+ spec from that?
_doctor_love 30 minutes ago [-]
This is already being done by some folks: reverse-engineering existing source into a TLA+ spec. As other commenters have mentioned, the challenge is in ensuring that the spec and the code match each other.