Steering interpretable language models with concept algebra (guidelabs.ai)
giang_at_glai 15 hours ago [-]
Author here.

This post shows “concept algebra” on language models: inject, suppress, and compose human-understandable concepts at inference time (no retraining, no prompt engineering).

There’s an interactive demo on the post.

Would love feedback on: (1) what steering tasks you’d benchmark, (2) failure cases you’d want to see, (3) whether this kind of compositional control is useful in real products.
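The thread doesn't spell out the mechanism, but inference-time concept steering is commonly sketched as vector arithmetic on a model's hidden activations: injection adds a scaled concept direction, and suppression projects that direction out. A toy numpy illustration, with hypothetical 4-d states and concept vectors (not the post's actual method or dimensions):

```python
import numpy as np

def inject(h, concept, alpha=1.0):
    """Add a scaled concept direction to a hidden state."""
    return h + alpha * concept

def suppress(h, concept):
    """Remove the component of h that lies along the concept direction."""
    c = concept / np.linalg.norm(concept)
    return h - np.dot(h, c) * c

# Toy 4-d hidden state and two made-up "concept" directions.
h = np.array([1.0, 2.0, 3.0, 4.0])
formal = np.array([0.0, 1.0, 0.0, 0.0])
pirate = np.array([0.0, 0.0, 0.0, 1.0])

# Compose: suppress one concept, then inject another.
steered = inject(suppress(h, formal), pirate, alpha=2.0)
```

After suppression the steered state has zero dot product with the suppressed concept, which is one way to quantify the "how often does it still mention it" question raised below.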

Related: https://news.ycombinator.com/item?id=47131225

anon291 26 minutes ago [-]
I would personally like some quantification of how good this is compared to just replacing the system prompt of an off-the-shelf 8B-parameter language model.

The suppression bit is very powerful. I would like to see a quantification of how often a steered 'normal' language model mentions things you asked it to suppress vs. how often this one does.
