Learning from context is harder than we thought (hy.tencent.com)
XenophileJKO 54 seconds ago [-]
Hmm.. I looked at the benchmark set.

I'm conflicted. I don't know that I would necessarily want a model to pass all of these. Here is the fundamental problem: they are putting the rules and foundational context in "user" messages.

Essentially, I don't think you want to train models on full compliance with user messages; from a system/model perspective they are "untrusted" content, or at least not generally fully authoritative.

This creates a tension with the safety training, truthfulness training, and so on.
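
For illustration, a minimal sketch of the distinction, assuming an OpenAI-style chat message list (the benchmark's actual wire format is my assumption):

    # Rules in "user" turns vs. the "system" turn. Models are typically
    # trained to treat the system turn as more authoritative and user
    # turns as comparatively untrusted content.
    rules_in_user_turn = [
        {"role": "user", "content": "Rule: always answer in JSON."},
        {"role": "user", "content": "What is 2 + 2?"},
    ]
    rules_in_system_turn = [
        {"role": "system", "content": "Rule: always answer in JSON."},
        {"role": "user", "content": "What is 2 + 2?"},
    ]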

bradfa 20 minutes ago [-]
The key seems to be that you take the transcript of a model working within a problem domain that it's not yet good at, or where the context doesn't match its original training, and then you continually retrain it based on its efforts and on guidance from a human or other expert. You end up with a specialty model in a given domain that keeps getting better at that domain, just like a human.
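
A rough sketch of that loop, with stand-in functions since no concrete API is described; the fine-tuning step itself is hypothetical and left as a comment:

    corrections = []

    def model_attempt(task):
        # Stand-in for the model producing a transcript on a domain task.
        return f"attempt at: {task}"

    def expert_review(transcript):
        # Stand-in for human or other expert guidance on that transcript.
        return transcript + " [corrected]"

    for task in ["task-1", "task-2", "task-3"]:
        transcript = model_attempt(task)
        corrections.append(expert_review(transcript))
        # Fold the corrected transcripts back into the weights, so the
        # model keeps improving in this one domain:
        # model = fine_tune(model, corrections)  # hypothetical step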

The hard part is likely when someone proves that some "fact" the model knows, and has had reinforced by this training, is no longer true. The model will take time to "come around" to this new situation. But this isn't unlike the general populace: at scale, humans accept new things slowly.

johnsmith1840 20 minutes ago [-]
It's basically continual learning. This is beyond a hard problem; it's currently an impossible one. I know of no system that solves CL even at small scale, let alone for large models.

Annoyingly, they have SOME inherent capability to do it. It's really easy to get sucked down this path because of that glimmer of hope, but the longer you play with it, the more annoying it becomes.

SSI seems to be focused on this problem directly, so maybe they'll discover something?
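
The standard failure mode behind this, catastrophic forgetting, shows up even at toy scale. A self-contained numpy sketch: a linear model is fit on task A, then trained only on task B, and its task-A error climbs right back up:

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(2)
    xa = rng.normal(size=(100, 2))
    ya = xa @ np.array([1.0, -1.0])   # task A optimum: w = (1, -1)
    xb = rng.normal(size=(100, 2))
    yb = xb @ np.array([-1.0, 1.0])   # task B optimum: w = (-1, 1)

    def step(w, x, y, lr=0.05):
        # One gradient step on mean squared error.
        grad = 2 * x.T @ (x @ w - y) / len(x)
        return w - lr * grad

    for _ in range(200):              # phase 1: learn task A
        w = step(w, xa, ya)
    print("task A error after A:", ((xa @ w - ya) ** 2).mean())  # near zero

    for _ in range(200):              # phase 2: train on task B only
        w = step(w, xb, yb)
    print("task A error after B:", ((xa @ w - ya) ** 2).mean())  # large again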

joriJordan 8 minutes ago [-]
Because we don't experience reality through language but through direct sensory perception. Language is arbitrary birdsong and visual representations dragged forward from history, with accepted definitions that are never uniformly distributed.

Testing based on contextual correctness makes no sense when there is no center to the universe. No "one true context to rule them all".

We learn from hands-on sensory experiences. Our bodies store knowledge independently of the brain, often referred to as muscle memory.

Gabe Newell mentioned this years ago: our brain is only great at some things, like language and vision processing, but the rest of our body is involved in sensory information processing too: https://en.wikiquote.org/wiki/Gabe_Newell

The most potent evidence that the brain is not the center of the universe we commonly take it to be is the patient who had 90% of his skull filled with fluid while carrying out a typical first-world life: https://www.sciencealert.com/a-man-who-lives-without-90-of-h...

States are banning a reading education framework that's been linked to lower literacy scores in younger generations; 3-cueing relies on establishing correctness via context assessment: https://www.edweek.org/teaching-learning/more-states-are-tak...

"Establishing context" is a euphemism for "arguing semantics".

Putting the brain at the root of human intelligence is a relic of hierarchical and taxonomical models. There are no natural hierarchies.

rishabhaiover 15 minutes ago [-]
Wasn't in-context learning an emergent behavior a while ago (1-2 years)?
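
For reference, "in-context learning" means picking up a pattern purely from examples placed in the prompt, with no weight updates; the famous claim was that this ability emerges with scale. A minimal example of such a few-shot prompt (the word pairs are made up):

    # The translation pattern is "learned" only from the prompt text at
    # inference time; no gradient update or fine-tuning is involved.
    prompt = (
        "maison -> house\n"
        "chien -> dog\n"
        "fromage ->"
    )
    # A capable model completes this with " cheese", continuing the
    # in-context pattern.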