NHacker Next
  • new
  • past
  • show
  • ask
  • jobs
  • submit
Show HN: Agent-skills-eval – Test whether Agent Skills improve outputs (github.com)
ssgodderidge 1 hour ago [-]
The example model in the documentation is 4o-mini; you might want to update that to a more recent model.

As an aside, 4o-mini came out months before Agent Skills were released… I'm curious how it performs at choosing to load skills in the first place.

stingraycharles 50 minutes ago [-]
It’s an artifact of the documentation being AI-generated; models usually pick GPT-4-era model names without giving it further thought.

For Gemini, it seems to always pick 2.5 despite 3.1 being the latest; for Claude, the 3.5-era models.

Not sure what’s preventing AI labs from ensuring this stuff is refreshed during training.

block_dagger 55 minutes ago [-]
The skill is deterministically added to the prompt by the harness before the target model is invoked. There is no “choosing” to load a skill. You might be confusing skills with tools (MCP etc).
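The deterministic injection described here can be sketched roughly as follows. All names in this snippet (`build_prompt`, the skill text, the message shape) are hypothetical illustrations of the idea, not the actual Agent Skills API:

```python
# Sketch: a harness deterministically prepends a skill to the prompt
# before the target model is ever invoked. The model does not "choose"
# to load it -- by the time inference starts, the skill text is already
# in the context. Contrast with tools (MCP), where the model emits a
# tool-use request at runtime and the harness services it.

SKILL_MD = """\
# Example skill
When extracting tables from a PDF, prefer a streaming parser.
"""

def build_prompt(skill_md: str, user_message: str) -> list[dict]:
    """Concatenate the skill into the system prompt (hypothetical helper)."""
    return [
        {"role": "system", "content": "You are a helpful agent.\n\n" + skill_md},
        {"role": "user", "content": user_message},
    ]

messages = build_prompt(SKILL_MD, "Extract the tables from report.pdf")
# messages[0] now carries the full skill text in the system role;
# whatever model is invoked next sees it unconditionally.
```

Under this framing, evaluating "does the skill improve outputs" reduces to comparing runs with and without the injected text, which is orthogonal to the model's tool-selection behavior.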
egeozcan 2 hours ago [-]
Are there any published results gathered using this?
ianhxu 1 hour ago [-]
How do you iterate on the judge prompt? Is there an auto rater?
bixxie09 19 minutes ago [-]
[dead]
huflungdung 3 hours ago [-]
[dead]
Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact
Rendered at 11:21:25 GMT+0000 (Coordinated Universal Time) with Vercel.