NHacker Next
Show HN: Agent-cache – Multi-tier LLM/tool/session caching for Valkey and Redis
revenga99 5 hours ago [-]
Can you explain what this does?
kaliades 4 hours ago [-]
It caches AI agent operations in Valkey (or Redis) so you don't repeat expensive work.

Three tiers: if your agent calls gpt-4o with the same prompt twice, the second call returns from Valkey in under 1ms instead of hitting the API. Same for tool calls: if your agent calls get_weather("Sofia") twice with the same arguments, the cached result comes back instantly. And session state (what step the agent is on, user intent, LangGraph checkpoints) persists across requests with per-field TTL.
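The read-through pattern behind the LLM and tool tiers can be sketched roughly like this. Everything here is a hypothetical illustration, not agent-cache's actual API: `cache_key`, `cached_call`, and the `agent-cache:` key prefix are made-up names, and `client` stands for any object with redis-py-style `get`/`set` methods (a `redis.Redis` or Valkey connection would fit).

```python
import hashlib
import json


def cache_key(tier: str, name: str, payload: dict) -> str:
    """Deterministic key: tier + name + SHA-256 of the canonical JSON payload,
    so the same prompt or tool arguments always map to the same key."""
    digest = hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()
    ).hexdigest()
    return f"agent-cache:{tier}:{name}:{digest}"


def cached_call(client, tier: str, name: str, payload: dict, fn, ttl: int = 3600):
    """Return the cached result for (tier, name, payload); on a miss,
    compute it with fn, store the JSON for `ttl` seconds, and return it."""
    key = cache_key(tier, name, payload)
    hit = client.get(key)
    if hit is not None:
        return json.loads(hit)  # cache hit: no API or tool call
    result = fn(**payload)
    client.set(key, json.dumps(result), ex=ttl)
    return result
```

With this shape, `cached_call(client, "tool", "get_weather", {"city": "Sofia"}, get_weather)` only invokes the real tool on the first call; repeats with identical arguments are served from the store. Note this is exact-match caching on the serialized arguments, byte-for-byte after canonical JSON encoding.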

The main difference from existing options is that LangChain's cache only handles LLM responses, LangGraph's checkpoint-redis only handles state (and requires Redis 8 + modules), and none of them ship OpenTelemetry or Prometheus instrumentation at the cache layer. This puts all three tiers behind one Valkey connection with observability built in.

trueno 3 hours ago [-]
When you say "same prompt", do you mean a similar prompt, where something in the middle decides "this is basically the same question"? Or does it only match when someone prompted, then for whatever reason copied and pasted that prompt and submitted it again word for word?
kaliades 3 hours ago [-]
[dead]
eddy_cammegh 3 hours ago [-]
[dead]