Bypassing CPU for NVMe-to-GPU transfer is clever. The bottleneck for running large models locally has always been the memory hierarchy — this essentially treats NVMe as extended VRAM with direct DMA.
I wonder how this compares to Apple's unified memory approach on M-series chips for similar workloads. The M4 Max can fit 70B models entirely in memory without any offloading tricks, though at lower throughput than a 3090.
Would be interesting to see comparative benchmarks: this NVMe approach on a 3090 vs M4 Max native, especially for batch inference where the NVMe latency might be amortized.
fabifabulous 3 hours ago [-]
NVMe drives are much, much slower than RAM. Especially unified/soldered RAM.
3abiton 1 hours ago [-]
To be fair, llama.cpp had this feature for over a year now. It just applies to GGUF.
xaskasdf 6 hours ago [-]
I got an m3, I will test it on metal and check how it goes
01100011 18 hours ago [-]
Yeah, GPUdirect should allow you to dma straight to a storage device.
I wonder... what if the m.2 storage was actually DRAM? You probably don't need persistence for spilling a model off the GPU. How would it fare vs just adding more host memory? The m.2 ram would be less flexible, but would keep the system ram free for the CPU.
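(For anyone curious what that direct NVMe-to-GPU path looks like from userspace, here is a minimal sketch going through NVIDIA's cuFile / GPUDirect Storage stack via the kvikio Python bindings. It assumes a GDS-capable system with kvikio and CuPy installed; the file name is made up and this is not the project's actual code.)

```python
# Hedged sketch: assumes kvikio (cuFile bindings) + CuPy on a GDS-capable system.
# "layer_00.bin" is a hypothetical weight shard, not a file from the project.
import cupy as cp
import kvikio

nbytes = 512 * 1024 * 1024                  # one 512 MiB layer shard
gpu_buf = cp.empty(nbytes, dtype=cp.uint8)  # destination buffer lives in VRAM

with kvikio.CuFile("layer_00.bin", "r") as f:
    # With GDS active, this read is DMA'd from the NVMe drive into GPU memory,
    # skipping the usual bounce buffer in host RAM; without GDS, kvikio falls
    # back to a host-staged copy.
    n = f.read(gpu_buf)

assert n == nbytes
```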
javchz 17 hours ago [-]
Yeah, a ramdisk would probably work wonders. It's a shame Intel Optane didn't become a standard; it would be amazing for these types of workflows.
xaskasdf 5 hours ago [-]
Ya know, there are a bunch of Optanes floating around on the local market here; I'll try to get hold of one and check if there's any improvement.
jonassm 2 hours ago [-]
Optane would be good for latency, but not so much for bandwidth, which seems to be your major bottleneck if I'm not mistaken?
xaskasdf 23 minutes ago [-]
yeah, the mobo upgrade is something I gotta do anyway, so that'll be covered more or less; the Optane is something I hadn't thought about
TechSquidTV 17 hours ago [-]
Ahhh damn it. Intel! Come back!
lmeyerov 13 hours ago [-]
This is exactly what I was wondering
I gave a talk a few years ago at dask summit (conf?) on making the stars align with dask-cudf here. We were helping a customer accelerate log analytics by proving out our stack for nodes that look roughly like: parallel ssd storage arrays (30 x 3 GB/s?) -> GPUDirect Storage -> 4 x 30 GB/s PCIe (?) -> 8 x A100 GPUs, something like that. It'd be cool to see the same thing now in the LLM world, such as a multi-GPU MoE, or even a single-GPU one for that matter!
bhewes 3 hours ago [-]
The Marvell CXL 2.0 DDR4 card Serve the Home used for KV-cache speedups: https://www.servethehome.com/hyper-scalers-are-using-cxl-to-...
And I am personally looking forward to CXL 3 and memory coherence across my system builds.
Doesn't "m.2 storage but DRAM" - hopefully meaning NVMe/PCIe rather than SATA speed - already exist as Compute Express Link (CXL), just not in this specific m.2 form factor? If only RAM weren't silly expensive right now, one could use 31GB/s of additional bandwidth per NVMe connector.
randomtoast 22 hours ago [-]
0.2 tok/s is fine for experimentation, but it is not interactive in any meaningful sense. For many use cases, a well-quantized 8B or 13B that stays resident will simply deliver a better latency-quality tradeoff
xaskasdf 19 hours ago [-]
yeah, actually I wanted to see if this was possible at all. I managed to get around 3000 tokens/s on a PS2 with classic transformers, since the Emotion Engine is capable of 32-bit addresses, but it has like 32gb of ram. So I ran into the question of why that was fast when I couldn't get that speed even with small models, and the deal is that the instructions went straight from memory to the GPU; that's the main difference from how a regular computer does inference: it has to go through the CPU every time. As I mentioned too, on professional cards you can avoid these problems naturally, since they have instructions precisely for this, but sadly I don't have 30k bucks to spare on a GPU :(
derstander 19 hours ago [-]
*32MB of RAM (plus 4MB of video RAM and a little sound and IOP memory).
eleventyseven 15 hours ago [-]
> I don't have 30k bucks to spare on a gpu :(
Do you have $2/hr to rent an RTX 6000 96GB or $5/hr for B200 180GB on the cloud?
superkuh 15 hours ago [-]
I'd rather not give money to scalper barons if I can avoid it. Fab capacity is going to hardware for rental rather than hardware for humans.
xaskasdf 5 hours ago [-]
I thought about that, but I don't know if they'd allow me to modify the Linux kernel and the NVIDIA CUDA kernel modules at all
green-salt 3 hours ago [-]
I think you can do a bunch of that on Digitalocean's GPU droplets.
jonassm 3 hours ago [-]
In those systems you could probably leverage something like Nvidia SCADA or GDS directly.
xaskasdf 22 minutes ago [-]
Actually since they have direct GDS it should perform really well on professional gpus
anoncow 16 hours ago [-]
3000 tokens per sec on 32 mb Ram?
fc417fc802 16 hours ago [-]
fast != practical
You can get lots of tokens per second on the CPU if the entire network fits in L1 cache. Unfortunately the sub 64 kiB model segment isn't looking so hot.
But actually ... 3000? Did GP misplace one or two zeros there?
xaskasdf 5 hours ago [-]
I wondered the same, but the rendering seems right, the output was almost instant. I'll recheck the token counter; anyway as you say, fast isn't practical. Actually I had to develop my own tiny model https://huggingface.co/xaskasdf/brandon-tiny-10m-instruct to fit something "usable", and it's basically a liar or disinformation machine haha
Wuzado 21 hours ago [-]
I can imagine a couple scenarios in which a high-quality, large model would be much preferred over lower latency models, primarily when you need the quality.
tyfon 21 hours ago [-]
I didn't really understand the performance table until I saw the top ones were 8B models.
But 5 seconds / token is quite slow yeah. I guess this is for low ram machines? I'm pretty sure my 5950x with 128 gb ram can run this faster on the CPU with some layers / prefill on the 3060 gpu I have.
I also see that they claim the process is compute bound at 2 seconds/token, but that doesn't seem correct with a 3090?
tgrowazay 20 hours ago [-]
LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.
DDR4 tops out at about 27 GB/s
DDR5 can do around 40 GB/s
So for 70B model at 8 bit quant, you will get around 0.3-0.5 tokens per second using RAM alone.
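(Back-of-the-envelope for that rule of thumb, using only the figures quoted in this comment rather than any measurement:)

```python
# Memory-bandwidth-bound estimate: tok/s ~= bandwidth / bytes read per token.
# A dense 70B model at 8-bit quantization is roughly 70 GB of weights.
model_gb = 70.0
for name, bw_gbs in [("DDR4 (~27 GB/s)", 27.0), ("DDR5 (~40 GB/s)", 40.0)]:
    print(f"{name}: ~{bw_gbs / model_gb:.2f} tok/s")
# DDR4 (~27 GB/s): ~0.39 tok/s
# DDR5 (~40 GB/s): ~0.57 tok/s
```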
uf00lme 20 hours ago [-]
Channels matter a lot, quad channel ddr4 is going to beat ddr5 in dual channel most of the time.
wtallis 18 hours ago [-]
Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores.
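(The raw arithmetic behind that: peak bandwidth per channel is the transfer rate times an 8-byte bus width, so the two configurations land on the same theoretical number.)

```python
# Theoretical peak: MT/s * 8 bytes per 64-bit channel. DDR5 splits each channel
# into two 32-bit subchannels, but the total width per channel is unchanged.
ddr4_quad = 4 * 3200 * 8 / 1000   # four channels of DDR4-3200
ddr5_dual = 2 * 6400 * 8 / 1000   # two channels of DDR5-6400
print(ddr4_quad, ddr5_dual)       # 102.4 102.4 (GB/s)
```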
someguy2026 20 hours ago [-]
DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding it "lukewarm" in DRAM rather than on NVMe storage is obviously faster.
vlovich123 20 hours ago [-]
Faster than the 0.2tok/s this approach manages
zozbot234 20 hours ago [-]
Should be active param size, not model size.
xaskasdf 19 hours ago [-]
yeah, actually, I'm bottlenecked af since my mobo got pcie3 only :(
fluoridation 17 hours ago [-]
That's slower than just running it off CPU+GPU. I can easily hit 1.5 tokens/s on a 7950X+3090 and a 20480-token context.
umairnadeem123 16 hours ago [-]
0.2 tok/s is slow for chat but perfectly fine for batch/async workloads. I run automated content generation pipelines where a single job kicks off dozens of LLM calls (script generation, metadata, descriptions) and none of them need to be interactive. The whole job takes 20 minutes anyway because of image generation bottlenecks. Being able to run a 70B model locally for those batch calls instead of paying per-token API costs would be a significant cost reduction, even at this speed.
esquire_900 15 hours ago [-]
Cost wise it does not seem very effective. .5 token / sec (the optimized one) is 3600 tokens an hour, which costs about 200-300 watts for an active 3090+system. Running 3600 tokens on open router @.4$ for llama 3.1 (3.3 costs less), is about $0,00144. That money buys you about 2-3 watts (in the Netherlands).
Great achievement for privacy inference nonetheless.
teo_zero 14 hours ago [-]
I think we use different units. In my system there are 3600 seconds per hour, and watts measure power.
IsTom 11 hours ago [-]
OP probably means watt-hours.
dotancohen 6 hours ago [-]
And 0.5 tokens/s should work out to 1800 tokens at the end of the hour. Not 3600 as stated.
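(Redoing the arithmetic with those corrections, using the parent comment's own figures of 0.5 tok/s, roughly 300 W of system draw, and $0.40 per million tokens; none of these are verified here.)

```python
# Corrected back-of-the-envelope for one hour of generation at 0.5 tok/s.
tok_per_s = 0.5
tokens_per_hour = tok_per_s * 3600        # 1800 tokens, not 3600
energy_kwh = 0.300 * 1.0                  # ~0.3 kWh at ~300 W for one hour
api_cost = tokens_per_hour * 0.40 / 1e6   # at the quoted $0.40 per 1M tokens
print(tokens_per_hour, energy_kwh, f"${api_cost:.5f}")
# 1800.0 0.3 $0.00072
```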
qoez 8 hours ago [-]
OpenRouter is highly subsidized. This might be cheaper in the long run once these companies shift to taking profits.
culopatin 1 hours ago [-]
But why not cross that bridge when we get to it? By that time you might have much more optimized local infrastructure. Although I do see that someone suffering through the local slowness now is what drives the development of these local options.
Aerroon 15 hours ago [-]
Something to consider is that input tokens have a cost too. They are typically processed much faster than output tokens. If you have long conversations then input tokens will end up being a significant part of the cost.
It probably won't matter much here though.
thatwasunusual 10 hours ago [-]
> Cost wise it does not seem very effective.
Why is this so damn important? Isn't it more important to end up with the best result?
I (in Norway) use a homelab with Ollama to generate a report every morning. It's slow, but it runs between 5-6 am, energy prices are at a low, and it doesn't matter if it takes 5 or 50 minutes.
xienze 3 hours ago [-]
> Why is this so damn important? Isn't it more important to end up with the best result?
You’re wondering why someone would prefer to get the same or better result in less time for less money?
eleventyseven 15 hours ago [-]
Are you taking into account energy costs of running a 3090 at 350 watts for a very long time?
teaearlgraycold 10 hours ago [-]
I doubt it’s at full TDP if it’s running at 0.2 tokens per second.
xaskasdf 6 hours ago [-]
Actually I can't go full TDP with a 650W PSU; I've got to upgrade it ASAP.
ekianjo 12 hours ago [-]
You can power-limit an RTX 3090 to 250W with nvidia-smi and still get most of its performance.
jacquesm 19 hours ago [-]
This is an interesting area for experiments. I suspect that in the longer term model optimization (knowing which bits you can leave out without affecting the functioning of the model) will become the dominant area of research just like it did with compression algorithms because effectively a model is a lossy compression scheme.
And that's good because that increases democratization of AI away from the silos that are being created.
serendip-ml 14 hours ago [-]
The compression analogy is interesting. Another way of looking at it could be fine-tuning as "knowing what to leave out" - a 3B model for example tuned for a narrow task doesn't need the capacity that makes 70B good at many things.
rl3 21 hours ago [-]
Nice. I've been looking at doing something similar, more on the order of running a 1T model with less than half the available VRAM.
One workup indicated it was theoretically possible to modify a piece of SGLang's routing layer to support JIT predict-ahead expert swaps from Gen5 NVMe storage straight into GPU memory.
I'm hoping that proves true. The setup relies on NVIDIA Dynamo, so NIXL primitives are available to support that.
Curious if anyone's tried this already.
xaskasdf 19 hours ago [-]
That would be nice to see. Actually I was thinking about getting another 3090 and a mobo upgrade, since I'm bottlenecked by PCIe 3, to try to run GLM 4.7 or 5 at Q4_K_M; it should be possible.
civicsquid 15 hours ago [-]
Really cool. I'm wondering: what background did you need to be able to think of the question that resulted in this project?
I know you said you're involved in some retrogaming and were experimenting, but as someone who works in a world where hardware is pretty heavily abstracted away, even if I got into retrogaming I don't know that I'd consider that there may be a systems improvement lying around. Beyond the creative aspect, it feels like there is some systems and hardware background that helped put the idea together (and I'd be interested to go learn about of that systems/hardware knowledge myself).
The idea was basically to run an LLM on a PS2; then I ran into some problems like the 32MB RAM cap and the 4MB VRAM cap, so I had to figure out a way to stream layers on the forward pass. Given that the PS2 manages to send instructions directly to VRAM with 32-bit addresses, it gave an insane amount of tok/s, and then I wondered if I could do the same on my puter.
rustyhancock 15 hours ago [-]
I wonder too; DMA plays a huge role in most older gaming consoles, where the CPUs were far more sluggish.
Perhaps that's what made them think to try.
Perhaps also the current batch of smart memory cards for the PS2, which I believe have quite complex DMA capabilities for streaming game data from the SD card.
charcircuit 14 hours ago [-]
Why not the PS5? That's when games started streaming assets straight from the NVME SSD to the GPU. In this case the assets are weights.
rustyhancock 4 hours ago [-]
Just because he mentioned retro gaming.
Otherwise DMA is everywhere.
In the PS5's case, since it uses unified memory, it's not quite the same as, say, a GBA streaming from a flash cart to video RAM.
xaskasdf 6 hours ago [-]
Actually I'm thinking about buying an AMD BC-250, which is basically a PS5 in a PCIe card form factor and is Linux-capable by default; maybe next month.
davideom0414 8 hours ago [-]
Really interesting experiment; I should have done this before.
Do you have numbers on effective throughput vs PCIe theoretical bandwidth?
I’m curious whether this is primarily latency-bound or bandwidth-bound in practice
Can someone tell me?
xaskasdf 5 hours ago [-]
Actually it's purely bandwidth-bound. The major bottleneck of the whole process, for me in this case, is the B450 mobo I've got, which only does PCIe 3 and gives the GPU x8 lanes instead of x16; so I'm capped until I get an X570 maybe. I should get around double or triple the token speed with that upgrade alone.
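(For context on how much those lanes and generations are worth, here are the theoretical PCIe payload rates after 128b/130b encoding, ignoring protocol overhead; in practice the NVMe drive's own sequential read speed may become the cap first.)

```python
# Usable PCIe payload bandwidth: lanes * GT/s * (128/130) / 8 bits per byte.
def pcie_gbs(gts_per_lane, lanes):
    return lanes * gts_per_lane * (128 / 130) / 8

print(f"PCIe 3.0 x8 : ~{pcie_gbs(8.0, 8):.1f} GB/s")    # ~7.9, the B450 setup above
print(f"PCIe 3.0 x16: ~{pcie_gbs(8.0, 16):.1f} GB/s")   # ~15.8
print(f"PCIe 4.0 x16: ~{pcie_gbs(16.0, 16):.1f} GB/s")  # ~31.5, an X570-class board
```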
7777777phil 8 hours ago [-]
Cool hack but 0.5 tok/s on 70B when a 7B does 30+ on the same card. NVIDIA's own research says 40-70% of agentic tasks could run on sub-10B models and the quality gap has closed fast.
valianteffort 8 hours ago [-]
[flagged]
tclancy 6 hours ago [-]
Can we not? Make a valiant effort to rephrase.
Wuzado 20 hours ago [-]
I wonder - could this be used for multi-tier MoE? Eg. active + most used in VRAM, often used in RAM and less used in NVMe?
rao-v 20 hours ago [-]
Yeah I’ve often wondered why folks aren’t training two-tier MoEs for VRAM + RAM. We already have designs for shared experts, so it cannot be hard to implement a router that allocates 10x or 100x as often to “core” experts vs the “nice to have” experts. I suppose balancing during training is tricky, but some sort of custom loss on the router layers should work.
I’ve also wondered why the routers aren’t trained to be serially consistent, so you can predict layers to swap into VRAM a few layers ahead to maximize available bandwidth.
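(One hypothetical way to express the core-vs-long-tail idea at inference time is to bias the router logits toward a pinned set of experts so the tail only gets picked when it clearly wins. This is a sketch of the concept, not an existing implementation; the bias value and the core/tail split are made up.)

```python
import numpy as np

def biased_topk_route(logits, core_experts, k=2, core_bias=2.0):
    """Top-k expert selection with a logit bonus for 'core' experts kept in VRAM.

    logits: (num_experts,) router scores for one token.
    core_experts: indices of experts pinned in fast memory (hypothetical split).
    core_bias: how strongly to prefer core experts; larger means rarer tail swaps.
    """
    biased = logits.copy()
    biased[list(core_experts)] += core_bias
    chosen = np.argpartition(-biased, k)[:k]               # indices of the k largest
    weights = np.exp(biased[chosen] - biased[chosen].max())
    return chosen, weights / weights.sum()

# Toy usage: 8 experts, the first two pinned as "core".
experts, weights = biased_topk_route(np.random.randn(8), core_experts=[0, 1])
```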
reitzensteinm 20 hours ago [-]
I think part of the issue is that in production deployments, you're batching high enough that you'll be paging in those long tail experts constantly.
Unless you're handling that in some kind of fancy way, you'll be holding up the batch while waiting for host memory, which will kill your throughput.
It makes much more sense for non batched local inference, especially if you can keep the MoE routing stable like you say, but most folks aren't optimising for that.
zozbot234 20 hours ago [-]
Ideally, you should rearrange batches so that inference steps that rely on the same experts get batched together, then inferences that would "hold up" a batch simply wait for that one "long tail" expert to be loaded, whereupon they can progress. This might require checkpointing partial inference steps more often, but that ought to be doable.
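(Roughly what that regrouping could look like: bucket pending token steps by the expert set their router picked, so a cold expert is paged in once per bucket instead of stalling the whole batch. A hypothetical sketch, not how any current engine schedules this.)

```python
from collections import defaultdict

def group_by_expert_set(pending_steps):
    """pending_steps: iterable of (request_id, frozenset_of_expert_ids) for one layer.

    Returns buckets keyed by expert set; a bucket can run as soon as the (possibly
    cold) experts it needs are resident in GPU memory, while other buckets overlap
    their expert loads with compute."""
    buckets = defaultdict(list)
    for request_id, expert_set in pending_steps:
        buckets[expert_set].append(request_id)
    return buckets
```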
reitzensteinm 20 hours ago [-]
I think this is doable for very long tail experts that get swapped in for specialised topics - say, orbital mechanics.
But for experts that light up at, say, 1% frequency per batch, you're doing an awful lot of transfers from DRAM which you amortize over a single token, instead of reads from HBM which you amortize over 32 tokens.
rao-v 2 hours ago [-]
I think your analysis is right; this would make sense mostly for the 30B-3A style models that are mostly for edge / hobbyist use, where context length is precious so nobody is batching.
Given that experts live per layer, I don't think it makes sense to have orbital-mechanics experts, but … I have wondered about swapping out the bottom 10% of layers per topic, given that that is likely where the highest-order concepts live. I’ve always wondered why people bother with LoRA on all layers, given that the early layers are more likely to be topic-agnostic and focused on more basic pattern assembly (see the recent papers on how LLMs count on a manifold).
svnt 20 hours ago [-]
Maybe I am misunderstanding something but:
1) This is basically the intention of several recent MoE models: keep particular generally useful experts hot in VRAM.
2) Unless you can swap layers in faster than you consume them there is no point to predicting layers (what does this even really mean? did you mean predicting experts?).
It seems at the moment the best you can do is keep experts and layers more likely to be used for a given query in VRAM and offload the rest, but this is work-dependent.
hedgehog 20 hours ago [-]
I don't have links handy but there is active research in this area.
Aurornis 14 hours ago [-]
Cool project. Can you provide more details about your DKMS patching process for consumer GPUs? This would be fun to try out, but I’d need some more details on that patch process first.
xaskasdf 6 hours ago [-]
I updated the documentation to provide more info on the patching process; I added the patches themselves too, and included some notes on the risks of applying them.
Actually this idea was fueled by those, since I went to check whether there was anything close to what I wanted to achieve; pretty useful, though.
jonassm 11 hours ago [-]
nvmlib/ssd-gpu-dma and BaM (based on the same code base) are pretty cool as they allow you to initiate disk reads/writes directly from a CUDA kernel (so not only reading/writing directly to GPU memory but also allowing the GPU to initiate IO on its own). Sometimes called GPU-initiated I/O or accelerator-initiated I/O.
exabrial 20 hours ago [-]
I feel like we need an entirely new type of silicon for LLMs. Something completely focused on bandwidth and storage probably at the sacrifice of raw computation power.
garethsprice 8 hours ago [-]
Something like this? (Llama 3.1-8B etched into custom silicon delivering 16,000 tok/s, doesn't use much PCIe bandwidth):
- https://taalas.com/the-path-to-ubiquitous-ai/
- https://chatjimmy.ai/
Wowsa that’s amazing! Exactly what I was imagining. To do that with 2500 watts is incredible.
stuaxo 9 hours ago [-]
Interesting. Can AMD GPUs do direct io like this?
spwa4 10 hours ago [-]
I've often wondered about doing this with extreme compression. What if you did extreme compression + decompression on the GPU? Because you're leaving a lot of compute unused.
xaskasdf 6 hours ago [-]
I did try it, but with different quantization compressions, and it ran into quality issues; I'll rerun with the same quants to see if that fixes things. But most of the compute that looks unused is actually being used to rotate layers: the CPU swaps them in from RAM to keep them warm and ready while inferencing, and discards the ones already used.
nathan_compton 5 hours ago [-]
I'm not sure, but I suspect that LLM weights don't compress all that well. The intuition here is that training an LLM is compression of the training data into the weights, so they are probably very information dense already. Can't squeeze them down much.
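(This is easy to sanity-check: losslessly compress a chunk of a weights file and look at the ratio. The path below is a placeholder; quantized weight blobs tend to shrink only slightly, which matches the intuition above.)

```python
# Rough compressibility check on the first 64 MiB of a (placeholder) weights file.
import zlib

path = "model-q8_0.gguf"   # hypothetical file name, not from the project
with open(path, "rb") as f:
    chunk = f.read(64 * 1024 * 1024)

ratio = len(zlib.compress(chunk, level=6)) / len(chunk)
print(f"compressed/original ratio: {ratio:.3f}")   # close to 1.0 = near-incompressible
```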
sylware 11 hours ago [-]
Isn't that Linux dma-buf?
timzaman 13 hours ago [-]
Umm, sorry, but the CPU can easily keep up shuttling data to/from your NVMe, especially over ancient Gen3 PCIe. Not sure why you'd do this.
xaskasdf 6 hours ago [-]
Did you even read anything? hahaha
jauntywundrkind 21 hours ago [-]
Could be neat to see what happens giving the 8B like 6GB instead of 10GB. Something in-between, where you still need NVMe, but not like the 3x ratio of the 70B model on 23GB.
Nice work. PCI-P2P (GPU-Direct (tm)) is such great stuff. Cool to see!
johnbarron 22 minutes ago [-]
[dead]
builderhq_io 5 hours ago [-]
[dead]
ai_hack3r 5 hours ago [-]
[dead]
dhjjdjjjd 13 hours ago [-]
[flagged]
turingsroot 15 hours ago [-]
[flagged]
Aurornis 14 hours ago [-]
> No cuBLAS means they wrote their own GEMM kernels, which is a massive undertaking
Not to diminish the impressiveness of this overall project, but it says right up front that these were vibe coded and the Opus 4.6 co-author lines are right in the commit messages. Those pieces were adapted from existing work via LLM, which is exactly the right use in a proof of concept project like this.
snovv_crash 13 hours ago [-]
Please don't use LLMs to post on HN...
flux3125 10 hours ago [-]
or at least don't make it too obvious.
IshKebab 12 hours ago [-]
Yeah I don't even get the motivation for that. Are HN accounts valuable in any way?