Allocating on the Stack (go.dev)
nasretdinov 1 hour ago [-]
Nice to see common and natural patterns having their performance improved. In theory, appending to a slice could be handled with stack growth alone, but that would require leaving large gaps between goroutine stacks and mapping them lazily on access, instead of moving goroutines to new contiguous blocks as the runtime does today. Given how many questionable changes that would require from the runtime, it's certainly not going to happen :)
ivanjermakov 1 hour ago [-]
Having big stack frames is bad for cache locality. The stack is not magical: it's backed by the same physical memory as the heap and has to be loaded into cache just the same. I'm pretty sure such an optimization would reduce performance in most cases.
HarHarVeryFunny 2 hours ago [-]
This article is about Go, but I wonder how many C/C++ developers realize they've always had the ability to allocate on the stack using alloca() rather than malloc().

Of course, the use cases are limited (variable-length buffers/strings, etc.), since the lifetime of anything on the stack has to match the lifetime of the stack frame (i.e. the function that called alloca()), but it's super fast since it's just bumping the stack pointer.

spacechild1 1 hour ago [-]
alloca() is super useful, but it's also quite dangerous because you can easily overflow the stack.

The obvious issue is that you can't know how much space is left on the stack, so you basically have to guess and pick an arbitrary "safe" size limit. This gets even more tricky when functions may be called recursively.

The more subtle issue is that the stack memory returned by alloca() has function scope, so you must never call alloca() directly inside a loop: nothing is released until the function returns, so each iteration grows the stack.

I use alloca() on a regular basis, but I have to say there are safer and better alternatives, depending on the particular use case: arena/frame allocators, threadlocal pseudo-stacks, static vectors, small vector optimizations, etc.

12_throw_away 55 minutes ago [-]
> The obvious issue is that you can't know how much space is left on the stack [...]

Oh, huh. I've never actually tried it, but I always assumed it would be possible to calculate this, at least for a given OS/arch. You just need 3 quantities, right? Since the stack grows down from its base: `remaining_stack_space = $system_stack_size - ($stack_base_address - $rsp)`.

But I guess there's no API for a program to get its own stack address unless it has access to `/proc/$pid/maps` or similar?

Joker_vD 27 minutes ago [-]
> $system_stack_size

Does such a thing even exist? And on non-64-bit platforms the address space is small enough that, with several threads of execution, you may simply be unable to grow your stack even up to $system_stack_size because it would bump into something else.

masklinn 7 minutes ago [-]
> Does such thing even exist?

AFAIK no. There are default stack sizes, but they're just that, defaults, and they can vary on the same system: main-thread stacks are generally 8MiB (except on Windows, where it's just 1MiB), but the stacks of secondary threads are much smaller everywhere except on Linux with glibc.

It should be possible to get the stack base and size using `pthread_getattr_np`, but I don't know if anyone bothers with that, and it's a glibc extension.

chuckadams 32 minutes ago [-]
If your API includes inline assembly, then it's trivial. Go's internals would need it anyway to swap stacks the way it does. But I doubt any of that is exposed at the language level.
norir 47 minutes ago [-]
If you have well defined boundaries, you can move the stack to an arbitrarily large chunk of memory before the recursive call and restore it to the system stack upon completion.
chuckadams 31 minutes ago [-]
And if you never do reach completion, you can just garbage collect that chunk. AKA "Cheney on the MTA": https://dl.acm.org/doi/10.1145/214448.214454
anematode 1 hour ago [-]
If you're not doing recursion, I prefer using an appropriately sized thread_local buffer in this scenario. It saves you the allocation and does the bookkeeping of keeping one per thread.
rwmj 2 hours ago [-]
Most C compilers let you use variable-length arrays on the stack. However, they're problematic, and mature code bases usually disable them (-Werror -Wvla), because if the size is derived from user input then it's exploitable.
ozgrakkurt 2 hours ago [-]
This is more of a patch/hack solution as far as I can understand.

You can just as well pass a heap allocated buffer + size around and allocate by incrementing/decrementing size.

Or even better, use something like Zig's FixedBufferAllocator.

Correct me if I am wrong please

HarHarVeryFunny 1 hour ago [-]
I wouldn't call it a hack, but it's not a general alternative to heap allocation, since the lifetime is tied to that of the allocating function.

I think what you're referring to is an arena allocator, where you allocate a big chunk of memory from the heap, sequentially sub-allocate from it, and eventually free the entire chunk (arena) in one go. Arena allocators are therefore also a special-case tool: they're for when all the sub-allocations have the same (but arbitrary) lifetime, or at least when you're willing to defer deallocation of everything to the same time.

So, heap, arena and stack allocation all serve different purposes, although you can just use heap for everything if memory allocation isn't a performance issue for your program, which nowadays is typically the case.

Back in the day, when memory was scarce and computers were much slower, another common technique was to keep a "free list" of previously allocated items of a given type/size for reuse, which was faster than a heap allocate and free/coalesce cycle and avoided the heap fragmentation of random mallocs/frees.

lstodd 21 minutes ago [-]
It becomes super slow when you bump that pointer into a page that's missing from the TLB.
HarHarVeryFunny 1 minute ago [-]
A TLB miss could happen while executing the next statement of your program. It's not something you have much control over, and it doesn't change the fact that allocating from the stack (when that's an option) is going to be faster than allocating from the heap.
stackghost 1 hours ago [-]
alloca()'s availability and correctness/bugginess are platform-dependent, so it probably sees only niche usage since it's not portable. Furthermore, even its man page discourages its use in the general case:

>The alloca() function is machine- and compiler-dependent. Because it allocates from the stack, it's faster than malloc(3) and free(3). In certain cases, it can also simplify memory deallocation in applications that use longjmp(3) or siglongjmp(3). Otherwise, its use is discouraged.

Furthermore:

>The alloca() function returns a pointer to the beginning of the allocated space. If the allocation causes stack overflow, program behavior is undefined.

https://man7.org/linux/man-pages/man3/alloca.3.html

anematode 1 hour ago [-]
Awesome stuff! Does Go have profile-guided optimization? I'm wondering whether a profile could hint to the compiler how large to make the pre-reserved stack space.
tptacek 1 hour ago [-]
Yep. `go build -pgo=foo.pprof`

https://go.dev/doc/pgo

bertylicious 1 hour ago [-]
Nice! That seems so simple yet so very effective. Shouldn't other memory-managed languages be able to profit from this as well?
lionkor 36 minutes ago [-]
C# has `stackalloc`
lstodd 23 minutes ago [-]
I read that as "Allocating on the Slack" and immediately came up with three ways how to do that.
zabzonk 1 hour ago [-]
alloca() is not part of the C++ standard, and I can't imagine how it could be used safely in a C++ environment.
mwkaufma 54 minutes ago [-]
If I had a nickel for every article about avoiding implicit boxing in gc-heap languages...