- 100MB 'image' (i.e. executable code: the executable itself plus all the OS libraries loaded)
- 40MB heap
- 50MB "mapped file", mostly fonts opened with mmap() or the Windows equivalent
- 45MB stack (each thread gets 2MB)
- 40MB "shareable" (no idea)
- 5MB "unusable" (appears to be address space that's not usable because of fragmentation, not actual RAM)
Generally if something's using a lot of RAM, the answer will be bitmaps of various sorts: draw buffers, decompressed textures, fonts, other graphical assets, and so on. In this case it's just allocated but not yet used heap+stacks, plus 100MB for the code.
Edit: I may be underestimating the role of binary code size. Visual Studio "devenv.exe" is sitting at 2GB of 'image'. Zoom is 500MB. VSCode is 300MB. Much of which are app-specific, not just Windows DLLs.
muskstinks 5 minutes ago [-]
Tx for the breakdown. I will play around with it later on my Windows machine.
But isn't it crazy how we throw out so much memory just because of random buffers? It feels wrong to me.
Capricorn2481 16 minutes ago [-]
But I have sublime text open with a hundred files and it's using 12mb.
gwbas1c 50 minutes ago [-]
Basically, the short answer is that most memory managers allocate more memory than a process needs, and then reuse it.
I.e., in a JVM (Java) or .NET (C#) process, the garbage collector allocates some memory from the operating system and keeps reusing it as it finds free memory and the program needs it.
These systems are built with the assumption that RAM is cheap and CPU cycles aren't, so they are highly optimized CPU-wise, but otherwise are RAM inefficient.
senfiaj 1 hour ago [-]
It's partly because there are layers of abstractions (frameworks, libraries / runtimes / VM, etc). Also, today's software often has other pressures, like development time, maintainability, security, robustness, accessibility, portability (OS / CPU architecture), etc. It's partly because the complexity / demand has increased.
Part of the problem is that modern apps aren't really "one thing" anymore
Orygin 1 hour ago [-]
200 MB for Sublime does not seem so bad when compared to Postman using 4 GB on my machine...
Capricorn2481 18 minutes ago [-]
> sublime consumes 200mb. I have 4 text files open. What is it doing?
Huh? Sublime Text? I have like 100 files open and it uses 12mb. Sublime is extremely lean.
Do you have plugins installed?
muskstinks 7 minutes ago [-]
I do not have plugins installed, and I have only a handful of files open on macOS.
Memory statistics say 200 MB, with a peak of 750 MB in the past (for whatever reason).
tombert 7 minutes ago [-]
I've been rewriting a lot of my stuff in Rust to save memory.
Rust is high-level enough to still be fun for me (tokio gives me most of the concurrency goodies I like), but the memory usage is often like 1/10th or less compared to what I would write in Clojure.
Even though I love me some lisp, pretty much all my Clojure utilities are in Rust land now.
The issue with retrofitting things onto an existing, well-established language is that those new features will likely be underutilized, especially in existing parts of the standard library, since changing those would break backwards compatibility. std::optional is another example of this: it is not used much in the C++ standard library, but would be much more useful if used across the board.
Contrast this with Rust, which had the benefit of being developed several decades later. Here Option and str (string views) were in the standard library from the beginning, and every library and application uses them as fundamental vocabulary types. Combined with good support for chaining and working with these types (e.g. Option has map() to replace the content if it exists and just pass it along if None).
Retrofitting is hard, and I have no doubt there will be new ideas that can't really be retrofitted well into Rust in another decade or two as well. Hopefully at that point something new will come along that learned from the mistakes of the past.
menaerus 39 minutes ago [-]
Retrofitting new patterns or ideas is underutilized only when it is not worth the change. The string_view example is trivial: anyone who cared enough about the extra allocations (where no copy elision takes place) already rolled their own version of string_view or simply used the char+len pattern. Those folks do not wait for a new standard to come along when they can already have the solution now.
The std::optional example, OTOH, is also a bad one because it is heavily opinionated, and baking it into the API across the standard library would be a really wrong choice.
pjc50 3 hours ago [-]
C# gained similar benefits with Span<>/ReadOnlySpan<>. Essential for any kind of fast parser.
groundzeros2015 1 hour ago [-]
In C you have char*
rcxdude 38 minutes ago [-]
Which isn't very good for substrings due to the null-termination requirement.
kccqzy 1 hour ago [-]
And the type system does not tell you if you need to call free on this char* when you’re done with it.
pjc50 55 minutes ago [-]
In C you only have char*.
fix4fun 3 hours ago [-]
Digression: nowadays, when RAM is expensive, good old zram is gaining popularity ;) Check on trends.google.com: since 2025-09, searches for it have doubled ;)
gwbas1c 48 minutes ago [-]
A lot of frameworks that use variants of "mark and sweep" garbage collection instead of automatic reference counting are built with the assumption that RAM is cheap and CPU cycles aren't, so they are highly optimized CPU-wise, but otherwise are RAM inefficient.
I wonder if frameworks like dotnet or JVM will introduce reference counting as a way to lower the RAM footprint?
pjc50 35 minutes ago [-]
Reference counting in multithreaded systems is much more expensive than it sounds because of the synchronization overhead. I don't see it coming back. I don't think it saves massive amounts of memory, either, especially given my observation with vmmap upthread that in many cases the code itself is a dominant part of the (virtual) memory usage.
zozbot234 20 minutes ago [-]
If you use an ownership/lifetime system under the hood you only pay that synchronization overhead when ownership truly changes, i.e. when a reference is added or removed that might actually impact the object's lifecycle. That's a rare case with most uses of reference counting; most of the time you're creating a "sub"-reference and its lifetime is strictly bounded by some existing owning reference.
vaylian 41 minutes ago [-]
Unlikely. Maybe I'm overly optimistic, but I think it's fairly likely that the RAM situation will have sorted itself out in a few years. Adding reference counting to the JVM and .NET would also take considerable time.
It makes more sense for application developers to think about the unnecessary complexity that they add to software.
xyzzy_plugh 40 minutes ago [-]
That's not strictly true. Mark and sweep is tunable in ways ARC is not. You can increase frequency, reducing memory at the cost of increased compute, for example.
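As a concrete illustration of that space/time knob, CPython's cyclic collector (which sits on top of reference counting) exposes tunable thresholds. This is only an analogy sketch, not the JVM/.NET mechanism the comment is about:

```python
import gc

# Lower thresholds mean more frequent collections of garbage cycles,
# reclaiming memory sooner at the cost of extra CPU; higher thresholds
# trade memory for speed.
gc.set_threshold(100, 5, 5)    # collect eagerly
assert gc.get_threshold() == (100, 5, 5)
gc.set_threshold(700, 10, 10)  # back toward a historically common default
```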
tzot 4 hours ago [-]
Well, we can use memoryview for the dict generation, avoiding the creation of string objects until it's time to produce the output:
    import re, operator

    def count_words(filename):
        with open(filename, 'rb') as fp:
            data = memoryview(fp.read())
        word_counts = {}
        for match in re.finditer(br'\S+', data):
            word = data[match.start():match.end()]
            try:
                word_counts[word] += 1
            except KeyError:
                word_counts[word] = 1
        word_counts = sorted(word_counts.items(), key=operator.itemgetter(1), reverse=True)
        for word, count in word_counts:
            print(word.tobytes().decode(), count)
We could also use `mmap.mmap`.
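A sketch of that mmap variant might look like this (the function name is mine, and note `mmap.mmap` refuses empty files): the file's pages stay in the OS page cache rather than being copied into the Python heap by `fp.read()`.

```python
import mmap
import re

def count_words_mmap(filename):
    word_counts = {}
    with open(filename, 'rb') as fp:
        # Length 0 maps the whole file; read-only access lets the OS
        # share the pages with the page cache and evict them freely.
        with mmap.mmap(fp.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            for match in re.finditer(br'\S+', mm):
                # Dict keys must be hashable, so each matched word becomes
                # a small bytes copy; the bulk of the file is never copied.
                word = match.group()
                word_counts[word] = word_counts.get(word, 0) + 1
    return word_counts
```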
akx 2 hours ago [-]
This doesn't do the same thing though, since it's not Unicode aware.
There's bound to be a way to turn a stream of bytes into a stream of Unicode code points (at least I think that's what Python is doing for strings). Though I'm explicitly not volunteering to write the code for it.
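For the record, the stdlib's incremental decoders do exactly this, including multi-byte characters split across chunk boundaries. A sketch (the helper name is mine):

```python
import codecs

def iter_decoded(chunks, encoding='utf-8'):
    """Turn an iterable of byte chunks into decoded text pieces,
    buffering partial multi-byte sequences between chunks."""
    decoder = codecs.getincrementaldecoder(encoding)()
    for chunk in chunks:
        text = decoder.decode(chunk)
        if text:
            yield text
    # Flush any remainder; raises if the stream ends mid-character.
    tail = decoder.decode(b'', final=True)
    if tail:
        yield tail
```

For example, the two bytes of U+00E9 split across chunks still decode correctly: `''.join(iter_decoded([b'\xc3', b'\xa9']))` gives 'é'.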
contravariant 59 minutes ago [-]
For reasons I never quite understood, Python has collections.Counter for the purpose of counting things. It's a bit cleaner.
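For reference, Counter makes the manual try/except counting pattern unnecessary:

```python
from collections import Counter

# Counter is a dict subclass whose missing keys default to 0,
# so the try/except-KeyError counting idiom collapses to +=.
counts = Counter()
for word in (b'foo', b'bar', b'foo'):
    counts[word] += 1

assert counts[b'foo'] == 2
assert counts.most_common(1) == [(b'foo', 2)]
```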
griffindor 4 hours ago [-]
Nice!
> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.
I wish I knew the input size when attempting to estimate, but I suppose part of the challenge is also estimating the runtime's startup memory usage too.
> Compute the result into a hash table whose keys are string views, not strings
If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM. Is that included in the memory usage figures?
Nonetheless, it's a nice optimization that the kernel chooses which hash table keys to keep hot.
The other perspective on this is that we sought out languages like Python/Ruby because the development cost was high, relative to the hardware. Hardware is now more expensive, but development costs are cheaper too.
The takeaway: expect more push towards efficiency!
zozbot234 13 minutes ago [-]
> If the file is mmap'd, and the string view points into that, presumably decent performance depends on the page cache having those strings in RAM.
Not so much, because you only need some fraction of that memory when the program is actually running; the OS is free to evict it as soon as it needs the RAM for something else. Non-file-backed memory can only be evicted by swapping it out, and that's way more expensive.
pjc50 4 hours ago [-]
>> Peak memory consumption is 1.3 MB. At this point you might want to stop reading and make a guess on how much memory a native code version of the same functionality would use.
At this point I'd make two observations:
- how big is the text file? I bet it's a megabyte, isn't it? Because the "naive" way to do it is to read the whole thing into memory.
- all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte. It gets more interesting when the file doesn't fit into RAM at all.
> all these numbers are way too small to make meaningful distinctions. Come back when you have a gigabyte.
I have to disagree. Bad performance is often a death by a thousand cuts. This function might be one among countless similarly inefficient library calls, programs, and so on.
rcxdude 36 minutes ago [-]
If you're not putting a representative amount of data through the test, you have no idea whether the resource usage you're seeing scales with the amount of data or is just a fixed overhead of the runtime.
kloop 1 hour ago [-]
> how big is the text file? I bet it's a megabyte, isn't it?
The edit in the article says ~1.5kb
pjc50 55 minutes ago [-]
Single page on many systems, which makes using mmap() for it even funnier.
Filligree 8 minutes ago [-]
Not to mention inefficient in memory use. I would have expected a mention of interning; using string views is fine, but making each one a view into pinned 4 kB page-cache pages is not, really.
veunes 1 hour ago [-]
I suspect it'll be selective
veunes 1 hour ago [-]
Not "C++ everywhere again" but maybe "understanding memory again"
dgb23 3 hours ago [-]
Not a C++ programmer and I think the solution is neat.
But it's not necessarily an apples to apples comparison. It's not unfair to python because of the runtime overhead. It's unfair because it's a different algorithm with fundamentally different memory characteristics.
A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count. For most people that would be the first/naive approach as well when they programmed something like this I think. And it would showcase what the actual overhead of the python version is.
VorpalWay 3 hours ago [-]
> A fairer comparison would be to stream the file in C++ as well and maintain internal state for the count.
Wouldn't memory mapping the data in Python be the more fair comparison? If the language doesn't support that, then this seems to absolutely be a fair comparison.
> For most people that would be the first/naive approach as well when they programmed something like this I think.
I disagree; my mind immediately goes to mmap when I have to read a single file in its entirety. I think the non-obvious solution here is rather io_uring (which I would expect to be faster when dealing with lots of small files, since you can load them async concurrently from the file system).
dgb23 29 minutes ago [-]
I'd make the bet that "most people" (who can program) would not think of mmap, but either about streaming or would even just load the whole thing into memory.
Ask a bunch of coding agents and they will give you these two versions, which means it's likely that the LLMs have seen these way more often than the mmap version. Both Opus and GPT even pushed back when I asked for mmap, both said it would "add complexity".
callamdelaney 2 hours ago [-]
I shove everything in memory, it's a design decision. Memory is still cheap, relatively.
90d 2 hours ago [-]
Speaking about optimization, is Windows just too far gone at this point? It is comical the amount of resources it uses at "idle".
est 4 hours ago [-]
I think the py version can be shortened to:

    import sys
    from collections import Counter

    stats = Counter(word for line in open(sys.argv[1]) for word in line.split())
voidUpdate 4 hours ago [-]
Would that decrease memory usage though?
amelius 2 hours ago [-]
> AI sociopaths have purchased all the world's RAM in order to run their copyright infringement factories at full blast
The ultimate bittersweet revenge would be to run our algorithms inside the RAM owned by these cloud companies. Should be possible using free accounts.
gostsamo 3 hours ago [-]
> how much memory a native code version of the same functionality would use.
Native to what? How is C++ more native than Python?
VorpalWay 2 hours ago [-]
Native code usually refers to code which is compiled to machine code (for the CPU it will run on) ahead of time, as opposed to code running in a byte code VM (possibly with JIT).
I would consider all of C, C++, Zig, Rust, Fortran etc to produce native binaries. While things like Cython exist, that wasn't what was used here (and for various reasons would likely still have more overhead than those I mentioned).
fluoridation 39 minutes ago [-]
Native to the hardware platform.
biorach 4 hours ago [-]
"copyright infringement factories"
maipen 3 hours ago [-]
Tells you right away where this is coming from.
Dylan16807 42 minutes ago [-]
Do you mean something specific, because that sounds like a criticism but with some blanks that need to be filled in.
If you just mean they come across as annoyed by AI, that's true, but that's also way too wide of a category to infer basically anything else about them.
muskstinks 2 hours ago [-]
The criticism is valid. The problem is how you weigh this criticism.
I agree they are stealing it but I also see the benefit of it for society and for myself.
Suckerberg downloaded terabytes of books for training, while people around me got sued to hell 20 years ago for downloading one mp3 file.
yieldcrv 2 hours ago [-]
they got sued for uploading actually
and Zuck isn’t sued for downloading either, he is sued for reproduction by the AI not being derivative enough, but so far all branches of government support that
anthk 2 hours ago [-]
Anna's Archive. Aaron Swartz.
FB and so are CIA fronts and they can do anything they please. Until they hit against Disney and lobbying giants and if a CIA idiot tries to sue/bribe/blackmail them they can order Hollywood to rot their images into pieces with all the wars they promoted in Middle East and Latin America just to fill the wallets of CEO's. That among some social critique movie on FB about getting illegal user data all over the world to deny insurances and whatnot. And OFC with a clear mention of the Epstein case with related people, just in case the Americans forgot about it.
Then the US industry and military complex would collapse in months with brainwashed kids running away from the army. Not to mention to the Call of Duty franchise and the like. It would be the end of Boeing and several more, of course. To hell to profit driven wars for nothing.
Ah, yes, AIPAC lobbies and the like. Good luck taming right wing wackos hating the MAGA cult more than the 'woke' people themselves. These will be the first ones against you after sinking the US image for decades, even more than the illegal Iraq war with no WMD's and the Bush/Cheney mafia.
The outcome of this? proper and serious engineering a la Airbus. Instant profit-driven MBA and war sickos being kicked out from the spot. OFC the AI snakeoil sellers except for the classical AI/NN against concrete cases (image detection and the like), these will survive fine, even better because these kind of jobs are highly specific and they are not statistical text parrots. They can provide granted results unlike LLM's prone to degrade because the human based content feeding needs to be continuous, while for tumour detection a big enough sample can cover a 99% of the cases.
R&D on electric vehicles/energy and nuclear power like nowhere else. And, for sure, the EV equivalent of a Ford T for Americans. A cheap and reliable one, good enough for the common Joe/Mary without being a luxury item. A new Golden Age would rise, for sure.
But the oil mafia will try to fight them like crazy.
MrBuddyCasino 2 hours ago [-]
I don't know how anyone can call the most amazing invention in computer science of the last 20 years a "copyright infringement factory". We went from the ST:TNG ship computer being futuristic tech to "we kinda have this now". It's like calling cars "air pollution factories", as if that were their only purpose and use.
A fundamentally anti-civilisational mindset.
muskstinks 11 minutes ago [-]
You can see both sides, criticize how it's done, and still want the result of it.
It's a little hypocritical, which often enough ends in realism, aka "okay, we clearly can't fight their copyright infringements because they are too powerful and too rich, but at least we can use the good side of it".
Nothing, by the way, forces all of this to happen THAT fast besides capitalism. We could slow down; we could do it better or more right.
saintfire 15 minutes ago [-]
The people pushing this technology, which accelerates climate change, have lobbied the government to circumvent the typical roadblocks created by society to limit sensationalist development. Incidentally, they're the same people who talk about how dangerous AI will be for society; but don't worry, they're going to be the ones to deliver it safely.
Now, I don't believe AI will ever amount to enough to be a critical threat to human life, you know, beyond the immense amounts of wasted energy they propose to convert into something more useful, like a market crash or heat and noise, or both.
Not sure how you can call someone opposed to any of that "anti-civilisational" matter-of-factly.
vor_ 2 hours ago [-]
I'm sorry, but you're acting obtuse if you pretend you don't know why they're being called that.
yieldcrv 2 hours ago [-]
as long as you know what architecture questions to ask, agentic coding can help with this next phase of optimization really quickly
delaying comp sci differentiation for a few months
I wonder if assembly based solutions will become in vogue
I look at memory profiles of normal apps and often think "what is burning that memory?"
Modern compression works so well; what's happening? Open your task manager and look through your apps and you might ask yourself this.
For example (let's ignore Chrome, MS Teams and all the other bloat): Sublime consumes 200 MB. I have 4 text files open. What is it doing?
Chrome alone took YEARS to implement tab suspend, despite everyone being aware of the issue, and despite add-ons existing that could already do this.
I bought more RAM just for Chrome...
The state of the art here is: https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times... , wherein our hero finds the terrible combination of putting the whole file in a single string and then running strlen() on it for every character.