Unlocking Python's Cores: Energy Implications of Removing the GIL (arxiv.org)
devrimozcay 2 hours ago [-]
One thing I'm curious about here is the operational impact.

In production systems we often see Python services scaling horizontally because of the GIL limitations. If true parallelism becomes common, it might actually reduce the number of containers/services needed for some workloads.

But that also changes failure patterns — concurrency bugs, race conditions, and deadlocks might become more common in systems that were previously "protected" by the GIL.
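To illustrate that point (a hypothetical sketch, not from the paper): a plain read-modify-write on shared state is not atomic in CPython even with the GIL, but true parallelism makes the race window much easier to hit, so explicit locking matters more.

```python
import threading

counter = 0
counter_lock = threading.Lock()

def increment(n):
    # `counter += 1` compiles to a read-modify-write sequence;
    # without the lock, concurrent threads can lose updates.
    global counter
    for _ in range(n):
        with counter_lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 with the lock; often less without it
```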

It will be interesting to see whether observability and incident tooling evolves alongside this shift.

matsemann 18 minutes ago [-]
For big things the current way works fine. Having a separate container/deployment for Celery, the web server, etc. is nice so you can deploy and scale them separately. Mostly it works fine, but there are of course some drawbacks. For example, Prometheus scraping is clunky to work around when a worker process can't also run a metrics web server in parallel.

And for smaller projects it's such an annoyance. Having a simple project running, and having to muck around to get cron jobs, background/async tasks etc. to work in a nice way is one of the reasons I never reach for python in these instances. I hope removing the GIL makes it better, but also afraid it will expose a whole can of worms where lots of apps, tools and frameworks aren't written with this possibility in mind.
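For the small-project case, a minimal in-process scheduler is one common workaround today (a sketch with a made-up callback; free threading would make this genuinely parallel rather than merely concurrent):

```python
import threading
import time

def run_periodically(fn, interval, stop_event):
    # Poor man's cron: call fn every `interval` seconds until stopped.
    # Event.wait returns True once the event is set, ending the loop.
    while not stop_event.wait(interval):
        fn()

calls = []
stop = threading.Event()
worker = threading.Thread(
    target=run_periodically,
    args=(lambda: calls.append(1), 0.01, stop),
    daemon=True,  # don't block interpreter shutdown
)
worker.start()

time.sleep(0.1)
stop.set()
worker.join()
print(len(calls))  # several invocations, exact count is timing-dependent
```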

philipallstar 2 hours ago [-]
Might be worth noting that this seems to be just running some tests using the current implementation, and these are not necessarily general implications of removing the GIL.
samus 2 hours ago [-]
There might also be many optimization opportunities that still have to be seized.
chillitom 1 hour ago [-]
Our experience on memory usage, in comparison, has been generally positive.

Previously we had to use ProcessPoolExecutor, which meant maintaining multiple copies of the runtime and shared data in memory and paying high IPC costs. Being able to switch to ThreadPoolExecutor was hugely beneficial in terms of both speed and memory.
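A minimal sketch of the difference (an assumed workload, not the commenter's code): with a thread pool, one large read-only structure is shared in-process, whereas a process pool would pickle it to each worker (or fork copies that diverge under refcount-triggered copy-on-write).

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100_000))  # one shared copy in memory

def chunk_sum_squares(start):
    # Threads read `data` directly: no pickling, no IPC round-trip.
    return sum(x * x for x in data[start::4])

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(chunk_sum_squares, range(4)))

print(total == sum(x * x for x in data))  # True
```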

It almost feels like programming in a modern (circa 1996) environment like Java.

hrmtst93837 38 minutes ago [-]
Swapping ProcessPoolExecutor for ThreadPoolExecutor gives real memory and IPC wins, but it trades process isolation for new failure modes because many C extensions and native libraries still assume the GIL and are not thread safe.

Measure aggressively and test under real concurrency: use tracemalloc to find memory hotspots, py-spy or perf to profile contention, and fuzz C extension paths with stress tests so bugs surface in the lab not in production. Watch per thread stack overhead and GC behavior, design shared state as immutable or sharded, keep critical sections tiny, and if process level isolation is still required stick with ProcessPoolExecutor or expose large datasets via read only mmap.
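The "sharded shared state" advice can be sketched like this (a hypothetical helper, not from the comment): striping a counter across several locks keeps each critical section tiny and spreads contention across threads.

```python
import threading

class ShardedCounter:
    """Counter striped across N shards to reduce lock contention."""

    def __init__(self, nshards=8):
        self._shards = [0] * nshards
        self._locks = [threading.Lock() for _ in range(nshards)]

    def add(self, n=1):
        # Hash the calling thread onto a shard so threads rarely collide.
        i = threading.get_ident() % len(self._shards)
        with self._locks[i]:
            self._shards[i] += n

    def value(self):
        # Take every lock for a consistent total.
        for lock in self._locks:
            lock.acquire()
        try:
            return sum(self._shards)
        finally:
            for lock in self._locks:
                lock.release()

c = ShardedCounter()
threads = [threading.Thread(target=lambda: [c.add() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(c.value())  # 8000
```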

flowerthoughts 2 hours ago [-]
Sections 5.4 and 5.5 are the interesting ones.

5.4: Energy consumption going down because of parallelism over multiple cores seems odd. What were those cores doing before? Better utilization causing some spinlocks to be used less or something?

5.5: Fine-grained lock contention significantly hurts energy consumption.

alright2565 2 hours ago [-]
I'm not sure of the exact relationship, but power consumption increases faster than linearly with clock speed. If you have 4 cores running at the same time, there's more likely to be thermal throttling → lower clock speeds → lower energy consumption.

Greater power draw though; remember that energy is the integral of power over time.

spockz 2 hours ago [-]
By running more tasks in parallel across different cores they can each run at lower clock speed and potentially still finish before a single core at higher clock speeds can execute them sequentially.
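A toy model of that trade-off (pure illustration; the cubic power law is an assumption, from voltage roughly tracking frequency and dynamic power scaling with C·V²·f):

```python
def energy_joules(freq, cores, work):
    # Toy model (assumption): power per core ~ freq**3,
    # time to finish `work` cycles = work / (cores * freq).
    time = work / (cores * freq)
    power = cores * freq**3
    return power * time  # energy = power * time

serial = energy_joules(freq=4.0, cores=1, work=100.0)    # one fast core
parallel = energy_joules(freq=1.0, cores=4, work=100.0)  # four slow cores
print(serial, parallel)  # 1600.0 100.0 — same wall-clock time, far less energy
```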
runningmike 3 days ago [-]
Title shortened - Original title:

Unlocking Python’s Cores: Hardware Usage and Energy Implications of Removing the GIL

I am curious about the choice of a NumPy workload, since it has a more limited impact on CPython performance.

pothamk 2 hours ago [-]
[flagged]
OskarS 2 hours ago [-]
Thanks ChatGPT, good of you to let us know.
stingraycharles 1 hour ago [-]
There are so many ChatGPT responses in this thread, it’s giving me a headache.
OskarS 1 hour ago [-]
Yep. Real "dead internet theory" vibes, really sad to see.
stingraycharles 1 hour ago [-]
It’s been very noticeable for about a year now, but the last few months is absolutely terrible. I wonder if clawdbot has anything to do with it.
exe34 1 hour ago [-]
My hypothesis is that ChatGPT was trained on the internet, and the useful technical answers on the internet were posted by autistic people. Who else would spend their time learning and then rushing to answer such things the moment they get their chance to shine? So ChatGPT is basically pure distilled autism, which is why it sounds so familiar.
Incipient 1 hour ago [-]
I'm curious what makes that obviously llm? As far as I can tell it was a short and fairly benign statement with little scope to give away llm-ness?
mrkeen 22 minutes ago [-]
Just as bad if it's human. No information has been shared. The writer has turned idle wondering into prose:

> Once threads actually run concurrently, libraries (which?) that never needed locking (contradiction?) could (will they or won't they?) start hitting race conditions in surprising (go on, surprise me) places.

RobotToaster 2 hours ago [-]
The obvious solution is to require libraries that are no-GIL-safe to declare that, and to implicitly wrap all other libraries with GIL-style locks.
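That is roughly what CPython 3.13's free-threaded build does at the extension level: modules declare free-threading support via the `Py_mod_gil` slot, and importing one that doesn't re-enables the GIL. At the pure-Python level, the idea could be sketched as a coarse per-library lock (a hypothetical decorator, not a real API):

```python
import threading
from functools import wraps

_legacy_lock = threading.Lock()  # one coarse lock standing in for the old GIL

def gil_guard(fn):
    """Serialize all calls into code not declared free-threading safe."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        with _legacy_lock:
            return fn(*args, **kwargs)
    return wrapper

state = []

@gil_guard
def legacy_append(x):
    state.append(x)  # pretend this is a non-thread-safe library call

threads = [threading.Thread(target=lambda: [legacy_append(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(state))  # 4000
```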