PC processors entered the Gigahertz era today in the year 2000 with AMD's Athlon (tomshardware.com)
ehnto 4 minutes ago [-]
I bought a whole bunch of parts with my first Athlon. I think I bought a Sound Blaster and a Radeon GFX card, if I am remembering the timeline right. The Sound Blaster came with a demo of a Lara Croft game that used the then-incredible spatial audio processing to great effect. The industry promptly forgot about that technology, and to this day game audio rarely matches the potential of real-time spatial dynamics that we once reached 20 years ago.
xnx 3 hours ago [-]
The Megahertz Wars were an exciting time. Going from 75 MHz to 200 MHz meant that everything (CPU limited) ran 2x as fast (or better with architectural improvements).

Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.

dlcarrier 1 hours ago [-]
In my experience, SSDs had a bigger impact. Thanks to Wirth's Law (https://en.wikipedia.org/wiki/Wirth%27s_law), the steady across-the-board increase in processing power didn't equate to programs running much faster; e.g. Discord running on a modern computer isn't any more responsive, if not less responsive, than an ICQ client was running on a computer 25 years ago.

SSDs provided a huge bump in performance to each individual computer, but trickled their way into market saturation over a generation or two of computers, so you'd be effectively running the same software but in a much more responsive environment.

majormajor 40 minutes ago [-]
Anytime you upgraded from a 4-year-old computer to a new one back then - from 16MHz to 90MHz, or 75MHz to 333MHz, or 333MHz to 1GHz, or whatever - it was immediate, it was visceral.

SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.

The software, in those days, was similarly making much bigger leaps every few years. 256 colors to millions, resolution, capabilities (real time spellcheck! a miracle at the time.) A chat app isn't a great comparison. Games are the most extreme example - Sim City to Sim City 2000; Doom to Quake; Unreal Tournament to Battlefield 1942 - but consider also a 1995 web browser vs a 1999 one.

nucleardog 13 minutes ago [-]
> SSDs booted faster and launched programs faster and were a very nice change, but they weren't that same sort of night-and-day 80s/90s era change.

For me they were.

I still remember the first PC I put together for someone with a SSD.

I had a quite beefy machine at the time and it would take 30 seconds or more to boot Windows, and around 45s to fully load Photoshop.

Built this machine for someone with entirely low-end (think “i3”, not “Celeron”) components, but it was more than enough for what they wanted it for. It would hit the desktop in around 10 seconds, and Photoshop was ready to go in about 2 seconds.

(Or thereabouts--I did time it, but I'm remembering numbers from like a decade and a half ago.)

For a _lot_ of operations, the SSD made an order of magnitude difference. Blew my mind at the time.

beastman82 3 minutes ago [-]
Agree 100%. The compute was always bottlenecked by insanely high I/O latency. SSDs opened up fast computers like no processor ever did.
gavinsyancey 50 minutes ago [-]
> Discord running on a modern computer isn't any more responsive, if not less responsive, than an ICQ client was running on a computer 25 years ago.

The only thing more impressive than hardware engineers delivering continuous massive performance improvements for the past several decades is software engineers' ability to completely erase that with more and more bloated programs that do essentially the same thing.

dlcarrier 21 minutes ago [-]
You joke, but it really is more work. I've developed software in languages from assembly to JavaScript, and for any given functionality it's been easier to write it in RISC assembly running directly on the hardware than to get something working reliably in JavaScript running on a framework in an interpreter in a VM in a web browser, where it's impossible to reliably know what a call is going to do, because everything is undocumented and untested.

One of the co-signers of the Agile Manifesto had previously stated that "The best way to get the right answer on the Internet is not to ask a question; it's to post the wrong answer." (https://en.wikipedia.org/w/index.php?title=Ward_Cunningham#L...) I'm convinced that the Agile Manifesto was an attempt to make an internet post of the most-wrong way to manage a software project, in hopes someone would correct it with the right answer, but instead it was adopted as-is.

vachina 1 hours ago [-]
> Discord running on a modern computer isn't any more responsive, if not less responsive, than an ICQ client was running on a computer 25 years ago.

I feel this. Humanity has peaked.

idiotsecant 52 minutes ago [-]
This is silly. That's like saying that machines haven't gotten any better because a helicopter doesn't eat any less hay than a horse did.
dlcarrier 18 minutes ago [-]
I don't follow your analogy. Can you elaborate?
st_goliath 2 hours ago [-]
> The Megahertz Wars were an exciting time.

About a week ago, completely out of the blue, YouTube recommended this old gem to me: https://www.youtube.com/watch?v=z0jQZxH7NgM

A Pentium 4, overclocked to 5GHz with liquid nitrogen cooling.

Watching this was such an amazing throwback. I remember clearly the last time I saw it, which was when an excited friend showed it to me on a PC at our school's library, a year or so before YouTube even existed.

By 2005, my Pentium 4 Prescott at home ran at some 3.6GHz without overclocking, and 4GHz models for the consumer market had already been announced (but were plagued by delays), so surely 10GHz was "just a few more years away".

fnord77 2 hours ago [-]
Only just last year did someone goose a PC CPU to 9.13GHz

https://www.tomshardware.com/pc-components/cpus/core-i9-1490...

embedding-shape 2 hours ago [-]
> Nothing since has packed nearly the impact with the exception of going from spinning disks to SSDs.

"Bananas" core-counts gave me the same experience. Some year ago I moved to Ryzen Threadripper and experienced similar "Wow, compiling this project is now 4x faster" or "processing this TBs of data is now 8x faster", but of course it's very specific to specific workloads where concurrency and parallism is thought of from the ground up, not a general 2x speed up in everything.

rr808 1 hours ago [-]
I still remember my first CPU with a heatsink. It seemed like a temporary dumb hack.
HPsquared 2 hours ago [-]
SSDs were such a revolution though, and a really rewarding upgrade. I'd fit SSDs to friends' and family members' computers as an upgrade.
micv 2 hours ago [-]
Getting my first SSD was absolutely the best computer upgrade I've ever bought. I didn't even realise how annoying load times were because I was so used to them, and coming from C64s and Amigas even spinning rust seemed fairly quick.

It took a long time before I felt a need to improve my PC's performance again after that.

coffeebeqn 2 hours ago [-]
There were quite a few mind-blowing upgrades back in the day. Getting my first sound card, instead of the PC speaker, was one of my most memorable moments.

I remember loading up Doom, plugging in my shitty earphones that had a barely long enough cable, and hearing the “real” shotgun sound for the first time. Oo-wee

sigmoid10 2 hours ago [-]
I once had a decade old Thinkpad that suddenly became my new work laptop once more thanks to an SSD. It's a true shame they simply don't make them like this anymore.
dcminter 2 hours ago [-]
Just before I installed an SSD was the last time I owned a computer that felt slow.
geon 2 hours ago [-]
GPUs for 3d graphics were a game changer.

I can see why you wouldn’t consider it as impactful if you weren’t into gaming at the time.

jmyeet 52 minutes ago [-]
That wasn't how it worked.

Up until the 486, the clock speed and bus speed were basically the same and topped out at about 33MHz (IIRC). The 486 started the thing of making the CPU speed a multiple of the bus speed, e.g. the 486DX2/66 (66MHz CPU on a 33MHz bus) and the 486DX4/100 (100MHz CPU on a 33MHz bus, a 3x multiplier despite the name). And that's continued to this day (kind of).

But the point is the CPU became a lot faster than the IO speed, including memory. So these "overdrive" CPUs were faster but not 2-4x faster.

Also, in terms of impact, yeah there was a massive increase in performance through the 1990s, but let's not forget the first consumer GPUs, namely the 3dfx Voodoo and later NVidia and ATI. Oh, Matrox Millennium anyone?

It's actually kind of wild that NVidia is now a trillion-dollar company. It listed in 1999 for $12/share and, adjusted for splits, Google is telling me it's ~3700x now.

varispeed 44 minutes ago [-]
I don't know. I felt this way when switching from Intel laptop to Apple M1. I am still using it today and I prefer it over desktop PC.
embedding-shape 21 minutes ago [-]
Have you ever used proper desktop computers? I suppose such a move would feel significant if you've mostly been using laptops.
random3 9 minutes ago [-]
Fun times. Coolers, paste, fans, supply watts, DIP switches and jumpers. Quake, 3dfx Voodoo vs NVidia GeForce. This is where it all started, kids.

I was in high school and had been running a "computer games club" (~ Internet cafe for games and kids) since 1998, when we got a place, renovated it ourselves, got custom-built furniture (cheap narrow desks) and initially 6 computers - AMDs at 300MHz. By 2000 we had broken through a wall into the adjacent space and had ~15 machines, cable + satellite internet for downloads, and whatever video cards we could buy or scrap. It was wild.

Sharlin 3 hours ago [-]
The i486DX 33MHz was introduced in May 1990. A 30x increase, or about five doublings, in clock speeds over ten years. That's of course not the whole truth; the Athlon could do much more in one cycle than the 486. In any case, in 2010 we clearly did not have 30GHz processors – by then, the era of exponentially rising clock speeds was very decidedly over. I bought an original quadcore i7 in 2009 and used it for the next fifteen years. In that time, roughly one doubling in the number of cores and one doubling in clock speeds occurred.
adrian_b 2 hours ago [-]
"The era of exponentially rising clock speeds" was already over in 2003, when the 130-nm Pentium 4 reached 3.2GHz.

All the later CMOS fabrication processes, starting with the 90-nm process (in 2004), have provided only very small improvements in clock frequency, so that now, 23 years after 2003, desktop CPUs still have not doubled that clock frequency.

In the history of computers, the decade with the highest rate of clock frequency increase has been 1993 to 2003, during which the clock frequency has increased from 67 MHz in 1993 in the first Pentium, up to 3.2 GHz in the last Northwood Pentium 4. So the clock frequency had increased almost 50 times during that decade.

For comparison, in the previous decade, 1983 to 1993, the clock frequency in mass-produced CPUs had increased only around 5 times, i.e. at a rate about 10 times slower than in the next decade.

hedora 1 hours ago [-]
Sort of: The Pentium 4 was a strange chip. It had way too many pipeline stages, and was basically just chasing high clock speed marketing numbers instead of performance. In other words, it hit "3.2GHz" by cheating.

I'd argue you'd need to use AMD's Athlon XP or 64-bit processors, or Intel's Pentium 3 / Core 2 Duo, to figure out when clock speeds stopped increasing.

bee_rider 2 hours ago [-]
It is true that we haven’t seen single core clock speeds increasing as fast, for a long while now. And I think everyone agrees that some nebulously defined “rate of computing progress” has slowed down.

But, we can be slightly less pessimistic if we’re more specific. Already by the early 90’s, a lot of the speedup came from strategies like pipelining, superscalar execution, and branch prediction: instruction-level parallelism. Then in the 2000s we started using additional parallelism strategies like multicore and SMT.

It isn’t a meaningless distinction. There’s a real difference between parallelism that the compiler and hardware can usually figure out, and parallelism that the programmer usually has to expose.

But there’s some artificiality to it. We’re talking about the ability of parallel hardware to provide the illusion of sequential execution. And we know that if we want full “single threaded” performance, we have to think about the instruction level parallelism. It’s just implicit rather than explicit like thread-level parallelism. And the explicit parallelism is right there in any modern compiler.

If the syntax of C was slightly different, to the point where it could automatically add OpenMP pragmas to all its for loops, we’d have 30GHz processors by now, haha.
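
(For readers who haven't written OpenMP: a minimal sketch of the kind of pragma being joked about. The scale() function and its arguments are made up for illustration; today the programmer adds the pragma by hand and must guarantee the iterations are independent.)

    #include <stddef.h>

    /* Compile with e.g. gcc -fopenmp. */
    void scale(double *x, size_t n, double a) {
        #pragma omp parallel for   /* spread the iterations across cores */
        for (size_t i = 0; i < n; i++)
            x[i] *= a;
    }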

hedora 1 hours ago [-]
Clock speed increases definitely slowed down, but now that software can use parallelism better, we're seeing big wins again. Current desktop/laptop packages are doing 100 trillion operations per second. The article's processor could do one floating point op per cycle, or 1B ops. So, we've seen a 100,000x speedup in the last 25 years. That's a doubling every ~ 1.5 years since 2000.

It's not quite apples-to-apples, of course, due to floating point precision decreasing since then, vectorization, etc, but it's not like progress stopped in 2000!
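
(Spelling out the arithmetic in that estimate:)

    \[
    \frac{100\times10^{12}\ \text{ops/s}}{1\times10^{9}\ \text{ops/s}} = 10^{5} \approx 2^{16.6},
    \qquad
    \frac{25\ \text{years}}{16.6\ \text{doublings}} \approx 1.5\ \text{years per doubling}.
    \]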

lysace 27 minutes ago [-]
Web browsing is still largely single/few-threaded in practice, afaik. (Right?)
layer8 2 hours ago [-]
On the plus side, the 486DX-33 didn’t require active cooling. The second half of the 1990s was when home computing started to become noisy, and the art of trying to build silent PCs began.
hedora 2 hours ago [-]
The Athlon XP was the bigger milestone, as I remember it.

They were both "seventh generation" according to their marketing, but you could get an entire GHz+ Athlon XP machine for much less than half the $990 tray price from the article.

I distinctly remember the day work bought a 5 or 6 node cluster for $2000. (A local computer shop gave us a bulk discount and assembled it for them, so sadly, I didn't poke around inside the boxes much.)

We had a Solaris workstation that retailed for $10K in the same office. Its per-core speed was comparable to one Athlon machine, so the cluster ran circles around it for our workload.

Intel was completely missing in action at that point, despite being the market leader. They were about to release the Pentium 4, and didn't put anything decent out from then to the Core 2 Duo. (The Pentium 4 had high clock rates, but low instructions per cycle, so it didn't really matter. Then AMD beat Intel to market with 64 bit support.)

I suspect history is in the process of repeating itself. My $550 AMD box happily runs Qwen 3.5 (32B parameters). An nvidia board that can run that costs > 4x as much.

herodoturtle 40 minutes ago [-]
I remember upgrading my 486 DX2 66MHz to a DX4 100MHz and all of a sudden being able to run Winamp and Quake. That felt pretty epic at the time.
paulryanrogers 1 hours ago [-]
My first 1GHz was an AMD, also my first non-Intel, and its required fan was so loud that I was glad to get rid of it.

The speed was nice, and some competition helped lower prices.

dd_xplore 3 hours ago [-]
I remember back in 2006 I used to browse overclocking forums to overclock my Pentium 4. I had tons of fun consuming lots of instructions; I learned the BIOS, changed PLL clocks, memory clocks, etc.
rckclmbr 2 hours ago [-]
I bought a car radiator and Dremeled out my case, and visited Home Depot for all the tubes and connectors. It’s too easy nowadays to add watercooling.
mtucker502 3 hours ago [-]
What progress is being made in overcoming the current thermal limits blocking us from high clock rates (10Ghz+)?
sparkie 25 minutes ago [-]
That's not going to happen, but there's alternative research such as [1] where we get rid of the clock and use self-timed circuits.

[1]: https://arc.cecs.pdx.edu/

vessenes 2 hours ago [-]
Like any doubling rule, the buck has to stop somewhere. Higher energy usage + smaller geometry means much more exotic analog physics to worry about in chips. I’m not a silicon engineer by any means, but I’d expect 10GHz clocks will be optical, or very exotically cooled, or not coming at us at all.
adrian_b 2 hours ago [-]
Reaching 10 GHz for a CPU will never be done in silicon.

It could be done if either silicon is replaced with another semiconductor, or semiconductors are replaced with something else for making logic gates, e.g. organic molecules, so that a logic gate can be designed atom by atom.

For the first variant, i.e. replacing silicon with another semiconductor, research is fairly advanced, but this would increase the fabrication cost, so it will be done only when methods for further improving silicon integrated circuits become ineffective or too expensive, which is unlikely to happen earlier than a decade from now.

Hikikomori 24 minutes ago [-]
Overclockers are pretty close.
FpUser 2 hours ago [-]
Having RAM read/write faster would be of way more benefit.
brennanpeterson 2 hours ago [-]
None for normal compute, since energy density is still fundamental. But the interesting option is cryogenic computing, which can have zero switching energy and clock rates in the tens of GHz.

Some neat startups to watch for in this space.

magic_man 3 hours ago [-]
The power consumed is CV^2f. It makes no sense to keep increasing frequency as you make power way worse.
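
(For reference, the dynamic-power relation behind that comment; the 15 nF, 1.3 V and 1 GHz figures below are made-up illustrative numbers, not measurements of any real chip:)

    \[
    P_{\text{dyn}} \approx C\,V^{2}\,f,
    \qquad
    15\,\text{nF}\times(1.3\,\text{V})^{2}\times 1\,\text{GHz}\approx 25\,\text{W}.
    \]

Doubling f usually also requires raising V, so power grows considerably faster than 2x.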
dlcarrier 1 hours ago [-]
At lower frequencies, leakage current plays a larger role than gate capacitance, so for any given process node there's a sweet spot. For medium to low loads, it takes less power to rapidly switch between cutting power to a core and running it at a higher frequency than needed, than to run it continuously at a lower frequency.

Newer process nodes decrease the per-gate capacitance, increasing the optimal operating frequency.
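
(A simplified way to see that sweet spot, assuming for the moment that the supply voltage stays fixed: for a job of N cycles,)

    \[
    E \;\approx\; \underbrace{E_{\text{switch}}}_{\text{roughly independent of } f}
    \;+\; \underbrace{P_{\text{leak}}\cdot\frac{N}{f}}_{\text{leakage while powered}},
    \]

so finishing sooner and then power-gating the core shrinks the leakage term; that is the "race to idle" argument, and it holds until the higher frequency starts demanding a higher voltage.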

vlovich123 3 hours ago [-]
So, heat. There are efforts to switch to optics, which don’t have that heat problem so much, but they have the problem that it’s really hard to build an optical transistor. Plus, anywhere you’re interfacing with the electrical world, you’re back to the heat problem.

Maybe reversible computing will help unlock several more orders of magnitude of growth.

HarHarVeryFunny 2 hours ago [-]
What would be the benefit? You don't need a 10GHz processor to browse the web, or edit a spreadsheet, and in any case things like that are already multi-threaded.

The current direction of adding more cores makes more sense, since this is really what CPU intensive programs generally need - more parallelism.

michaelt 11 minutes ago [-]
Because someone decided to write all the software in JavaScript and Python, which don't benefit from the added cores.
vaylian 53 minutes ago [-]
You technically don't even need a 300MHz processor for the use cases that you name. But Intel and others kept developing faster CPUs anyway.
moffkalast 16 minutes ago [-]
For parallelism we already have SIMD units like AVX and, well... GPUs. CPUs need higher single-thread speeds for tasks that simply cannot make effective use of them.
nurettin 1 hours ago [-]
Single-core speed is absolutely a thing that is needed and preferred to multicore. That's why we have AVX, AMX, etc.
KeplerBoy 27 minutes ago [-]
Meh, AVX is also just parallelism. That won't get you around Amdahl's law.
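
(Amdahl's law, for reference; the 90% parallel fraction below is an illustrative assumption, not a claim about any particular workload:)

    \[
    S(N) \;=\; \frac{1}{(1-p) + p/N},
    \qquad
    p = 0.9,\ N \to \infty \;\Rightarrow\; S \le \frac{1}{1-p} = 10,
    \]

so no amount of SIMD width or core count gets past the serial 10%.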
davidee 1 hours ago [-]
I have very fond memories of my first dual-cpu Athlon machine.

It was the workstation on which I learned Logic Audio before, you know, Apple bought Emagic. I took that machine, running very low-latency Reason, to live gigs with my band.

Carting around a full-tower computer (not to mention the large CRT monitor we needed) next to a bunch of tube Fender & Ampeg amps was wild at the time. Finding a good drummer was hard; we turned that challenge into a lot of fun programming rhythm sections we could jam to, and control in real-time, live.

jmyeet 32 minutes ago [-]
I have a hard time remembering what computers I had in the 1990s now. I had an 8086 in the 1980s. I think the next one I had was a 486/33 in the early 90s, and I had this for years. I remember having a Cyrix 586 at some point later. I think the next jump was in the early 2000s, and I honestly don't remember what that CPU was, so I can't say when I got my first 1GHz+ CPU. Probably that 2002 PC. No idea what it was now. But it did survive in some form for another 12 years.

Fun fact #1: many today may not know that the only reason Intel switched to the Pentium name was because a court ruled that they couldn't trademark a number, and Intel had cross-licensed the microarchitecture and instruction set to AMD and Cyrix.

It was with the Pentium 4 that clock speeds went insane and became a huge marketing point, even though Pentium chips had lower IPC than Athlons (at that time). There was a belief that CPUs would keep going to 10GHz+. Instead they hit a ceiling at about ~3GHz that has barely increased to this day (ignoring burst modes).

Intel originally intended to move workstations and servers to the EPIC architecture (e.g. Merced was an early chip in this series). This began in the 1990s but was years delayed and required writing software in a very particular way. It never delivered on its promise.

And AMD, thanks to the earlier cross-licensing agreement, just ate Intel's lunch with the Athlon 64 starting in 2003 by adding the x86_64 instructions, which we still use today.

Fun Fact #2: it was the Pentium 3 that saved Intel's hide long after it was discontinued in favor of the Pentium 4.

The early 2000s were the nascent era of multi-core CPUs. The Pentium 3 had survived in mobile chips and become the Pentium M and then the Core Duo (and Core 2 Duo later). This was the Centrino platform and included wireless (IIRC 802.11b/g). The Pentium 4 hit the gigahertz ceiling and EPIC wasn't going to happen, so Intel went back to the drawing board, revived the mobile Pentium 3 platform, added AMD's 64-bit instructions, and released it as their desktop CPUs. Even modern Intel CPUs are in many ways a derivation of the Pentium 3 [1].

[1]: https://en.wikipedia.org/wiki/List_of_Intel_Core_processors

1970-01-01 2 hours ago [-]
Argh. The headline. The opener. Awful. Where are editors in 2026? There's no way an LLM would write this.

The GHz barrier wasn't special. What was much more important was the fact that AMD was giving Intel a hard time and there was finally hard competition.

dlcarrier 1 hours ago [-]
AMD being competitive at the time is what mattered, but there was still technological advancement needed for them to be competitive. In this case, it was AMD's use of copper interconnects that allowed them to not only hit 1 GHz, but quickly clock up from there: https://en.wikipedia.org/wiki/Athlon#Original_release
adrian_b 2 hours ago [-]
In terms of marketing, the "GHz" barrier was special, because surpassing it has indeed created a lot of recognition in the general public for the fact that the AMD Athlon CPUs were better than the Intel Pentium III CPUs.

In reality, of course, what you say is true, and the fact that the Athlon could provide a few extra hundred MHz of clock frequency was not decisive.

Athlon had many improvements in microarchitecture in comparison with Pentium III, which ensured a much better performance even at equal clock frequency. For instance, Athlon was the first x86 CPU that was able to do both a floating-point multiplication and a floating-point addition in a single clock cycle. Pentium III, like all previous Intel Pentium CPUs required 2 clock cycles for this pair of operations.

This much better floating-point performance of Athlon vs. Intel contrasted with the previous generation, where AMD K6 had competitive integer performance with Intel, but its floating-point performance was well below that of the various Intel Pentium models (which had hurt its performance in some games).

HarHarVeryFunny 2 hours ago [-]
There was a time when increased clock speeds, or more generally increased processor throughput, were important. I can remember when computers were slow, even for things like browsing the web (and not just because internet connection speeds were slow), and paying more for a new, faster computer made sense. I think this time period may well have lasted roughly until the "GHz era" or thereabouts, after which even the cheapest, slowest computers were all that anybody really needed, except for gamers, where the solution was a faster graphics card (which eventually led to GPU computing and the current AI revolution!)
1970-01-01 2 hours ago [-]
You're conflating a few things here. The Vista era was the biggest requirement hit. That was the time when people really needed a faster PC to continue browsing. Before that, you could get away with XP running on a sub-GHz processor.
tosti 48 minutes ago [-]
That's not how I remember recent history because Linux was already pretty good before microslop XP came out. I've been daily driving cheap junk ever since, no regrets.