Ah, that explains this patchset that was submitted to the Linux kernel today
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on
s390 architecture, we aim to expand the platform's software ecosystem. This
initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU
virtualization on s390....."
From the perspective of PC building, I've always thought it would be neat if the CPU/storage/RAM could go on a card with a PCIe edge connector, and then that could be plugged into a "motherboard" that's basically just a PCIe multiplexer out to however many peripheral cards you have.
Maybe it's gimmicky, but I feel like you could get some interesting form factors with the CPU and GPU cards sitting back-to-back or side-by-side, and there would be more flexibility for how to make space for a large air cooler, or take it up again if you've got an AIO.
I know some of this already happens with SFF builds that use a Mini-ITX motherboard + ribbon cable to the GPU, but it's always been a little awkward with Mini-ITX being a 170mm square, and high end GPUs being only 137mm wide but up to 300mm in length.
yjftsjthsd-h 3 hours ago [-]
Oh, going back to a backplane computer design? That could be cool, though I assumed we moved away from that model for electrical/signaling reasons? If we could make it work, it would be really cool to have a system that let you put in arbitrary processors, eg. a box with 1 GPU and 2 CPU cards plugged in
mikepurvis 2 hours ago [-]
I believe PCIe is a leader/follower system, so there'd probably be some issues with that unless the CPUs specifically knew they were sharing, or there was a way for the non-leader units to know that they shouldn't try to control the bus.
bombcar 1 hour ago [-]
But if we're dreaming, we can have the backplane actually be multiple links (N Thunderbolt 5 cables connecting each slot directly to every other slot).
Then each device can be a host and a client at the same time, at full bandwidth.
bombcar 3 hours ago [-]
This was (is?) done - some strange industrial computers for sure and I think others, where the "motherboard" was just the first board on the backplane.
The transputer b008 series was also somewhat similar.
throwup238 2 hours ago [-]
That would crush latency on RAM.
wat10000 54 minutes ago [-]
Now we have cables that include computers more powerful than an old mainframe. So if it pleases you, just think of all the tiny little daughter computers hooked up to your machine now.
Teever 4 hours ago [-]
That's what I was hoping Apple was going to do with a refreshed Mac Pro.
I had envisioned a smaller tower design with PCIe slots, and Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.
The way I see it a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have a much longer usable lifespan and where so many features are integrated into the motherboard.
Grosvenor 2 hours ago [-]
Apple already experimented with this with the prototype Jonathan computer.
It's very late 80's in its aesthetic, and I love it.
I’ve been running VM/370 and MVS on my RPi cluster for a long time now.
raverbashing 8 hours ago [-]
But I wonder if this is "much better" than x86 emulation or virt?
Is there really SW that's limited to (Linux) ARM and not x86?
Jarwain 7 hours ago [-]
Technically aren't most android apps limited to ARM?
toast0 5 hours ago [-]
There's certainly some, but I don't think most.
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but MIPS and x86 are options. There were production x86 Android devices for a few years, and some of those might still be in use; MIPS got taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
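Since an APK is just a zip archive with native code (if any) under `lib/<abi>/`, you can check which architectures an app actually bundles. A minimal sketch; the `bundled_abis` helper name is made up for illustration:

```python
import zipfile

def bundled_abis(apk_path):
    """Return the sorted list of native ABI directories bundled in an APK.

    An APK is an ordinary zip file; bytecode-only apps have no lib/ entries
    at all, while apps with native code ship lib/<abi>/*.so directories.
    """
    with zipfile.ZipFile(apk_path) as apk:
        return sorted({name.split("/")[1]
                       for name in apk.namelist()
                       if name.startswith("lib/") and name.count("/") >= 2})
```

An empty result would mean the app is bytecode-only and should run on any architecture.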
wmf 3 hours ago [-]
Probably Intel and AMD aren't willing to do this deal but Arm is.
MikePlacid 5 hours ago [-]
> Is there really SW that's limited to (Linux) ARM and not x86?
MacOS? (hides)
mykowebhn 10 hours ago [-]
This is a serious question. What does IBM, in fact, do? I'm surprised they are still around and apparently relevant. Are they more or less a services and consulting company now?
roncesvalles 9 hours ago [-]
Putting consumer grade (aka "commodity") hardware in a datacenter and running your infra on it is a bit of a meme, in the sense that it's not the only way of doing things. It was probably pioneered/popularized by Google but that's because writing great software was their "hammer", ie they framed every computing problem as a software problem. It was probably easier for them (= Jeff Dean) to take mediocre hardware and write a robust distributed system on top instead of the other way around.
There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without rebooting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
vbezhenar 8 hours ago [-]
What I think today people do:
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics which will trigger alerts, and eventually people will know about it and fix it.
Another example: your simple program makes some REST GET query. The query fails for some reason. But it was intercepted by a middleware proxy, which sees that the HTTP response was a 5xx, so it can retry. It retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program has no idea about all the machinery that made it work; it just threw an HTTP query and got a response.
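The retry behaviour such a proxy applies can be sketched in a few lines of Python; the function name, attempt count, and delays here are illustrative, not taken from any particular proxy:

```python
import random
import time

def retry_with_backoff(call, max_attempts=4, base_delay=0.5):
    """Retry `call` on failure, with exponential backoff plus jitter.

    `call` is any zero-argument function that raises on a 5xx-style
    failure. Delays grow as base_delay * 2**attempt, with a bit of
    random jitter so many clients don't retry in lockstep.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of retries: propagate the failure upward
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
```

The program behind the proxy never sees the intermediate failures, only the final success or the exhausted-retries error.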
There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.
zozbot234 8 hours ago [-]
> There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software.
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
throwaway27448 8 hours ago [-]
> Credit card transactions and banking software run on this model for example
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
throwawaypath 3 hours ago [-]
Current generation of banking software is expanding on the mainframe:
IBM Z mainframes play a pivotal role in facilitating 87% of global credit card transactions, nearly $8 trillion in annual payments, and 29 billion ATM transactions each year, amounting to nearly $5 billion per day. Rosamilia highlighted the continuous growth in demand for capacity over the past decade, which has seen inventory expand by 3.5 times.
That post fails to mention Capital One's move from IBM mainframes to AWS was one of the reasons they suffered one of the largest data breaches in history.
esseph 7 hours ago [-]
Red Hat OpenShift (IBM) is what a lot of banks have settled on. Red Hat went all in maybe 5+ years ago in capturing those institutions.
VorpalWay 5 hours ago [-]
Ah, that explains why IBM bought RedHat. Or at least one reason for doing so.
esseph 3 hours ago [-]
I'd imagine close to 95% in the US: if they're running important workloads on prem on Linux, it's on RHEL. A staggering number of VMs and bare metal.
Bigpet 6 hours ago [-]
Is that in addition to mainframes or for completely replacing them?
zhengyi13 4 hours ago [-]
Probably both, to respond to the risk tolerances of any given org.
esseph 3 hours ago [-]
Both
Some stayed on prem, some pushed code to mainframe VMs in the cloud, some went to OpenShift (mostly on prem from what I've seen, probably 80-85%).
bitwize 5 hours ago [-]
I work in banking. We provide modern solutions for small local banks in the US. That's how our core runs. It's just Java apps (Spring Boot, Jakarta EE) running in the cloud.
Nursie 8 hours ago [-]
> Credit card transactions and banking software run on this model for example
Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
mech422 1 hours ago [-]
IIRC the Stratus/Model 88 was Moto 68K chips, not x86? I worked on them for years on wall st. - really nice machines! :-D
mghackerlady 7 hours ago [-]
x86 servers weren't that common in the 90s and early 2000s; that was all Sun or the other commercial Unix vendors' stuff
greedo 5 hours ago [-]
Sun was dying in 2000. I was busy deploying BSD and a bit later Linux for all our x86 gear.
pjmlp 3 hours ago [-]
Meanwhile in 2000 we only considered Linux good enough to host our MP3 file server and quake for the late nights.
All our production stuff was being deployed on Aix, HP-UX, Solaris and Windows NT/2000 Server.
Likewise most of my university degree used DG/UX and Solaris, when Red-Hat Linux was first deployed on the labs, it was after the DG/UX server died, and I was already on the fourth year of a five year degree.
greedo 14 minutes ago [-]
Well we were a small startup, and the idea of using AIX was a non-starter. Solaris was lovely, but our E250 was only for mail, and in hindsight we should have stood up a FreeBSD server with dovecot or something instead of a system that we migrated off of a year later.
We did use NT/2K internally but that was because we had some who insisted on using SMB via Windows.
Such fun times. The *nix and *nix-like OSes were spreading like fire. I never would have thought I'd ever wrangle them for the majority of my career.
mghackerlady 4 hours ago [-]
Java was exploding and Sun machines were the server platform at the time. Yes, the dot-com bubble burst and their stock was in freefall, but all the things deployed to Sun that survived the bubble didn't just disappear or move to x86 overnight
greedo 12 minutes ago [-]
Well you can say the same about COBOL...
Just because things hung around didn't mean that Sun/Solaris/Java were long for this world. Linux/x86 was just too cheap compared to SPARC gear. Even if it wasn't as robust as the Sun gear, it just made too much sense especially if you didn't have any legacy baggage.
Nursie 3 hours ago [-]
In the 90s, perhaps not massively, but gaining ground very early in the 00s. I started my career in 2000 and most of the credit-card related stuff I built until ‘05 was targeted at Windows, Linux and Solaris, with a variety of other Unix platforms depending on the client/project.
But the x86 I was referring to in my comment above, Stratus, was (maybe still is?) an exotic attempt to enter the mainframe-reliability space with windows. IIRC it effectively ran two redundant x86 machines in lockstep, keeping them in sync somehow, so that if hardware on one died the other could continue. I have no idea how big their market was, but I know of at least one acquirer/issuer credit card system that ran on that hardware around 2002-3.
Cthulhu_ 9 hours ago [-]
A better question would probably be what they don't do; just going off the wiki page (https://en.wikipedia.org/wiki/IBM) for recent history, they're in health care (imaging), weather, video streaming, cloud services, Red Hat, managed infrastructure (which branched off into a company called Kyndryl, which has 90,000 employees in 115 countries), warfare ("In June 2025, IBM was named by a UN expert report as one of several companies "central to Israel's surveillance apparatus and the ongoing Gaza destruction.""), etc etc etc.
Basically they do a lot, but they're not showy about it.
Frieren 10 hours ago [-]
IBM has more revenue than Oracle even if we hear way less about it. 5 times smaller than Apple, though. It also has more employees than Microsoft or Alphabet. But it has tighter profit margins than other tech companies.
IBM is not in consumer products nor services so we do not hear about it.
gpapilion 6 hours ago [-]
It’s a very different company post the PwC purchase. They have around 1/3 of the revenue from consulting which tends to push the valuation down due to its relative low margin when compared to software. This also inflates the number of employees.
lotsofpulp 9 hours ago [-]
Oracle/TSMC/SpaceX isn’t in consumer products/services, but they are heard about.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
freedomben 8 hours ago [-]
SpaceX is pretty heavily in consumer products/services now that Starlink is big. But otherwise yes you are correct.
hsbauauvhabzb 9 hours ago [-]
They also helped the nazis
phrotoma 8 hours ago [-]
Early in my career I spent some years working at the biggest bank in Canada, they were (and still are) an enormous IBM customer. Hardware, software, consulting, and probably lots of other things I had no visibility into.
Beneath the countless layers of VMs and copious weird purpose built gear like Tandem and Base24 for the ATMs was a whole bunch of true blue z/OS powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world, moving money between accounts, compounding interest, and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
I don't know how exaggerated this story is, but one of my buddies did his internship at TD. One of his skip managers told him that if you know COBOL there are departments that will give you a blank cheque during salary negotiation.
phrotoma 8 hours ago [-]
Yeah it's hard to say but I believe there's at least some truth to that. I took COBOL off my resume over a decade ago just to combat the volume of recruiters trying to drag me away from the cloud back to on-prem land.
A good friend of mine who worked on a CICS based credit card processing application at that bank doubled his salary twice inside of 4 yrs. First by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks), and then by quitting that job and coming back to the bank to take over the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
vbezhenar 8 hours ago [-]
I don't think "know COBOL" is enough. I'm pretty sure I can learn COBOL in a week. It's more about "know COBOL and know all this old stuff like CLIs, etc, and know all these old approaches".
nunez 5 hours ago [-]
Not sure if this is still the case, but Dillard's (US retailer) had a COBOL training program for undergrads as recently as six years ago
zozbot234 8 hours ago [-]
Typically it's not just about knowing COBOL as a language, the bottleneck is having real expertise wrt. highly specific, fiddly proprietary frameworks that are implemented on top of COBOL.
3yr-i-frew-up 6 hours ago [-]
Amazing to know AI has eliminated this role that used to command a blank cheque.
chasd00 7 hours ago [-]
> purpose built gear like Tandem
Tandem! Now there's a name i haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic, we were both floored with the capabilities.
/we were in our early 20s and the inet was just taking off, so there was lots of "magic" everywhere
The Remarkable Computers Built Not to Fail by Asianometry
functional_dev 8 hours ago [-]
is it that bad?
maybe that is a secret for a long life. I want a job that never disappears :)
phrotoma 6 hours ago [-]
Man ... this question hits me really hard. I was absolutely miserable by the end of my years at the bank, and the part that really fucked me up was that (at the time) I could not understand why all my colleagues weren't.
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
pjmlp 9 hours ago [-]
Own Red-Hat, thus major contributions to Wayland, GNOME, GCC and Java, at very least.
Have their own Java implementation, with capabilities like AOT before OpenJDK got started on Leyden, or even Graal existed, for years had extensions for value types (nowadays dropped), and alongside Azul, cluster based JIT compiler that shares code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside Aix, and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, design processes, one of the companies that does huge amounts of patents per year across various fields.
And yes, a services company that is actually a consortium of IBM-owned companies, many of them under a different brand (which is followed by "an IBM company").
bargainbin 9 hours ago [-]
I work for a big international corp. We pay IBM a blanket sum annually because it's that hard to quantify just how much we rely on their services and licensing costs.
Licensing of course is just typical rent seeking behaviour, but their services are valuable given the financial impact if one of their solutions goes down on us (which happens very rarely)
JoachimS 8 hours ago [-]
Everything. They have done for decades, and will do for decades. And what IBM focuses on is probably worth looking into.
IBM (imho) is at the absolute frontline in quantum computers. One could argue whether the number of startups in QC means there is an actual market or not; those are companies that live on VC or the valuation of their stock.
But IBM is not showy, not on the front pages, does not live on VC or stock valuation. IBM makes tons of money decade after decade from customers that are also not showy but make tons of money. Banks, financial institutions, energy, logistics, health care etc etc. If IBM thinks these companies will benefit from using QC from IBM (and pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM have run the numbers and decided that the money possible to earn on QC services outweighs the engineering and research spending required: QCs powerful enough to run the QC-supported algorithms these companies need to make even more tons of money. And it's probably not breaking RSA or ECC.
enether 13 minutes ago [-]
They make $8-9B a year (~90% profit margins) selling software for mainframes, which were deployed ages ago but still have to be maintained because critical COBOL business code was written on their systems - and migration is too risky/costly.
To give you an idea:
- of the risk in regulated industries like banking: a UK bank was once fined *$62 million* for botching a mainframe migration and causing downtime.
- of the difficulty and risk in non-tech industries: Australia once spent *$120 million* trying to migrate its social security system off mainframes... and failed.
Mainframes are not their only business, of course, but it's a major cash cow that's underappreciated. I, for one, didn't know that business keeps growing.
They design their own CPUs, and they sold $15B of hardware last year. The Telum II in the z17 mainframe is a Samsung 5nm part.
What I don't get however is who'd use their custom accelerators for AI inference.
eru 9 hours ago [-]
Anyone who can't get any better AI accelerators elsewhere? Last I heard, these things were sold out for years on end. And anyone who can make one, can sell them.
ghaff 10 hours ago [-]
So they had $30 billion in software revenue last year and $15 billion in infrastructure against $20 billion in consulting.
guenthert 5 hours ago [-]
You don't read much about IBM here, but this is the wrong site to look for them. A big chunk of IBM's business comes from other businesses outside the IT industry. You're more likely to read about IBM in the Wall Street Journal; Google finds "IBM" at wsj.com about 48000 times (it finds "oracle" there about 30000 times).
seanmcdirmid 5 hours ago [-]
IBM is known as a toxic tech company, along with Palantir and Oracle. We talk about IBM on HN, but mostly in negative contexts.
shrimppersimmon 9 hours ago [-]
They design and build not one but two CPU architectures, s390/Z and POWER.
Both have been around for many years, but neither is obsolete, they're just not designed for consumer applications.
They still generate $10-15 billion per year in revenue.
eru 9 hours ago [-]
Power was used in consumer applications a long time ago? I think Apple used them for a while, and so did some game consoles?
shrimppersimmon 9 hours ago [-]
Yes. Apple used PowerPC, and PowerPC was also in the Xbox 360, PS3, Wii, and Wii U. It was also widespread in embedded sectors like networking, automotive, and aerospace.
IBM eventually stepped away from the embedded market and later lost their foothold in consoles as well. While Raptor did offer Power9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
mghackerlady 7 hours ago [-]
What I wouldn't give for raptor money... they've gotten more and more expensive as time went on
kstrauser 5 hours ago [-]
Sort of, in the form of PowerPC, which was an Apple-IBM-Motorola (“AIM”) collaboration. It’s closely related to IBM’s Power line, but more like a predecessor than a sibling.
kitd 7 hours ago [-]
They also designed the Cell CPU used in Nintendo Wiis, among others.
tempest_ 4 hours ago [-]
Cell was PS3, and the Wii used a Power CPU.
IBM had a hand in both however
panick21_ 8 hours ago [-]
They designed many, many more CPU architectures.
lmpdev 9 hours ago [-]
I was surprised to find out they still have hardware repair technicians (extremely expensive but reliable: ~$400 per computer around 2022 iirc)
But yes they’re mostly enterprise/services/mainframes not anything overly consumer
quietsegfault 9 hours ago [-]
No, IBM has Unisys contractors, not employees. All the techs I’ve worked with from IBM have been a nightmare. One dropped an entire drive array on the ground, and tried to install it despite it being bent and no longer fitting on the rack. I have been acquired by IBM twice. They are a nightmare, horrible company.
stonogo 6 hours ago [-]
IBM has plenty of hardware techs. They're called system services representatives (SSRs) and if you got a Unisys contractor, that just means you're not spending enough money for IBM to send an SSR.
itake 9 hours ago [-]
I own their shares due to their Quantum Computing group
When you’re that large and established it’s very hard to die. I expect IBM to exist in some form pretty much forever
deepriverfish 5 hours ago [-]
my company uses AS/400 and DB2 and pays for their servers. So they still make money from hardware too
dogma1138 10 hours ago [-]
Mainframes and consulting.
esseph 7 hours ago [-]
They own things like:
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Probably a ton of other things I'm not even aware of.
9. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
quietsegfault 9 hours ago [-]
They exist to swallow up profitable companies, extract any "unnecessary" overhead (like benefits, PTO, pay that isn't rock bottom), and package them into large enterprise licensing agreements.
eru 9 hours ago [-]
Sounds like a pretty good deal for those people who keep starting these 'profitable' companies.
If IBM runs them into the ground, there's a niche for a copy-cat of the original company that you can just found again. Rinse and repeat.
p-e-w 10 hours ago [-]
I was shocked when IBM acquired Red Hat a few years ago. I had silently assumed at the time that Red Hat was far bigger than IBM nowadays, so the reverse would have made more sense to me.
freedomben 8 hours ago [-]
Google was apparently in the running for acquiring Red Hat. I still wonder what Red Hat would be today if Google had acquired instead.
mghackerlady 7 hours ago [-]
much, much worse
freedomben 7 hours ago [-]
Yes I agree, given the direction G has been going. I was disappointed at the time, but it was probably a blessing in disguise
mghackerlady 6 hours ago [-]
honestly I think it's a net positive (for me at least) because it ensures Fedora has great POWER support (I'll never be able to afford a POWER machine at this rate, but the architecture is an absolute pleasure to work with whenever I have to)
fock 8 hours ago [-]
They sell (managed) database appliances (on z and Power) and associated software (think the platform/HANA parts of SAP) - all state-of-the-art in the late 1990s but since then put on maintenance mode and it shows (a bit like oracle...).
Their hardware is still cool custom-built silicon and imo state of the art, but since k8s, high-speed networks and multi-TB machines (for <$100k) are here and run Linux, no new venture buys into that anymore (except for gulf states...).
Before, when the competition was a cluster of Itanium/VMS or Sparc/Solaris and the associated contract, no one bought into that either at scale, but also no one using IBM had a very compelling reason to switch everything around.
So essentially they sell new hardware and "support" to customers who have been in need to process tabular, multi-GB databases since when a PC was 128MB memory and have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data at a cost/in a cloud style billing model. That is so expensive though that every large IBM shop has built an elaborate layer of JVMs, Unix and mirror-databases around their IBM appliances. Lately they bought Redhat and hashicorp and confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (also remember the alternative JVM OpenJ9, do you all?).
I think the later a company started using centralized electronic record keeping, the higher the likelihood they are not paying IBM anymore: commercial banks, governments and insurance started digitizing in the 60s (with custom software) and if the companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (so manufacturing and retail chains which are younger than banks) started digitizing a little later (Walmart was only founded in the 60s and electronic CAD started in the 80s) and while they likely used IBM in the past (SAP was big on DB2) they might not use it anymore (also it helps they usually bought the ERP or PLM from someone else). New companies whose sole business was to run a digital platform started on Unix (see Amazon, who even successfully fought to ditch Oracle) or just built their whole platform (Google). If those companies predate Unix they usually fought hard to get rid of IBM (Microsoft, Amadeus)
Consulting/outsourcing services have been spun out to Kyndryl, so nowadays IBM only sells hardware, support for their products and ostensibly has some people left to develop their products... The days when that was a big thing and IBM produced all the stuff they sell support for now, have been long gone. A fun link to see how their "product development" operates nowadays is this discussion to bring gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... - tl;dr "hey you opensource company, we are IBM and managed to pay someone to port a go compiler to z/OS. Now we have a customer who wants to use gitlab with z/OS. Would you like to make your software part of our product offering?".
A fun fact is that - even within IBM - access to the real mainframe seems to be very limited which shows a bit in the discussion linked above and also with an ex-Kyndryl-person saying: "oh, I once had a contract where we replaced the mainframe and we ran that on Linux-boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine"
silvestrov 11 hours ago [-]
> dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture, and that ARM64 is starting to catch up in performance at a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
adrian_b 7 hours ago [-]
> So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it does not make sense, because if IBM sold ARM-based servers nobody would buy from them instead of using cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when if you had an Apple computer with a 6502 CPU you could also buy a Z80 CPU card for it, so you could also run CP/M programs on your Apple computer, not only programs written for Apple and 6502.
Thus with this ARM accelerator, you will be able to run on IBM mainframes, in VMs, also Linux-on-ARM instances or Windows-on-ARM instances. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
acdha 2 hours ago [-]
People buy IBM for the support and exotic features around high-availability and expansion. I think they’d be able to do an ARM migration if needed since they have deep experience with emulation (there is mainframe code from the 1970s running on POWER today on nested emulators) and they have a lot of precedent for their support engineers working closely with customers.
rzerowan 10 hours ago [-]
I'm thinking maybe as a complement to x86 offerings and eventual displacement as a primary offering. I do not see them ditching POWER.
The architecture might be non-standard and not very widespread, but for what it does and the workloads suited to it, I don't think any ARM design comes close - maybe Fujitsu's A64FX.
silvestrov 9 hours ago [-]
Marketingwise I think it is difficult for IBM to sell x86 systems as it is too easy for customers to compare performance to a standard Wintel server.
Sun had the same problem after 2001 dotcom when standard PC servers became reliable enough to run web servers on.
It's easier to sell "our special sauce" when building using a custom ARM platform. Then you have no easy comparison with standard servers.
The i systems are just POWER machines with different firmware.
tempay 11 hours ago [-]
> ARM64 is starting to catch up in performance for a much lower price
Why do you say "starting to"? arm64 has been competitive with ppc64le for a fairly long time at this point
adrian_b 7 hours ago [-]
I do not think that I have seen any public benchmark for more than a decade that can compare ARM-based CPUs with IBM POWER CPUs.
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance but only for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
my123 5 hours ago [-]
After Power9, IBM became uncompetitive in multi-core performance against mainstream server CPUs - both x86 and Arm. They didn't keep up with the rise in core counts.
And the single-thread side isn't that good either, but SMT8 is quite a nice software licensing trick.
mbreese 8 hours ago [-]
I thought PPC was supposed to be highly performant, but not very efficient. I didn’t think ARM (at least non-Apple ARM) was hitting that level of performance yet. I thought ARM was by far more efficient, but not quite there in terms of raw performance.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
kjs3 7 hours ago [-]
Are you guys sure you're not confusing product lines? PPC is a Power ISA architecture, but hasn't been pushing desktop/server level performance for, what, almost 20 years? It's an embedded chip now, and AFAIK IBM doesn't even make them any more. Power (currently "10th gen"(-ish)) is the performant architecture, used in the computers formerly known as iSeries, formerly known as RS/6000. It's pretty fast, but not price competitive. They aren't really the same thing.
adrian_b 6 hours ago [-]
"PowerPC" was a modification of the original IBM POWER ISA, which was made in cooperation by IBM, Motorola and Apple.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA, instead of referring to the current version, but the 2 names are the same thing. PowerPC was an attempt of rebranding, but then they returned to the original name.
kjs3 5 hours ago [-]
Thanks for the lecture. My point is that people often confuse PPC in the embedded space (still in production) with Power in the enterprise space (where no one I know refers to it as 'PPC' other than historical artifacts like 'ppc64le' (we run mostly AIX), and haven't since the G5 days). Same/similar ISA, very very different performance expectations. YMMV.
stonogo 6 hours ago [-]
There isn't really an arm64 processor available that runs as fast as a Power10 processor, and there isn't really a Power10 processor that runs as efficiently as an arm64 processor, so 'competitive' is probably the wrong word.
homarp 10 hours ago [-]
AI= Arm Ibm in that case
3form 8 hours ago [-]
That's quite loaded already. They should consider calling it IBM ARM 64, IA-64 in short.
mghackerlady 7 hours ago [-]
IBM was one of the few companies not buying the whole itanium nonsense iirc
wmf 3 hours ago [-]
IBM wasted plenty of effort on Itanic but at least they were smart enough not to cancel any of their architectures.
formerly_proven 8 hours ago [-]
IBM has two architectures which are de-facto only used by them, s390x and ppc64le. They have poured a lot of resources into having open source software support those targets, and this announcement might mean they find it easier/cheaper going forward to virtualize ARM instead and maybe even migrate slowly to ARM.
mbreese 8 hours ago [-]
I think they see customers wanting to have the flexibility to move to ARM and this is the fastest way to say they support ARM workloads. Maybe this is a path for IBM to eventually use ARM chips down the road, but I see this as being more about meeting customers where they think the demand is today rather than an explicit guess for tomorrow.
mghackerlady 7 hours ago [-]
ppc64le has other machines. Raptor, off the top of my head, but there's also that weird notebook project that seems to be talked about once every few years and probably won't ever happen, and some pretty cool stuff in the Amiga space (I don't know if that's strictly LE, but POWER is supposed to be bi-endian).
hrmtst93837 2 hours ago [-]
ARM does not erase the compiler and toolchain tail IBM has dragged across two niche arches for years.
Legacy apps on s390x do not move because IBM put out a press release and IBM does not get fatter cloud margins by joining the same ARM pile as other vendors. Mainframe migration is not a weekend project. "Easier" usually means somebody signs a six digit check first.
nxobject 10 hours ago [-]
Once you parse the marketing speak, looks like there may be ARM ISA silicon in future System Z.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
bob1029 9 hours ago [-]
I think the #1 use case here is allowing AI/cloud workloads the ability to execute against the mainframe's data without ever leaving the secure bubble. I.e., bring the applications to the data rather than the data to the applications.
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
jlawer 11 hours ago [-]
I wonder if we end up with z series running on arm long term.
The value in z series is in the system design and ecosystem; IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce investment in silicon and supporting software.
themafia 10 hours ago [-]
You can run 1960s System/360 binaries unmodified on modern z/OS. The system also uses a lot of "high level assembler" and "system provided assembly macros" making a complete architecture switch extremely painful and complicated.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
kjs3 6 hours ago [-]
I don't think that would change if the underlying architecture changes; IBM has been committed to backward compatibility for a long time. Some hypothetical future mainframe-class IBM ARM would undoubtedly be able to virtualize a 360/370/390 without breaking a sweat. And ARM will undoubtedly enable IBM to add custom emulation hardware to their spin on ARM if they need it.
iSnow 9 hours ago [-]
It is wild how ARM - which was kind of a niche company and ISA - has taken the world by storm since the modern smartphone was born. Now their designs make their way upwards to big iron and AI datacenters.
kjs3 6 hours ago [-]
It's what Intel did with x86 a few decades before the modern smart phone.
graemep 9 hours ago [-]
Smartphones were a big boost, but they were already growing very rapidly before that.
chrsw 9 hours ago [-]
Maybe I don't know enough technical details about these CPU architectures or IP agreements, but I don't see why IBM couldn't have done what Arm did but with PowerPC.
wmf 3 hours ago [-]
PowerPC doesn't have the organic ecosystem that ARM has.
3yr-i-frew-up 6 hours ago [-]
2026 continues to amaze me.
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already do collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
dev_l1x_be 4 hours ago [-]
I miss working on Power platforms. It is such a nice system with openfirmware. The world went another way.
JSR_FDED 8 hours ago [-]
IBM is desperate to keep the mainframe relevant. The typical transactional workloads are going to stay on the mainframe, and by bolting on ARM "for AI" they're giving their customer CIOs a reason to defend their decision to stick with the mainframe.
bonzini 8 hours ago [-]
This certainly has been in the making for longer than the "everything we do must be for AI" bubble. In fact s390 has its own on-die inference engines and they have access to the same caching mechanisms as the main processor (which are quite insane).
mghackerlady 7 hours ago [-]
IBM has been on the AI hypetrain since 2018ish iirc
george_belsky 5 hours ago [-]
Nvidia tried, it's IBM's turn now
christkv 10 hours ago [-]
Arm co processors for main frames?
rbanffy 9 hours ago [-]
AIX for ARM? ;-)
mghackerlady 7 hours ago [-]
Is modern ARM stuff done big-endian? Because AIX is exclusively BE iirc
yjftsjthsd-h 5 hours ago [-]
That, weirdly, should be fine; ARM is bi-endian in the sense of being perfectly happy to run either way. In fact, the easiest way I know of to test software on a big endian system is to run a perfectly ordinary Raspberry Pi with NetBSD's big endian port for it:)
mghackerlady 4 hours ago [-]
Yeah, I know ARM is bi-endian (pretty much all non-x86 archs used nowadays are), but the question is whether that's actually enough to have a software base for it. NetBSD having a BE ARM port is great, but most ARM stuff is done for LE systems, since macOS, NT, and most Linux stuff are LE. This isn't that much of a problem in the free software world, because we like to test things on obscure architectures, but the kind of proprietary stuff that you'd want to run on ARM might have problems (assuming it wasn't ported to AIX already).
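To make the LE assumption concrete: a minimal Python sketch of how byte order only bites when code serializes in the host's native order and silently assumes that order is little-endian (`struct` is in the standard library; the variable names are mine):

```python
import struct

value = 0x01020304

# Explicit byte orders behave identically on every host:
le = struct.pack("<I", value)  # b'\x04\x03\x02\x01' -- little-endian
be = struct.pack(">I", value)  # b'\x01\x02\x03\x04' -- big-endian

# "=" packs in the host CPU's native order. Code that serializes with
# native order and then assumes the bytes are little-endian is exactly
# the kind of code that breaks when rebuilt for a big-endian system.
native = struct.pack("=I", value)
host_is_little_endian = (native == le)
```

Software that always names its wire byte order (`<` or `>`) ports cleanly either way; it's the implicit `=` that a BE AIX-on-ARM port would flush out.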
rbanffy 6 hours ago [-]
I never said it'd be an easy port, although there was an x86 (and s/390) port back when time itself was new.
edit: s/390 is big endian.
adolph 8 hours ago [-]
I wonder how this relates to Linaro, a joint venture of ARM, IBM, and others started in 2010.
TL;DR: "fine, we'll support Arm too because customers want it."
ghaff 11 hours ago [-]
Is that such a silly notion?
jonkoops 8 hours ago [-]
No, but it is a lot of corporate speak for such a simple announcement.
shevy-java 10 hours ago [-]
Is that good or bad?
My gut feeling says to lean more on the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness ... https://killedbygoogle.com/
EdoardoIaga 6 hours ago [-]
great
panick21_ 8 hours ago [-]
IBM and 'track record of innovation' ... is a bit of an understatement.
nubinetwork 10 hours ago [-]
April fools day was yesterday, IBM.
mafzal9 11 hours ago [-]
Arm is trying to expand its horizons everywhere; just last year ARM acquired Arduino.
VorpalWay 11 hours ago [-]
No, it was Qualcomm who acquired Arduino. While they are an ARM licensee who make ARM chips, they are not ARM.
woadwarrior01 9 hours ago [-]
Also, Qualcomm and ARM aren't quite on good terms.
"KVM: s390: Introduce arm64 KVM"
"By introducing a novel virtualization acceleration for the ARM architecture on s390 architecture, we aim to expand the platform's software ecosystem. This initial patch series lays the groundwork by enabling KVM-accelerated ARM CPU virtualization on s390....."
https://patchwork.kernel.org/project/linux-arm-kernel/cover/...
things like https://www.youtube.com/watch?v=a6b4lYOI0GQ could get you a really interesting form of multitasking
The Transputer B008 boards were also somewhat similar.
I had envisioned a smaller tower design with PCIe slots, and Apple developing and selling daughter cards that were basically just a redesigned MacBook Pro PCB but with a PCIe edge connector and power connector.
The way I see it, a user could start with a reasonably powerful base machine and then upgrade it over time, mixing and matching different daughter cards. A ten-year-old desktop is fine as a day-to-day driver; it just needs some fancy NPU to do fancy AI stuff.
This kind of architecture seems to make sense to me in an age where computers have such a longer usable lifespan and where so many features are integrated into the motherboard.
https://512pixels.net/2024/03/apple-jonathan-modular-concept...
https://news.ycombinator.com/item?id=46248644
I’ve been running VM/370 and MVS on my RPi cluster for a long time now.
Is there really SW that's limited to (Linux) ARM and not x86?
I'd guess most apps are bytecode only, which will run on any platform. Some apps with native code have bytecode fallbacks. Many apps with native code include support for multiple architectures; the app developer will pick what they think is relevant for their users, but mips and x86 are options. There were production x86 Androids for a few years, and some of those might still be in user bases; mips got taken out of the Native Development Kit in 2018, so it's probably not very relevant anymore.
MacOS? (hides)
There is, however, a completely different vision for how web infrastructure should be and that is to have extremely resilient hardware and simple software. That's what a mainframe is. You can write a simple and easy to maintain single process backend program, run it on a mainframe and be fairly confident that it can run without stopping for decades. Everything from the power supply to the CPU is redundant and can be hot swapped without booting the OS. Credit card transactions and banking software run on this model for example (just think about how insanely reliable credit card transactions are).
IBM has a monopoly in the second world. You could say the entire field of distributed systems is one big indie effort to break free of IBM's monopoly on computing.
1. They run complicated infrastructure software, written by third-party developers.
2. And they run their own simple programs on top of them.
So for example you can rent a Kubernetes cluster from AWS and run a simple HTTP server. If your server crashes, Kubernetes will restart it, so it's resilient. There will be records in some metrics which will light up some alerts, and eventually people will know about it and will fix it.
Another example: your simple program does some REST GET query. This query failed for some reason. But that query was intercepted by a middleware proxy, and that proxy determines that the HTTP response was 5xx, so it can retry it. So it retries a few times with properly calibrated delays, eventually gets a response, and propagates it back to the simple program. The simple program had no idea about all the stuff happening to make it work; it just threw an HTTP query and got a response.
There's a lot of complicated machinery to enable simple programs to be part of resilient architecture. That's a goal, anyway.
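The retry proxy described above fits in a few lines; this is a hypothetical sketch of the idea (names and the request/response dict shape are mine, not any particular service mesh's API):

```python
import time

def retry_on_5xx(send, max_attempts=4, base_delay=0.5):
    """Wrap a request-sending callable: retry on HTTP 5xx with exponential backoff."""
    def wrapped(request):
        response = None
        for attempt in range(max_attempts):
            response = send(request)
            if response["status"] < 500:
                return response          # success (or a client error): pass through
            if attempt < max_attempts - 1:
                time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
        return response                  # retries exhausted: surface the last 5xx
    return wrapped
```

The point of the pattern is that the "simple program" calls `wrapped` exactly as it would call `send`; all the calibration (attempt count, backoff curve, which statuses are retryable) lives in the middleware.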
You actually need both, the point of the extremely resilient hardware is that it can act as the single source of truth when you need it - including perhaps hosting some web-based transactions that directly affect your single source of truth. (Calling this a "model" for web-based infrastructure in general would be misleading though: a credit card transaction on the web is not your ordinary website! The web is just an implementation technology here.) Everything else can be ephemeral open systems, which is orders-of-magnitude cheaper.
TSYS is super expensive and is dying out. The current generation of banking software is very much shifting to distributed software across commodity data centers.
IBM Z mainframes play a pivotal role in facilitating 87% of global credit card transactions, nearly $8 trillion in annual payments, and 29 billion ATM transactions each year, amounting to nearly $5 billion per day. Rosamilia highlighted the continuous growth in demand for capacity over the past decade, which has seen inventory expand by 3.5 times.
https://thesiliconreview.com/2024/04/ibm-new-mainframe-web-t...
Some stayed on prem, some pushed code to mainframe VMs in the cloud, some went to OpenShift (mostly on prem from what I've seen, probably 80-85%).
Eh, they can but even a couple of decades ago there was a shift to open platforms. 90s and early 00s, sure, it was mainframe and exotic x86 species like Stratus machines. But even then the power of “throw a ton of cheaper Unix at it” was winning.
Banks’ central systems maybe, I have less experience there. IBM did also try for a while to ride the Linux virtualisation wave as well, saying “hey, you can run thousands of Linux instances on a single mainframe”, and I did some work porting IBM software to s390 Linux around 2007.
All our production stuff was being deployed on Aix, HP-UX, Solaris and Windows NT/2000 Server.
Likewise most of my university degree used DG/UX and Solaris; when Red Hat Linux was first deployed in the labs, it was after the DG/UX server died, and I was already in the fourth year of a five-year degree.
We did use NT/2K internally but that was because we had some who insisted on using SMB via Windows.
Such fun times. The *nix and *nix-like OSes were spreading like fire. I never would have thought I'd ever wrangle them for the majority of my career.
Just because things hung around didn't mean that Sun/Solaris/Java were long for this world. Linux/x86 was just too cheap compared to SPARC gear. Even if it wasn't as robust as the Sun gear, it just made too much sense especially if you didn't have any legacy baggage.
But the x86 I was referring to in my comment above, Stratus, was (maybe still is?) an exotic attempt to enter the mainframe-reliability space with windows. IIRC it effectively ran two redundant x86 machines in lockstep, keeping them in sync somehow, so that if hardware on one died the other could continue. I have no idea how big their market was, but I know of at least one acquirer/issuer credit card system that ran on that hardware around 2002-3.
Basically they do a lot, but they're not showy about it.
IBM is not in consumer products nor services so we do not hear about it.
IBM was declining for 10 years while the rest of the tech related businesses were blowing up, plus IBM does not pay well, so other than it being a business in decline, there wasn’t much to talk about. No one expects anything new from IBM.
Also, they had quite a few big boondoggles where they were the bad guys helping swindle taxpayers due to the goodwill from their brand’s legacy, so being a dying rent seeking business as opposed to a growing innovative business was the assumption I had.
Beneath the countless layers of VMs and copious weird purpose built gear like Tandem and Base24 for the ATMs was a whole bunch of true blue z/OS powered IBM mainframes chugging through thousands and thousands of interlocking COBOL programs that do everything from moving files between partner banks all over the world, moving money between accounts, compounding interest, and extracting a metric shitton of every type of fee imaginable.
If you know z/OS there's work available until your retirement. Miserable, pointless, banal, and archaic legacy as fuck mainframe work.
https://en.wikipedia.org/wiki/Tandem_Computers
https://en.wikipedia.org/wiki/BASE24
https://en.wikipedia.org/wiki/Z/OS
A good friend of mine who worked on a CICS-based credit card processing application at that bank doubled his salary twice inside of 4 yrs. First by quitting the bank and going to a boutique consultancy to build competing software (which they sold to other banks), and then by quitting that job and coming back to the bank to take over the abysmal state the CICS app had lapsed into in his absence.
And that was circa 2010.
One thing that was true of the bank then and I'm sure is true now is that when they see a nail they truly have just the one hammer. When a problem comes along, hit it with a huge sack of cash until it goes away.
Tandem! Now there's a name i haven't heard in a long time. A college friend of mine worked with some of their stuff right out of college and I still remember him telling me about it. It seemed like magic, we were both floored with the capabilities.
/we were in our early 20s and the inet was just taking off so there were lots of "magic" everywhere
https://www.youtube.com/watch?v=SSSB7ZTSXH4
The Remarkable Computers Built Not to Fail by Asianometry
Huge generalizations incoming, there are exceptions to every rule, but in my experience there are no nerds who love tech for tech's sake in the banking world. It's entirely staffed by the "C's get degrees" crowd who just want to clock in, clock out, keep their head down, and retire with a nice pension.
I wanted to work on sexy technology, wrangle clouds, contribute to open source, and hack in modern languages.
I have many friends who are still at that bank 20 yrs later. They're all directors of this that or the other thing, still just grinding out some midlevel whatever career and cruising comfortably. If that ticks all your boxes then by all means go hit up a bank job.
By the time I left I couldn't drink enough liquor in a day to rinse the stench of that job off me. If I hadn't managed to slip that place I'd be dead of liver failure by now.
It's the secret for a long life for some folks, but it ain't for everybody.
They have their own Java implementation, with capabilities like AOT before OpenJDK got started on Leyden or Graal even existed; for years it had extensions for value types (nowadays dropped), and, alongside Azul, a cluster-based JIT compiler that shares code across JVM instances.
IBM i and z/OS are still heavily deployed in many organisations, alongside Aix, and LinuxONE (Linux running on mainframes and micros).
Research in quantum computing, AI, design processes, one of the companies that does huge amounts of patents per year across various fields.
And yes a services company, that is actually a consortium of IBM owned companies many of each under a different brand (which is followed by "an IBM company").
Licensing of course just being typical rent seeking behaviour but their services are valuable given the financial impact if one of their solutions goes down on us (which is very rarely)
IBM (imho) is in the absolute frontline in quantum computers. One could argue if the number of startups in QC means that there is an actual market or not. Companies that lives on VC or the valuation of their stock.
But IBM is not showy, not on the front pages, does not live on VC or stock valuation. IBM makes tons of money decade after decade from customers that are also not showy but makes tons of money. Banks, financial institutions, energy, logistics, health care etc etc. If IBM thinks these companies will benefit from using QC from IBM (and pay tons of money for it), there is quite probably some truth in QC becoming useful in the near future. Years rather than decades.
IBM have run the numbers and decided that the money to be earned on QC services outweighs the engineering and research spending required: QCs powerful enough to run the QC-supported algorithms these companies need to make even more tons of money. And it's probably not breaking RSA or ECC.
To give you an idea:
- of the risk in regulated industries like banking: a UK bank was once fined *$62 million* for botching a mainframe migration and causing downtime.
- of the difficulty and risk in non-tech industries: Australia once spent *$120 million* trying to migrate its social security system off mainframes... and failed.
Mainframes are not their only business, of course, but it's a major cash cow that's under appreciated. I, for one, didn't know that business keeps growing.
Coincidentally, I wrote about the topic of mainframes with relation to IBM's acquisition of Confluent here today: https://blog.2minutestreaming.com/p/ibm-confluent-acquisitio...
What I don't get however is who'd use their custom accelerators for AI inference.
Both have been around for many years, but neither is obsolete, they're just not designed for consumer applications.
They still generate $10-15 billion per year in revenue.
IBM eventually stepped away from the embedded market and lost their foothold in consoles as well. While Raptor did offer Power9 systems at a somewhat accessible price point, the IBM-produced CPUs were still fundamentally enterprise-grade hardware, meaning they retained the high costs and "big iron" features of server tech.
IBM had a hand in both however
But yes they’re mostly enterprise/services/mainframes not anything overly consumer
You can see their roadmap here:
https://www.ibm.com/roadmaps/
1. Red Hat Enterprise Linux, which is by far the most commonly deployed Linux variant among US Enterprise orgs.
2. Ansible
3. Podman
4. Hashicorp Terraform / Consul / Packer / Vagrant / Nomad / Etc.
5. Giant B2B services arm
6. Mainframe, which a lot of science organizations / governments / credit card companies still run. Sometimes you may have an IBM rep show up to replace a part on the mainframe you didn't even know was broken - very reliable, fault tolerant system.
7. The only service I know where you can rent Quantum computing time in the cloud
8. Probably a ton of other things I'm not even aware of.
9. Red Hat OpenShift - so if you're big enterprise running k8s on prem, there's a good chance it's OpenShift, especially in banking / finance / government.
If IBM runs them into the ground, there's a niche for a copy-cat of the original company that you can just found again. Rinse and repeat.
So essentially they sell new hardware and "support" to customers who have needed to process tabular, multi-GB databases since when a PC had 128MB of memory and who have been doing electronic record-keeping since the 1970s. They also allow their ~hostages~, ehm, customers who trust them with their data to run processing near the data, at a cost/in a cloud-style billing model. That is so expensive, though, that every large IBM shop has built an elaborate layer of JVMs, Unix and mirror databases around their IBM appliances. Lately they bought Red Hat and HashiCorp and Confluent, thus taking a cut from the "support" of the abominations of IT systems they helped birth for some more time to come (also, remember the alternative JVM OpenJ9, do you all?).
I think the later a company started using centralized electronic record keeping, the higher the likelyhood they are not paying IBM anymore: commercial banks, governments and insurance started digitizing in the 60s (with custom software) and if the companies are old (or in US-friendly petrostates) they are all IBM customers. Corps using ERP or PLM offerings (so manufacturing and retail chains which are younger than banks) used to start digitizing a little later (Walmart only was founded in the 60s and electronic CAD started in the 80s) and while they likely used IBM in the past (SAP was big on DB2) they might not use it anymore (also it helps they usually bought the ERP or PLM from someone else). New Companies whose sole business was to run a digital-platform started on Unix (see Amazon who successfully fought to ditch Oracle even) or just built their whole platform (Google). If those companies predate Unix they usually fought hard to get rid of IBM (Microsoft, Amadeus)
Consulting/outsourcing services have been spun out to Kyndryl, so nowadays IBM only sells hardware and support for their products, and ostensibly has some people left to develop those products... The days when that was a big thing and IBM produced all the stuff they now sell support for are long gone. A fun link showing how their "product development" operates nowadays is this discussion about bringing gitlab-runners to z/OS: https://gitlab.com/gitlab-org/gitlab-runner/-/work_items/275... - tl;dr: "hey, you open-source company, we are IBM and managed to pay someone to port a Go compiler to z/OS. Now we have a customer who wants to use GitLab with z/OS. Would you like to make your software part of our product offering?". A fun fact is that, even within IBM, access to a real mainframe seems to be very limited, which shows a bit in the discussion linked above and also in an ex-Kyndryl person saying: "oh, I once had a contract where we replaced the mainframe, and we ran that on Linux boxes inside IBM, because it was just cheaper that way. Just the big reporting was a bit slow, but the reliability was just fine."
I think we can ignore the "AI" word here as its presence is only because everything currently has to be AI.
So why would IBM add ARM?
> As enterprises scale AI and modernize their infrastructure, the breadth of the Arm software ecosystem is enabling these workloads to run across a broader range of environments
I think it has become too expensive for IBM to develop their own CPU architecture and that ARM64 is starting to catch up in performance for a much lower price.
So IBM wants to switch to ARM without making too big a fuss about it.
That was my first thought too, but it does not make sense, because if IBM sold ARM-based servers, nobody would buy them instead of cheaper alternatives.
As revealed in another comment, at least for now their strategy is to provide some add-in cards for their mainframe systems, containing an ARM CPU which is used to execute VMs in which ARM-native programs are executed.
So this is like decades ago, when, if you had an Apple computer with a 6502 CPU, you could also buy a Z80 CPU card for it, so you could run CP/M programs on your Apple computer, not only programs written for the Apple and its 6502.
Thus with this ARM accelerator you will be able to run, in VMs on IBM mainframes, Linux-on-ARM or Windows-on-ARM instances as well. Presumably they have customers who desire this.
I assume that the IBM marketing arguments for this are that this not only saves the cost of an additional ARM-based server, but it also provides the reliability guarantees of IBM mainframes for the ARM-based applications.
Taking into account that today buying an extra server with its own memory may cost a few times more than last summer, an add-in CPU card that shares memory with your existing mainframe might be extra enticing.
The architecture might be non-standard and not very widespread, but for what it does and the workloads suited to it, I don't think any ARM design comes close, except maybe Fujitsu's A64FX.
Sun had the same problem after the 2001 dotcom crash, when standard PC servers became reliable enough to run web servers on.
It's easier to sell "our special sauce" when building on a custom ARM platform. Then there's no easy comparison with standard servers.
They will probably market the ARM inclusion similarly - as something that the package provides.
As far as POWER goes, I think only Raptor[1] does direct marketing of the power (hehe) and capabilities.
[1]https://www.raptorcs.com/
https://www.ibm.com/products/power
The i systems are just POWER machines with different firmware.
Why do you say "starting to"? arm64 has been competitive with ppc64le for a fairly long time at this point
The recent generations of IBM POWER CPUs have not been designed for good single-thread performance but only for excellent multi-threaded performance.
So I believe that an ARM CPU from a flagship smartphone should be much faster in single thread than any existing IBM POWER CPU.
On the other hand, I do not know if there exists any ARM-based server CPU that can match the multi-threaded performance of the latest IBM POWER CPUs.
At least for some workloads the performance of the ARM-based CPUs must be much lower, as the IBM CPUs have huge cache memories and very fast memory and I/O interfaces.
The ARM-based server CPUs should win in performance per watt (due to using recent TSMC processes vs. older Samsung processes) and in performance per dollar, but not in absolute performance.
And the single-thread side isn't that good either, but SMT8 is quite a nice software-licensing trick.
But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.
Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.
While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.
However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes have been incorporated many years ago into the POWER ISA.
So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.
It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA instead of the current one, but the two names refer to the same thing. PowerPC was a rebranding attempt, but then they returned to the original name.
Legacy apps on s390x do not move because IBM put out a press release, and IBM does not get fatter cloud margins by joining the same ARM pile as other vendors. Mainframe migration is not a weekend project. "Easier" usually means somebody signs a six-figure check first.
But, what are their legacy finance-sector customers asking for here? Are they trying to add ARM to LinuxONE, while maintaining the IBM hardware-based nine nines uptime strategy/sweet support contract paradigm?
If so, why don't the Visas of the world just buy 0xide, for example?
> develop new dual‑architecture hardware that helps enterprises run future AI and data intensive workloads with greater flexibility, reliability, and security.
> "This moment marks the latest step in our innovation journey for future generations of our IBM Z and LinuxONE systems, reinforcing our end-to-end system design as a powerful advantage."
IBM could put an entire 1k core ARM mini-cloud inside a Z series configuration and it could easily be missed upon visual inspection. Imagine being able to run banking apps with direct synchronous SQL access to core and callbacks for things like real-time fraud detection. Today, you'd have to do this with networked access into another machine or a partner's cloud which kills a lot of use cases.
If I were IBM, I would set up some kind of platform/framework/marketplace where B2B vendors publish ARM-based apps that can run on Z. Apple has already demonstrated that we can make this sort of thing work quite well with regard to security and how locked down everything can be.
The value in the z series is in the system design and ecosystem; IBM could engineer an architecture migration to custom CPUs based on ARM cores. They would still be mainframe processors, but IBM would likely be able to reduce its investment in silicon and supporting software.
They called their new architecture "ESAME" for a while for a pretty obvious reason.
I never would have expected such, but now I'm getting used to it.
I'm waiting for Apple and Microsoft to announce a collaboration. They probably already collaborate, but Apple knows it's bad for marketing.
I'm not sure I can be surprised anymore.
edit: s/390 is big endian.
https://en.wikipedia.org/wiki/Linaro
My gut feeling says to lean more toward the bad side. I am very skeptical when corporations announce "this is for the win". Then I slowly walk over to the Google Graveyard and nod my head wisely in sadness ... https://killedbygoogle.com/
https://www.qualcomm.com/news/releases/2025/09/qualcomm-achi...