I bought one of NASA MSFC's old "mainframes" a few years ago, just for fun. I didn't have three-phase power, so I was never able to turn it on, but it was cool to take it apart, examine all of the cards they were using, and see the engineering and design of a million-dollar computer. It was a fully loaded SGI Challenge 10000XL "Rackmount", and I paid $25 for it. It was utterly huge: the size of a refrigerator and the weight of a small car.
There aren't many computer museums around Alabama, and it was cost-prohibitive to ship it anywhere, as it weighed close to 2000 lbs. Anyways, I was able to find a buyer at $700, and the guy even came and picked it up :)
The case was pretty thick, maybe not 1/4 in but about 2-3 mm. A lot of the weight came from the power supplies, though: it had three 48V monstrosities that put out a total of 1900W, IIRC. Plus there were disk carriages, a squirrel-cage cooling fan, and what seemed like 100 boards that connected to a common "backplane" for IO and power. It all added up to something pretty heavy.
Holy crap. IBM really knows how to negotiate support contracts...
For reference: the z9 maxes out at 512GB of RAM and 54 CPUs. In a slightly apples-to-oranges comparison, you could buy the equivalent in pizza boxes every month using just the above support fee. With spare change.
And then there were the software licences and the cost of developing software.
As a rule of thumb, you could calculate those like this: PC (1x), minis such as the AS/400 (multiply by 10-50), mainframe (multiply by 100+).
For example, in the early days of MS there was a version of MS Excel running on IBM mainframes - US$200K per license, per year.
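To make those multipliers concrete, here's a minimal sketch; the $2,000 PC baseline is a made-up assumption for illustration, not a figure from this thread:

    # Rule-of-thumb license-cost multipliers from the comment above:
    # PC = 1x, minis like the AS/400 = 10-50x, mainframe = 100x+.
    pc_license = 2_000  # hypothetical PC license price in USD (assumed baseline)

    print(f"PC:        ${pc_license:,}")
    print(f"AS/400:    ${pc_license * 10:,} - ${pc_license * 50:,}")
    print(f"mainframe: ${pc_license * 100:,}+")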
On top of that, when IBM, DEC, et al. were among the few providers of such computing power (into the 1980s), and the companies that needed that kind of horsepower on a broad scale were banks, insurers, and research institutions - banks in particular, with their need for many workstations - IBM wrote framework contracts forcing their mainframe customers to also buy IBM PCs as workstations, often for many years.
That was one of the reasons you found all those very expensive IBM PCs in banks - tens of thousands in each of the larger banks - while you could already buy no-name PCs for a fraction of the price. (A fully equipped PS/2 could easily cost as much as a small car, and development machines could reach the entry-level price of a Mercedes S-Class. Since then PC prices have dropped to a fraction, while the prices of those cars have multiplied by 3-5x.)
An interesting example of how monopolies and oligopolies are created and kept alive - even in markets moving as fast as the IT industry, it still takes many years for them to die or lose their grip on their customers.
Well, sort of. That's why I said apples/oranges. The thing is, for that kind of money even a nice FC/IB fabric is in the budget - perhaps only every two months, then.
Of course there are latency tradeoffs, and I have no idea what the z9 was used for. But no matter how you spin it, $700k/yr is one hell of a TCO for a piece of hardware that is almost 10 years old.
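A rough back-of-envelope sketch of that comparison; the per-server price is an assumption on my part, only the $700k/yr support figure comes from the thread:

    # Back-of-envelope: annual mainframe support fee vs. commodity "pizza box" servers.
    support_fee_per_year = 700_000  # USD/yr, figure quoted above
    pizza_box_price = 3_000         # USD per commodity 1U server -- assumed, not from the thread

    monthly_budget = support_fee_per_year / 12
    servers_per_month = int(monthly_budget // pizza_box_price)

    print(f"Monthly budget:    ${monthly_budget:,.0f}")
    print(f"Servers per month: {servers_per_month}")
    print(f"Servers per year:  {servers_per_month * 12}")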
Mainframes are awesome tools. I'm curious what NASA's computing fabrics look like today: commodity hardware, or typical "Big Enterprise" dumb, monolithic systems?
Well, they could just build supercomputers from off-the-shelf parts and Linux. Like everyone else seems to be doing these days.
Mainframes have magnificent reliability, uptime, and throughput, though, and I'm not sure how a commodity stack compares to that. Then again, modern cheap hardware is pretty reliable too. The LHC uses a lot of it in huge Linux farms with MySQL (amongst other things) for data storage.
Is it any wonder we are going broke in this country when we are still spending $30,000 a year to power one computer that probably has half the computational power of an iPhone?
That figure is mostly the cost of running logistics chains in dangerous areas. It includes soldiers lost to attacks on convoys.
I doubt that fewer convoys will mean fewer attacks - it's just an accounting quirk that the casualties are apportioned to the fuel. The attacks might be less successful, as you won't be spreading the defence so thin, but you wouldn't get a linear reduction in costs.
A Z9 can have up to 54 processors, but those processors are not identical - there are different ones optimized for, say, Java and relational database workloads.
The first mainframe I used, more than 25 years ago, had about a hundred 3278s connected to it. And it was still fast for interactive use compared to the 8-bit computers of its day. This Z9 was probably commissioned after 2005.
Actually, the zIIP and zAAP processors for "accelerating" Java or DB2 workloads are identical to the other processors in terms of hardware; the underlying firmware just loads different microcode onto the chip to prevent it from running a full z/OS instance.
The difference is one of licensing -- IBM charges less for activating a core if you can only run Java, DB2, or Linux on it.
They may actually put an accelerator in the system at some point in the future, but for now it's just a way of using more processors without spending as much.
The specialty procs also tend to run at full speed, whereas IBM usually caps the general processors. My workplace recently upgraded from two z9s to two z196s, and the difference is astounding.