NASA unplugs last mainframe (networkworld.com)
57 points by coondoggie on Feb 12, 2012 | hide | past | favorite | 31 comments


I bought one of NASA MSFC's old "mainframes" a few years ago, just for fun. I didn't have three-phase power so I was never able to turn it on, but it was cool to take it apart, examine all of the cards they were using, and see the engineering and design of a million-dollar computer. It was a fully loaded SGI Challenge 10000XL "Rackmount"; I paid $25 for it. It was utterly huge - the size of a refrigerator and the weight of a small car.


Ever thought of donating it to a museum? I mean, I don't see much use in it, apart from showing it to your grandchildren someday.


There aren't many computer museums around Alabama, and it was cost-prohibitive to ship it anywhere as it weighed close to 2000 lbs. Anyway, I was able to find a buyer at $700, and the guy even came and picked it up :)


2000 lbs? Holy crap.

Where was all the weight? Did they use a 1/4 in steel case or something?


The case was pretty thick, maybe not 1/4in but about 2-3mm. A lot of the weight came from the power supplies, though: it had three 48V monstrosities that put out a total of 1900W IIRC. Plus there were disk carriages, a squirrel-cage cooling fan, and what seemed like 100 boards that connected to a common "backplane" for IO and power. It added up to be pretty heavy.


$700,000 each year for maintenance and support

Holy crap. IBM really knows how to negotiate support contracts...

For reference: the z9 maxes out at 512GB RAM and 54 CPUs. In a slightly apples-to-oranges comparison, you could buy the equivalent in pizza boxes every month using just the above support fee. With spare change.
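A back-of-the-envelope sketch of that claim (the pizza-box specs and prices below are assumptions for illustration, not actual quotes):

    # Rough check of the "pizza boxes every month" claim.
    # Commodity-server specs and prices are assumed, not real quotes.
    import math

    support_fee_per_month = 700_000 / 12      # ~$58,333

    z9_ram_gb, z9_cpus = 512, 54              # maxed-out z9
    box_ram_gb, box_cores, box_price = 64, 8, 5_000   # hypothetical 1U box

    boxes = max(math.ceil(z9_ram_gb / box_ram_gb),
                math.ceil(z9_cpus / box_cores))
    print(boxes, boxes * box_price)           # 8 boxes, $40,000 < $58,333

Even with generous per-box pricing, one month of the support fee covers it, which is the "spare change" part.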


And then there were the software licences and cost to develop software.

As a rule of thumb you could scale those costs like this: PC (1x), minis such as the AS/400 (10-50x), mainframe (100x and up).

For example, in the early days of Microsoft there was a version of MS Excel running on IBM mainframes - US$200K per license, per year.

To top that off: when IBM, DEC et al. were among the few providing that kind of computing power (into the 1980s), and the customers that needed it on a broad scale were banks, insurers and research institutions - banks in particular, with their need for many workstations - IBM wrote framework contracts forcing its mainframe customers to also buy IBM PCs as workstations, often for many years.

That was one of the reasons you found all those very expensive IBM PCs in banks - tens of thousands in each of the larger banks - while no-name PCs were already available for a fraction of the price. (A fully equipped PS/2 could easily cost as much as a small car, and a development machine could approach the entry-level price of a Mercedes S-class. Since then PC prices have dropped to a fraction while those car prices have multiplied by 3-5.)

An interesting example of how monopolies and oligopolies are created and kept alive - and of how, even in a market moving as fast as the IT industry, it still takes many years for them to die or lose their grip on their customers.


But then you'd have to reinvent mainframe reliability and IO capacity out of commodity hardware and software.

I'm not saying it's impossible. It's just hard. Maybe even US$ 700,000 hard ;-)


... And then spend the same again on fabric to support that much I/O. Supercomputers/clusters for compute, mainframes for IOPS.


"A supercomputer is way to turn a CPU bound problem into an IO bound problem" as the saying goes.


Well, sort of. That's why I said apples/oranges. Thing is, for that kind of money even a nice FC/IB fabric is in the budget - perhaps only every two months then.

Of course there are latency tradeoffs, no idea what the z9 was used for. But no matter how you spin it, $700k/yr is one hell of a TCO for a piece of hardware that is almost 10 years old.


The z9 is about 6 years old at this point. At the time, 512GB RAM in pizza boxes simply didn't exist.



This line made my day:

Back then, real systems programmers did hexadecimal arithmetic – today, “there’s an app for it!”
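For anyone who never had to do it by hand, the arithmetic meant here is just base-16 address math. A trivial sketch (both values are made up for illustration):

    # Hexadecimal address arithmetic, the kind once done by hand.
    base   = 0x00FA2C00   # hypothetical module load address
    offset = 0x1B8        # hypothetical field offset in a control block
    print(hex(base + offset))   # 0xfa2db8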


Interesting. It's been a while since I last powered down a machine using its power switch.

I suppose it's more dramatic that way. A mainframe certainly deserves the honor of being turned off by a human hand.


Mainframes are awesome tools. I'm curious what NASA's computing fabrics look like today: commodity hardware, or typical "Big Enterprise" dumb, monolithic systems?


The high-performance computing systems at NASA Goddard are described here:

http://www.nccs.nasa.gov/systems.html

And the resources at NASA Ames are described here:

http://www.nas.nasa.gov/hecc/resources/environment.html


NASA was one of the founders of OpenStack, though there are a lot of different parts of the organization.


Well, they could just build supercomputers from off-the-shelf parts and Linux. Like everyone else seems to be doing these days.

Mainframes have magnificent reliability, uptime and throughput, though, and I'm not sure how a commodity stack compares to that. Then again, modern, cheap hardware is pretty reliable. The LHC uses a lot of it in huge Linux farms with MySQL (among other things) for data storage.


They do, hence Pleiades, the 7th fastest supercomputer. http://www.nas.nasa.gov/hecc/resources/pleiades.html


Yes, I knew about that. It is cheaper and more productive (almost certainly for their needs) to just keep doing this.

$700,000 a year that would otherwise be spent on mainframe upkeep buys a lot of hardware.


Is it any wonder we are going broke in this country when we are still spending $30,000 a year to power one computer that probably has half the computational power of an iPhone?
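For what it's worth, $30,000/year in electricity implies a fairly hefty machine. A rough conversion (the power rate is an assumption, and the real bill probably includes cooling and facility overhead):

    # Convert the quoted annual power bill into an average draw.
    # The $/kWh rate is assumed; the real bill likely includes cooling.
    annual_bill = 30_000          # USD/year, from the comment above
    rate = 0.10                   # assumed USD per kWh
    kwh_per_year = annual_bill / rate          # 300,000 kWh
    print(kwh_per_year / (365 * 24))           # ~34 kW average draw

So roughly 34 kW of continuous draw under that assumption, which is well beyond one "computer" in the desktop sense.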


The mainframe referenced in the article is from 2005 (http://en.wikipedia.org/wiki/IBM_System_z9), so it probably has significant computational power.



That figure is mostly the costs of running logistical chains in dangerous areas. It includes soldiers lost to attacks on convoys.

I doubt that fewer convoys will equal fewer attacks - it's just an accounting quirk that the casualties are apportioned to the fuel. The attacks might be less successful, as you won't be spreading the defence so thin, but you wouldn't get a linear reduction in costs.


The z9 comes with a minimum of 16GB of system RAM, uses Fibre Channel drive interconnects, and has dozens of high-performance CPUs.

This is very much BIG iron, not ancient iron.


A Z9 can have up to 54 processors, but those processors are not identical - there are different ones optimized for, say, Java and relational database workloads.

The first mainframe I used, more than 25 years ago, had about a hundred 3278s connected to it. And it was still fast for interactive use when compared to the 8-bit computers of its day. This Z9 was probably commissioned after 2005.


Actually, the zIIP and zAAP processors for "accelerating" Java or DB2 workloads are identical to the other processors in terms of hardware; the underlying firmware just loads different microcode onto the chip to prevent it from running a full z/OS instance.

The difference is one of licensing -- IBM charges less for activating a core if you can only run java|db2|linux on it.

They may actually put an accelerator in the system at some point in the future, but for now it's just a way of using more processors without spending as much.


The specialty procs also tend to run at full speed where IBM usually caps the general processors. My workplace recently upgraded from two Z9's to two Z196's and the difference is astounding.


Massive differences in terms of capability and reliability. Mainframe computing can't be compared on clock cycles alone.


"Mainframe computing is not the same in comparison of clock cycles."

That's the thing everyone has forgotten.



