Amazon EC2 Beta (2006) (amazon.com)
119 points by Jarred on Aug 27, 2016 | 68 comments


It'll never take off. Even if people want to use it, which seems doubtful, there's no way it can ever be economical at scale. Where's the value prop here?


I have a few qualms with this service:

1. For a Linux user, you can already build such a system yourself quite trivially by getting a server set up, installing Xen, and then rolling out your own VMs that way. From Windows or Mac, these VMs could then be accessed over SSH.

2. It doesn't actually replace a dedicated server. This does not solve the connectivity issue.

3. It does not seem very "viral" or income-generating. I know this is premature at this point, but is it reasonable to expect to make money off of this?

(/s)


In case people miss the reference, see the oft-quoted, non-sarcastic comment about Dropbox's launch here: https://news.ycombinator.com/item?id=8863


And look at the prices. No one will ever pay that much just to avoid having to drive to the data center at 2am and swap out a hard drive.


Ha, were people really saying this back then?


Let's look at some August 24, 2006 posts from the peanut gallery...

Here's a skeptical comment[1] from Slashdot that was scored by that community as "5:Insightful":

>"Sun's grid effort has pretty much laid an egg. Perhaps I have the economics wrong, but isn't it more cost effective to build your own cluster out of discarded PCs?"

As counterpoint, here's a positive comment[2] to that same announcement that was more enthusiastic about the possibilities:

>Jeremy Wright • 10 years ago: Holy @#$@... We were about to move to Rackspace mainly for uptime support.

If you take Jeremy Wright's research into the new economics of cloud computing and multiply it by a hundred thousand like-minded people, you can trace a direct line from that kind of post to the Rackspace sale to private equity that made the HN front page yesterday[3]. The seeds of Rackspace's sale 10 years later were sown at the very moment of Amazon's EC2 beta announcement.

[1] https://slashdot.org/comments.pl?sid=194921&cid=15971136

[2] http://disq.us/p/16wj3u

[3] https://news.ycombinator.com/item?id=12365956


No.


Because there was no HN then :)


Fun to look through old EC2 emails from back in the day (this one from 2007):

> We have an account at that is nearing its EC2 limit - we need to be able to create more than 20 instances, preferably as many as you can give us. Our application is exploding - we're doing well over 5M page views / day and it's doubling every 2 or 3 days, so the request is rather urgent. Can you have someone call or email us right away?

I remember at that time that it was just so much more expensive than physical servers at iWeb. Sure it was convenient, but the performance per dollar was something like 1/3 or 1/4. Crazy.


The value in EC2 has never been straight-up cost; it's handling wild fluctuations in scale. On sites like Reddit, where peak traffic can be 10x or 100x average traffic, the savings come from being able to scale up for the top 5th percentile without paying a hundred times more year-round.

Now, though, the benefit of AWS is all the add-ons that you can manage through the same system: DNS, messaging, databases, firewalls, load balancing, etc.

That said, we massively overprovision our servers at my company and it's still probably less than half of what we'd pay at Amazon. That pays for my time to take care of the servers, at least.


Actually, renting servers is cheap; renting a stable network isn't.

When you rent servers you either need an overlay network via a VPN or you end up renting a rack (1/3, half, full) and connecting it yourself. But as you might guess, you are now inside a single datacenter; if it burns down or somehow loses its connection, you are offline. That's not a big deal for some sites, but some stuff will suffer from it.

AWS and clouds are more about networks and traffic spikes than just a bunch of servers. It's also easier to just replace a system in an outage than on most other services, where you mostly try to heal it; on clouds you just trash it. We have some small sites on 3 AWS Small instances and an AWS RDS master-slave pair. Whenever one instance breaks, or AWS notifies us that an instance will be recycled, we just reprovision it. We are also on AWS Frankfurt, which hasn't had a real outage yet. Google's pricing would be in our favor, but there's no RDS-style Postgres yet; same for Azure (even though we get €130 per month on 7 accounts for free via a startup program, for 3 years or so).

However, our software is sold to customers as hardware and is not designed to run on a cloud (yet), mostly because of sensitive data.


> I remember at that time that it was just so much more expensive than physical servers at iWeb. Sure it was convenient, but the performance per dollar was something like 1/3 or 1/4. Crazy.

This is mostly still true.


There's a large number of small managed-services firms doing good business pulling people who can't actually afford AWS back out of it.


Kids today don't remember needing a purchase order to buy hardware that had to be shipped, racked and configured. Adding scale went from 6 weeks to 6 seconds.


I know, right?

At one company I worked for, it would take literally 6 months for the internal IT organization to provision a VM and configure it to the point that it was actually usable, and they would charge back hundreds of dollars a month for it. For physical/dedicated hardware, you would need to set the wheels in motion 12-18 months in advance of when you needed it.

That IT department (I still know people there) is currently flailing desperately for a solution to the problem of the business having discovered how quickly it can provision on AWS and what it costs there. Even tho' it "looks" expensive, it is still a massive saving compared to the fully loaded cost of many large companies' IT departments. And even if it takes an hour to spin up an instance, that is still an unbelievable miracle to people who were used to the old way. Questions are being asked for which there are no easy answers...


I've experienced that first hand. I was told it would take 3 weeks to set up and provision a VM with IIS installed. This was a Microsoft shop, so one would assume the massive VMware cluster they ran had dozens of instances like that.

The guys in IT couldn't understand why the devs and the rest of the business were rapidly moving to Azure...


The vast majority of businesses do not need to scale on a dime.

You're paying 30-40% more to provision in 6 seconds on AWS (and let's be real, it's 5 minutes at least for most resources like instances), but if you've got the cash to waste, go for it.

As always, marketing is key.


It's not just scaling - a lot of the value comes from how you can change your practices to be more reliable. Instead of the classic model where you get servers set up and run them for years, now every application, no matter its scale or maturity, uses APIs to interact with your infrastructure and has automated blue-green deployments; every developer (or your CI process) can spin up instances to test in a production-like environment; you have things like RDS/ELB/S3/etc. as turn-key services which anyone can take for granted when designing their application; and everything can be secured with limited-scope API keys / policies. When you suddenly get a spike in demand (annual reporting, unanticipated new work, a one-time migration / reprocessing, etc.) you can have everything you need instantly, without a long procurement lead time or having to either repurpose existing infrastructure or find a use for new hardware for the rest of its service lifetime.
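
As a rough illustration of the "every developer (or your CI process) can spin up instances" part, here's a minimal boto3 sketch; the AMI ID, key pair, and security group are hypothetical placeholders, not anything from this thread:

    # Hypothetical sketch: a CI job launching a short-lived,
    # production-like test instance from a pre-baked AMI.
    import boto3

    ec2 = boto3.resource("ec2", region_name="us-east-1")

    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",           # hypothetical app AMI
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
        KeyName="ci-test-key",                     # hypothetical key pair
        SecurityGroupIds=["sg-0123456789abcdef0"],
    )

    instance = instances[0]
    instance.wait_until_running()   # poll until the instance is up
    instance.reload()               # refresh attributes (e.g. public IP)
    print(instance.public_ip_address)

    # ... run the test suite against it, then tear it down:
    instance.terminate()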

Absolutely all of that _could_ be done in a traditional IT environment, but it's rare to see it done at the level of usability that AWS (or Azure, Google, etc.) offers, and that gap costs you in staff time, technical debt, and time-to-recover after failures. You definitely can beat AWS on pricing, but it requires both sufficient scale and an ongoing commitment to spend staff time developing and supporting infrastructure, which is usually an area where organizations choose to skimp.


When I last looked, AWS EC2 wasn't just 30-40% more expensive than renting a dedicated server, but up to 5-10x as expensive.

And if you're using a lot of outgoing traffic, AWS gets even worse in comparison.


There comes a point where it is way cheaper not to deal with your own IT. My company has a many-thousand-core cluster with ~30PB of storage on prem. We finally decided that maintaining that ourselves was nuts, and while it's expensive as hell to pay our cloud provider, it ends up being cheaper in the long run for multiple reasons.


That's completely nuts - for 30PB you'll be paying something like $1.5M per month for storage and compute, while you could buy all the hardware for maybe $3-5M and rent space for it for around $50k per month. Over 3 years the same cluster would cost you over $50M on AWS versus roughly $10M to do it yourself; even being generous and adding 10 engineers at $2.5M/year (for the whole team) to run it, that's $17.5M to DIY. There are some situations where it makes sense, like if your baseline is low and your fluctuation is high, but IMO if you have a baseline of 30PB, you'd be able to pay for a lot more engineers if it was in-house.
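
Spelled out, the back-of-envelope comparison above runs roughly like this (every figure is this comment's assumption, not a quoted price):

    # 3-year comparison using the assumed figures above (not real quotes).
    months = 36
    aws_total = 1_500_000 * months       # ~$54M on AWS over 3 years
    diy_infra = 10_000_000               # hardware + colo rent, rounded up
    diy_staff = 2_500_000 * 3            # ~10 engineers, $2.5M/yr team total
    diy_total = diy_infra + diy_staff    # ~$17.5M to DIY
    print(aws_total, diy_total)          # 54000000 17500000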


We haven't fully transferred yet, but yes, our monthly bills are quite large. From everything I've heard it's been a net savings. There are other advantages, such as the cheaper 'deep storage' tiers, spinning up just the compute we need on demand instead of having to worry about keeping it all packed, etc. It's going to be expensive and a pain in the ass either way.

Our largest problem has been the realization that "the cloud" is not in fact infinite, despite what the commercials would have you believe. We are quite frequently told to cool our jets.


You can get those same "deep storage" tiers on prem for a fraction of the cost of AWS. There's no scenario, short of running your systems at less than 40% capacity, in which AWS is cheaper. You have to be criminally inefficient or very, very small (like less-than-one-server small) for AWS to EVER be a cost play.

I should add: bursty workloads like batch processing can be a cost play, simply because a once-a-month event that is time-sensitive can benefit greatly from instant* CPU that you don't have to pay for the rest of the month. Those workloads tend to be very niche and one-off for most businesses, though.


I don't know a lot about the huge-scale side, but I tried running a teeny little server of my own on AWS, and it's way overpriced and over-complicated. You can get a single server for like 20% of the price from any number of hosting services. Digital Ocean works pretty well for me right now.


AWS is somewhat lucky in that its best market was primarily organizations that wanted an offering that was simple and transparent, where price/performance is not the axis it competes well on. This is rather different from Amazon's main business, which is intensely focused on lowering costs, and where merely being online is, in its own way, performance.

Internal IT in the Fortune 500 is so unbelievably terrible that even the worst mass shared-hosting provider probably offers more compelling value. The bar Amazon had to cross was less technical than it was a value proposition strong enough to make the transition worth the future benefits.


At that scale you can negotiate significant discounts though.


When buying IT equipment worth millions you will get very significant discounts, too.


And if you're in the US, you can write $500k a year off as an immediate expense when purchasing your gear, versus depreciating it. AWS spend is ongoing opex that's gone forever.


Also a very good point. The difference in price between what I'd pay for my personal work and my professional work isn't nearly as large as one would assume based on the listed prices.

That does vary by cloud vendor though.


Do you know this to be true, or are you assuming?


AWS has offered fairly significant discounts to organizations I'm familiar with that are well over $1M/month in usage, but those discounts pale in comparison to how much they have saved in labor and availability compared to internally managed IT, even completely ignoring the business-agility kinds of arguments.


> and let's be real, it's 5 minutes at least for most resources like instances

If you've got a pre-baked application AMI, are using EBS-backed instances, and are starting them with auto-scaling or health-checks rather than manually using the console, then no, you really can have new instances up in a couple of seconds.

Mind you, the key (to getting a high ROI from AWS) isn't scaling up on a dime. It's scaling down on a dime, whenever your load decreases for even a minute or two, knowing you can just scale right back up. Being able to run with literally no paid compute-hours spent sitting around doing nothing waiting for a task can save you a rather large amount of money, usually quite easily compensating for that 30-40% AWS surcharge.
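
For the curious, a minimal boto3 sketch of that kind of setup (the group name, AMI ID, and subnet below are hypothetical placeholders):

    # Hypothetical sketch: an auto-scaling group built from a pre-baked,
    # EBS-backed application AMI, so instances come up ready to serve.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")

    asg.create_launch_configuration(
        LaunchConfigurationName="app-v1",           # hypothetical name
        ImageId="ami-0123456789abcdef0",            # pre-baked app AMI
        InstanceType="m4.large",
    )

    asg.create_auto_scaling_group(
        AutoScalingGroupName="app-asg",
        LaunchConfigurationName="app-v1",
        MinSize=0,                                  # allow scaling to zero
        MaxSize=20,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0123456789abcdef0",
        HealthCheckType="EC2",         # or "ELB" behind a load balancer
        HealthCheckGracePeriod=60,
    )

    # Scaling down "on a dime" is just lowering the desired capacity:
    asg.set_desired_capacity(AutoScalingGroupName="app-asg",
                             DesiredCapacity=0)

With MinSize=0, the group can idle at zero paid instances between bursts of work.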


AWS charges by the hour: as soon as you power up an instance, you are charged for a full hour.

If you start and stop one instance 60 times in an hour, you will be charged for 60 hours.

So scaling down on a dime and scaling right back up is more expensive than doing nothing.
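
For illustration, here's how that rounding plays out under the per-hour billing model being described (a small Python sketch):

    import math

    # Partial instance-hours are rounded up, so every start/stop
    # cycle bills at least one full hour.
    def billed_hours(run_minutes):
        return sum(math.ceil(m / 60) for m in run_minutes)

    print(billed_hours([1] * 60))  # 60 one-minute cycles -> 60 billed hours
    print(billed_hours([60]))      # one instance running all hour -> 1 hour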


Bingo. If you've scaled up, you stay scaled up for the instance hour. Otherwise, you just threw money away.


GCE only needs 10 minutes, and then you can spin down.


That's not at all true. You would be charged for exactly one hour.

Instance-hours are just rounded up.

Source: I auto-scale my company's entire stack.


Amazon and people who have been hit by this seem to disagree with you:

"Pricing is per instance-hour consumed for each instance, from the time an instance is launched until it is terminated or stopped. Each partial instance-hour consumed will be billed as a full hour."

https://aws.amazon.com/ec2/pricing/

"You are billed for an EC2 instance-hour for each hour or partial hour (rounded up) that your instance is in the “running” state. Instances that are in any other state (“stopped”, “pending”, etc.) are not billed."

https://aws.amazon.com/premiumsupport/knowledge-center/ec2-i...

Here's a company that got hit hard by that behavior:

"A little-discussed fact about AWS EC2 pricing is that users are billed for each server that runs for any partial hour it runs. That means if a user starts a server and then kills it within five minutes, he is still billed for the full hour. That seems acceptable, but if a user kills a server and replaces it with a new server of the exact same type and location, this move doubles the bill."

http://searchaws.techtarget.com/tip/Paying-the-price-when-an...

You should take a look at this - you are probably costing your company a lot of money.


I'd love it if you could point the way then, because my pre-baked, EBS-backed AMIs always take at least 3-5 minutes to be healthy and processing traffic or requests from the moment they're spun up.

This is not restricted to a specific instance type, nor a specific region.

There is literally no way you are performing compute within seconds of an EC2 instance being instantiated.

I've bookmarked this to come back and benchmark it.


> whenever your load decreases for even a minute or two, knowing you can just scale right back up.

Doesn't AWS still charge by the hour... so if you were spinning instances up and down based on minute-to-minute load, you would be overpaying versus just maintaining a steady baseline.

If you turn an instance off, you need to be pretty confident it will stay off for at least an hour.


Would it make sense to spin up a bunch of instances and then stop them, then manage your own autoscaling by just starting/stopping rather than launching/terminating? What's the downside there? You only pay for EBS storage, which is a tiny fraction of compute costs.
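
For illustration, that start/stop pattern might look roughly like this in boto3 (the instance IDs are hypothetical placeholders):

    # Sketch: "poor man's autoscaling" over a fixed pool of pre-launched,
    # EBS-backed instances. IDs below are hypothetical placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    POOL = ["i-0aaa1111bbb22222c", "i-0ddd3333eee44444f",
            "i-0777888999aaabbbc"]

    def scale_to(n):
        """Keep the first n pool instances running; stop the rest."""
        if POOL[:n]:
            ec2.start_instances(InstanceIds=POOL[:n])
        if POOL[n:]:
            ec2.stop_instances(InstanceIds=POOL[n:])

    scale_to(1)  # off-peak: one running, the stopped ones cost only EBS

One caveat worth noting: a stopped instance keeps its EBS volumes (the cost mentioned above) but loses its public IP unless you attach an Elastic IP, and under hourly billing each start still begins a new billed instance-hour.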


It's not even just scaling. Even just getting our dev, QA, and CI servers started on day 1 is a huge plus.


> 5 minutes at least for most resources like instances

As of this evening, in eu-west-1 it's about 70 seconds.


...


This is rapidly changing. When I took over technical ops at my employer, we were 100% public cloud. I quickly moved us to a private managed cloud and quadrupled our capacity for 20% less spend. Now I'm preparing us to move to colocation, where I'm certain I can save another 40-50% over three years. What's made this possible is technologies like CoreOS, Kubernetes, and Ceph. We are still working on automating the bare-metal network-management pieces, but I am confident that we will soon have an environment that can function in colo without onsite staff. If we automate thoroughly and pre-cable racks, adding capacity is as simple as using a colo provider's remote-hands service to rack and stack new hardware every quarter or so.


Serious question: what are you doing better compared to AWS? Efficiency-wise, why can't you offer your own cloud services and cash in on that 40-50% markup?


Running a single-tenant cloud system is not the same as running a multi-tenant cloud system.


What happens if that colo goes down?


This problem is no different in colo than it is in the public cloud. If your business requires site redundancy, you build that capability into your app and stack, and run a second warm footprint somewhere else.


You wait for it to come back, just like when us-east-1 is having problems every few weeks.


What kind of discount are you talking about? I have heard about the negotiation of a quite large deal, in the high-single to low-double-digit millions committed annually, and the discounts were basically like a sales-tax holiday. Maybe the negotiator just wasn't doing a very good job, but it wasn't the kind of discount I would have expected for such a large commitment either.


We're in the healthcare space, so we have to own our hardware and co-locate. Being a small outfit, it's sometimes so frustrating trying to find the right resources online these days for racking and configuring your own bare metal! I can definitely appreciate the 6 weeks to 6 seconds. We recently added a new database server because we were really struggling at peak times. The 5 weeks it took between ordering the $25k server, configuring the base OS, racking it, replicating current data to it, and then choreographing the switch were brutal. Due to the nature of our product, it had to be a zero-downtime switch. Some days, I wish it were as simple as clicking "upgrade instance" on AWS RDS. Other days, I make myself feel better by calculating the thousands I'm saving a month.


> We're in the healthcare space, so we have to own our hardware and co-locate.

Vendors will say all manner of things about how HIPAA compliance requires you to buy their most expensive services, but the HIPAA legislation and related rules are almost silent with regard to implementation requirements that map to actual technologies you could use. "Quote me the subsection of the Security Rule you are referring to; it will look like 164.308(a)(5)(ii)(D)" is dispositive of this sort of thing.

That's a real thing, by the way. The requirement, in its entirety: "Do you have procedures for creating, changing, and safeguarding passwords?" Did you see the point where it requires hashing the passwords? No, you didn't, because HIPAA doesn't require hashing passwords. It requires you to have some method of "safeguarding" passwords written down somewhere.

[Edit: Parent has clarified that they're dealing with standard paperwork at clients rather than the legislation itself, which makes sense (and, also, oww).]


Pertaining to your edit: apologies for over-simplifying originally. You are absolutely correct that the Security Rule is very, very vague. Many HIPAA audits barely reference the Security Rule and instead use stronger rule sets. Unfortunately, HIPAA is generally documentation of policies as opposed to true technical guidance and requirements. We're also an 8-year-old SaaS company, so many of our agreements pre-date the "cloud" catching up from a compliance perspective. At the speed of healthcare, I imagine it'll take another decade for the industry to realize that an AWS cloud is probably more secure than anything a 10-person organization can cobble together.


A running joke for me is the healthcare providers who worry about whether our firm will use "a database" which "could be hacked" instead of, to make up something which clearly has never been said by anyone regulated in the United States, saving all patient information as drafts in the office manager's Hotmail account.


It's just a draft though. If you don't hit send, clearly it's not in a "database".


If you save an email draft in Outlook, it is in a database.

If your computer is hacked, then an attacker would be able to extract information from that Outlook database.


Sorry, my sarcasm did not come off properly :)


Why is it "have to own"? HIPAA?

I know that both HIPAA and CLIA have issues with things like the spot market, but you can still be approved to use the cloud.


Not so much HIPAA directly, but the business associate agreements that many customers make use of. In order to meet the requirements of many of the agreements we have in place, we have to own any and all hardware that PHI data lives on. They don't exactly accept redlines on those kinds of agreements.


Hi jabzd, we could help you. Check out our offering at www.nirdhost.com, and see what we've done for behavioral tech: http://www.nirdhost.com/blog/2016/how-nirdhost-took-behavior...


> We're in the healthcare space, so we have to own our hardware and co-locate.

https://aws.amazon.com/compliance/hipaa-compliance/


Unfortunately, many standard business-associate agreements with clients have yet to catch up to cloud offerings having better compliance guarantees. Many of our agreements with customers, both recent and legacy, are the reason for the requirement.


Hey Jeff, this EC2 thing sounds like exactly what I'm looking for, except that it seems to be just Linux. I'd like to get FreeBSD running on this instead... is there someone at Amazon I can talk to about making this happen?

</timemachine>


I didn't realise Jeff Barr had been doing these posts all the way back in 2006.


I started writing the AWS Blog way back in November 2004 and am coming up on 2600 posts!


I really feel for pmarca


What's up with pmarca's website? It gives me a 404 on archive.org all the way back to 2007. I think that's the first time I've ever seen one do that using the Wayback Machine. Nothing but 404s.


Intriguing... can you pls elaborate?



