Amazon S3 keeps getting better, now supports versioning

A good thing really can get better! Amazon S3, perhaps the most well-known cloud computing infrastructure service, just got another upgrade. The Simple Storage Service now supports versioning:

Versioning provides an additional layer of protection for your S3 objects. You can easily recover from unintended user errors or application failures. You can also use Versioning for data retention and archiving.

This new feature will give the thousands of websites and services using S3 a quick and easy way to support undo or file revision histories, among other things. It moves S3 “up the stack” a little, in that it can now do something developers could have built themselves, but in a simple and easy-to-use way.
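To make that concrete, here's a minimal sketch of enabling versioning and pulling back an older copy of an object, using the boto3 Python SDK (purely illustrative; the SDK is far more recent than this post, and the bucket and key names are made up):

```python
import boto3  # illustrative only; boto3 is a much later SDK than this post

s3 = boto3.client("s3")

# Turn versioning on for a bucket (a one-time configuration change).
s3.put_bucket_versioning(
    Bucket="example-bucket",  # hypothetical bucket name
    VersioningConfiguration={"Status": "Enabled"},
)

# After an accidental overwrite or delete, list the stored versions of a key...
versions = s3.list_object_versions(Bucket="example-bucket", Prefix="episode-042.mp3")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["LastModified"], v["IsLatest"])

# ...and fetch any older version by its VersionId to recover its contents.
old = s3.get_object(
    Bucket="example-bucket",
    Key="episode-042.mp3",
    VersionId=versions["Versions"][-1]["VersionId"],  # oldest listed version
)
```

That's the whole "undo" story: every overwrite keeps the previous copy addressable by its VersionId.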

Combine this powerful new functionality with the Import/Export service that launched last year and a couple of recent price drops, and it’s easy to see why Amazon continues to lead the way. Developers continue to make extensive use of the service too: at the end of Q3 2009, there were over 82 billion objects stored in Amazon S3. Just incredible.

I remember when S3 launched back in March 2006, when I was building Podcast Spot, a hosting service for podcasters. It completely changed our business. Global, scalable storage with Amazon worrying about all the details? And for such a small cost? It seemed too good to be true. I’m thrilled to see that S3 just keeps getting better, with relatively frequent price reductions too.

Amazon Web Services: Still getting better

I often think back to 2006 when Dickson and I were in the midst of the VenturePrize business plan competition. It was around that time that Amazon.com launched their first web service, the Simple Storage Service (S3). It had a huge impact on our business, and we’ve been extremely happy customers ever since.

Over the last couple of years, Amazon has introduced a number of additional web services, the most well-known of which might be the Elastic Compute Cloud (EC2). You can think of it as an on-demand computer in the cloud. I had a quick look at it when it launched, but as a Windows shop, we didn’t have the time to invest the extra effort necessary to get it running. Now, Amazon has announced that EC2 will support Windows:

Starting later this Fall, Amazon Elastic Compute Cloud (Amazon EC2) will offer the ability to run Microsoft Windows Server or Microsoft SQL Server. Our goal is to support any and all of the programming models, operating systems and database servers that you need for building applications on our cloud computing platform. The ability to run a Windows environment within Amazon EC2 has been one of our most requested features, and we are excited to be able to provide this capability. We are currently operating a private beta of Amazon EC2 running Windows Server and SQL Server.

Very cool news for Windows developers. It should put some extra pressure on Microsoft too – though apparently they are getting ready to launch something. Watch for more news on that at PDC.

Another interesting new service that Amazon is introducing is a Content Delivery Service:

This new service will provide you a high performance method of distributing content to end users, giving your customers low latency and high data transfer rates when they access your objects. The initial release will help developers and businesses who need to deliver popular, publicly readable content over HTTP connections.

It will run atop S3, so anything that currently exists there can easily be added to the new content delivery network. This is very cool, and will finally bring world-class CDN infrastructure to small businesses. I wish they had introduced this two years ago!
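For a rough idea of what “easily added” might look like in code, here's a sketch using the boto3 Python SDK (illustrative only; the SDK is modern, the bucket name is made up, and the config shown is the minimal legacy form):

```python
import time
import boto3  # illustrative only; at the time this was a raw REST API

cf = boto3.client("cloudfront")

# Point a new distribution at an existing S3 bucket (hypothetical name).
response = cf.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "CDN in front of an existing S3 bucket",
        "Enabled": True,
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-origin",
                    "DomainName": "example-bucket.s3.amazonaws.com",
                    "S3OriginConfig": {"OriginAccessIdentity": ""},
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-origin",
            "ViewerProtocolPolicy": "allow-all",
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "ForwardedValues": {
                "QueryString": False,
                "Cookies": {"Forward": "none"},
            },
            "MinTTL": 0,
        },
    }
)

# Objects are then served from the edge at the distribution's domain name.
print(response["Distribution"]["DomainName"])
```

The key point is that the origin is just your existing bucket; nothing has to be copied or re-uploaded to put it behind the CDN.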

Those are both very important improvements to AWS. Amazon is raising the bar, again. When will Microsoft, Google, and others answer?

Also, I recently noticed that Amazon has redesigned the AWS website. It looks fantastic, in my opinion, and is much easier to navigate. Keep the positive improvements coming!

Amazon now offers an SLA for S3

Amazon announced on Monday the launch of an SLA, or Service Level Agreement, for the S3 web service. The lack of an SLA has always been cited as a “shortcoming” of S3, but I don’t know exactly how many customers have requested it. Enough for them to offer it I guess:

Basically, we commit to 99.9% uptime, measured on a monthly basis. If an S3 call fails (by returning a ServiceUnavailable or InternalError result) this counts against the uptime. If the resulting uptime is less than 99%, you can apply for a service credit of 25% of your total S3 charges for the month. If the uptime is 99% but less than 99.9%, you can apply for a service credit of 10% of your S3 charges.

The SLA is effective as of October 1st, 2007. Jeff makes it sound like they had planned to have an SLA for a long time, but I’m not so sure that’s the case. Doesn’t matter now, they have one!
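The credit tiers are simple enough to capture in a few lines of Python; a quick back-of-the-envelope sketch:

```python
def s3_service_credit(uptime_percent: float, monthly_charges: float) -> float:
    """Service credit under the S3 SLA terms quoted above."""
    if uptime_percent < 99.0:
        return 0.25 * monthly_charges  # 25% credit below 99% uptime
    if uptime_percent < 99.9:
        return 0.10 * monthly_charges  # 10% credit from 99% up to 99.9%
    return 0.0  # SLA met, no credit

# Example: a month at 99.5% uptime with a $1,000 S3 bill earns a $100 credit.
print(s3_service_credit(99.5, 1000.00))  # 100.0
```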

I think SmugMug’s Don MacAskill makes a good point:

Everything fails sometimes.

The SLA payment is rarely comparable to the pain and suffering your customers had to deal with.

Very true. From my perspective, the SLA isn’t a big deal. I hope it helps Amazon land some more customers though!

Read: Amazon

New Pricing for Amazon S3

Late last night Amazon sent an email to S3 customers announcing an upcoming pricing change. Storage costs will remain the same, but the price for bandwidth is going to change:

Current bandwidth price (through May 31, 2007)
$0.20 per GB – uploaded
$0.20 per GB – downloaded

New bandwidth price (effective June 1, 2007)
$0.10 per GB – all data uploaded

$0.18 per GB – first 10 TB / month data downloaded
$0.16 per GB – next 40 TB / month data downloaded
$0.13 per GB – data downloaded / month over 50 TB

$0.01 per 1,000 PUT or LIST requests
$0.01 per 10,000 GET and all other requests

They claim that if the pricing had been applied to usage for March 2007, about 75% of customers would have seen their bill decrease. In some cases however, the price change makes things significantly more expensive, as this thread points out:

Uploading 1GB of 4K files will cost $2.72 instead of $0.20
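That number checks out: 1GB of 4K files is 262,144 objects, and under the new model each one costs a PUT request on top of the upload bandwidth. A quick sketch of the arithmetic:

```python
# Reproducing the forum example: uploading 1 GB as 4 KB objects under the
# new pricing (one PUT request per object, plus upload bandwidth).
GIB = 2**30
object_size = 4 * 2**10            # 4 KB per object
num_objects = GIB // object_size   # 262,144 PUT requests

put_cost = (num_objects / 1_000) * 0.01   # $0.01 per 1,000 PUTs -> $2.62
upload_cost = 1 * 0.10                    # $0.10 per GB uploaded -> $0.10

print(round(put_cost + upload_cost, 2))   # 2.72, versus $0.20 under old pricing
```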

We haven’t yet figured out how Podcast Spot will be affected, but I suspect we’ll see a slight decrease. I’m also interested to hear from Don MacAskill on SmugMug.

UPDATE: Don talks about the new pricing model here and says they’ll save money.

Read: S3 Forums

Amazon S3: 5 billion objects and counting

One of the more interesting stories to come out of the Web 2.0 Expo is that of Amazon.com’s Simple Storage Service (S3) passing 5 billion stored objects. You can watch a video of Jeff Bezos talking to conference attendees here. According to Bezos, S3 was storing just 800 million objects in July 2006. That’s some pretty incredible growth, and I expect it will only continue.

More and more I am convinced that web services like S3 will become the norm. Companies like Amazon.com, Google, Microsoft, Yahoo, and eBay are all very good at building and maintaining the infrastructure their services require to operate smoothly and efficiently. It only makes sense to further monetize that competency.

S3 has had an incredibly positive impact on Podcast Spot, and I know we’d be able to make use of additional web services if only they existed.

Read: TechCrunch

Amazon.com could power the new web

I have become really interested in Amazon.com over the last little while. The stuff they are doing with their web services platform is just amazing, and it is already having a huge impact on how web businesses are created and operate. We are using Amazon’s Simple Storage Service (S3) in Podcast Spot, and I absolutely love it. Taking the guts of Amazon and making them available as services to other companies was a very smart decision in my opinion, despite what the investors on Wall Street might think.

Here are some excellent resources if you’d like to learn more:

I’m definitely watching to see what else Amazon launches because chances are, it’ll be useful. So far companies like Yahoo and Google have received far more Web 2.0 attention, but I think that will begin to change, and people will realize that Amazon.com is actually one of the most interesting tech companies around.

Amazon EC2

I’ve been meaning to post about this for some time now, but haven’t had a chance. I was really excited last Thursday when I read about Amazon’s new web service called “Elastic Compute Cloud”, or EC2 for short. After seeing what they did with S3, I was particularly interested in how EC2 would fit in. And boy, does it ever fit in:

Create an Amazon Machine Image (AMI) containing your applications, libraries, data and associated configuration settings. Or use our pre-configured, templated images to get up and running immediately. Upload the AMI into Amazon S3. Amazon EC2 provides tools that make storing the AMI simple. Amazon S3 provides a safe, reliable and fast repository to store your images.
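In later SDK terms, launching an instance from an AMI stored in S3 boils down to something like this (a boto3 Python sketch, purely illustrative; the original beta used command-line tools, and the AMI ID here is made up):

```python
import boto3  # illustrative only; boto3 postdates the EC2 beta by years

ec2 = boto3.client("ec2")

# Launch one instance from an AMI (the image itself lives in S3).
resp = ec2.run_instances(
    ImageId="ami-12345678",   # hypothetical AMI ID
    MinCount=1,
    MaxCount=1,
    InstanceType="m1.small",  # the original EC2 instance size
)
print(resp["Instances"][0]["InstanceId"])
```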

Nicely integrated with S3. The other great feature? Bandwidth between EC2 and S3 is FREE. I can only imagine how much that could save. With EC2, you pay only for the instance hours you use. Each machine instance is equivalent to “a 1.7Ghz Xeon CPU, 1.75GB of RAM, 160GB of local disk, and 250Mb/s of network bandwidth”. Pretty darn sweet.
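Some rough numbers, assuming EC2's launch price of $0.10 per instance-hour and S3's current $0.20/GB transfer rate:

```python
# Back-of-the-envelope EC2 costs at the launch price of $0.10 per instance-hour.
hours_per_month = 24 * 30
instance_cost = hours_per_month * 0.10   # $72.00 for a month of one instance

# Free EC2<->S3 bandwidth: moving 500 GB/month between the two would otherwise
# cost 500 * $0.20 = $100 in transfer fees alone.
saved_transfer = 500 * 0.20

print(instance_cost, saved_transfer)     # 72.0 100.0
```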

I’m already thinking of ways we could integrate this into Podcast Spot (we’re already using and loving S3). I’ve only taken a cursory glance at the forums, API and other documentation, but it seems to me there are two missing features that are extremely desirable: persistent storage and support for Windows (currently it only supports Linux). The AWS guys seem to be pretty on top of things though, so if enough people request them, I’m sure the features will get implemented.
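Until persistent storage arrives, the obvious workaround is to treat the instance disk as scratch space and checkpoint anything durable to S3. A minimal sketch of that pattern in Python (using the boto3 SDK purely for illustration; the bucket and paths are hypothetical):

```python
import boto3  # illustrative only; boto3 is a modern SDK

s3 = boto3.client("s3")

# Treat local disk as scratch space; anything durable gets copied to S3.
def checkpoint(local_path: str, bucket: str, key: str) -> None:
    """Push a local file to S3 before the instance (and its disk) goes away."""
    s3.upload_file(local_path, bucket, key)

def restore(bucket: str, key: str, local_path: str) -> None:
    """Pull state back down when a fresh instance boots."""
    s3.download_file(bucket, key, local_path)

# On boot:     restore("example-bucket", "state/db.sqlite", "/data/db.sqlite")
# On shutdown: checkpoint("/data/db.sqlite", "example-bucket", "state/db.sqlite")
```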

I can’t wait to see what Amazon releases next!

Read: TechCrunch