The AWS World Shook and Nobody Noticed
A few days ago at the AWS Summit in New York there was an announcement which, in my honest opinion, went largely under the radar, and I don’t think many people understand exactly what it means.
The announcement I’m talking about is this one: EC2 Compute Instances for Snowball Edge.
Let’s dig into the announcement. There is a new family of instance types, the sbe1 family, which can run on an AWS Snowball Edge device - essentially a computer with a lot of disks inside.
The Snowball is a service that AWS provides to enable you to upload large amounts of data from your datacenter up to S3. It has always been a very interesting concept, and to me it has always been a one-off mechanism for enticing you to bring more of your workloads and your data to AWS in a much easier way.
I also posted this on Twitter:
I am absolutely gobsmacked about the announcement of EC2 compute for snowball edge. This changes everything #aws
— Maish Saidel-Keesing (@maishsk) July 17, 2018
Since its inception AWS has always beaten the drum and pushed the message that everything will run in the cloud - and only there. That was the premise they built a large part of their business model upon. You don’t need to run anything on-premises, because everything that you would ever want or ever need is available in the cloud, consumed as a service, through an API.
During the course of my career the question has come up a number of times: “Does AWS deploy on-prem?” Of course the answer was always “No, never gonna happen.”
Most environments out there are datacenter snowflakes - built differently, none of them look the same or have the same capabilities, features or functionality. They are unique, and integrating a system into different datacenters is not easy. Adapting to so many different snowflakes is a really hard job, and something we have been trying to solve for many years - trying to build layers of abstraction, automation and standards across the industry. In some ways we as an industry have succeeded, and in others we have failed dismally.
In June 2017 AWS announced general availability of Greengrass, a service that allows you to run Lambda functions on connected devices wherever they are in the world (and more importantly - devices that are not part of the AWS cloud).
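To give a sense of what that looks like in practice, here is a minimal sketch (my own example, not from the announcement) of the kind of Python Lambda function Greengrass deploys to a core device. The topic name is just a placeholder, and the greengrasssdk module is the SDK that gets packaged with the function when it is deployed:

```python
# Minimal sketch of a Greengrass Lambda function running at the edge.
# The topic name is an example; greengrasssdk ships with the deployed function.
import greengrasssdk

# Client for publishing messages to AWS IoT / the local message broker
client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    # This code runs on the device itself, not in the AWS cloud
    client.publish(topic="hello/world", payload="Hello from the edge")
    return
```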
This was the first foot in the door - allowing AWS into your datacenter. The first step of the transformation.
Back to the announcement.
It seems that each Snowball Edge is a server with approximately 16 vCPUs and 32 GB of RAM (I assume a bit more under the hood to cover the overhead of the background processes). So essentially a small hypervisor - most of us have servers which are much beefier than this little box in our home labs, or even our laptops. It is not a strong machine - not by any means.
But now you have the option to run pre-provisioned EC2 instances on this box. Of course it is locked down and you have a limited set of functionality available to you (the same way that you have a set of pre-defined options available in AWS itself - yes, there are literally tens of thousands of operations you can perform, but it is not a free-for-all).
Here is what stopped me in my tracks:
Connecting and Configuring the Device
After I create the job, I wait until my Snowball Edge device arrives. I connect it to my network, power it on, and then unlock it using my manifest and device code, as detailed in Unlock the Snowball Edge. Then I configure my EC2 CLI to use the EC2 endpoint on the device and launch an instance. Since I configured my AMI for SSH access, I can connect to it as if it were an EC2 instance in the cloud.
Did you notice what Jeff wrote?
“Then I configure my EC2 CLI to use the EC2 endpoint on the device and launch an instance”
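To make that concrete, here is a minimal sketch of the same idea using boto3 rather than the CLI. This is my own illustration, not code from the announcement - the endpoint address, port, AMI ID and instance size are all placeholders for whatever your unlocked device actually reports:

```python
import boto3

# Placeholder endpoint - the real IP address and port come from your unlocked
# Snowball Edge device, not from AWS in the cloud.
SNOWBALL_EC2_ENDPOINT = "https://192.0.2.10:8008"

# Point a regular EC2 client at the device instead of the public AWS endpoint
ec2 = boto3.client(
    "ec2",
    endpoint_url=SNOWBALL_EC2_ENDPOINT,
    region_name="us-east-1",   # dummy region; the calls never leave your datacenter
    verify=False,              # the device presents its own certificate
)

# Launch one of the AMIs pre-provisioned onto the device when the job was created
# (the AMI ID and instance size below are made-up examples).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="sbe1.medium",
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

The point is not the specific calls - it is that the same SDK and the same API shape you use against the cloud now works against a box sitting in your rack.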
Also this little tidbit...
“S3 Access – Each Snowball Edge device includes an S3-compatible endpoint that you can access from your on-device code. You can also make use of existing S3 tools and applications”
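Again as a rough sketch under the same assumptions (the endpoint, port and bucket name are placeholders of my own), your existing S3 tooling really does just get re-pointed at the device:

```python
import boto3

# Placeholder endpoint - the real address and port come from the device configuration
SNOWBALL_S3_ENDPOINT = "https://192.0.2.10:8443"

s3 = boto3.client(
    "s3",
    endpoint_url=SNOWBALL_S3_ENDPOINT,
    region_name="us-east-1",   # dummy region; everything stays on the device
    verify=False,              # the device uses its own certificate
)

# The same calls you would make against S3 in the cloud, served by the box in your rack
s3.put_object(Bucket="my-snowball-bucket", Key="hello.txt", Body=b"written at the edge")
obj = s3.get_object(Bucket="my-snowball-bucket", Key="hello.txt")
print(obj["Body"].read())
```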
That means AWS just brought the functionality of the public cloud - right into your datacenter.
Does it have all the bells and whistles? Is it infinitely scalable, able to run complex MapReduce jobs? Hell no - that is not what this is for. (Honestly - I cannot actually think of any use case where I personally would want to run an EC2 instance on a Snowball - at least not yet.)
Now if you ask me - this is a trial balloon that they are putting out there to see if the solution is viable, and whether it is something their customers are interested in using.
If this works - for me it is obvious what the next step is: Snowmobile.
Imagine being able to run significantly more workloads on-prem - same AWS experience, same API - and seamlessly connected to the public cloud.
Ladies and gentlemen. AWS has just brought the public cloud smack bang right into your datacenter.
They are no longer a public cloud only company - they provide hybrid cloud solutions as well.
If you have any ideas for a use case to run workloads on Snowball - or if you have any thoughts or comments - please feel free to leave them below.