AWS CLI Alias for ECS Exec

A few days ago AWS released a much-requested feature, “ECS Exec” - which enables you to “ssh” into one of your containers running in ECS, regardless of whether they are running on AWS Fargate or on Amazon EC2.

I do want to point out that I agree completely with Massimo - opening a remote shell into a container is an anti-pattern that goes against the reason for using containers in the first place, and it is not something you should be doing in production. Anything and everything you need to get out of the container should be written to a log and shipped to an external destination, so that you can still use that information after the container dies.

But sometimes during the development cycle, when you are iterating quickly on your solution and things are not working as you expected, getting into the environment is so damn useful - which is why this feature is awesome.

My colleague Massimo Re Ferre wrote a very detailed blog post on the feature - how you can use it and what you need to do to get up and running. Give it a read.

Another colleague of mine, Nathan Peck, also published a great blog post about how this is supported from day one in AWS Copilot.

I must confess that I had not really used Copilot until now - but it is so simple to use that it will become my go-to tool from now on.

What I loved about Nathan’s post is how simple it is to use Copilot to interact with a container. Stupid simple.

copilot svc exec

And you are in. I love it.

And then I looked at what you need to do to get into a container with the AWS CLI and saw that it is definitely not three words on a command line.

aws ecs execute-command  \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --task ef6260ed8aab49cf926667ab0c52c313 \
    --container nginx \
    --command "/bin/bash" \
    --interactive
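
And that is assuming you already have the task ID in hand. If you do not, you would first need to look it up with something like this (using the same cluster name as the command above):

aws ecs list-tasks \
    --region $AWS_REGION \
    --cluster ecs-exec-demo-cluster \
    --query 'taskArns'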

There has to be an easier way….

If you read my blog last month, you noticed that I posted a list of AWS CLI aliases which make your life a lot easier when using the CLI. And I thought to myself, why not make this command easier to use as well? So here is the alias that I created.

ecs-exec =
  !f() {
    aws ecs execute-command \
      --cluster "${1}" \
      --task "${2}" \
      --container "${3}" \
      --command "/bin/sh" \
      --interactive
  }; f
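
In case you have not used AWS CLI aliases before: the alias goes into ~/.aws/cli/alias, and the definition needs to sit under a [toplevel] section, so the file looks something like this:

# ~/.aws/cli/alias
[toplevel]

ecs-exec =
  !f() {
    aws ecs execute-command --cluster "${1}" --task "${2}" --container "${3}" --command "/bin/sh" --interactive
  }; f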

So let's assume the following:

  • All the prerequisites needed to set this up have been completed (see Massimo’s post)
  • Cluster Name: demo-test-Cluster
  • Task Name: demo-test-Cluster-rjOTwD0y9Ft4
  • Container Name: nginx

To access the shell of the nginx container, the command I would need to run is now:

aws ecs-exec demo-test-Cluster demo-test-Cluster-rjOTwD0y9Ft4 nginx

A lot shorter - isn’t it?

I made a few assumptions here, and you might have to tweak this to suit your specific environment (for example, you might be using /bin/bash instead of /bin/sh).

Also keep in mind that this is not like Copilot, which has more awareness of the environment it resides in - you need to provide the context yourself when you run the command, which is (1) the cluster name, (2) the task name, and (3) the container name (which is not mandatory if there is only a single container running in the task), and they need to be in that order.

So back to the previous example: if the nginx container were the only container in the task, then the command would be:

aws ecs-exec demo-test-Cluster demo-test-Cluster-rjOTwD0y9Ft4
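
One small caveat: with the alias exactly as written above, leaving out the third argument would leave --container without a value, and the command would most likely fail. A small tweak - a sketch, assuming the alias is run by a POSIX-compatible shell - only adds the --container flag when a container name is actually supplied:

ecs-exec =
  !f() {
    aws ecs execute-command \
      --cluster "${1}" \
      --task "${2}" \
      ${3:+--container "${3}"} \
      --command "/bin/sh" \
      --interactive
  }; f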

I have updated the https://github.com/maishsk/aws-alias repo with this new alias, so please feel free to submit a PR if you have improvements.

I would be very interested to hear your thoughts or comments, so please feel free to ping me on Twitter (@maishsk).