Getting started with an Amazon EKS Anywhere local cluster on AWS Cloud9

The keenly awaited Amazon EKS Anywhere is a new deployment option for Amazon EKS that helps customers simplify the creation and operation of Kubernetes clusters on customer-managed infrastructure, aka “your data center or servers” ;-) It became generally available last week, on Sept 8th 2021.

EKS Anywhere gives customers on-premises Kubernetes operational tooling that’s consistent with Amazon EKS. Customers can leverage the EKS console to view all of their Kubernetes clusters (including EKS Anywhere clusters) running anywhere, through the EKS Connector (which is in public preview).

EKS Anywhere is open-source, and any customer can download and install it on their on-premises infrastructure. The Amazon EKS Anywhere Support Subscription is available for purchase in all AWS public commercial regions.

EKS Anywhere currently supports VMware vSphere for production deployments, with support for other deployment targets in the near future, including bare metal coming in 2022. There is also an option to spin up local clusters using Docker on Ubuntu or macOS (for dev and testing purposes only), and that is what we will create in this blog ..

Note: I had also blogged about the other offering from AWS Container Services — ECS Anywhere, a feature of Amazon ECS that enables you to easily run and manage container workloads on customer-managed infrastructure. You can refer to that blog for more details.

The goal of this blog is to launch a local cluster on a single AWS Cloud9 instance running Ubuntu, which is similar to what developers will need to test things locally.

Statutory warning !! — As always, these steps are not comprehensive; this is my “personal getting up-to-speed” hacky way of learning stuff, and you should rely on the official AWS documentation for the complete picture ..

Also, look at the documentation to get more clarity on the key concepts and the differences between EKS Anywhere (on your infra) and Amazon EKS (on AWS Cloud) ..

Let's start the engines !!

I followed the steps in the documentation for creating a local cluster —


I absolutely love AWS Cloud9, so I used it to launch an m5.xlarge instance type with Ubuntu (it might work with lower-end instances, but I have not tried). The local cluster has some basic infra and software pre-reqs, so please check the documentation. The nice things about AWS Cloud9: a totally cloud way of development using a browser, nothing needs to be installed locally, security is tied to the AWS console, a bunch of software like Docker comes pre-installed, you can select either Amazon Linux or Ubuntu, you can add IAM roles to the underlying EC2 instance running Cloud9, and a whole lot of other cool features.

export EKSA_RELEASE="0.5.0" OS="$(uname -s | tr A-Z a-z)"
curl "${EKSA_RELEASE}/${OS}/eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz" \
--silent --location \
| tar xz ./eksctl-anywhere
sudo mv ./eksctl-anywhere /usr/local/bin/
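As a quick sanity check of the variable expansion above, here is a dry run that only prints the artifact filename the curl would fetch (no download; the release version is the one used in this post):

```shell
# Dry run: show the tarball name the install command above would download.
EKSA_RELEASE="0.5.0"
OS="$(uname -s | tr A-Z a-z)"   # "linux" on Cloud9/Ubuntu, "darwin" on macOS
echo "eksctl-anywhere-v${EKSA_RELEASE}-${OS}-amd64.tar.gz"
```

On the Ubuntu Cloud9 instance this prints eksctl-anywhere-v0.5.0-linux-amd64.tar.gz.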
  • Start the installation of a local EKS Anywhere cluster
$ eksctl version
$ eksctl anywhere version
$ CLUSTER_NAME=dev-cluster
$ eksctl anywhere generate clusterconfig $CLUSTER_NAME \
   --provider docker > $CLUSTER_NAME.yaml

The cluster config file looks like this:

apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: Cluster
metadata:
  name: dev-cluster
spec:
  clusterNetwork:
    cni: cilium
  controlPlaneConfiguration:
    count: 1
  datacenterRef:
    kind: DockerDatacenterConfig
    name: dev-cluster
  externalEtcdConfiguration:
    count: 1
  kubernetesVersion: "1.21"
  workerNodeGroupConfigurations:
  - count: 1
---
apiVersion: anywhere.eks.amazonaws.com/v1alpha1
kind: DockerDatacenterConfig
metadata:
  name: dev-cluster
spec: {}

Create the cluster; it took around 6 minutes or so for me ..

$ eksctl anywhere create cluster -f $CLUSTER_NAME.yaml
The eks-anywhere installation was pretty quick — done in a matter of a few minutes

Ensure that kubectl is now configured to communicate with the EKS anywhere cluster ..

$ export KUBECONFIG=${PWD}/${CLUSTER_NAME}/${CLUSTER_NAME}-eks-a-cluster.kubeconfig
$ kubectl get ns
NAME                                STATUS   AGE
capd-system                         Active   8m8s
capi-kubeadm-bootstrap-system       Active   8m19s
capi-kubeadm-control-plane-system   Active   8m11s
capi-system                         Active   8m22s
capi-webhook-system                 Active   8m24s
cert-manager                        Active   9m3s
default                             Active   10m
eksa-system                         Active   7m44s
etcdadm-bootstrap-provider-system   Active   8m18s
etcdadm-controller-system           Active   8m16s
kube-node-lease                     Active   10m
kube-public                         Active   10m
kube-system                         Active   10m
$ kubectl get nodes
NAME                                STATUS   ROLES                  AGE     VERSION
dev-cluster-4l5d5                   Ready    control-plane,master   10m     v1.21.2-eks-1-21-4
dev-cluster-md-0-7dffd8b78d-d7cz8   Ready    <none>                 9m59s   v1.21.2-eks-1-21-4
$ kubectl get pods -A

Followed the steps to deploy a test application —

$ kubectl apply -f ""
deployment.apps/hello-eks-a created
service/hello-eks-a created
# forward to port 8080, so that we can view the test application within Cloud9
$ kubectl port-forward deploy/hello-eks-a 8080:80
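Since the manifest URL did not survive here, this is a rough sketch of what a hello-eks-a style Deployment and Service could look like — the image name, labels, and replica count are my assumptions, not the actual manifest from the docs:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-eks-a
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-eks-a
  template:
    metadata:
      labels:
        app: hello-eks-a
    spec:
      containers:
      - name: hello
        # assumed image name, for illustration only
        image: public.ecr.aws/aws-containers/hello-eks-anywhere:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-eks-a
spec:
  selector:
    app: hello-eks-a
  ports:
  - port: 80
```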

Now, you can view the test application in a browser within Cloud9 via Tools → Preview → Preview Running Application ..

test app running on EKS Anywhere local cluster !!

Follow the documentation to create the IAM roles and policies for the EKS Connector.

Note: I kind of tripped at this step, while creating the roles … Thanks to my colleague, Yohan, who helped me out ;-) There are two roles involved here, which will need to be created — a service linked role and an IAM role .. The documentation has been updated to make the steps more explicit !!

a service linked role and an IAM role for connector agent
Select EKS Anywhere as the provider

Sidenote: Folks have registered other managed Kubernetes providers like GKE using the EKS Connector ;-)

EKS connector with other Kubernetes providers —

Since I registered the cluster via the AWS console, I could download and apply the configuration file. The steps are slightly different if you have registered the cluster via the AWS CLI.

$ kubectl apply -f eks-connector.yaml
$ aws eks describe-cluster \
   --name "eks-a-1" \
   --region us-west-2
{
    "cluster": {
        "name": "eks-a-1",
        "arn": "arn:aws:eks:us-west-2:xxx:cluster/eks-a-1",
        "createdAt": "2021-09-14T14:18:58.417000+00:00",
        "status": "ACTIVE",
        "tags": {},
        "connectorConfig": {
            "activationId": "b6f5ee8b-9c2f-43b6-ae38-3e8d7668f7d6",
            "activationExpiry": "2021-09-17T14:18:58.011000+00:00",
            "provider": "EKS_ANYWHERE",
            "roleArn": "arn:aws:iam::xxx:role/AmazonEKSConnectorAgentRole"
        }
    }
}

Check the EKS console, and verify the cluster is “ACTIVE”
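To script the same check, you can pull the status field out of the describe-cluster JSON — a sketch; the sample response below is a trimmed, seeded stand-in for the real output:

```shell
# Seed a trimmed sample of the describe-cluster response (stand-in for the real call)
cat > cluster.json <<'EOF'
{
  "cluster": {
    "name": "eks-a-1",
    "status": "ACTIVE"
  }
}
EOF
# Extract the status value without jq
status=$(grep -o '"status": "[A-Z_]*"' cluster.json | cut -d '"' -f 4)
echo "${status}"
```

In practice you would skip the file and ask the CLI directly: aws eks describe-cluster --name eks-a-1 --region us-west-2 --query cluster.status --output text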

We need to follow the documented steps to grant permissions to view the cluster from within the AWS EKS console.

I followed the instructions to get the Role ARN of the Cloud9 instance

c9builder=$(aws cloud9 describe-environment-memberships --environment-id=$C9_PID | jq -r '.memberships[].userArn')
if echo ${c9builder} | grep -q user; then
    rolearn=${c9builder}
    echo Role ARN: ${rolearn}
elif echo ${c9builder} | grep -q assumed-role; then
    assumedrolename=$(echo ${c9builder} | awk -F/ '{print $(NF-1)}')
    rolearn=$(aws iam get-role --role-name ${assumedrolename} --query Role.Arn --output text)
    echo Role ARN: ${rolearn}
fi
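To see what the awk in the assumed-role branch extracts, here is a dry run on a made-up assumed-role ARN (the account id and role name are placeholders):

```shell
# Hypothetical assumed-role ARN, shaped like the one Cloud9 reports
c9builder="arn:aws:sts::111122223333:assumed-role/Cloud9AdminRole/i-0abc123"
# -F/ splits on "/"; $(NF-1) is the second-to-last field, i.e. the role name
assumedrolename=$(echo ${c9builder} | awk -F/ '{print $(NF-1)}')
echo ${assumedrolename}   # Cloud9AdminRole
```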

and I used this Role ARN to update the Amazon EKS Connector cluster role and IAM user yaml files as per the documentation, before applying them to the local cluster using kubectl ..

and yeah !! I can now view the EKS Anywhere cluster from the AWS Console !!

nodes in our eks anywhere cluster
our sample app deployed in the default namespace
kube system namespace
our sample pod ..


The documentation has a bunch of more advanced steps to explore .. I found the installation, including the use of the EKS Connector, very easy and straightforward compared to my past experience of installing Kubernetes using other methods like kops ..

I hope to try my hands on a production cluster on vSphere, once I get access to a vSphere cluster ;-)
