Prashant Lakhera
11 min read · Dec 11, 2024

đź‘‹ AWS EKS Auto Mode: A Game-Changer or Just Hype? My Unbiased Take đź‘‹

Hey everyone!

This is one of the most significant releases from AWS re:Invent 2024. After testing this feature, here’s my unbiased opinion.

If you’ve ever managed a Kubernetes cluster, you know how rewarding it can be, but let’s be honest: It’s also a massive headache. That’s where EKS Auto Mode comes in, making things so much easier that it feels like Kubernetes is on autopilot.

I want to share why EKS Auto Mode is such a game-changer and the limitations and drawbacks I discovered. Stick around; this is going to be exciting!

Why EKS Auto Mode?

Kubernetes is powerful, but setting up clusters, configuring plugins, scaling nodes, and keeping everything running smoothly? That’s a full-time job (or three).

Since its launch, EKS has made things a lot easier. AWS has rolled out many features, like better scaling with tools like Karpenter and open-source goodies that have become standards in the Kubernetes community. But even with all those advancements, there was still a gap. Managing the control plane (AWS manages it for us)? Easy. Managing the data plane (where your apps run)? Not so much.

Let’s simplify this: here is what a typical cluster on Amazon EKS looks like today. As you can see, a portion of the cluster is managed by Amazon Web Services (AWS), including the cluster control plane, the API server instances, and etcd. On the right side of the diagram, you’ll notice a variety of infrastructure and software components managed by customers. These include add-ons, the instances where applications run, and other AWS services required for the applications to function. If this sounds like a lot of work and infrastructure to manage, you’re not alone.

Reference: Automate your entire Kubernetes cluster with Amazon EKS Auto Mode

This is where EKS Auto Mode comes in to save the day.

What’s the Big Deal About EKS Auto Mode?

EKS Auto Mode is about simplifying how you manage your Kubernetes clusters. Imagine creating a fully operational Kubernetes cluster with one click and having everything (compute, storage, networking, and scaling) handled for you. Sounds too good to be true, right? But that’s exactly what EKS Auto Mode does.

Here’s what makes it awesome:

  1. No More Setup Drama: You don’t need to spend hours designing your cluster or configuring plugins. Auto Mode does everything for you, using best practices baked into AWS.
  2. Automatic Scaling & Optimization: Whether your app suddenly gets a surge of users or quiets down, Auto Mode adjusts the resources dynamically. It even picks the best EC2 instance types for your workloads and optimizes them for cost.
  3. Self-Healing Clusters: If something goes wrong (like a node failing), Auto Mode detects the issue, fixes it, and keeps your app running smoothly. No midnight PagerDuty calls for you!
  4. Secure by Default: Auto Mode uses Bottlerocket, a container-optimized OS that’s secure, lightweight, and built for Kubernetes. It also handles OS patching automatically.

Reference: Automate your entire Kubernetes cluster with Amazon EKS Auto Mode

Let’s Break It Down: The EKS Auto Mode Experience

Setting Up a Cluster

Creating a cluster in Auto Mode is ridiculously simple. You just:

  1. Go to the EKS console.
  2. Choose “Auto Mode.”
  3. Click a button and boom! You’ve got a production-ready Kubernetes cluster with everything preconfigured. No tinkering with YAML files or scratching your head over VPC settings.

Note: EKS Auto Mode requires Kubernetes version 1.29 or greater.

Now here comes the interesting part. When you click on the Compute tab, you don’t see any nodes listed, but you do see two managed node pools: a general-purpose node pool and a system node pool.

Or you can check via command line

$ kubectl get nodepools
NAME              NODECLASS   NODES   READY   AGE
general-purpose   default     0       True    28m
system            default     0       True    28m
  • General-purpose node pool: Designed to handle general application workloads. It typically runs the containerized applications deployed by users, such as web servers, APIs, data processing jobs, and other business applications. It provides a flexible and versatile environment suitable for most workloads that do not have specialized system requirements.
  • System node pool: This node pool is primarily reserved for running system-level or infrastructure-related workloads. It ensures that critical Kubernetes components and add-ons required for the cluster’s operation are isolated from user workloads. Examples of such workloads include the Kubernetes DNS service (CoreDNS), logging agents, metrics servers, and networking plugins.
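To illustrate the split, here’s a minimal sketch of how an infrastructure add-on could be steered onto the system pool. It assumes the system pool nodes carry a `CriticalAddonsOnly` taint and the `karpenter.sh/nodepool` label — both assumptions on my part, so check the labels and taints on your own cluster’s nodes first:

```yaml
# Hypothetical add-on Deployment pinned to the system node pool.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logging-agent            # hypothetical add-on name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      nodeSelector:
        karpenter.sh/nodepool: system   # target the system pool (assumed label)
      tolerations:
        - key: CriticalAddonsOnly       # tolerate the system pool's taint (assumed)
          operator: Exists
      containers:
        - name: agent
          image: public.ecr.aws/docker/library/busybox:stable
          command: ["sleep", "infinity"]
```

Regular application workloads without this toleration would land on the general-purpose pool instead.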

Deploying Apps

When you deploy an app (using something as simple as kubectl apply), Auto Mode springs into action:

  • It provisions compute nodes automatically.
  • It sets up storage, networking, and even load balancers.
  • It monitors everything to make sure it’s running like a dream.

I used the same sample app that was demonstrated during the re:Invent demo: https://github.com/aws-containers/retail-store-sample-app

kubectl apply -f https://raw.githubusercontent.com/aws-containers/retail-store-sample-app/main/dist/kubernetes/deploy.yaml
kubectl wait --for=condition=available deployments --all

And the best part? You don’t have to lift a finger. The cluster scales up and down as needed, saving you time and money.
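For example, exposing a deployment usually takes nothing more than a standard Service manifest; with Auto Mode’s built-in load balancing, a `type: LoadBalancer` Service should get an AWS load balancer provisioned for it automatically (the service name, selector, and ports below are hypothetical — adjust them to your app):

```yaml
# Hypothetical Service exposing a deployment through an AWS load balancer.
apiVersion: v1
kind: Service
metadata:
  name: ui              # hypothetical service name
spec:
  type: LoadBalancer    # Auto Mode's load balancing capability provisions the LB
  selector:
    app: ui             # hypothetical pod label
  ports:
    - port: 80
      targetPort: 8080  # hypothetical container port
```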

Why You’ll Love It

If you’re still on the fence, here’s why EKS Auto Mode is worth checking out:

  1. It Saves Time: Launching new workloads is faster, which means you can get products to market sooner.
  2. It Saves Money: Auto Mode optimizes compute resources and reduces operational overhead.
  3. It Lowers Stress: Forget about patching, node failures, or scaling nightmares. Auto Mode handles it all, so you can focus on building cool stuff.
  4. It’s Secure: With features like ephemeral nodes and automated updates, you get a better security posture out of the box.

❌ Unfortunately, things aren’t quite as straightforward as they seem. While testing this product, I identified a few shortcomings.

1: Can You Log Into AWS EKS Auto Cluster NodePools?

You can no longer log into the nodes, as they are managed by Amazon Web Services (AWS). Whether this is a good or bad thing depends on your perspective. While I understand that this approach enhances security, it does introduce some inconvenience.

kubectl get nodepools
NAME              NODECLASS   NODES   READY   AGE
general-purpose   default     0       True    41m
system            default     0       True    41m

AWS EKS Auto Mode does not allow direct access to the nodes in its managed NodePools. For security and management purposes, AWS restricts:

  • SSH Access: Nodes in Auto Mode do not expose SSH for direct access.
  • AWS Systems Manager (SSM): Access to the nodes via SSM is also disabled.

However, you can troubleshoot or monitor your workloads using Kubernetes-native tools (kubectl commands) or AWS-native services such as:

  • CloudWatch Logs for workload logs.
  • EKS console to monitor cluster and workload status.
  • kubectl debug or similar tools to inspect running pods and application configurations.

For deeper troubleshooting or customizations requiring direct node access, you must switch to EKS Standard Mode, where you manage your node groups.
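Since SSH and SSM are off the table, one Kubernetes-native workaround is a privileged debug pod pinned to the node you want to inspect. This is a generic sketch, not something Auto Mode documents: the node name is a placeholder, and whether Auto Mode’s policies permit privileged host access is something you’d need to verify in your own cluster.

```yaml
# Hypothetical privileged pod for inspecting a node's host filesystem/processes.
apiVersion: v1
kind: Pod
metadata:
  name: node-debugger                  # hypothetical name
spec:
  nodeName: REPLACE-WITH-NODE-NAME     # placeholder: pick one from `kubectl get nodes`
  hostPID: true                        # see host processes from inside the pod
  containers:
    - name: shell
      image: public.ecr.aws/docker/library/busybox:stable
      command: ["sleep", "3600"]
      securityContext:
        privileged: true
      volumeMounts:
        - name: host-root
          mountPath: /host             # host filesystem mounted read-only at /host
          readOnly: true
  volumes:
    - name: host-root
      hostPath:
        path: /
```

`kubectl debug node/<name> -it --image=busybox` builds a similar pod for you.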

2: Customizing Instance Categories While Creating Nodes in AWS EKS Auto Mode via UI

In AWS EKS Auto Mode, you cannot modify the instance categories of NodePools directly from the UI; however, you can apply custom labels and configurations through APIs or CLI commands to influence the instance types within a NodePool.

  • AWS Management Console: When creating a NodePool in EKS Auto Mode, the UI allows limited customization of the instance type, generation, and category. However, predefined categories like on-demand, c, m, r, etc., are not directly modifiable.

If you execute the following command:

kubectl get nodepools general-purpose -o yaml

You will find the following configuration snippet:

- key: eks.amazonaws.com/instance-category
  operator: In
  values:
    - c
    - m
    - r

This indicates that these instance families (c, m, and r) are hardcoded into the configuration.

To apply custom configurations, you can use the AWS CLI or a YAML manifest for the NodePool that includes specific instance types and other preferences.

apiVersion: eks.amazonaws.com/v1
kind: NodePool
metadata:
  name: specialized-pool
spec:
  instanceCategory: g
  instanceGeneration: latest
  instanceSize: large
  instanceTypes:
    - g5.large
    - g5.xlarge
    - inf1.xlarge
    - inf2.2xlarge
  maxNodes: 12
  minNodes: 2
  • Similarly, you can use the eksctl command to create a custom node group:

eksctl create nodegroup \
  --cluster my-cluster \
  --name custom-nodepool \
  --node-labels environment=custom \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 5 \
  --instance-types g5.xlarge

3: Simplifying Node Maintenance While Leaving Control Plane Upgrades to You

This is a common point of confusion, as some people mistakenly believe that EKS Auto Mode upgrades both the control plane and the node pools. In reality, it only upgrades the node pools; the control plane must still be upgraded manually.

I understand that the intention behind this is to reduce operational overhead. Here’s how I believe node updates work:

  • Node Health Monitoring: EKS checks the health of individual nodes before and after updates using health checks such as node readiness and Kubernetes node conditions (e.g., Ready, DiskPressure).
  • Pod Eviction: Pods are drained from nodes being updated. The Kubernetes scheduler attempts to reschedule these pods elsewhere in the cluster.
  • Disruption Budgets: Pod Disruption Budgets (PDBs) control how many pods can be unavailable during updates to maintain application stability.
spec:
  disruption:
    budgets:
      - nodes: 10%
    consolidateAfter: 30s
    consolidationPolicy: WhenEmptyOrUnderutilized

As you can see in the disruption budget configuration for the NodePool, disruptions in the cluster are managed in a controlled way to keep things running smoothly. It allows only 10% of nodes to be affected at a time during updates or maintenance, making sure the rest of the cluster stays stable. The 30-second grace period gives the system time to settle before starting any consolidation tasks, and consolidation happens only when nodes are either empty or underutilized, which avoids disrupting important workloads. This setup balances the need for maintenance with the need to keep the system reliable and efficient.

  • Update Mechanism: EKS utilizes a rolling update mechanism, which ensures only a portion of the nodes are unavailable at any time. This limits the impact on workloads.

Now, this is where I see issues:

  • Application Sensitivity: If your workloads rely on specific node configurations or versions, automatic updates could introduce incompatibilities.
  • Downtime for Stateful Workloads: Stateful applications without adequate replicas or configured PDBs may experience downtime if disruptions exceed acceptable thresholds.
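To guard against that second point, it’s worth giving every stateful workload an explicit PodDisruptionBudget so node rotation can’t evict it below a safe replica count. A minimal sketch (the names and labels are hypothetical — match them to your own StatefulSet):

```yaml
# Hypothetical PDB keeping a replicated database available during node rotation.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: db-pdb                 # hypothetical name
spec:
  minAvailable: 2              # keep at least 2 replicas up at all times
  selector:
    matchLabels:
      app: my-database         # hypothetical label selecting the StatefulSet's pods
```

With this in place, Auto Mode’s node replacement should wait rather than evict pods below the threshold.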

Also, this may be a concern for certain use cases: “Automated Upgrades: EKS Auto Mode keeps your Kubernetes cluster, nodes, and related components up to date with the latest patches, while respecting your configured Pod Disruption Budgets (PDBs) and NodePool Disruption Budgets (NDBs). Up to the 21-day maximum lifetime, intervention might be required if blocking PDBs or other configurations prevent updates.”

Issues:

  • Version mismatches: Automated updates may upgrade Kubernetes components to versions incompatible with older Helm charts, manifests, or custom controllers.
  • Deprecated APIs: Some application dependencies may rely on deprecated APIs, causing failures after an update.

Source: https://docs.aws.amazon.com/eks/latest/userguide/automode.html

4: Additional Charge

EKS Auto Mode incurs an extra management fee based on the duration and type of Amazon EC2 instances it launches and manages. This fee is on top of the standard EC2 instance costs.

5: Karpenter dependency

This is a big one. I believe it heavily relies on Karpenter under the hood, which dynamically manages compute resources and handles the provisioning and scaling of nodes.

kubectl get nodepools system -o yaml

You’ll see configurations like:

apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  annotations:
    karpenter.sh/nodepool-hash: "4982684901400657622"
    karpenter.sh/nodepool-hash-version: v3

While EKS Auto Mode simplifies operations, its reliance on Karpenter also means:

  • Potential for misconfiguration: Incorrect NodePool policies could lead to under-provisioned or over-provisioned clusters.
  • Dependency on AWS Services: Requires an understanding of how AWS services like Spot Instances and IAM policies interact with Karpenter.
  • Specialized Configurations: EKS Auto Mode uses standardized templates for provisioning infrastructure, such as instance types, volume sizes, and node configurations. Specialized requirements like custom kernel modules, non-standard runtime dependencies, or GPU-specific workloads might conflict with these defaults. For example, a workload requiring low-latency storage (e.g., NVMe drives) may face performance degradation if EKS Auto Mode provisions nodes with standard EBS volumes instead.
  • Network Configurations: EKS Auto Mode handles networking automatically, including VPC and Security Group configurations. Applications requiring fixed IP addresses, custom DNS settings, or direct peering connections might encounter issues if these are overwritten during node or cluster updates.
  • RBAC Conflicts: EKS Auto Mode also configures Role-Based Access Control (RBAC) settings automatically. This can conflict with existing RBAC rules tailored for specific workloads. For example: A workload accessing sensitive data in S3 fails because the default IAM role lacks the required permissions for specific bucket policies.
  • Vendor-Specific Features: Certain workloads may rely on Kubernetes features or plugins specific to non-AWS environments, causing compatibility issues in EKS Auto Mode. For example: EKS Auto Mode uses the Amazon VPC CNI plugin for networking. Workloads relying on other CNIs (e.g., Calico for advanced network policies) may encounter issues.
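On the CNI point, one way to stay portable is to express policies with the standard Kubernetes NetworkPolicy API rather than Calico-specific CRDs. A minimal default-deny sketch for a namespace (the namespace name is hypothetical, and you should confirm that network policy enforcement is actually enabled in your Auto Mode cluster before relying on it):

```yaml
# Hypothetical default-deny ingress policy using only the standard API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app            # hypothetical namespace
spec:
  podSelector: {}              # applies to every pod in the namespace
  policyTypes:
    - Ingress                  # no ingress rules listed => all inbound traffic denied
```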

So my take on this: EKS Auto Mode simplifies operations but requires careful planning and customization to address potential compatibility challenges, especially for applications with unique infrastructure needs.

6: Why Do System NodePools Support arm64 but General-Purpose Does Not?

This is more of an observation than a shortcoming. You may have already noticed that system node pools support both arm64 and amd64, whereas general-purpose node pools only support amd64. However, when you consider their use cases, it makes sense.

ARM-based instances

  • ARM-based instances (Graviton) are more cost-efficient for lightweight, system-critical workloads. System workloads, such as Kubernetes control plane components and logging/monitoring agents, are often optimized for ARM architectures.

amd64-based instances

  • Most general-purpose and legacy applications are optimized for the amd64 architecture, ensuring broader compatibility. General-purpose pools are designed for varied workloads, some of which may not yet support arm64.
$ kubectl get nodepools general-purpose -o yaml
- key: kubernetes.io/arch
  operator: In
  values:
    - amd64

$ kubectl get nodepools system -o yaml
- key: kubernetes.io/arch
  operator: In
  values:
    - amd64
    - arm64   # <-- note the extra arm64 entry
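If a workload must stay on a particular architecture, you can also pin it explicitly with a nodeSelector rather than relying on the pool defaults. A minimal sketch (the deployment name and image are hypothetical):

```yaml
# Hypothetical Deployment pinned to x86-64 nodes for an amd64-only image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app             # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      nodeSelector:
        kubernetes.io/arch: amd64   # force scheduling onto x86-64 nodes
      containers:
        - name: app
          image: public.ecr.aws/docker/library/busybox:stable
          command: ["sleep", "infinity"]
```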

Note: The observations shared here are based on testing conducted during the past week, so it’s possible I missed or overlooked certain aspects. However, as is often the case with AWS, they actively listen to customer feedback and incorporate improvements, so we can likely expect an enhanced version of the product in the near future.

Ready to Try It?

If you’re curious, give it a shot! Check out https://docs.aws.amazon.com/eks/latest/userguide/create-auto.html, or jump into the EKS console to see what Auto Mode can do for you.

Seriously, EKS Auto Mode feels like Kubernetes on “easy mode,” albeit with some of the shortcomings I mentioned above. Whether you’re running a small app or managing hundreds of clusters, this tool can make your life a whole lot easier. So, what are you waiting for?

Written by Prashant Lakhera

AWS Community Builder, Ex-Redhat, Author, Blogger, YouTuber, RHCA, RHCDS, RHCE, Docker Certified, 4XAWS, CCNA, MCP, Certified Jenkins, Terraform Certified, 1XGCP
