How to configure an ingress controller for an Amazon EKS Kubernetes Cluster using Rancher 2.1

Posted by Christian Moser on 4 December 2018

Rancher can either provision a Kubernetes cluster from scratch on several cloud infrastructure providers using its Rancher Kubernetes Engine (RKE), or, even more conveniently, import an already hosted cluster such as AWS EKS or Google Kubernetes Engine.

In this post, I'm going to explain how an EKS cluster can be imported and properly set up to leverage full ingress support in Rancher. The explained ingress setup is not necessary if the cluster was provisioned with RKE; in that case Rancher sets up and deploys an ingress controller automatically for you on all nodes.

Create AWS EKS cluster

On your Rancher server, request a new EKS cluster:


Add the credentials (Access Key and Secret Key) of a privileged AWS user.

Stick to the Rancher defaults for service roles, VPC & subnet, and Rancher will create a sandboxed EKS environment in your AWS account. The "Maximum ASG Size" defines how many nodes the cluster will be allowed to spawn. This can later be adjusted in the auto scaling section (EC2).

After a few minutes, your cluster should be ready.

Setup Ingress for EKS

You have probably noticed the yellow bar on the screenshot. For the time being, Rancher won't set up an ingress controller for us, which means we can't route traffic within the cluster to specific workloads using an L7 load balancer.

-> Select your cluster -> Default -> Load Balancing -> Add Ingress

This means our ingress rule will stay in "Initializing" forever, since no ingress controller will ever pick up this Rancher ingress configuration.

Ingress resources are a collection of routing rules which are picked up and fulfilled by an Ingress Controller
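Under the hood, the rule Rancher creates is an ordinary Kubernetes Ingress manifest. A minimal sketch of what such a resource might look like — the hostname and the hello-world service name are assumptions for illustration, using the extensions/v1beta1 API current at the time of writing:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world
  namespace: default
spec:
  rules:
    # route requests for this hostname to the hello-world service
    - host: hello.example.com
      http:
        paths:
          - backend:
              serviceName: hello-world
              servicePort: 80
```

Without a running ingress controller watching the cluster, this resource is inert — nothing ever translates it into actual routing.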

Let’s fix this.

Install Nginx Ingress Controller

There are a number of ingress controllers available; this post explains how to set up the NGINX Ingress Controller for EKS.

In order to access your workloads from the Internet, we need to set up a load balancer that routes / forwards the traffic from the Internet to the cluster nodes. Let's choose an L7 load balancer for the highest flexibility:

As an excellent glossary explains:

Layer 7 load balancers base their routing decisions on various characteristics of the HTTP header and on the actual contents of the message, such as the URL, the type of data (text, video, graphics), or information in a cookie.

Install prerequisites

Either install kubectl on your development machine or use the in-browser kubectl provided by Rancher.

From the deploy documentation, do the following:

kubectl apply -f <URL of mandatory.yaml from the deploy documentation>

This installs everything required for the next steps, such as the namespace, configmap, serviceaccount, etc.
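To give an idea of what gets installed, here is a shortened sketch of the resources that manifest bundles — the names match the 0.x controller releases current at the time of writing, so verify them against the deploy documentation for your version:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx          # everything below lives in this namespace
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration    # global NGINX tuning knobs
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
# ...plus the RBAC roles/bindings and the nginx-ingress-controller Deployment
```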

Install Load Balancer

Provisioning a service of type LoadBalancer will result in a Classic Load Balancer being created in your AWS account.

Instead of running the L7 install script straight away:

# don't execute!
kubectl apply -f <URL of service-l7.yaml from the deploy documentation>

we first download and modify the YAML file.

Then we set the SSL certificate ARN, which we can obtain from AWS Certificate Manager, and fine-tune the AWS LB via the service.beta.kubernetes.io/aws-load-balancer-* annotations if required.

Please check the Kubernetes AWS LB docs for all available configuration options.

kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  annotations:
    # replace with the correct value of the generated certificate in the AWS console
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-2:XXXXXXXX:certificate/XXXXXX-XXXXXXX-XXXXXXX-XXXXXXXX"
    # the backend instances are HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # Map port 443
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    # Ensure the ELB idle timeout is less than nginx keep-alive timeout. By default,
    # NGINX keep-alive is set to 75s. If using WebSockets, the value will need to be
    # increased to '3600' to avoid any potential issues.
    service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
    - name: https
      port: 443
      targetPort: http

Finally install the LB via Rancher:

Navigate to: -> Select your cluster -> System -> Load Balancing -> Import YAML

Then install the L7 LB configuration via kubectl or the Rancher YAML importer:

kubectl apply -f <your modified service yaml>

Now we listen for requests on ports 80 and 443 and route them internally to port 80 on our cluster nodes. SSL is terminated at the AWS load balancer, so there is no need to deal with certificates within Rancher.
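As the comment in the service manifest notes, the ELB idle timeout (60s here) must stay below the NGINX keep-alive timeout. If you ever raise the idle timeout, the controller's keep-alive can be tuned through its ConfigMap — a sketch, assuming the nginx-configuration ConfigMap created by the install step:

```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  # keep-alive (seconds) must stay above the ELB idle timeout of 60s
  keep-alive: "75"
```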

Verify Ingress

Go back to our previously created ingress resource; it should now be in the state "Active".

-> Select your cluster -> Default -> Load Balancing

Create DNS entry for EKS Cluster

This one is not rocket science with AWS Route53.

I assume that you have your domain already set up; open the hosted zone and create a record set.

Set a name; it should match the hostname that was specified for the ingress.

Select Alias: Yes and choose the provisioned Classic Load Balancer (its DNS name) as the alias target.
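The same record can also be created from the command line with aws route53 change-resource-record-sets. A sketch of the change batch — the domain, the ELB DNS name, and the ELB's hosted zone ID are placeholders you must fill in from your own account:

```json
{
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "hello.example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "<hosted zone ID of the ELB, not of your domain>",
          "DNSName": "<DNS name of the provisioned Classic Load Balancer>",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```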


All HTTP/HTTPS requests to the configured domain will now be forwarded to the hello-world workload / pod from my example.