Orchestrating Container Operations with Amazon Web Services
ECS, EKS, Kubernetes and the Orchestration of Containers
Anyone who wants to run IT applications using container technology is first faced with the choice of the right cloud platform and the appropriate orchestration system. You can read more about the various options in the Amazon Web Services ecosystem in this article.
In the article Introduction to Container Technology, we laid the groundwork for a basic understanding of containers.
Those who now want to take the next step and run their IT applications using container technology are first faced with the choice of a suitable cloud platform. Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP) are currently the leading platform providers here. Each of these providers offers its own Kubernetes service as a Platform-as-a-Service (PaaS).
Kubernetes (k8s), originally developed by Google, is now an open source project of the Cloud Native Computing Foundation (CNCF) and currently the most widely used container orchestration platform.
As an open standard, Kubernetes simplifies the migration of containers between Kubernetes instances on different clouds. Orchestration with Kubernetes is therefore not limited to AWS but is also available on the other leading cloud platforms. All of the aforementioned hyperscalers belong to the CNCF and are thus active supporters of Kubernetes, which eases the integration and migration of cloud applications. However, the hyperscalers' managed offerings are mostly limited to providing the Kubernetes control plane and leave the management of the applications to the user.
Anyone who works predominantly within the Amazon Web Services ecosystem should be familiar with the following orchestration tools and be able to evaluate them for their own use.
Orchestration Systems ECS and EKS
Amazon's Elastic Container Service (ECS)
ECS is a scalable, fully managed container orchestration service built specifically for AWS, and it integrates tightly with the AWS ecosystem.
With the Elastic Container Service (ECS), AWS promises security, reliability, and scalability for running sensitive, mission-critical applications. The service is characterized by ease of use and low complexity. Another advantage of ECS is that there is no base fee and no dedicated control plane to manage, because container deployments can be controlled directly from the AWS Management Console.
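For illustration, containers on ECS are described in so-called task definitions. A minimal sketch in JSON might look like this; the family name, image, and CPU/memory sizes are placeholder assumptions for a small Fargate workload:

```json
{
  "family": "demo-task",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:1.25",
      "portMappings": [{ "containerPort": 80 }],
      "essential": true
    }
  ]
}
```

Such a task definition is registered with ECS and then launched as a task or service, for example from the AWS Management Console.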
However, there are drawbacks to using ECS: it offers neither multi-cloud capability nor the option of an on-premises installation. In addition, hybrid clusters are only possible via AWS Outposts*, which in turn entails a strong vendor dependency.
*AWS Outposts are fully AWS-managed hardware resources for the on-premises data center that bring the AWS infrastructure, including selected AWS services, APIs, and tools, into a hybrid cloud. AWS Outposts are intended for workloads that require low-latency access to on-premises systems or local data processing, such as migrated legacy applications with dependencies on on-premises systems.
Elastic Kubernetes Service (EKS): a managed Kubernetes service
Not all users can accept the aforementioned ECS disadvantages. AWS therefore also offers the Elastic Kubernetes Service (EKS) as an orchestration system, for a small base fee.
EKS is a managed, highly available and scalable Kubernetes service. This service enables the deployment, scaling, and operation of container-based applications without having to set up and operate a control plane for Kubernetes yourself.
EKS offers more functionality and a higher level of abstraction and automation than ECS.
Manage Kubernetes clusters with EKS
The Elastic Kubernetes Service thus provides a Kubernetes cluster without the need to install, operate and maintain a separate Kubernetes control plane or nodes. What is provided is the "control plane", also called the master. This control plane can be operated with kubectl, the standard command-line interface of k8s, although other user interfaces can certainly be used as well. The resources required to run applications are defined in so-called "manifest files" (YAML or JSON).
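As a sketch, a minimal manifest file for deploying an application might look like the following; the application name, image, and replica count are placeholder assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 2               # desired number of pods
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25   # example image; replace with your own
        ports:
        - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, this manifest instructs the control plane to keep two replicas of the container running on the cluster's nodes.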
EKS is intended to provide a platform for automatically deploying, scaling and maintaining application containers on distributed nodes, and it supports a whole range of proven container tools.
Shared storage and network resources
Containers are grouped into so-called "pods" in k8s and run on "nodes". Pods are the smallest deployable compute units that can be created and managed with Kubernetes. Pods can be thought of as "pea pods" that, instead of peas, house one or more containers with shared storage and network resources and a specification for how to run them.
The contents of a pod are always co-located and co-scheduled, and they run in a shared context. A pod thus models an application-specific "logical host" for one or more application containers that are relatively tightly coupled.
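To illustrate the shared storage and network resources, here is a sketch of a pod with two containers sharing an emptyDir volume; the pod name, container names, and commands are placeholder assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod          # hypothetical pod name
spec:
  volumes:
  - name: shared-data       # storage shared by all containers in this pod
    emptyDir: {}
  containers:
  - name: writer
    image: busybox:1.36
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  - name: reader
    image: busybox:1.36
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
```

Because both containers run in the same pod, they also share the pod's network namespace and can reach each other via localhost.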
Nodes are typically virtual servers that run the containers managed by Kubernetes. These nodes are in turn combined into a Kubernetes cluster. Within such a cluster, both the pods/containers and the nodes themselves can be scaled. In this way, nodes and containers can be scaled at any time so that performance is distributed optimally across the cluster.
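Pod-level scaling can be automated declaratively; a sketch of a HorizontalPodAutoscaler manifest, where the target deployment name and thresholds are placeholder assumptions:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app          # hypothetical deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add pods above ~70% average CPU
```

Node-level scaling, by contrast, is typically handled outside the manifest, for example by the Kubernetes Cluster Autoscaler or, on EKS, by managed node groups.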
Increased configuration effort when using EKS
That sounds good at first and promises extensive independence from the rigid hardware foundation. But as always, the devil is in the details. For one thing, solid k8s expertise is necessary for container operation under EKS. You also need to know that not all AWS services can be integrated directly; for example, rights management, which AWS handles via the IAM service, is not carried over automatically. Here, the customer has to map users, roles, and permissions into the cluster themselves.
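On EKS, this mapping of IAM identities into Kubernetes is commonly done via the aws-auth ConfigMap in the kube-system namespace. A sketch, in which the account ID, role name, and group name are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-developer   # hypothetical IAM role
      username: developer
      groups:
        - dev-group          # hypothetical group, granted rights via RBAC bindings
```

The listed groups only take effect once matching Kubernetes RBAC Roles and RoleBindings are defined in the cluster, which is exactly the configuration effort described above.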
With self-healing, the controller compares the desired state of the cluster with the actual state and automatically corrects any differences. It relies on the key-value store etcd, a consistent, highly available database in which the desired configuration and the current cluster state are stored. In response to requests from the API server, the integrated scheduler determines which functioning nodes have free capacity and, as part of load balancing, selects the appropriate node for the upcoming pod deployment. Using kubectl, containers can be started, stopped and monitored manually, while connections to the pods are managed and routed by kube-proxy acting as a load balancer.
Compared to ECS, however, EKS requires a noticeably greater configuration effort.