AWS Fargate is a managed service offered by Amazon that acts as a compute engine for AWS ECS. It runs containers without requiring the user to manage servers or clusters. Fargate provisions, configures, and scales the clusters of virtual machines on which containers run, so the user no longer has to choose server types, decide when to scale clusters, or optimize cluster packing.
Fargate completely eliminates the need for the user to interact with or worry about servers, clusters, and infrastructure management. It lets the user focus entirely on designing the application efficiently and building it well.
ECS can be used in two modes: the Fargate launch type and the EC2 launch type.
When tasks and services are run with the Fargate launch type, the user only has to perform a few operations before launching the application, described below.
Every Fargate task runs within its own isolation boundary: it does not share its underlying kernel, CPU resources, memory resources, or elastic network interface with any other task.
Not all task definition parameters are supported when the Fargate launch type is used. Some parameters are not supported at all, and others behave differently for Fargate tasks.
A Fargate task definition requires the network mode to be set to ‘awsvpc’. The ‘awsvpc’ network mode ensures that every task gets its own elastic network interface. In addition, a network configuration is required when a service is created or when a task is run manually.
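As a minimal sketch, the network configuration for a Fargate task might look like the following. The subnet and security group IDs are placeholders, not real resources:

```python
# Sketch of the network configuration required when running a Fargate task
# with the 'awsvpc' network mode. All IDs below are illustrative placeholders.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0123456789abcdef0"],      # placeholder subnet ID
        "securityGroups": ["sg-0123456789abcdef0"],   # placeholder security group ID
        # Tasks in public subnets typically need a public IP to pull images
        "assignPublicIp": "ENABLED",
    }
}

# This dict would be passed as the networkConfiguration argument of
# boto3's ecs_client.run_task(...) or ecs_client.create_service(...).
```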
A Fargate task definition also requires the CPU and memory requirements to be specified at the task level. They can optionally be specified at the container level as well, but most use cases specify them at the task level.
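Putting these requirements together, a Fargate task definition could be sketched as below. The family name and image are placeholders, and the CPU/memory values must be one of the combinations Fargate supports (for example, 256 CPU units with 512 MiB of memory):

```python
# Minimal sketch of a Fargate task definition with the required 'awsvpc'
# network mode and task-level CPU and memory. Names and image are placeholders.
task_definition = {
    "family": "sample-web-app",             # placeholder task family name
    "networkMode": "awsvpc",                # required for Fargate tasks
    "requiresCompatibilities": ["FARGATE"],
    "cpu": "256",                           # task-level CPU, in CPU units
    "memory": "512",                        # task-level memory, in MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",        # placeholder container image
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}
```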
A cluster can contain tasks that use both the Fargate and EC2 launch types. When the user’s clusters are viewed in the AWS Management Console, the counts of currently running Fargate and EC2 tasks are displayed separately.
Fargate tasks can authenticate with private image registries, including Docker Hub, using basic authentication. When private registry authentication is enabled, the user can use private Docker images in their Fargate task definitions.
To use private registry authentication, the user creates a secret in AWS Secrets Manager containing the credentials of the private registry. Within the container definition, ‘repositoryCredentials’ is then specified with the full ARN of that secret.
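A container definition using private registry authentication might look like this sketch. The registry hostname and the secret ARN are placeholders for values the user would create themselves:

```python
# Sketch of a container definition pulling from a private registry.
# The image name and the Secrets Manager ARN are illustrative placeholders.
container_definition = {
    "name": "app",
    "image": "registry.example.com/team/app:latest",  # placeholder private image
    "repositoryCredentials": {
        # Full ARN of the Secrets Manager secret holding the registry
        # username and password (placeholder ARN).
        "credentialsParameter": (
            "arn:aws:secretsmanager:us-east-1:123456789012:"
            "secret:private-registry-creds"
        ),
    },
}
```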
The EC2 launch type gives the user much deeper, server-level control, which is more granular than the infrastructure-level control Fargate offers, for running containerized applications. In this mode, ECS manages a cluster of servers and schedules how containers are placed on those servers.
ECS tracks the CPU, memory, and other resources used by the cluster, and finds the best server on which to run a container, based on the resource requirements specified by the user. The user, however, is responsible for provisioning, patching, and scaling the cluster of servers.
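As an illustration of this server-level control, a task run with the EC2 launch type can carry placement settings that influence which instances ECS chooses. The cluster name, task family, and instance type below are placeholders, and the strategy shown is just one option:

```python
# Sketch of placement settings for a task using the EC2 launch type.
# "binpack" on memory asks ECS to favor instances with the least available
# memory remaining, improving cluster utilization. Values are illustrative.
run_task_kwargs = {
    "cluster": "my-ec2-cluster",          # placeholder cluster name
    "taskDefinition": "sample-web-app",   # placeholder task definition family
    "launchType": "EC2",
    "placementStrategy": [
        {"type": "binpack", "field": "memory"},
    ],
    "placementConstraints": [
        # Restrict placement to a specific instance type (illustrative).
        {"type": "memberOf",
         "expression": "attribute:ecs.instance-type == t3.large"},
    ],
}

# These kwargs would be passed to boto3's ecs_client.run_task(**run_task_kwargs).
```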
The user has the freedom to do the following:
Since the EC2 launch type provides more granular control over the cluster of servers, it offers additional options that certain applications may require, or that may be needed to meet certain compliance and government requirements.
With Fargate, the user worries only about containers, application design, and building an application that fully serves its users. There is no need for the user to manage EC2 instances: the user is not responsible for choosing instance types, managing cluster scheduling, or optimizing cluster utilization. Fargate takes care of all of this.
Fargate also makes it easier to scale applications. The user need not worry about providing enough compute resources for their container applications. Once the application requirements (CPU, memory, networking policies) have been provided, Fargate ensures that the infrastructure required to run the containers is highly available and scales as needed. The user never has to decide when clusters should be scaled or how containers should be packed onto them for optimal utilization.
Fargate can scale to tens of thousands of containers in a matter of seconds, and can be used to run production-critical applications.
Fargate integrates very easily with ECS. The user defines their application just as they would for Amazon ECS: the application is packaged into task definitions, and the CPU and memory requirements are specified.
The networking and IAM policies required by each container must also be specified. All of this is uploaded to Amazon ECS. Once the setup is complete, Fargate launches the application and manages the containers for the user.
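The pieces uploaded to ECS can be sketched as two requests: one registering the task definition (including the IAM execution role) and one creating the service that keeps the tasks running. The role ARN, cluster, and service names are placeholders:

```python
# Sketch of the two payloads a user uploads when running an app on Fargate.
# Names and the IAM role ARN below are illustrative placeholders.
task_definition_request = {
    "family": "sample-web-app",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    # IAM role ECS assumes to pull images and write logs for the task
    "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    "containerDefinitions": [
        {"name": "web", "image": "nginx:latest",
         "portMappings": [{"containerPort": 80}]},
    ],
}

service_request = {
    "cluster": "my-fargate-cluster",
    "serviceName": "sample-web-service",
    "taskDefinition": "sample-web-app",
    "launchType": "FARGATE",
    "desiredCount": 2,  # Fargate keeps this many copies of the task running
}

# With boto3, these would be passed to ecs_client.register_task_definition(...)
# and ecs_client.create_service(...) respectively; Fargate then launches and
# manages the containers.
```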
In this post, we saw how Fargate can be used as a compute engine for Amazon ECS without the user having to manage the associated servers and clusters.