Containers have become a mainstay within the increasing number of organisations adopting the DevOps way of working. As expected with the growing prevalence of any technology, the ecosystem of tools for working with it has also grown. Whilst container-like technology has existed in various forms for decades, its most familiar guise has exploded in popularity since the introduction of Docker in 2013.

At its core, Docker goes a long way to solving the ‘works on my machine’ problem. It allows infrastructure to be abstracted away from the core application logic, enabling portability of a built application between any systems that can run Docker containers. In the simplest terms, this means that an application can be tested on-premises and then deployed to a production cloud with little risk of environmental variations causing issues.

This blog post explores the latest service from AWS for running Docker containers in the cloud – App Runner. It also compares App Runner with other AWS services that, at a high level, are designed for similar workloads, and details which use cases are right for each service.

App Runner

AWS App Runner launched in May 2021 and is described as “the simplest way to build and run your containerized web application in AWS.” [1]

At a high level, this service sets out to manage several considerations that a user has when running a web application in the cloud. These include:

  • Building from source code into a Docker image
  • Autoscaling
  • Securing with TLS certificates
  • Logging and monitoring
  • Configuring VPCs
  • Connecting your own domain

Whilst all of this is technically possible with other AWS services, App Runner aims to abstract away as much of the complexity of these tasks as it can. In doing so, the service is aimed firmly at developers rather than hardcore operations teams. Amazon has positioned App Runner so that a developer with no prior knowledge of AWS could get a Docker container running in the cloud. Within the AWS ecosystem, this really is App Runner’s unique selling point.

Creating a service

So how does App Runner get configured? It can be configured through the AWS console, or via CloudFormation or the AWS CDK – the two first-party Infrastructure-as-Code tools. This blog’s examples use the AWS console.

The first step in creating an App Runner service is to set up the source location for your application. There are two choices for this: a service can either be instantiated from a pre-built Docker image stored in Elastic Container Registry (ECR), or a Docker image can be built from the contents of your source code repository. At the time of writing, GitHub is the only supported location for the latter option; AWS CodeCommit cannot be directly connected to App Runner. If application source code is in CodeCommit, a workaround is to use AWS CodeBuild to build a Docker image, store it in ECR and point App Runner at the stored image. During the process of configuring the source, there is a choice between manual and automatic deployments. Note that automatic deployments incur a $1/month charge per service.
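For those who prefer the Infrastructure-as-Code route, the sketch below shows roughly what a request payload for the boto3 `apprunner` client’s `create_service` call might look like for the pre-built ECR image option. The service name, image URI and role ARN are placeholders, and the exact field names should be checked against the boto3 documentation.

```python
# Hypothetical request payload for boto3's apprunner create_service call.
# All identifiers (service name, image URI, role ARN) are placeholders.
request = {
    "ServiceName": "my-web-app",
    "SourceConfiguration": {
        "ImageRepository": {
            # Pre-built image stored in ECR
            "ImageIdentifier": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-web-app:latest",
            "ImageRepositoryType": "ECR",
            "ImageConfiguration": {"Port": "8080"},
        },
        # True enables automatic deployments (the $1/month-per-service option)
        "AutoDeploymentsEnabled": True,
        # Role that allows App Runner to pull the image from ECR
        "AuthenticationConfiguration": {
            "AccessRoleArn": "arn:aws:iam::123456789012:role/AppRunnerECRAccessRole"
        },
    },
}
```

With credentials configured, this payload would be passed as `boto3.client("apprunner").create_service(**request)`.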

The next step in the process is to configure how App Runner builds the code in the specified GitHub repository. This is where early distinctions appear between this service and other, more complex, AWS container orchestration services. Most services require a Dockerfile in order to determine how to build an image and start a container. App Runner removes the need to learn how to construct a Dockerfile: it just needs to know the runtime (Python 3 or Node.js 12 at present), the commands to build and start the application, and the port to map HTTP/S (80/443) to.
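As an alternative to the console form, these build settings can also be kept in an `apprunner.yaml` file in the repository root. A minimal sketch for a Python application might look like the following; the exact schema should be verified against the App Runner documentation, and the file names and port are illustrative.

```yaml
version: 1.0
runtime: python3            # managed runtime: python3 or nodejs12 at present
build:
  commands:
    build:
      - pip install -r requirements.txt   # build command
run:
  command: python app.py    # command App Runner uses to start the service
  network:
    port: 8080              # port your application listens on
```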

The final step to getting an App Runner service live is configuring the settings that determine factors such as how much compute resource each container has, how it scales, how it deals with failure, what other AWS resources it can access and the tags that are applied. Each section will be explored in more detail.

Service settings allows for the configuration of the service name, compute resources and any environment variables. Compute resources can be configured as 1 vCPU with 2GB, 3GB or 4GB of memory, or as 2 vCPUs with 4GB of memory. Environment variables specified here are injected into the service at runtime; they are useful for values that should not be hardcoded into source code, such as API endpoints. For sensitive values, it is recommended to combine App Runner with a service such as AWS Secrets Manager or Parameter Store by passing the ARN of the secret as an environment variable and then resolving it using an AWS SDK inside the application.
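A minimal sketch of that pattern, assuming a hypothetical `DB_SECRET_ARN` environment variable set on the service; the Secrets Manager client is injected so the function can be exercised without AWS credentials:

```python
import os

def resolve_secret(secrets_client, env_var="DB_SECRET_ARN"):
    """Resolve a Secrets Manager ARN passed in via an environment variable.

    secrets_client is expected to behave like boto3.client("secretsmanager");
    injecting it keeps the function testable outside AWS. The env var name
    is an illustrative placeholder.
    """
    arn = os.environ[env_var]
    response = secrets_client.get_secret_value(SecretId=arn)
    return response["SecretString"]
```

Inside the running container, this would typically be called once at startup as `resolve_secret(boto3.client("secretsmanager"))`.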

Auto scaling determines the minimum and maximum number of instances of the service at any given time, as well as the level of request concurrency at which an instance is considered to be at capacity. App Runner has built-in load balancing that distributes requests across the active instances. For example, with Concurrency set to 100, Min instances set to 1 and Max instances set to 10, the following would be true:

  • 0 concurrent requests = 1 instance (the minimum)
  • 15 concurrent requests = 1 instance
  • 150 concurrent requests = 2 instances
  • 1500 concurrent requests = 10 instances (the maximum)
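The numbers above follow from a simple rule: divide the concurrent request count by the concurrency limit, round up, then clamp the result between the minimum and maximum instance counts. A sketch of that arithmetic (an approximation of the observed behaviour, not App Runner’s actual implementation):

```python
import math

def instances_needed(concurrent_requests, concurrency=100,
                     min_instances=1, max_instances=10):
    """Approximate the number of instances App Runner would run
    for a given level of request concurrency."""
    needed = math.ceil(concurrent_requests / concurrency)
    # Clamp between the configured minimum and maximum
    return max(min_instances, min(max_instances, needed))
```

Running this against the example settings reproduces the bullet points: 0 and 15 concurrent requests give 1 instance, 150 gives 2, and 1500 hits the maximum of 10.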

The Health check configuration determines how App Runner decides that a particular instance is unhealthy. When an instance is considered unhealthy, it is removed from the service and a replacement is provisioned. This means that an application must be stateless, or store state elsewhere (such as S3), in order to continue to function correctly when containers are replaced.

Whilst App Runner tries to simplify many things, it does not do so to the extent of isolating it from the rest of the AWS ecosystem. Security configuration allows an IAM role to be associated with the service, allowing the application to access other AWS services in a controlled way. A KMS key (default AWS, or customer created) can also be used to encrypt the data that is stored relating to the service. Tags can be utilised for both Tag Based Access Control as well as for greater observability over costs. For example, Service A could be tagged as “Marketing” and Service B could be tagged as “Support”. In AWS Cost Explorer, costs could then be easily attributed to the department that incurred them.

What are the alternatives and when should they be used?

The three main AWS alternatives to App Runner are Elastic Container Service (ECS), Lambda used with container images, and self-managed EC2 or Lightsail instances.

ECS is the heavy hitter in the AWS line-up of container orchestration services. It can accomplish pretty much anything thrown at it and comes in two flavours, EC2 and Fargate. The EC2 variant runs on a cluster of EC2 instances that are deployed and managed by the customer, whereas Fargate runs containers directly in a serverless fashion (much like App Runner). There are two stand-out differences between ECS and App Runner in their capabilities. The first is that containers on ECS run inside a VPC, which allows applications to access other resources inside a VPC without exposing them publicly. For example, if a database is inside a VPC, then for App Runner to access it, it would need to be made publicly accessible – which is considered bad security practice. The second is that ECS allows multiple containers to make up an instance of a service, whereas App Runner enforces a 1:1 relationship. This means that with ECS, a service could be made up of two containers – one for an application and one sidecar like AWS X-Ray. This isn’t possible with App Runner. If either of these is a requirement for an application, then ECS is a good alternative to App Runner, despite its additional complexity.

A recent addition to Lambda is the ability to run custom-built containers rather than just source code against one of its pre-built runtimes. An interesting video on using Lambda containers for deep learning can be found here, from Inawisdom’s Head of Solution Engineering, Phil Basford. Lambda, by design, is great at handling event-driven workloads; an event could be, for example, an HTTP request being sent to your application. By only running the container when it is absolutely required, this can be a good way to optimise costs. Lambda, like App Runner, can scale automatically based on request volume. However, similarly to ECS, Lambda has the advantage of deeper integration with AWS networking through VPC support. To determine whether Lambda is a viable alternative to App Runner for a given workload, ask whether the containers need to run continuously or whether they can be started and stopped on an event-driven basis (e.g. to respond to an HTTP request). In addition, using Lambda functions requires a developer to understand AWS in greater depth.
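As a rough illustration of the container-image route, a Lambda function image is typically built from one of AWS’s public base images, with the handler copied in and named in the `CMD` instruction. The file and handler names below are illustrative placeholders:

```dockerfile
# AWS public base image for the Python Lambda runtime
FROM public.ecr.aws/lambda/python:3.8

# Copy the handler module into the location the runtime expects
COPY app.py ${LAMBDA_TASK_ROOT}

# Invoke function "handler" in module "app" when an event arrives
CMD ["app.handler"]
```

The resulting image is pushed to ECR, and the Lambda function is created pointing at that image rather than a zipped code bundle.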

Finally, deploying containers on self-managed EC2 or Lightsail instances is another option. At their core, EC2 and Lightsail offer the same thing – full access to a virtual machine and all the responsibilities that come with it. Where EC2 and Lightsail differ from each other is the market they are aimed at. EC2 comes with all the bells and whistles, offering 100% control over elements such as subnets, security groups, availability zones and auto-scaling of instances. Lightsail is targeted at a similar demographic to App Runner: individuals who do not want or need to concern themselves with the deeper complexities of AWS. Whilst both EC2 and Lightsail are technically able to run container-based workloads, a developer would have to manage everything related to this, including scaling the containers across one or multiple instances, health checks and remediation of unhealthy containers, and installation of Docker runtimes and dependencies, among many other tasks.

Ultimately, if a persistent (not event-driven) container-based workload needs to be run on AWS and the developer wants to have complete control over networking and security as well as potentially running multiple containers to make up one service, Elastic Container Service is the one to choose. For event-driven workloads or where a microservice architecture is in use, Lambda would be a good solution. If what’s needed is a straightforward way to get a container running in the cloud without fussing over subnet ranges, VPCs and security groups, then App Runner might just be the right choice.

In conclusion

The conclusion that can be drawn is that App Runner is good at what it does – not brilliant, but a good start. Some features are lacking, such as direct integration with CodeCommit, which is a fairly big deal for customers fully invested in AWS; however, if, as suspected, the service is designed to attract non-AWS customers into the ecosystem, then GitHub is likely to be the most common location for their source code.

App Runner certainly fills a gap in the wide range of AWS services by providing a simple way to get an application running in the cloud without having to think about more complex matters such as VPCs, security groups or subnets. There is scope to improve it further; the features that would be most welcomed are:

  • Integration with CodeCommit, Bamboo and GitLab locations for source code
  • More options than just Python/Node for the non-prebuilt image configuration route
  • Integration with AWS Secrets Manager or Parameter Store to encourage best practice around storing sensitive app configuration values

Whilst it stands well on its own and provides an enticing way into AWS for those new to the cloud, it may not be long before its simplifications force a reach towards a more customisable offering like ECS.

All in all, App Runner does what it says on the tin – it runs apps and it does it well. Just be aware of the limitations of the service to avoid re-architecting your application later when more complexity is needed.