Enterprise setup for running containers on AWS – Step by Step Guide

When planning cloud adoption at the enterprise level, there are a few things that need deeper understanding: how the AWS accounts are structured and how the infrastructure is described as code.

These two steps (account setup and Infrastructure as Code) are very important and need to be in place for every AWS environment.

Serverless architectures are a superior way of developing applications in the cloud, as they come with almost no operational overhead, out-of-the-box scaling, high availability and, above all, minimal running cost.

However, serverless development is still far down the roadmap for many companies. The most common reasons given are “no time for up-skilling our teams” and “a complete refactoring of our application would be required”. Those are valid reasons that make companies stick with their existing development stack, and that stack today almost always includes running some containerized applications.

This blog post will not try to explain what containers are and how they can be used. Those topics are well understood by now and covered in many posts, books and articles. The focus here is how to set up AWS networking to meet enterprise standards for running containers. Step by step, we will build up the networking from the simplest starting point to the final enterprise-ready setup. Along the way we will use the container orchestration service Amazon ECS (Elastic Container Service), but the details of that particular service are also outside the scope of this post.

But what do we mean by an enterprise setup of an AWS network for running containers?

Most startups or smaller companies without strong security policies will have containers running in an AWS account, within a single VPC, directly reachable from the Internet. Ingress and egress traffic flows through an Internet Gateway attached to the VPC and on to an Application Load Balancer that distributes requests to the appropriate containers running on ECS or EKS. That kind of setup is often unacceptable for enterprises with strong security demands. Enterprises want to control all ingress and egress traffic, perform deep packet inspection, and log and monitor communication between the inside and the outside world. In this post we will build such an environment step by step.

Step 1: Have a container app running locally

As a showcase application I will use a simple Spring Boot Java application with a GET REST endpoint “/price” which accepts “symbol” as a query parameter (e.g. “BTCUSDT”, the Bitcoin price in dollars) and then makes an outbound request to the Binance exchange to check the current price of that symbol. It is important that our showcase app has both inbound and outbound communication, as we will set up dedicated networking constructs to route and intercept/inspect these calls.

When we start the application locally (docker run -p 80:8080 binance), this is what we get on localhost in a browser:
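The same check can be scripted instead of using the browser. A minimal sketch, assuming the Python requests library is available and the port mapping shown above:

```python
# Quick smoke test against the locally running container (illustrative helper script).
import requests

resp = requests.get("http://localhost/price", params={"symbol": "BTCUSDT"}, timeout=5)
resp.raise_for_status()
print(resp.text)  # expect the current BTCUSDT price returned by the app
```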

Ok, so our application is properly packaged in a Docker container and tested locally. Now it’s time to move it to AWS.

Step 2: Have the container app running on a public EC2

Making our application work on a public EC2 instance is a verification step that proves our basic VPC setup is correct. By that we mean: a VPC is created, a subnet is created, a security group is configured, an internet gateway is attached to the VPC, a route table is associated with the public subnet, and a route to the internet gateway is set.

The security group for the EC2 instance is configured to allow inbound traffic for HTTP on port 80 (application access) and for SSH on port 22 (management access). Once the EC2 instance is launched, we need to SSH into it and install Docker (instructions for the Amazon Linux 2 AMI are here). Then we only need to copy the Dockerfile and the code itself to the EC2 instance (I like to use SFTP with CyberDuck for file transfers to EC2) and run the Docker container.
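For reference, the basic plumbing described above can be scripted with boto3. This is a minimal sketch rather than a full template; the region, names and CIDR ranges are illustrative assumptions:

```python
# Minimal sketch of the Step 2 networking with boto3 (illustrative values).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.0.0/24",
                              AvailabilityZone="us-east-1a")["Subnet"]["SubnetId"]

# Internet Gateway attached to the VPC and a default route from the public subnet to it.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)
rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=subnet_id)

# Security group allowing HTTP (application access) and SSH (management access).
sg_id = ec2.create_security_group(GroupName="binance-app", Description="HTTP and SSH",
                                  VpcId=vpc_id)["GroupId"]
ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=[
    {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
])
```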

When all is done, we should see the same response in the browser, but this time using the EC2 public IP. In my case, the application responds to requests sent to “http://3.216.94.168/price?symbol=BTCUSDT”.

Step 3: Place ALB in front of the container

It’s great that the app is working, but the networking setup is not. EC2 instances should never sit in public subnets; they should reside in private subnets with a load balancer in front of them.

Let’s modify our AWS setup to support such an approach. We move the EC2 instance to private subnets, create an Application Load Balancer in public subnets and additionally add NAT Gateways in the public subnets for outbound communication. We also add a second availability zone to achieve high availability. With two separate NAT Gateways, this kind of setup can sustain the loss of one availability zone without any interruption for our users.
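A hedged boto3 sketch of these additions; subnet, security group, VPC and instance IDs are placeholders, and the pieces from Step 2 are assumed to exist:

```python
# Sketch of Step 3: internet-facing ALB in public subnets, the EC2 target in private
# subnets, and NAT Gateways for outbound traffic (IDs below are placeholders).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

public_subnets = ["subnet-pub-a", "subnet-pub-b"]   # one public subnet per AZ
instance_id = "i-0123456789abcdef0"                 # the EC2 instance running the container

# One NAT Gateway per public subnet (each needs its own Elastic IP) for HA egress.
# Each private subnet's route table then gets a 0.0.0.0/0 route to its NAT Gateway.
for subnet in public_subnets:
    eip = ec2.allocate_address(Domain="vpc")["AllocationId"]
    ec2.create_nat_gateway(SubnetId=subnet, AllocationId=eip)

# ALB, a target group for the instance, and an HTTP listener forwarding to it.
alb_arn = elbv2.create_load_balancer(Name="binance-alb", Subnets=public_subnets,
                                     SecurityGroups=["sg-alb"], Scheme="internet-facing",
                                     Type="application")["LoadBalancers"][0]["LoadBalancerArn"]
tg_arn = elbv2.create_target_group(Name="binance-ec2", Protocol="HTTP", Port=80,
                                   VpcId="vpc-0123", TargetType="instance"
                                   )["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": instance_id}])
elbv2.create_listener(LoadBalancerArn=alb_arn, Protocol="HTTP", Port=80,
                      DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}])
```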

Ok, this is better and more secure. With the ALB as the internet-facing endpoint we can do many additional things, such as attaching AWS WAF (Web Application Firewall), adding authentication via Amazon Cognito with third-party identity providers, connecting Amazon CloudFront in front of it, and so on. These are additional services that can enhance our application architecture but are outside the scope of what we are trying to explain.

Let’s see if our application works. Now we are using the ALB-provided endpoint:

For many customers this is good enough. To make it a truly production-ready setup, we would add Amazon Route 53 for a custom domain name and configure an Auto Scaling group so that more EC2 instances can be added in case of increased load.

But we can make it even better than that.

Step 4: Replace EC2 with ECS/Fargate

To reduce the operational overhead, it makes sense to remove the EC2 instances and deploy our container to a managed orchestration service such as Amazon ECS (Elastic Container Service). The changes are minimal. We remove both EC2 instances but keep the rest. We add Amazon ECR (a managed container registry) to hold our Docker images, and then add ECS with managed serverless compute (AWS Fargate).
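A hedged sketch of those ECS pieces with boto3; the ECR image URI, execution role ARN, subnets, security group and target group ARN are placeholders, and the ALB target group is assumed to use TargetType "ip", as Fargate tasks require:

```python
# Sketch of Step 4: ECS cluster, Fargate task definition and a service behind the ALB.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_cluster(clusterName="binance-cluster")

task_def = ecs.register_task_definition(
    family="binance",
    networkMode="awsvpc",                        # required for Fargate
    requiresCompatibilities=["FARGATE"],
    cpu="256", memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",  # placeholder
    containerDefinitions=[{
        "name": "binance",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/binance:latest",  # ECR image
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)["taskDefinition"]["taskDefinitionArn"]

ecs.create_service(
    cluster="binance-cluster", serviceName="binance-service",
    taskDefinition=task_def, desiredCount=2, launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-priv-a", "subnet-priv-b"],
        "securityGroups": ["sg-tasks"],
        "assignPublicIp": "DISABLED",            # tasks stay in the private subnets
    }},
    loadBalancers=[{"targetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/binance/abc",
                    "containerName": "binance", "containerPort": 8080}],
)
```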

This kind of setup is one most customers running containers would be happy with. The environment is secure, scalable and easily integrated with CI/CD pipelines so developers can push updated images to Amazon ECR. The ECS dashboard provides metrics and logs, and additional insights can be obtained from CloudWatch Container Insights (at the time of writing still not enabled for Fargate tasks).

So, what is the issue with this setup?

The main issue here is that there is no separation of application, networking and security duties. Everything is deployed to a single account, in one VPC. We want to control incoming and outgoing traffic without polluting our application environment. Enterprises have strict standards and separation-of-duties requirements, where different teams are responsible for security, networking and application development.

We need to further enhance our environment to make it more enterprise ready.

Step 5: Create internal Network Load Balancer

We will move the Application Load Balancer to a new VPC. But to do that, we first need another load balancer connected to the ECS service. For that we will use an internal Network Load Balancer placed in the private subnets of our VPC.

Nothing big changed, except that the internet-facing Application Load Balancer now has a target group configured with the IP addresses of the Network Load Balancer. The NLB places one ENI (Elastic Network Interface) in each private subnet, and we can see the private IP addresses of these ENIs. These private addresses are registered as targets for the ALB. It is important that this VPC uses CIDR ranges from 10.0.0.0/8, 172.16.0.0/12 or 192.168.0.0/16 (RFC 1918), or 100.64.0.0/10 (RFC 6598); the ALB will not accept IP targets from publicly routable ranges.
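A hedged sketch of that wiring with boto3: create the internal NLB in the private subnets, look up the private IPs of its ENIs, and register them in the ALB's IP-type target group. The ENI lookup by description relies on how ELB names its interfaces, which is an assumption here; all IDs are placeholders.

```python
# Sketch of Step 5: internal NLB in front of ECS, with its ENI private IPs registered
# as IP targets of the internet-facing ALB (placeholder IDs throughout).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

nlb = elbv2.create_load_balancer(Name="binance-nlb",
                                 Subnets=["subnet-priv-a", "subnet-priv-b"],
                                 Scheme="internal", Type="network")["LoadBalancers"][0]

# NLB ENIs carry a description like "ELB net/<name>/<id>"; grab their private IPs.
nlb_id = nlb["LoadBalancerArn"].split("/", 1)[1]        # e.g. "net/binance-nlb/abc123"
enis = ec2.describe_network_interfaces(
    Filters=[{"Name": "description", "Values": [f"ELB {nlb_id}"]}])["NetworkInterfaces"]
nlb_ips = [eni["PrivateIpAddress"] for eni in enis]

# Register those private IPs into the ALB's target group (TargetType must be "ip").
alb_tg = elbv2.create_target_group(Name="nlb-ips", Protocol="HTTP", Port=80,
                                   VpcId="vpc-0123", TargetType="ip"
                                   )["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=alb_tg,
                       Targets=[{"Id": ip, "Port": 80} for ip in nlb_ips])
```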

Step 6: Make new VPCs and link them with Transit Gateway

Now comes the big change to our environment. We create two more VPCs (an Ingress VPC and an Egress VPC) and link them to the Container VPC with a Transit Gateway. Incoming internet traffic will be routed through the Ingress VPC to the Container VPC over the Transit Gateway, and outgoing traffic (to the Binance exchange) will go through the Egress VPC.

For simplicity, I haven't drawn the Ingress and Egress VPCs in the same detail as the Container VPC. Both have 2 public and 2 private subnets in two different availability zones. The Application Load Balancer requires a minimum of 2 AZs anyway, and the NAT Gateways are deployed to 2 different subnets in the Egress VPC for high availability reasons.

Note a few things here. We moved the NAT Gateways from the Container VPC to the Egress VPC, and we also moved the Application Load Balancer from the Container VPC to the Ingress VPC. The Container VPC no longer has an Internet Gateway attached. Because of that, we need to create a few VPC endpoints so that ECS can still work: endpoints for ECR to fetch images, for the ECS service, for S3 (which ECR uses to store image layers), and finally a logs endpoint for CloudWatch. Without these endpoints our containers won't work, as we removed the Internet Gateway and NAT Gateway from the VPC.
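A hedged boto3 sketch of those endpoints; the region, subnet, route table and security group IDs are placeholders. ECR needs both its "api" and "dkr" interface endpoints, and S3 is a gateway endpoint attached to the private route tables:

```python
# Sketch of the VPC endpoints the Container VPC needs once it has no internet path.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id, subnets, sg = "vpc-container", ["subnet-priv-a", "subnet-priv-b"], "sg-endpoints"

# Interface endpoints: ECR control plane, ECR Docker registry, ECS and CloudWatch Logs.
for service in ("com.amazonaws.us-east-1.ecr.api",
                "com.amazonaws.us-east-1.ecr.dkr",
                "com.amazonaws.us-east-1.ecs",
                "com.amazonaws.us-east-1.logs"):
    ec2.create_vpc_endpoint(VpcId=vpc_id, ServiceName=service,
                            VpcEndpointType="Interface", SubnetIds=subnets,
                            SecurityGroupIds=[sg], PrivateDnsEnabled=True)

# Gateway endpoint for S3 (image layers are pulled from S3), attached to the route tables.
ec2.create_vpc_endpoint(VpcId=vpc_id, ServiceName="com.amazonaws.us-east-1.s3",
                        VpcEndpointType="Gateway",
                        RouteTableIds=["rtb-priv-a", "rtb-priv-b"])
```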

Another thing to understand is that the Transit Gateway places one ENI in each subnet to which it is attached. We attach our Transit Gateway to the private subnets of the Container, Egress and Ingress VPCs.

The Transit Gateway has one route table with all attachments associated to it, plus one additional static route that sends all traffic destined for the outside world (0.0.0.0/0) to the Egress VPC, as shown in the next picture:

Just attaching the Transit Gateway to the VPCs is not enough. We also need to modify the route tables in each VPC to tell them how to route traffic towards the Transit Gateway.

The Container VPC has a route table that sends all 0.0.0.0/0 traffic to the Transit Gateway.

The Ingress VPC also has a route table saying that all traffic for 10.0.0.0/16, the CIDR range of the Container VPC, needs to go to the Transit Gateway. The Transit Gateway's own route table then knows how to pass that traffic on to the Container VPC attachment.
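A hedged boto3 sketch of the Transit Gateway wiring described above; all IDs are placeholders, and in practice you would wait for the gateway and its attachments to become available before creating routes:

```python
# Sketch of Step 6: Transit Gateway, one attachment per VPC, a static default route
# to the Egress VPC, and VPC route table entries pointing at the Transit Gateway.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

tgw = ec2.create_transit_gateway(Description="enterprise-tgw")["TransitGateway"]
tgw_id = tgw["TransitGatewayId"]
tgw_rt_id = tgw["Options"]["AssociationDefaultRouteTableId"]

# Attach the Transit Gateway to the private subnets of each VPC (waiters omitted).
attachments = {}
for name, vpc_id, subnets in [
    ("container", "vpc-container", ["subnet-cont-priv-a", "subnet-cont-priv-b"]),
    ("ingress",   "vpc-ingress",   ["subnet-ing-priv-a",  "subnet-ing-priv-b"]),
    ("egress",    "vpc-egress",    ["subnet-egr-priv-a",  "subnet-egr-priv-b"]),
]:
    att = ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id, VpcId=vpc_id, SubnetIds=subnets)
    attachments[name] = att["TransitGatewayVpcAttachment"]["TransitGatewayAttachmentId"]

# Static default route on the TGW: anything for the outside world goes to the Egress VPC.
ec2.create_transit_gateway_route(DestinationCidrBlock="0.0.0.0/0",
                                 TransitGatewayRouteTableId=tgw_rt_id,
                                 TransitGatewayAttachmentId=attachments["egress"])

# Container VPC: default route to the TGW. Ingress VPC: route to the Container VPC CIDR.
ec2.create_route(RouteTableId="rtb-container-priv", DestinationCidrBlock="0.0.0.0/0",
                 TransitGatewayId=tgw_id)
ec2.create_route(RouteTableId="rtb-ingress-priv", DestinationCidrBlock="10.0.0.0/16",
                 TransitGatewayId=tgw_id)
```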

Visually, this is how the traffic flows. When we call the GET REST API from our browser, the red line shows how that call travels to the containers. The yellow line shows how the call from inside the containers goes out to the Binance exchange.

Step 7: Introduce Network Firewall

So far we have only made a clean separation of the ingress, application and egress components. Application development teams get access only to the Container VPC, where they can deploy new images, check logs and test. But the networking and security teams still don't have all the tools they need to control ingress and egress traffic.

First, we introduce AWS Network Firewall, a fully managed firewall service that we will place in the Egress VPC.

We create a new set of subnets to hold the Network Firewall endpoints. We also need to modify the route table for the subnets where the Transit Gateway ENIs sit, telling the router that all outgoing (0.0.0.0/0) traffic must be directed to the Network Firewall endpoint.

A new route table is also created for the new subnets, saying that after the traffic has passed through the Network Firewall it goes on to the NAT Gateway; that is, of course, if the traffic hasn't been blocked (dropped) by the firewall.

The route table for the public subnets of the Egress VPC stays unchanged and simply sends all outgoing traffic to the Internet Gateway.
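A hedged boto3 sketch of the firewall itself and the route changes. The policy here simply forwards everything to the stateful engine and references the rule group discussed in the next step; the IDs, ARNs and the per-AZ endpoint lookup are illustrative assumptions.

```python
# Sketch of Step 7: AWS Network Firewall in the Egress VPC plus the route changes.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
nfw = boto3.client("network-firewall", region_name="us-east-1")

# Firewall policy: send all traffic to the stateful engine; the stateful rule group
# (see the next step) is referenced here once it exists (placeholder ARN).
policy_arn = nfw.create_firewall_policy(
    FirewallPolicyName="egress-policy",
    FirewallPolicy={
        "StatelessDefaultActions": ["aws:forward_to_sfe"],
        "StatelessFragmentDefaultActions": ["aws:forward_to_sfe"],
        "StatefulRuleGroupReferences": [
            {"ResourceArn": "arn:aws:network-firewall:...:stateful-rulegroup/egress-allow-list"}],
    })["FirewallPolicyResponse"]["FirewallPolicyArn"]

# Firewall endpoints land in the dedicated firewall subnets of the Egress VPC.
nfw.create_firewall(FirewallName="egress-firewall", FirewallPolicyArn=policy_arn,
                    VpcId="vpc-egress",
                    SubnetMappings=[{"SubnetId": "subnet-fw-a"}, {"SubnetId": "subnet-fw-b"}])

# Once the firewall is ready, describe_firewall exposes one VPC endpoint per AZ.
sync = nfw.describe_firewall(FirewallName="egress-firewall")["FirewallStatus"]["SyncStates"]
fw_endpoint_a = sync["us-east-1a"]["Attachment"]["EndpointId"]   # e.g. "vpce-..."

# TGW-attachment subnets: all outgoing traffic goes to the firewall endpoint first;
# the firewall subnets then route it on to the NAT Gateway.
ec2.create_route(RouteTableId="rtb-egress-tgw-a", DestinationCidrBlock="0.0.0.0/0",
                 VpcEndpointId=fw_endpoint_a)
ec2.create_route(RouteTableId="rtb-egress-fw-a", DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId="nat-0123456789abcdef0")
```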

The green line now shows the path the traffic takes.

So, let's do a test. I don't actually know the Binance API's IP address, as from my container I am calling a URL (https://api.binance.com/….), but I can check the CloudWatch logs for the Network Firewall.

I can see that the Network Firewall is recording a call to destination IP 99.86.230.137 on port 443. I will now create a firewall rule for that address:

Notice that the Action is PASS. As long as it stays that way, our application works fine because we are able to reach the Binance IP. But when I modify this rule and change the Action to DROP, the output I see is this:

This is the expected behavior. The networking team now has control and can allow applications to reach only those IP addresses that are meant to be used. Nothing more than that.
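For reference, the kind of stateful rule shown above could be created along these lines; a hedged sketch where the destination IP is the one observed in the logs, and switching PASS to DROP reproduces the blocking behaviour described above.

```python
# Sketch of a stateful rule group allowing (PASS) TCP/443 traffic to the observed
# Binance IP; change "PASS" to "DROP" to reproduce the blocked behaviour.
import boto3

nfw = boto3.client("network-firewall", region_name="us-east-1")

nfw.create_rule_group(
    RuleGroupName="egress-allow-list",
    Type="STATEFUL",
    Capacity=10,
    RuleGroup={"RulesSource": {"StatefulRules": [{
        "Action": "PASS",                       # flip to "DROP" to block the call
        "Header": {"Protocol": "TCP",
                   "Source": "ANY", "SourcePort": "ANY",
                   "Direction": "FORWARD",
                   "Destination": "99.86.230.137", "DestinationPort": "443"},
        "RuleOptions": [{"Keyword": "sid", "Settings": ["1"]}],
    }]}},
)
```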

Step 8: Create API Gateway and VPC Link

What about ingress traffic? Can we somehow control it more?

Our entry point is the Application Load Balancer. The ALB has some good features and we want to keep it, but it is missing an important one: API rate limiting, or throttling. With throttling we can protect our applications from a surge of requests, such as those generated during a DDoS attack.

The ALB doesn't have that feature, but luckily Amazon API Gateway does. We can put an API Gateway in front of the ALB. To make sure the ALB is not publicly visible, we replace our internet-facing ALB with an internal one. Then we use the VPC Link feature of API Gateway to connect it to the internal ALB.

Now we can customise our throttling levels as we wish.
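A hedged sketch of this wiring using an HTTP API: a VPC Link into the Ingress VPC private subnets, a proxy integration pointing at the internal ALB's listener, and throttling limits on the default stage. The ARNs, IDs and limits are illustrative placeholders.

```python
# Sketch of Step 8: HTTP API in front of the internal ALB via a VPC Link, with throttling.
import boto3

apigw = boto3.client("apigatewayv2", region_name="us-east-1")

vpc_link_id = apigw.create_vpc_link(Name="ingress-link",
                                    SubnetIds=["subnet-ing-priv-a", "subnet-ing-priv-b"],
                                    SecurityGroupIds=["sg-vpclink"])["VpcLinkId"]

api_id = apigw.create_api(Name="binance-api", ProtocolType="HTTP")["ApiId"]

# Proxy everything to the internal ALB's HTTP listener through the VPC Link.
integration_id = apigw.create_integration(
    ApiId=api_id, IntegrationType="HTTP_PROXY", IntegrationMethod="ANY",
    IntegrationUri="arn:aws:elasticloadbalancing:...:listener/app/internal-alb/abc/def",
    ConnectionType="VPC_LINK", ConnectionId=vpc_link_id,
    PayloadFormatVersion="1.0")["IntegrationId"]

apigw.create_route(ApiId=api_id, RouteKey="ANY /{proxy+}",
                   Target=f"integrations/{integration_id}")

# Throttling (rate limiting) on the default stage protects the backend from request floods.
apigw.create_stage(ApiId=api_id, StageName="$default", AutoDeploy=True,
                   DefaultRouteSettings={"ThrottlingRateLimit": 100.0,
                                         "ThrottlingBurstLimit": 200})
```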

An additional benefit is that our endpoint is now HTTPS, and we terminate the SSL/TLS connection on the API Gateway.

Step 9: Add WAF

To further control ingress traffic we can attach AWS WAF (Web Application Firewall) to the API Gateway. WAF gives us protection from bots and blocks common attack patterns such as SQL injection or cross-site scripting. You can also write custom rules that filter out specific traffic patterns, or get started quickly with Managed Rules for AWS WAF, pre-configured sets of rules managed by AWS or AWS Marketplace sellers that address issues like the OWASP Top 10 security risks.
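A hedged sketch of a regional web ACL using the AWS managed common rule set. Note that the web ACL has to be associated with a WAF-supported resource (for example a REST API stage or an Application Load Balancer); the resource ARN below is a placeholder.

```python
# Sketch of Step 9: a regional WAF web ACL with the AWS managed common rule set.
import boto3

wafv2 = boto3.client("wafv2", region_name="us-east-1")

acl = wafv2.create_web_acl(
    Name="ingress-waf", Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    Rules=[{
        "Name": "aws-common",
        "Priority": 0,
        "Statement": {"ManagedRuleGroupStatement": {
            "VendorName": "AWS", "Name": "AWSManagedRulesCommonRuleSet"}},
        "OverrideAction": {"None": {}},
        "VisibilityConfig": {"SampledRequestsEnabled": True,
                             "CloudWatchMetricsEnabled": True,
                             "MetricName": "aws-common"},
    }],
    VisibilityConfig={"SampledRequestsEnabled": True,
                      "CloudWatchMetricsEnabled": True,
                      "MetricName": "ingress-waf"},
)["Summary"]

# Associate the web ACL with a supported resource (placeholder ARN).
wafv2.associate_web_acl(WebACLArn=acl["ARN"],
                        ResourceArn="arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod")
```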

Our final enterprise architecture now looks like this:

Bonus step: Split accounts

We segregated the ingress, egress and application areas at the VPC level. Each has its own VPC, and we can set up strict IAM policies with conditions that give users access only to the VPCs they are supposed to reach. Still, all of those VPCs live in the same AWS account.

What we can do relatively easily is put each of those VPCs in a separate AWS account. Usually you would put the Ingress VPC, the Egress VPC and the Transit Gateway in a Networking account, and the Container VPC in a dedicated workload (production) account.

The only modification is that we need to enable Transit Gateway sharing and cross-account attachments. That is very easily done with AWS Resource Access Manager.
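Sharing the Transit Gateway across accounts comes down to a single RAM resource share; a hedged sketch with a placeholder Transit Gateway ARN and account IDs:

```python
# Sketch: share the Transit Gateway from the Networking account with the workload account.
import boto3

ram = boto3.client("ram", region_name="us-east-1")

ram.create_resource_share(
    name="tgw-share",
    resourceArns=["arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-0123456789abcdef0"],
    principals=["222222222222"],           # the workload (production) account ID
    allowExternalPrincipals=False,          # keep sharing within the AWS Organization
)
```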

Pricing

It is important to mention that several VPC components in this architecture cost quite a bit of money.

The most significant impact on your monthly AWS bill comes from the following components (a quick tally follows the list):

  • NAT Gateway, at $0.045 per hour. In our solution we have 2 NAT Gateways in the Egress VPC, which comes to about $65 per month.
  • Network Firewall, at $0.395 per endpoint hour. Our solution has 2 subnets for high availability with a firewall endpoint in each of them, which comes to about $568 per month.
  • VPC endpoints, at $0.01 per hour each. We have 8 VPC endpoints, which sums up to about $58 per month.
  • Transit Gateway, at $0.05 per attachment hour. We have 3 attachments to 3 VPCs, which comes to $108 per month.
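The quick tally of those line items, assuming 720 hours per month and the hourly prices listed above (data transfer and per-GB processing charges are not included):

```python
# Rough monthly cost of the always-on networking components (720 hours/month assumed).
HOURS = 720

components = {
    "NAT Gateway (2x)":                 2 * 0.045,
    "Network Firewall endpoints (2x)":  2 * 0.395,
    "VPC endpoints (8x)":               8 * 0.01,
    "Transit Gateway attachments (3x)": 3 * 0.05,
}

total = 0.0
for name, hourly in components.items():
    monthly = hourly * HOURS
    total += monthly
    print(f"{name:35s} ${monthly:7.2f} / month")
print(f"{'Total':35s} ${total:7.2f} / month")
```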

Network Firewall is definitely the most expensive component in this architecture. Luckily, we have plenty of options here: we can replace the AWS managed firewall with a third-party firewall. Enterprises often already have licenses for firewall solutions that could be reused.
