5 security mistakes that can break your AWS environment

I am an independent AWS consultant providing services through my company 1way2cloud GmbH. If you would like to engage me on your project, feel free to contact me directly at nemanja@1way2cloud.com.

Before the title is taken out of context, let me stress the word *your* in the title. It is highly unlikely that anybody can break the AWS cloud itself. Surely people are trying to do so, but they are up against some of the smartest security and networking engineers in the world, who make sure the bad guys stay outside the gate. This article focuses on *your* environment in AWS, the one you are responsible for – your part of the shared responsibility model.

There are plenty of other mistakes one can make, as well as plenty of controls that can be used to strengthen your AWS environment. These 5 are the ones that don’t require too much effort to implement but can have a really big impact on securing your AWS environments.

By breaking your AWS environment I don’t mean only security breaches, but also exploding costs. At the end of the day, both leaking data and paying $$$$ to AWS are bad and can put your company out of business.

Mistake #1: Not using Guardrails

AWS IAM (Identity and Access Management) is the guardian of your AWS environment. It controls who has access to what in the AWS environment. Unfortunately, AWS IAM is also one of the most complicated AWS services out there.

To better understand what is meant by the “Not using Guardrails” mistake, let’s take a look under the hood at how AWS IAM functions. Whenever you make a request to one of the 9,000+ AWS REST API endpoints (not an exact number, just a rough estimate), that request first goes to the IAM service, which checks whether you are allowed to reach that specific endpoint. The IAM service evaluates all the policies assigned to your user or role, looks at resource policies if they exist, and also looks at the different boundaries set by AWS Organizations or the IAM service itself. If access is allowed, you go through to the AWS service you wanted. If not, you get an “Access Denied” message.

Fig 1: IAM intercepting service requests

The policy evaluation that the IAM service performs is quite complex. You can read more about it in the official AWS documentation on policy evaluation logic. Just imagine how insanely scalable and performant that service must be to evaluate policies on every single API call. And it needs to be fast, to introduce as little overhead as possible. Millions of AWS customers making millions of API calls every second… hats off to the IAM service team for creating an incredible architecture for this service.

Back to our topic. The IAM service evaluates two things: guardrails and grants. Guardrails set the maximum privileges for users, while grants actually give users the rights to perform actions.

Fig 2: AWS IAM guardrails and grants

Everybody who uses AWS is already familiar with the two types of grants: identity-based and resource-based policies. Without them you can’t use any AWS service. We’ll take a brief look at these grants in Mistake #5 below.

Guardrails are something I haven’t seen many customers use that often. I think they are essential for controlling the blast radius of your AWS environments. There are two types of guardrails:

  • Service Control Policies

Service control policies (SCPs) are used to manage permissions in your AWS Organization (I truly hope that everyone who has more than 2 AWS accounts is using the AWS Organizations service). SCPs offer central control over the maximum available permissions for all accounts in your organization and help ensure that your accounts stay within your organization’s access control guidelines. SCPs alone are not sufficient to grant permissions to the accounts in your organization: no permissions are granted by an SCP. The administrator must still attach identity-based or resource-based policies to IAM users or roles, or to the resources in your accounts, to actually grant permissions.

SCPs look just like any other policy you write for your IAM users or roles, except that they are applied at the Organizational Unit level. Let’s take a look at one SCP that I like a lot, one that can prevent users from launching expensive EC2 instances. If you have a Sandbox organizational unit where you allow your users to play around, you want to make sure they don’t spin up expensive EC2 instances. The following SCP allows using only “t2.micro” instances, which are free tier eligible.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireMicroInstanceType",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringNotEquals":{               	
          "ec2:InstanceType":"t2.micro"
        }
      }
    }
  ]
}

So, what SCPs do you need?

This really depends on how restrictive you want to be. If you use AWS Control Tower to create your landing zone, you will get a default set of integrated Control Tower guardrails that come with it.

These are the SCP guardrails that I would use for all of my accounts:

  1. Deny creation of expensive services
    – e.g. EC2 instances from the P, X or Z families, or creation of a private ACM certificate authority
  2. Deny purchasing of reserved instances
    – you don’t want to enter a 3-year upfront commitment by accident
  3. Deny leaving the Organization
    – the Hotel California idea: “You can check-out any time you like, but you can never leave” 🙂 You don’t want your sub-accounts to leave your Organization, because then none of the SCPs would apply to them.
  4. Deny ability to disable Config, CloudTrail and GuardDuty
    – you don’t want to turn off the tools that continuously monitor for malicious activity and unauthorized behavior in your accounts
  5. Deny root access
    – the root user should never be used
  6. Restrict your services to certain regions
    – there are 24 AWS regions. If your applications and data are in e.g. 2 regions, disable the other 22.
  7. Require the use of IMDSv2 for EC2 metadata
    – IMDSv2 is the secure version of the EC2 metadata service that prevents stealing IAM role credentials from EC2
  8. Deny ability to create IAM access keys
    – more about this in Mistake #2 below
  9. Deny ability to make a VPC accessible from the Internet (if it isn’t already)
    – if a VPC is not intended to be Internet-accessible, deny attaching an internet gateway to it
  10. Require MFA to perform certain API actions
    – e.g. deny stopping an EC2 instance unless MFA is present
  11. Require a tag on specified created resources
    – you want all resources to have a project or cost center tag
  12. Deny ability to delete KMS keys
    – if you delete a KMS key, data encrypted under it stays encrypted forever
  13. AI services opt-out
    – AWS may use your data to improve its AI services. You can opt out and not allow AWS to use your data as training data.
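To make one of these concrete, item #3 from the list above (deny leaving the Organization) takes only a single deny statement:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeavingOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

Attach it at the Organization root and no principal in any member account can pull that account out of the Organization.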

These are some of the SCPs that I recommend. However, there are plenty of others that you can create based on your specific security requirements. Just be careful to test your SCPs: if you throw a lot of SCPs at your accounts, you might sometimes wonder why you don’t have access to a specific resource or service. The IAM Policy Simulator is a good tool for testing the effect of SCPs on your account.

  • Permission Boundaries

A situation: you have a developer who is developing something on an EC2 instance. He is a great Java developer but knows nothing about IAM permissions. His EC2 instance needs to access DynamoDB, so he needs to create a role that his EC2 instance will assume to call DynamoDB. How will he do that? Usually what happens is that the developer goes to the IAM console, creates a role with the AdministratorAccess policy and attaches it to his EC2 instance.

That is a problem. If that instance gets compromised, an attacker now has admin access in your account and can do whatever he wants.

As a solution, you could create an SCP that prevents those developers from creating any IAM roles. But that would limit your developers’ productivity, as they would constantly depend on some internal identity/security team to create new IAM roles or extend existing ones.

A better solution is to use permission boundaries, where you can delegate role creation to a user and “bind” him to creating only roles with a predefined set of policies.

The permission boundary setup for our previous example with the EC2 instance and DynamoDB would look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "SetPermissionsBoundary",
      "Effect": "Allow",
      "Action": ["iam:CreateRole"],
      "Resource": "arn:aws:iam::123456789012:role/MyEC2App",
      "Condition": {
        "StringEquals": {
          "iam:PermissionsBoundary": "arn:aws:iam::123456789012:policy/DynamoDB_Boundary"
        }
      }
    }
  ]
}

Here we allowed creation of a role for the EC2 instance only if the predefined DynamoDB boundary is attached to it. Developers who are delegated the right to create the MyEC2App role cannot give it effective permissions beyond the predefined boundary. The EC2 instance cannot, for example, be given permission to access an RDS database, as that is outside the predefined permission boundary, which allows access to DynamoDB only.
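For reference, the DynamoDB_Boundary policy referenced above is just an ordinary managed policy. A minimal sketch could look like the following; the exact actions and resource ARNs are assumptions and depend on what your application actually needs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DynamoDBOnly",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:Query",
        "dynamodb:Scan"
      ],
      "Resource": "arn:aws:dynamodb:*:123456789012:table/*"
    }
  ]
}
```

Whatever policies the developer attaches to MyEC2App, the role’s effective permissions are capped by this boundary.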

Mistake #2: Not using AWS SSO

When someone opens a new AWS account, usually the first step is to create an IAM user with admin access, who should then create all the other IAM roles that can be assumed via SSO. The issue is that these admin users go on and create more IAM users instead. IAM users are “hard-coded” users of AWS: they have access keys that never expire. Whenever you run “aws configure” in your terminal, these access keys end up in plain text in the ~/.aws/credentials file on your local machine, such as:

[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

That is a big risk. If your local machine gets infected by a virus, someone can get hold of those keys and log in to your AWS account. Often these keys are used in code that ends up on GitHub and exposes your account to scanning bots within seconds. Laptops get stolen and employees leave companies; all of these are ways for IAM access keys to get lost and be reused by bad actors later on.

That is why you should avoid creating IAM users at all. I am sure that if AWS had a way to remove IAM users as a concept from the IAM service, they would probably do so.
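One of the SCP guardrails recommended in Mistake #1 was to deny the ability to create IAM access keys; a minimal sketch of such an SCP (the Sid is illustrative) looks like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyIAMUserAndKeyCreation",
      "Effect": "Deny",
      "Action": [
        "iam:CreateUser",
        "iam:CreateAccessKey"
      ],
      "Resource": "*"
    }
  ]
}
```

With this in place, nobody in the member accounts can create new IAM users or long-lived access keys, which forces everyone toward SSO-based access.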

Instead, you should always use AWS SSO to access your environment, both programmatically and through the management console. AWS SSO gives you a central place where you create and remove users. It can be used as an identity store itself, or it can integrate with an external identity store such as Microsoft AD or Azure AD.

Once you configure AWS SSO, you can also use it in your terminal to set up programmatic access to AWS. Type “aws configure sso” and follow the instructions in the terminal. At the end, your ~/.aws/config file will contain a profile like this:

[profile my-dev-profile]
sso_start_url = https://my-sso-portal.awsapps.com/start
sso_region = us-east-1
sso_account_id = 123456789011
sso_role_name = readOnly
region = us-west-2
output = json

No access keys here. Each session has an expiration time; after it expires, you just run “aws sso login” to get a new session token and continue working.

This approach is much safer than using IAM Users.

Mistake #3: Not using AWS Config

AWS Config is a service that maintains a configuration history of your AWS resources and evaluates the configuration against best practices and your internal policies.

AWS Config is a great service that gives you an idea of what resources exist in your accounts and how they are being used. It comes with a set of predefined rules that can help you prevent bad practices. For example, there is a rule “eip-attached” that checks whether all Elastic IP addresses allocated to an AWS account are attached to EC2 instances. If some EIPs are not attached to any instance, you are paying for them to sit idle. You can also write your own AWS Config rules based on your internal security and compliance requirements.
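Enabling a managed rule like eip-attached takes only a few lines of CloudFormation; a minimal sketch, assuming an AWS Config recorder is already running in the account:

```json
{
  "Resources": {
    "EipAttachedRule": {
      "Type": "AWS::Config::ConfigRule",
      "Properties": {
        "ConfigRuleName": "eip-attached",
        "Source": {
          "Owner": "AWS",
          "SourceIdentifier": "EIP_ATTACHED"
        }
      }
    }
  }
}
```

Deployed via StackSets (see Mistake #4), a template like this can roll the same rules out to every account in the Organization.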

If your application needs to comply with some industry or regulatory standard, AWS Config provides conformance packs that perform the checks relevant to that specific standard. For example, if you are in the credit card processing business, then the Operational Best Practices for PCI DSS 3.2.1 conformance pack is the right one for you.

The best practice for using AWS Config is to enable it in all regions that you are using and for all resource types. AWS Config will then monitor changes in all services that are supported in those regions. If you are using AWS Control Tower to create a landing zone in your AWS Organization, then AWS Config comes already enabled.

Customers used to complain that AWS Config is an expensive service, and that was true: it used to be very expensive. In the meantime, AWS has changed the way AWS Config is priced, which has reduced the overall cost of the service. As an example, my account runs AWS Config as part of AWS Control Tower and it costs about $8 per month.

Mistake #4: Not using Account Factory

Creating new accounts in AWS should always be done as part of AWS Organizations. You want your accounts consolidated for billing, and you want a centralized place to write SCPs and apply them to all sub-accounts throughout the Organization.

But a plain new account comes with no restrictions; certain controls and restrictions still need to be applied to it. That is why it is important to use some kind of Account Factory that installs such controls in each newly created account.

If you are using AWS Control Tower, it comes with the Enroll Account feature, which is used to provision new sub-accounts. Accounts created through the Account Factory in AWS Control Tower inherit the guardrails of the parent Organizational Unit, and the associated resources, such as AWS CloudTrail and AWS Config, are created automatically.

There are also third-party vendor tools that you can use for account provisioning. But if you don’t want to use any of them, you can always create your own Account Factory that fits the specific needs of your organisation. All account factories use the CloudFormation StackSets feature, which allows deploying CloudFormation stacks across multiple accounts and regions. In such stacks you can define which rules you want to enforce in each new account, e.g.: enable AWS CloudTrail, enable AWS Config with central logging, require MFA usage, etc.

Mistake #5: Having too much trust

A big mistake is to trust others. Don’t trust anyone. Don’t trust your internal employees, don’t trust external partners/vendors and don’t trust AWS.

This might sound a bit harsh, but the reality is that most problems are unintentional. People make mistakes, and some of those mistakes can be very costly for your company.

Example: Amazon Cognito is a popular authentication service that many startups use to handle mobile app authentication. All users who register for your application are stored in Amazon Cognito. But Cognito is not backed up or replicated across regions; there is no built-in AWS functionality to back up or replicate Cognito user pools, so you need unofficial third-party tools to do so. With one click of a button you can, by mistake, delete your user pool and lose all the users who had previously registered in your app. (I have done that a few times; don’t be like me.)

  • Don’t trust your employees

As described in Mistake #1, use guardrails to limit what your employees can do in AWS. Additionally, don’t be shy about creating new accounts for new workloads, especially if you are using an Account Factory. An AWS account is the highest form of security isolation: if one account is compromised, the others are not. Create budget alarms in each account so that you get notified when an account reaches its cost threshold. Create regular backups in case someone unintentionally deletes data. Use infrastructure as code so you can reproduce environments if some service gets deleted. Enable logging on the services you use and make sure you have alerts on those logs that can detect and/or remediate anomalies.

  • Don’t trust external vendors

You might use SaaS products that require access to your environment. SaaS vendors will send you CloudFormation templates that allow them access to your AWS account. Make sure you inspect those templates in great detail and scope the permissions down to only those that are absolutely necessary.
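In particular, check the cross-account role’s trust policy in such a template. A well-scoped vendor role trusts only the vendor’s account and requires an ExternalId to prevent the confused deputy problem; the account ID and ExternalId below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "unique-id-from-vendor" }
      }
    }
  ]
}
```

If a vendor template asks for a trust policy without an ExternalId condition, or attaches AdministratorAccess to the role, push back before deploying it.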

Be careful with community AMIs, CloudFormation templates, Lambda Layers, SAR applications and public Docker images. Inspect each of them before importing them into your environment; some of them might contain malicious code.

  • Don’t trust AWS

This is just a cheeky way of saying that you should encrypt everything. AWS would say “dance like nobody is watching, encrypt like everyone is”.

Use end-to-end encryption for your applications. That means encryption in transit and encryption at rest. Many AWS services integrate natively with AWS KMS (Key Management Service). Create a customer master key in KMS (or import your own key material) and use that key to encrypt everything in, for example, your databases. You can use key policies to control access to customer master keys (CMKs) in AWS KMS.

You might ask: who am I protecting myself from with database encryption?
Mostly from your internal employees who have access to the databases. If the data are encrypted and only your application has the right to use the “kms:Decrypt” action, then everyone else will see just garbage. You might say that AWS database administrators are also a threat, but I wouldn’t be worried about them. AWS has so many internal controls and so much separation of duties that it is very unlikely a database admin could take your data. They can definitely see your data (after all, they are admins of the underlying DBMS for managed services such as Amazon RDS), but if the data are encrypted, all they see is garbage as well.
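As a sketch, a key policy statement that grants “kms:Decrypt” only to your application’s role might look like this (the role name is a placeholder, and a full key policy also needs the usual key administration statements):

```json
{
  "Sid": "AllowDecryptOnlyToAppRole",
  "Effect": "Allow",
  "Principal": { "AWS": "arn:aws:iam::123456789012:role/MyAppRole" },
  "Action": "kms:Decrypt",
  "Resource": "*"
}
```

Because no other principal appears in a decrypt statement, even users with broad identity-based permissions cannot read the encrypted data.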

Conclusion

Security in general is a complex topic, and in AWS it is even more complicated, as there are so many slippery slopes.

AWS offers you a lot of tools and services to keep your security and costs under control, but sometimes the offering feels overwhelming. The best approach is to have dedicated, well-trained teams for the Identity and Security areas of AWS that make sure the right people have the right access and that the possibility of making mistakes is minimized.

Many AWS accounts out there are not properly protected. Many companies have AWS bills far bigger than they should be. Many say they have things under control, yet many of them are making the mistakes described above.
