AWS ECS is a fully managed container orchestration service that allows organizations to easily deploy, manage, and scale containerized applications. AWS ECS schedules containers across available EC2 instances, or runs them on Fargate, a serverless deployment option for containerized workloads. The process of deploying, managing, and scaling containerized applications via AWS ECS is automated, in compliance with the security policies set by the customer and enforced by the cloud vendor.
Despite the out-of-the-box security configuration delivered by AWS, vulnerabilities and misconfigurations may allow attackers to compromise containerized applications. This can potentially allow attackers to gain access to other cloud systems, exfiltrate critical data, execute malware such as ransomware, or compromise user accounts.
For more information on securing and investigating AWS ECS, read our playbook for best practices.
To mitigate these security risks, below you’ll find nine best practices for securing AWS ECS containerized applications:
1. Understand the AWS Shared Responsibility Model
The first step toward security in the cloud is to carefully understand that security and compliance are a shared responsibility of AWS and its customers. To summarize, customers are responsible for everything “IN” the AWS cloud, whereas AWS is responsible for security “OF” the cloud. For example, customers are responsible for maintaining security of their own data, operating systems, network and firewall configurations, identity and access management, and more. On the other hand, AWS is responsible for securing the overall hardware and global infrastructure. AWS publishes detailed guidelines on the shared responsibility model.
2. Enforce a Zero-Trust Identity and Access Management (IAM) Policy
Zero-trust security design follows the principle of “never trust, always verify”: no user or service is trusted by default. It enforces granular rules that define the scope of access to data and workloads in the cloud, allowing users to perform only the actions authorized within the defined organizational policies. The following recommendations can help improve IAM rule enforcement for AWS ECS workloads:
- Create and scope IAM policies in compliance with the zero-trust model. Rather than granting broad permissions at the Cluster level, scope policies down to the specific AWS ECS Task or service level so that each workload receives only the permissions it needs.
- Automate packaging and deployment to your AWS ECS Clusters through a pipeline, so that end users never require direct access to the infrastructure API.
- Regularly monitor, audit and control how the AWS ECS API is accessed. AWS CloudTrail can be used to monitor API calls and actions performed with ECS IAM Roles.
- Prefer predefined, purpose-built ECS IAM roles over letting containers inherit role assignments.
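As an illustration of task- and service-level scoping, the sketch below shows an IAM policy restricted to a single service in a single cluster. The account ID, region, cluster and service names are hypothetical placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowManageSingleService",
      "Effect": "Allow",
      "Action": [
        "ecs:DescribeServices",
        "ecs:UpdateService"
      ],
      "Resource": "arn:aws:ecs:us-east-1:111122223333:service/example-cluster/example-service"
    }
  ]
}
```

A role carrying only this policy can deploy updates to that one service, but cannot touch other services, clusters, or infrastructure APIs.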
3. Ensure End-to-End Encryption for Secure Network Channels
Consider end-to-end encryption of mission-critical workloads running in AWS ECS environments. Encryption prevents unauthorized entities from being able to view or modify confidential information in transit. Some things to consider include:
- ECS supports end-to-end TLS encryption without terminating TLS at the load balancer, for example by passing TCP traffic through an AWS Network Load Balancer; with an Application Load Balancer, traffic can be re-encrypted between the load balancer and the targets. It’s important to note, however, that this approach can add complexity in the event you need to investigate a potential compromise.
- Configure your ECS Tasks to use appropriate security groups which limit connectivity to and from the task resources to only the minimum required. Each security group acts as a virtual firewall for the resource it is applied to.
- For workloads that require strict isolation, use separate AWS Virtual Private Clouds (VPCs).
- Monitor container network flows by using VPC Flow Logs. Alarms can be set to alert you of any unexpected traffic to or from your ECS workloads. In practice, this may only be useful for containers active for longer durations.
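One way to express such a minimal security group is in CloudFormation. The fragment below is a sketch with hypothetical resource names; it admits only HTTPS traffic, and only from the load balancer’s own security group:

```json
{
  "WebTaskSecurityGroup": {
    "Type": "AWS::EC2::SecurityGroup",
    "Properties": {
      "GroupDescription": "Allow only HTTPS from the load balancer to ECS web tasks",
      "VpcId": { "Ref": "AppVpc" },
      "SecurityGroupIngress": [
        {
          "IpProtocol": "tcp",
          "FromPort": 443,
          "ToPort": 443,
          "SourceSecurityGroupId": { "Ref": "LoadBalancerSecurityGroup" }
        }
      ]
    }
  }
}
```

Because the rule references the load balancer’s security group rather than a CIDR range, it keeps working as load balancer nodes scale and change IP addresses.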
4. Inject Secrets into Containers at Runtime
Follow a zero-trust security policy when managing secrets for AWS containers. Secrets include login credentials, certificates and API keys used by applications and services to access other system components. The following guidelines can help in managing and using secrets:
- Use AWS Secrets Manager for secure storage of credentials instead of baking them into the container image. The tool can be used to encrypt secrets, rotate keys and credentials, and share them between multiple accounts.
- Enforce IAM Task Role assignments such that any leaked information is not available for unauthorized users and services.
- Use the AWS Security Token Service to sign API requests using temporary credentials.
- A temporary sidecar container can be used to store secrets, reducing the risk of leaking environment variables from the application container.
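For example, an ECS task definition can reference a Secrets Manager secret by ARN so that the value is injected as an environment variable at runtime rather than baked into the image. The fragment below is a sketch with placeholder names and ARNs; note that the task’s execution role must be allowed to call secretsmanager:GetSecretValue on the referenced secret:

```json
{
  "containerDefinitions": [
    {
      "name": "web-app",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/web-app:1.0",
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:111122223333:secret:prod/db-password"
        }
      ]
    }
  ]
}
```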
5. Regulatory Compliance as the Bare Minimum
Follow the guidelines of the applicable security regulations in your country and industry, but treat them as a bare minimum and not an end-goal of your ECS security plan. AWS helps deploy compliance-focused baseline environments for most of the popular regulations, such as HIPAA, PCI DSS and GDPR. Beyond this, it’s important to consider the following (some already mentioned):
- Fully understand and enforce the AWS shared responsibility model for security.
- Use strong end-to-end encryption where possible.
- Enforce strong IAM policy controls.
- Track, monitor and audit network traffic and security performance using built-in AWS tools.
- Have the processes and technology in place to investigate and respond to incidents that may impact your ECS workloads.
6. Gather the Right Data
Configure your container environments to communicate relevant security data and log data to the built-in AWS monitoring tools such as CloudWatch and CloudTrail. These tools can be used to collect data insights at the hardware, service and cluster level. However, this data alone may not suffice for an in-depth investigation of ECS containers.
In this context, the most useful data sources are:
- System logs and files from within the container
- The container’s running processes and active network connections
- The container host’s system and container runtime logs (if accessible)
- The container host’s memory (if accessible)
- AWS VPC Flow Logs for the VPC the container is attached to
You should be able to collect, correlate and enrich these data sources to effectively investigate a container potentially involved in an incident, and then collect the same data from any other containers that are likely also affected or are connected to containers operating in a suspicious manner (e.g., they are part of the same application, like a web server and database server). However, this data is not natively available through the built-in tools. To gain more visibility into ECS containers, third-party incident response and threat intelligence capabilities prove vital to discover, monitor and secure all container assets.
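To illustrate how VPC Flow Logs can feed an investigation, here is a minimal Python sketch that parses default-format flow log records and flags accepted traffic to unexpected destination ports. The sample records and the port allow-list are illustrative assumptions:

```python
# Field order follows the default VPC Flow Logs record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_flow_record(line):
    """Split one default-format flow log line into a field dict."""
    values = line.split()
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, values))

def unexpected_flows(lines, allowed_ports=frozenset({"443"})):
    """Return ACCEPTed flows whose destination port is not allow-listed."""
    flagged = []
    for line in lines:
        rec = parse_flow_record(line)
        if rec["action"] == "ACCEPT" and rec["dstport"] not in allowed_ports:
            flagged.append(rec)
    return flagged

# Two sample records: normal HTTPS traffic, and an unexpected outbound
# connection to port 4444 on an external address.
sample = [
    "2 111122223333 eni-0a1b2c3d 10.0.1.5 10.0.2.9 49152 443 6 10 840 1620000000 1620000060 ACCEPT OK",
    "2 111122223333 eni-0a1b2c3d 10.0.1.5 203.0.113.7 49153 4444 6 5 400 1620000000 1620000060 ACCEPT OK",
]
print([r["dstport"] for r in unexpected_flows(sample)])  # prints ['4444']
```

In practice you would stream records from CloudWatch Logs or S3 rather than a list, and enrich each flagged flow with the ECS task and container it maps to.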
7. Best-Practices for AWS Fargate
AWS Fargate is a serverless service that provides fully managed, abstracted infrastructure for containerized applications run on AWS ECS. The AWS Fargate service handles tasks such as provisioning, management and security of the underlying container infrastructure, while users simply specify the resource requirements for their containers. The following security guidelines should be followed when you leverage the AWS Fargate service:
- Data workloads are not necessarily encrypted with keys you control by default. Use the AWS Key Management Service (KMS) to encrypt ephemeral container storage.
- Ensure that the runtime privileges and capabilities adhere to your organizational security policies as per the Zero or Limited trust model.
- Since AWS Fargate is a managed service, it offers limited visibility into and control over the underlying infrastructure. The service supports compliance with ISO, PCI and SOC 1, 2 and 3, and meets HIPAA eligibility criteria; however, additional measures such as end-to-end encryption may be required to fully meet compliance requirements. You should use third-party tools to allow for data collection from running containers should you need to investigate one.
8. Construct Secure Container Images
Container images consist of multiple layers, each defining the configurations, dependencies and libraries required to run a containerized application. Security of container images should be seen as your first line of defense against cyber-attacks or infringements facing your containerized applications. Constructing secure container images is critical to enforce container bounds and prevent adversaries from accessing the Host OS and Kernel. The following ECS container image security best practices should be considered:
- Use minimal or distroless images. These images only contain runtime dependencies and the application code. You can also use tools such as Dive to identify and remove extraneous binaries from the container images.
- Scan container images for vulnerabilities. Identify the affected layers and dependencies. Trigger automated actions such as access limitation and blocking of vulnerable images to sensitive data workloads.
- If using EC2 as the underlying infrastructure, enforce upper bounds on EC2 instances such as CPU and Memory resources that can be configured for containers within a predefined Task. This reduces the risk of overconsumption or misuse of resources in the event of a network infringement.
- Configure container images with immutable tags, and avoid running containers as privileged, which grants them all Linux capabilities applicable to the host. In fact, it is recommended to remove all unnecessary privileges and capabilities that may have been automatically inherited when the images were constructed.
- Finally, if you are storing your images in the AWS Elastic Container Registry (ECR), encrypt them using a KMS customer managed key (CMK).
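A multi-stage build is a common way to produce a minimal image. The sketch below, in which the Go application and paths are hypothetical, compiles in a full toolchain image and copies only the binary into a distroless base that runs as a non-root user:

```dockerfile
# Build stage: compile the (hypothetical) application with the full toolchain.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app ./cmd/server

# Final stage: the distroless base contains only the binary's runtime needs,
# and the nonroot variant avoids running the container as root.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
USER nonroot
ENTRYPOINT ["/app"]
```

The final image contains no shell or package manager, which shrinks the attack surface and frustrates many post-exploitation techniques.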
9. Ensure Incident Readiness for Containers
It’s often said that incidents are ‘when, not if’ so preparing to investigate and respond is key, regardless of how robust your container security is. When investigating an environment which utilizes containers, data collection needs to happen quickly before automatic cluster scaling destroys valuable evidence. Additionally, you may have thousands of containers so the collection, processing, and enrichment of container data needs to be automated. Some of the key things to consider are:
- Automated Data Collection: Cloud scaling allows you to scale your investigation resources too, not just your applications. Collect and process container data in parallel triggered by an analyst or other detection or orchestration tool (e.g., SOAR) to get valuable information to analysts more quickly and reduce your time to incident containment and resolution.
- Container Asset Discovery: Since containers operate as virtualized environments within a shared host OS kernel, it’s often challenging to keep track of asset workloads running across all containerized machines in a scaled environment. An agentless discovery process that can efficiently discover and track container assets can be used to enforce appropriate security protocols across all container apps.
- Dedicated Investigation Tools and Environment: If you need to figure out what tools to use and where to store incident data when an incident happens, you’ll waste valuable time where more damage can be done and key evidence could be lost. Have a dedicated toolset or environment which automates as much of the investigative process as possible in place ahead of time so that you are ready to respond at a moment’s notice.
- Ability to Quickly Isolate: In the event a container is compromised, it’s critical that you have the ability to quickly isolate it in order to stop the active attack and prevent further spread and damage. In some cases, isolation can be a good first step to take following initial detection. This will allow you to perform a more thorough investigation in the background and ensure proper remediation and containment steps are taken after you have a better understanding of the true scope and impact of the incident.
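As a sketch of what rapid isolation can look like, the Python below builds the parameters for the EC2 ModifyNetworkInterfaceAttribute call that swaps every security group on a task’s elastic network interface (ENI) for a single deny-all quarantine group. The ENI and security group IDs are hypothetical placeholders, and the boto3 call itself is shown only as a comment:

```python
def build_quarantine_request(eni_id, quarantine_sg_id):
    """Build the parameters for ec2:ModifyNetworkInterfaceAttribute,
    replacing all of the ENI's existing groups with the quarantine group."""
    return {"NetworkInterfaceId": eni_id, "Groups": [quarantine_sg_id]}

# Hypothetical IDs for a compromised task's ENI and a pre-created
# quarantine security group with no inbound or outbound rules.
params = build_quarantine_request("eni-0a1b2c3d4e5f67890", "sg-0quarantine123456")
print(params)

# With boto3 (not executed here), applying the isolation would look like:
#   import boto3
#   boto3.client("ec2").modify_network_interface_attribute(**params)
```

Pre-creating the quarantine security group, and scripting this swap ahead of time, means isolation takes seconds during an incident rather than requiring ad-hoc console work.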
In summary, it’s important to understand that the nature of AWS ECS as a managed service, and the potential scale of an application running across multiple containerized environments, make it challenging to effectively capture data and investigate incidents. In addition to following the industry-proven AWS ECS best practices discussed in this blog, a strong focus on automation is key to achieving a secure, high-performance and defendable container environment.
About Cado Security
Cado Security is the cloud investigation and response automation company. The Cado platform leverages the scale, speed and automation of the cloud to effortlessly deliver forensic-level detail into cloud, container and serverless environments. Only Cado empowers security teams to investigate and respond at cloud speed.