Container security is the practice of implementing mechanisms and processes to secure containerized applications and workloads. In today's cloud environments, it's critical to have maximum visibility into where containers are hosted, which containers are running or stopped, which container hosts fall out of compliance with CIS benchmarks, and where vulnerabilities exist.
What is a container? Google Cloud defines containers as lightweight packages of application code – combined with dependencies such as specific versions of programming language runtimes and libraries – required to run your software services. A container orchestration platform – like Kubernetes – has a big job to do, automating provisioning as well as the starting, stopping, and maintenance of containers.
Because container orchestration is abstracted and automated via tools like Kubernetes, it's well suited for integrating into the continuous integration/continuous deployment (CI/CD) lifecycle, and is a key component of adopting DevOps practices. This process is an efficient and reliable way of delivering new applications or updates to code. As such, guardrails are necessary to ensure things don't go haywire.
As with anything that drives increased speed, we do run the risk of sacrificing security and control in the process. This means security teams must work hand in hand with counterparts in the development organization to ensure proper guardrails and checks are in place.
Container security should be implemented as early in the CI/CD pipeline as possible, to expose application risks sooner and minimize friction in the development process.
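One way to shift security left is to add a gate early in the pipeline that stops a build when an image scan reports serious findings. The sketch below is a minimal illustration of that idea; the findings structure and severity names are assumptions to adapt to whatever scanner your pipeline actually uses.

```python
# Sketch of a CI "gate" that fails a build early when an image scan
# reports serious findings. The findings list and severity labels are
# hypothetical; map them to your scanner's real output format.

FAIL_SEVERITIES = {"HIGH", "CRITICAL"}

def should_block_build(findings):
    """Return True if any finding is severe enough to stop the pipeline."""
    return any(f.get("severity", "").upper() in FAIL_SEVERITIES for f in findings)

if __name__ == "__main__":
    findings = [
        {"id": "CVE-2024-0001", "severity": "LOW"},
        {"id": "CVE-2024-0002", "severity": "CRITICAL"},
    ]
    if should_block_build(findings):
        print("Build blocked: critical vulnerabilities found")
```

Failing fast here is the point: a developer learns about a risky base image minutes after pushing code, not weeks later in production.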
Container security is important because of the technology’s operational complexities within cloud workload environments. Security also matters because containers are the foundation upon which so many of today’s public-internet-facing applications are built, which leaves them exposed to many potential risks. Those risks are manageable, and the practices covered below can help mitigate them.
Now that we know a bit more about container operations and why they’re so popular these days, let’s take a look at how teams might go about putting in place some best practices for securing those environments.
It's important to implement security not just for images at rest and at scan time, but also at runtime, when containers are operational. Issues can and should be fixed post-deployment; security is a continuous process that cannot be fully assured during development.
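A simple runtime check is to flag containers running with risky settings, such as privileged mode or a root user. The sketch below works over data shaped loosely like a simplified subset of `docker inspect` output; treat the field names as an assumption rather than a complete schema.

```python
# Sketch: flag risky runtime settings on running containers. The input
# mimics a simplified subset of `docker inspect` output; the field
# names here are illustrative, not a complete or authoritative schema.

def risky_containers(inspections):
    """Yield names of containers running privileged or as root."""
    for c in inspections:
        host_config = c.get("HostConfig", {})
        user = c.get("Config", {}).get("User", "")
        # An empty User typically means the image's default (often root).
        if host_config.get("Privileged") or user in ("", "root", "0"):
            yield c.get("Name", "<unknown>")
```

In practice a runtime security agent would stream this kind of data continuously, but the core logic – compare observed state against policy – looks much the same.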
Vulnerabilities can pop up at any time – or could have been lying undiscovered for months – in a CI/CD environment. When deployments and updates ship regularly, it’s critical for a security team to be able to spot every vulnerability it can. Regular scanning to identify vulnerabilities is imperative to a container security program. Container image scanning will typically reference a vulnerability-and-exploit database containing a list of publicly known vulnerabilities.
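Once a scan runs, teams usually need a quick severity breakdown to triage the results. The sketch below summarizes a scanner's JSON report by severity; the report shape loosely follows the kind of structure Trivy emits (a list of results, each carrying a vulnerabilities list), but treat that shape as an assumption for your own scanner.

```python
from collections import Counter

# Sketch: summarize a scanner's JSON findings by severity for triage.
# The report structure (Results -> Vulnerabilities -> Severity) is
# modeled loosely on Trivy-style output and is an assumption here.

def severity_counts(report):
    """Return {severity: count} across all results in a scan report."""
    counts = Counter()
    for result in report.get("Results", []):
        # A result with no findings may carry None instead of a list.
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln.get("Severity", "UNKNOWN")] += 1
    return dict(counts)
```

A summary like this feeds naturally into dashboards or the build gate described earlier.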
Infrastructure is becoming more compartmentalized, more ephemeral, and more dependent on code as opposed to physical machines. That’s why monitoring containers for vulnerabilities plays such a critical role in system health. Even if everything passes pre-deployment tests, issues can still surface in post-deployment testing. That’s why it’s so imperative to use a solution that features a consistent set of security checks throughout the CI/CD pipeline. This enables teams to correct misconfigurations and policy violations without delaying deployment.
Deciding to launch operations into the virtual world is a turning point in a DevOps organization, and containers bring many benefits. If a business is going to invest in that infrastructure, it’s a good idea to secure it all the way up to the application layer and set standards for how containers will be provisioned. Monitoring and tracking critical container events in real time can help to optimize application performance. To top off the process, it’s a best practice to leverage real-time performance monitoring and analytics – such as CPU, memory, and network usage – for all running containers.
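The monitoring idea above can be sketched as a simple threshold check over per-container resource metrics. The metric names and limits below are illustrative assumptions; real deployments would pull this data from `docker stats` or a metrics agent.

```python
# Sketch: raise alerts when per-container resource usage crosses
# thresholds. The stats layout mirrors the kind of data `docker stats`
# or a metrics agent exposes; names and limits are illustrative only.

THRESHOLDS = {"cpu_percent": 80.0, "mem_percent": 90.0}

def over_threshold(stats):
    """Return (container, metric) pairs that exceed configured limits."""
    alerts = []
    for name, metrics in stats.items():
        for metric, limit in THRESHOLDS.items():
            if metrics.get(metric, 0.0) > limit:
                alerts.append((name, metric))
    return alerts
```

A real system would add windowing and alert de-duplication, but the principle – compare live usage against declared limits – is the same.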
Like other cloud resources, containers and the processes that run within them are assigned roles and permissions that need to be tracked and managed with an identity and access management (IAM) plan, preferably in accordance with least-privilege access (LPA). After a DevSecOps organization has stood up a new multi-cloud container environment, access should be restricted to only those who need it. IAM is key to making cloud and container services secure and compliant. It can also help to institute a rational and sustainable approach for addressing perimeter fluidity and the substantial challenges of governing cloud environments at scale.
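Least privilege can be audited mechanically: compare what each identity is granted against the minimal set its role actually needs. The role names and permission strings below are hypothetical placeholders, not a real cloud IAM schema.

```python
# Sketch of a least-privilege audit: surface permissions granted beyond
# what each role needs. Role names and permission strings are
# hypothetical placeholders, not any real provider's IAM schema.

NEEDED = {
    "ci-runner": {"image:push", "image:pull"},
    "deployer": {"container:run", "container:stop"},
}

def excess_permissions(grants):
    """Map role -> permissions granted beyond what the role needs."""
    return {
        role: granted - NEEDED.get(role, set())
        for role, granted in grants.items()
        if granted - NEEDED.get(role, set())
    }
```

Running a check like this on a schedule catches privilege creep before it becomes an incident.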
Customers have their choice of vendors when shopping for cloud service providers (CSPs), and there are a variety of container runtimes and container orchestration platforms to choose from. But it's important to choose one that is properly supported by your underlying cloud platform, while keeping in mind that CSPs have multiple offerings for managing containers.
Docker first came onto the market in 2013, and provides the ability to package and run an application in a container. The platform enables sharing of containers while a user works, and ensures that everyone is seeing and working with the same container and functionality. It helps manage container lifecycle through development, distribution, testing, and deployment.
Kubernetes is an open-source, container-orchestration platform for managing workloads and services. Kubernetes takes charge of container deployment and also manages the software-defined networking layer that allows containers to speak to one another. The platform is portable and facilitates declarative configuration and automation. Google open-sourced the Kubernetes project in 2014.
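The declarative model mentioned above boils down to a reconcile loop: compare desired state to observed state and compute the difference. The toy sketch below illustrates that idea for replica counts; it is an illustration of the concept, not the actual Kubernetes controller implementation.

```python
# Toy illustration of the declarative "reconcile" idea behind
# Kubernetes: compare desired replica counts to observed ones and
# compute what to start or stop. Not the real controller code.

def reconcile(desired, observed):
    """Return {app: delta} of replicas to start (+) or stop (-)."""
    apps = set(desired) | set(observed)
    return {
        app: desired.get(app, 0) - observed.get(app, 0)
        for app in apps
        if desired.get(app, 0) != observed.get(app, 0)
    }
```

The platform runs loops like this continuously, which is why operators describe the end state and let Kubernetes work out the steps.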
Google Kubernetes Engine (GKE) – launched in 2015 as Google Container Engine and renamed in 2017 – is a cluster manager and orchestration system that runs Docker containers. It works with on-prem, hybrid, or public-cloud infrastructure, and can manage clusters of virtual machines for rapid deployment. GKE schedules containers according to their declared configuration and actively manages applications.
Amazon Elastic Container Service (ECS) was launched in 2014, and is designed to integrate with the rest of the AWS platform to run container workloads in the cloud and on-prem. ECS provides consistent tooling, management, workload scheduling, and monitoring across environments. Users can also automatically scale apps across Availability Zones, as well as place containers at will, depending on their resource needs and availability.
We’ve touched on it a bit, but these cloud container environments can be complex behind the scenes. Providers have prioritized ease of use in recent years, so most of the complexity is indeed relegated to the background. That doesn’t mean, however, that users need not be aware of the challenges of securing these environments. In other words, you have to know how it works to know how to fix it. Let’s take a look at some common container security challenges.