What Is a Container?
A container is a lightweight, isolated environment that packages an application together with everything it needs to run: libraries, configuration files, and the runtime environment. The key insight is that a container shares the host operating system's kernel rather than running a full OS of its own. This makes containers dramatically smaller and faster than virtual machines.
The core problem containers solve is the classic "it works on my machine" problem. Before containers, an application that worked perfectly in a developer's local environment might fail in production because of subtle differences in library versions, OS configuration, or installed software. A container bundles the application and its exact dependencies together, so it runs identically regardless of where it's deployed — a developer's laptop, a test server, or a production cloud environment.
Containers are ephemeral by design. When a container stops, any data written inside it is lost unless explicitly saved to external storage. This makes containers ideal for stateless services — web servers, APIs, microservices — where each instance is identical and disposable. For stateful applications like databases, you mount external persistent storage volumes.
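The volume-mounting point above can be sketched with Docker's named volumes (a sketch assuming Docker is installed; the volume name `pgdata` and the `postgres:16` image tag are illustrative):

```shell
# Create a named volume that outlives any container using it
docker volume create pgdata

# Run a database container with the volume mounted at Postgres's data directory;
# POSTGRES_PASSWORD is required by the official postgres image
docker run -d --name db \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example \
  postgres:16

# Deleting the container does NOT delete the data: a replacement container
# mounting the same volume picks up where the old one left off
docker rm -f db
docker run -d --name db2 \
  -v pgdata:/var/lib/postgresql/data \
  -e POSTGRES_PASSWORD=example \
  postgres:16
```

Without the `-v` flag, everything written to `/var/lib/postgresql/data` would vanish when the container is removed.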
Containers vs Virtual Machines
| Property | Container | Virtual Machine |
|---|---|---|
| OS kernel | Shares host kernel | Full OS per VM (own kernel) |
| Startup time | Milliseconds | 30 seconds to minutes |
| Size | Megabytes | Gigabytes |
| Isolation | Process-level isolation | Full hardware-level isolation |
| Security boundary | Weaker (shared kernel) | Stronger (separate kernel) |
| Portability | Very high — runs anywhere Docker runs | High — but larger and slower to move |
| Density | Hundreds per host | Tens per host |
| Best use case | Microservices, CI/CD, cloud-native apps | Full OS isolation, legacy apps, different OS |
In practice, containers typically run inside VMs in production environments. You provision VMs from your cloud provider (IaaS), and then run many containers on each VM. This gives you the security boundary of VM isolation at the infrastructure level, combined with the density and speed advantages of containers at the application level.
Docker: Images, Containers, and the Dockerfile
Docker is the platform that standardised containers and made them accessible to developers. The Docker ecosystem has four key components you need to understand: the Dockerfile (build instructions), the image (the packaged artifact), the registry (where images are stored), and the container (a running instance of an image).
The typical Docker workflow is: write a Dockerfile → run docker build to create an image → push the image to a registry → on any server, pull the image and run docker run to start a container.
Image = blueprint (read-only, stored in a registry). Container = running instance of an image (writable, ephemeral).
Many exam and interview questions hinge on this distinction: you push and pull images to and from registries; you start, stop, and delete containers on hosts.
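The workflow above, written out as concrete commands (the registry hostname, repository name, and tag are placeholders):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myregistry.example.com/myapp:1.0 .

# Push the image to a registry (requires a prior `docker login`)
docker push myregistry.example.com/myapp:1.0

# On any host: pull the image and start a container from it
docker pull myregistry.example.com/myapp:1.0
docker run -d --name myapp -p 8080:80 myregistry.example.com/myapp:1.0

# Note how the commands split along the image/container distinction:
docker images       # list images on this host
docker ps           # list running containers
docker stop myapp   # stop the container (the image is untouched)
docker rm myapp     # delete the container
docker rmi myregistry.example.com/myapp:1.0   # delete the image
```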
Container Registries
A container registry is a repository for storing and distributing Docker images. Understanding the major registries is important for cloud certification exams:
| Registry | Provider | Notes |
|---|---|---|
| Docker Hub | Docker, Inc. | Default public registry. Millions of official and community images. Free for public images; paid for private. |
| Amazon ECR | AWS | Elastic Container Registry. Integrates with ECS and EKS. Private by default; IAM-controlled access. |
| Azure Container Registry (ACR) | Microsoft | Integrates with AKS and Azure DevOps. Supports geo-replication for multi-region deployments. |
| Google Artifact Registry | Google Cloud | Replaced Google Container Registry (GCR). Integrates with GKE. Supports multiple artifact types beyond containers. |
| GitHub Container Registry | GitHub (Microsoft) | Store images alongside source code in GitHub. Common in CI/CD pipelines that build and test on push. |
In a secure enterprise environment, image scanning is performed at the registry level — every pushed image is automatically scanned for known vulnerabilities in its layers before it can be deployed. AWS ECR, ACR, and Google Artifact Registry all provide built-in vulnerability scanning.
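As one concrete example, Amazon ECR exposes registry-level scanning through the AWS CLI (a sketch; the repository name `myapp` and tag `1.0` are placeholders, and the commands assume AWS credentials are configured):

```shell
# Enable automatic scanning of every image pushed to this repository
aws ecr put-image-scanning-configuration \
  --repository-name myapp \
  --image-scanning-configuration scanOnPush=true

# Retrieve the vulnerability findings for a specific image tag
aws ecr describe-image-scan-findings \
  --repository-name myapp \
  --image-id imageTag=1.0
```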
Container Orchestration and Kubernetes
Running a single container on a single server is straightforward with Docker. But real production applications might require dozens or hundreds of containers spread across multiple servers, with requirements for automatic scaling, load balancing, rolling updates, and self-healing when a container crashes. This is the problem container orchestration solves.
Kubernetes (abbreviated K8s: "K", then the 8 letters "ubernete", then "s") is the dominant container orchestration platform. Originally developed at Google and open-sourced in 2014, it is now maintained by the Cloud Native Computing Foundation (CNCF). Every major cloud provider offers a managed Kubernetes service: AWS EKS (Elastic Kubernetes Service), Azure AKS (Azure Kubernetes Service), and Google GKE (Google Kubernetes Engine).
Kubernetes key capabilities include automatic scaling (add more Pods when CPU usage spikes), self-healing (automatically restart failed containers or reschedule Pods from failed nodes), rolling updates (update containers to a new version with zero downtime by gradually replacing old Pods), and rollbacks (revert to the previous version if a deployment fails).
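Those capabilities map onto `kubectl` commands roughly as follows (a sketch assuming a running cluster; the Deployment name `web` and the image tags are illustrative):

```shell
# Scaling: manually, or automatically based on CPU utilisation
kubectl scale deployment web --replicas=5
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Rolling update: change the image; Kubernetes replaces old Pods gradually
kubectl set image deployment/web web=myapp:2.0
kubectl rollout status deployment/web

# Rollback: revert to the previous revision if the new version misbehaves
kubectl rollout undo deployment/web
```

Self-healing needs no command at all: the Deployment controller continuously reconciles the actual Pod count against the desired count and replaces failed Pods automatically.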
Docker has its own built-in orchestration tool called Docker Swarm, which is simpler to set up than Kubernetes but less powerful and less widely adopted in production. For small deployments and teams new to orchestration, Swarm is easier to learn. For large-scale, production-grade workloads, Kubernetes is the industry standard. Most cloud certification exams focus on Kubernetes.
Container Security
Containers introduce specific security considerations that appear on Security+ and cloud security exams:
Image vulnerabilities — container images are built on top of base OS images (e.g. Ubuntu, Alpine Linux) that may contain known CVEs. Regularly scanning images and rebuilding them against updated base images is essential. Use minimal base images (Alpine Linux is ~5MB vs Ubuntu's ~70MB) to reduce the attack surface.
Container escape — a vulnerability that allows a process inside a container to break out of the container's namespace and access the host OS. This is the most serious container security risk. Running containers as non-root users and using read-only filesystems reduces this risk.
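Both hardening techniques can be applied at build time and at run time (a sketch; the image name `hardened-demo` and user `appuser` are illustrative):

```shell
# Dockerfile: create and switch to a non-root user at build time
cat > Dockerfile <<'EOF'
FROM alpine:3.19
RUN adduser -D appuser
USER appuser
CMD ["sleep", "infinity"]
EOF
docker build -t hardened-demo .

# Run with a read-only root filesystem; grant a writable tmpfs only where needed
docker run -d --read-only --tmpfs /tmp hardened-demo

# Even without a Dockerfile change, a non-root UID can be forced at run time
docker run -d --user 1000:1000 --read-only alpine sleep infinity
```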
Secrets management — application secrets (database passwords, API keys) should never be baked into Docker images. Use environment variables, Kubernetes Secrets, or dedicated secrets managers (AWS Secrets Manager, HashiCorp Vault) to inject secrets at runtime.
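With Kubernetes, for example, a Secret can be created once and injected as an environment variable at runtime (a sketch; the Secret name `db-creds`, the key, and the image are placeholders):

```shell
# Create a Secret from a literal value (in practice, source it from a secrets manager)
kubectl create secret generic db-creds --from-literal=DB_PASSWORD='s3cr3t'

# Reference it from the Pod spec, so the value is injected at runtime
# and never baked into the image
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: myapp:1.0
    env:
    - name: DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-creds
          key: DB_PASSWORD
EOF
```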
Namespace isolation — Linux namespaces are the underlying technology that isolates containers from each other and from the host. Containers share the kernel but have separate process, network, and filesystem namespaces. This isolation is weaker than VM isolation — a kernel vulnerability can potentially affect all containers on a host simultaneously.
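You can observe process-namespace isolation directly: inside a container, `ps` sees only the container's own processes, with the entrypoint as PID 1 (a sketch assuming Docker and the `alpine` image are available):

```shell
# Inside the container's PID namespace, ps sees only the container's processes
docker run --rm alpine ps aux

# From the host, list the namespaces of a running container's main process
docker run -d --name ns-demo alpine sleep 300
docker inspect -f '{{.State.Pid}}' ns-demo     # the container process's host PID
sudo ls /proc/"$(docker inspect -f '{{.State.Pid}}' ns-demo)"/ns
docker rm -f ns-demo
```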
Container escape is the buzzword for the container-specific attack where a malicious process breaks out of container isolation. Image scanning is the control. Running containers as non-root is the hardening technique. These three points cover the majority of container security questions on Security+ SY0-701 Domain 3.