Cloud Infrastructure and Container Security
Apply the shared responsibility model, identity and access controls, and container/Kubernetes hardening, plus serverless security assessments.
Shared Responsibility Model (AWS, Azure, GCP) — The Cloud Isn’t a Magic Force Field
You just survived a deep dive into DDoS, botnets, resilience engineering, and ingress filtering. Great. Now welcome to the place where the cloud provider and you play hot-potato with security responsibilities. Spoiler: neither side gets to drop it.
Opening: Why this matters (and yes, it links to your DDoS work)
You already learned how attackers can flood services and how networks need ingress filtering like BCP38. In cloud land, some of those protections live with the provider (they run the data centers), but your app still sits on top of their stack like a nervous raccoon in a fancy server room. Misunderstanding who’s responsible for what is how production gets pwned, or how a DDoS turns into a bill-of-shame.
This piece maps the Shared Responsibility Model across AWS, Azure, and GCP, with a special focus on containerized workloads (because containers + cloud = the future's favorite accident surface). Read this if you want to avoid waking up to a mysterious traffic spike and an even more mysterious invoice.
Big idea (one-liner)
Cloud providers secure the cloud infrastructure; you secure everything you put in the cloud.
Translation: They secure the bricks, you secure the stuff built with the bricks.
The shared-responsibility layers (visual cheat-sheet)
| Layer | Provider (AWS/Azure/GCP) | Customer (you) | Notes (container-specific) |
|---|---|---|---|
| Physical data center | ✔️ | ❌ | Providers handle racks, cooling, physical access |
| Network backbone | ✔️ | ❌/✔️ | Providers protect the core network and offer DDoS services; you configure subnets, security groups, and network ACLs |
| Hypervisor / Virtualization | ✔️ | ❌ | You don't touch it unless running BYO hypervisor |
| Host OS (managed services) | varies | varies | e.g., Fargate/Cloud Run: provider-managed. EC2/VMs: customer-managed |
| Containers / Runtime | partly (control plane) | mostly | Managed K8s control plane is provider-managed; nodes may be yours to patch |
| Applications & data | ❌ | ✔️ | YOU are fully responsible for secrets, code, data encryption at rest/in transit |
| Identity & Access Management (IAM) | provides tooling | config & policies | Providers give IAM systems; you write policies and assign roles |
| Logging & Monitoring | provides services | configure & consume | CloudWatch, Azure Monitor, Cloud Logging need proper agents/retention |
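To make the "Applications & data" row concrete, here is a minimal sketch of the customer side of that line, written as CloudFormation (the bucket name is a placeholder): the provider runs S3 itself, but default encryption and the public-access block only exist if you declare them.

```yaml
# Minimal CloudFormation sketch; bucket name is a placeholder, adapt to your stack
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-app-bucket
      BucketEncryption:                      # encryption at rest: your row in the table
        ServerSideEncryptionConfiguration:
          - ServerSideEncryptionByDefault:
              SSEAlgorithm: "aws:kms"
      PublicAccessBlockConfiguration:        # prevents the "debug bucket left public" classic
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
```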
How AWS, Azure, and GCP differ (practical distinctions)
Control plane vs nodes
- AWS: the EKS control plane is managed; worker nodes are yours to patch unless you run them on Fargate, EKS's serverless option for containers (sketch below).
- Azure: AKS manages the control plane; how much node management you keep depends on the node pool model, and much of it can be automated. AKS also offers virtual nodes backed by Azure Container Instances (ACI) for serverless containers.
- GCP: GKE offers Standard and Autopilot modes; Autopilot manages the nodes for you and enforces hardened policy defaults.
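If the "who patches the nodes" question makes you nervous, push that work upstream. A minimal sketch, assuming you create the cluster with eksctl (cluster name, region, and namespace are placeholders): a Fargate profile tells EKS to run everything in that namespace on provider-managed compute, so there are no worker nodes for you to patch.

```yaml
# eksctl sketch: run one namespace's pods on Fargate instead of self-managed nodes
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
fargateProfiles:
  - name: app-profile
    selectors:
      - namespace: my-app # pods in this namespace run on Fargate, not on your nodes
```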
DDoS protection
- AWS: Shield Standard (automatic), Shield Advanced (paid), CloudFront + WAF for edge protection.
- Azure: basic DDoS infrastructure protection is automatic; DDoS Network Protection (formerly DDoS Protection Standard) is the paid tier; Azure Front Door + WAF at the edge.
- GCP: edge protection via Cloud CDN + Cloud Armor; baseline DDoS protection is built in, and paid Cloud Armor tiers add more.
Image scanning & supply chain
- AWS: ECR image scanning, Amazon Inspector.
- Azure: ACR image scanning via Microsoft Defender for Containers.
- GCP: Artifact Registry vulnerability scanning, powered by Container Analysis (generic CI sketch below).
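Whichever provider's registry you use, a scan only helps if it gates the build. Here is a hedged sketch of a CI job using GitHub Actions and the open-source Trivy scanner rather than any provider-native tool; the image name and workflow layout are placeholders.

```yaml
# Hypothetical CI sketch: build the image, scan it, fail the job on serious findings
name: image-scan
on: [push]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t my-app:${{ github.sha }} .
      - name: Scan with Trivy
        uses: aquasecurity/trivy-action@master   # pin a released version in real pipelines
        with:
          image-ref: my-app:${{ github.sha }}
          exit-code: "1"                         # non-zero exit fails the build
          severity: CRITICAL,HIGH
```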
Networking nuance
- All providers largely prevent IP spoofing on their networks (good BCP38 vibes), but you still must configure security groups, firewall rules, private subnets, and NATs correctly. Misconfiguring these is the classic “open door” exploit.
Containers: the most important “who does what” map
Think of containers as a three-act play:
- Control plane (Kubernetes API, schedulers) — usually provider-managed in managed services; the provider is responsible for the availability and security of the control-plane nodes.
- Worker nodes (where containers actually run) — sometimes your responsibility: patching, runtime hardening, kernel upgrades, container runtime (runc, containerd). If you use managed node pools or serverless (Fargate/Autopilot), provider takes more.
- Your containers & cluster configuration — always your responsibility: images, secrets, RBAC, network policies, pod security standards.
So: if you run EKS on EC2 nodes and you don't patch the nodes or containerd, that's on you. If you run Fargate and there's a kernel-level exploit, AWS is on the hook (mostly).
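One concrete piece of act three that is always yours: network policies. A minimal sketch, assuming a hypothetical my-app namespace, that denies all inbound pod traffic by default so every allowed path has to be declared explicitly:

```yaml
# Default-deny ingress for one namespace (namespace name is a placeholder)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: my-app
spec:
  podSelector: {}        # empty selector = every pod in the namespace
  policyTypes:
    - Ingress            # no ingress rules are listed, so all inbound traffic is denied
```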
Hands-on checklist (who does what — immediate actions)
- Provider: offers DDoS protection tiers and CDN/WAF edge filtering; make sure you actually subscribe to and enable them.
- You: enforce least-privilege IAM, use short-lived credentials, enable MFA, and rotate keys.
- Provider: offers a managed control plane and node-patching options (choose Autopilot/Fargate where appropriate).
- You: scan images in CI, sign images, enforce admission policies (OPA/Gatekeeper), enable network policies and Pod Security Standards.
- Provider: supply logging & audit services.
- You: export audit logs to a central SIEM, set retention, alert on suspicious burst traffic patterns.
Small snippets that save lives (simplified examples)
Kubernetes: enforce non-root users and a read-only root filesystem (these settings live in the container's securityContext):

```yaml
# Container-level securityContext, simplified from the Pod Security "restricted" standard
securityContext:
  runAsNonRoot: true
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
```
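To have the cluster enforce that standard instead of trusting every manifest, label the namespace for the built-in Pod Security admission controller (namespace name is a placeholder):

```yaml
# Pod Security Admission: reject pods in this namespace that violate the "restricted" profile
apiVersion: v1
kind: Namespace
metadata:
  name: my-app
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```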
IAM least privilege: give the app's service role only s3:GetObject on the bucket it needs, not s3:ListAllMyBuckets (and definitely not s3:*).

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["s3:GetObject"],
    "Resource": ["arn:aws:s3:::my-app-bucket/*"]
  }]
}
```
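The checklist also mentioned admission policies (OPA/Gatekeeper). As a hedged example, assuming the K8sAllowedRepos template from the public gatekeeper-library is installed, this constraint only admits pods whose images come from your own registry (the registry URL is a placeholder):

```yaml
# Gatekeeper constraint: only allow images from an approved registry
# (assumes the K8sAllowedRepos ConstraintTemplate from the gatekeeper-library is installed)
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: only-company-registry
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry.example.com/"   # placeholder: your approved registry prefix
```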
Common gotchas (real developer horror stories)
- You enabled public buckets/containers to debug once and forgot them. Oops. (Data breach)
- You relied on provider DDoS protection alone and skipped rate limiting and capacity planning; a sudden legitimate traffic spike either knocked the service over or autoscaled into a painful bill. (Resilience planning!)
- You used managed Kubernetes but ran self-managed nodes, left the patches undone, and a kernel exploit walked in. (Patch your nodes or move to managed node pools.)
Ask: Which side of the wall does this risk live on? If the answer is “it depends,” it probably requires a design decision and an incident playbook.
Closing: TL;DR + challenge
TL;DR: The cloud secures the ground. You secure the house and everything inside it. Choose managed services to shift risk upstream — but don’t confuse convenience with security.
Key takeaways
- Map responsibilities when designing systems (control plane vs nodes vs app)
- Use provider DDoS/WAF/CDN but still practice resilience engineering: autoscaling, rate-limiting, caching
- Secure container supply chain: image signing, scanning, admission control
- Centralize logging and enable alerts for anomalous traffic and IAM usage patterns
Final thought: the Shared Responsibility Model is less a binary rulebook and more the most important group project you'll ever join. Know your responsibilities, document them, test them — and for the love of pagers, run your incident drills with cloud failure modes included.
Want a challenge? Draft a one-page runbook that maps the top 5 risks for your app to either "Provider" or "Customer" responsibilities — include one mitigation for each. Do it before your next coffee break.