
Compute Services β€”
Running Applications in the Cloud

EC2, Lambda, ECS, Elastic Beanstalk β€” four ways to run code in AWS, each trading control for convenience differently. This page maps the full compute landscape before you dive into each service.

01
Chapter One

What is Compute?

Compute services are the core of cloud computing β€” they let you run applications, process data, and execute workloads without managing physical servers.

Every application you build needs compute: a web server answering HTTP requests, a Lambda function processing an S3 upload, a container running a microservice. AWS provides multiple compute models so you pick exactly how much infrastructure you want to own vs. hand off to the platform.

What Does Compute Mean?

Compute = running code or applications. Whenever your application is doing work β€” serving a request, processing a file, executing a job β€” it is consuming compute.

🌐

Examples of Compute Workloads

  • Web servers serving HTTP requests
  • Backend APIs handling business logic
  • Batch jobs processing files overnight
  • Event-driven functions reacting to queue messages
  • ML inference serving model predictions
⚑

What Changes Between Services

  • How much control you have over OS and runtime
  • How much management AWS takes off your hands
  • How billing works (per-hour, per-request, per-second)
  • How scaling happens (manual, scheduled, automatic)
From Fundamentals to Compute

All the fundamental concepts you've learned feed directly into how compute services work:

πŸ“¦

Virtualization β†—

  • EC2 instances are VMs created by the Nitro hypervisor on physical AWS hosts
  • Every concept from the VM diagram applies directly here
πŸ”©

Cloud Models β†—

  • EC2 = IaaS (you manage OS and above)
  • Beanstalk = PaaS, Lambda = FaaS / serverless (you manage code only)
  • The service model defines your security surface
πŸ›‘οΈ

Shared Responsibility β†—

  • On EC2 you patch the OS β€” on Lambda you don't
  • The compute model determines exactly where your responsibility begins
02
Chapter Two

The Four Services

Core Compute Options in AWS
Mental Model β€” The Control Spectrum

Think of the four compute options as a spectrum β€” more control on the left, less management on the right:

← More control / more responsibility · Less management / more abstraction →

  • EC2: you own the OS
  • ECS: you own the container
  • Beanstalk: you own the app
  • Lambda: you own the function

Moving right, AWS takes on more undifferentiated heavy lifting β€” but you give up flexibility. Neither end is universally better; the right choice depends on your workload's requirements.

Concept Diagram
AWS Compute β€” What YOU manage vs what AWS manages per service
[Diagram: a per-service responsibility matrix. Rows: App Code, Runtime, OS / Patches, Container, LB / Scaling, Hypervisor, Hardware. Columns: EC2, ECS / Fargate, Beanstalk, Lambda. Each cell marks the layer as managed by YOU or by AWS; the share managed by AWS grows as you move toward Lambda (more abstraction →).]
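The responsibility split the diagram describes can be sketched as a small lookup table. This is a simplified illustration, not an official AWS matrix; edge cases such as ECS on EC2 (where you still patch the hosts) are collapsed into the Fargate view:

```python
# Simplified responsibility matrix for the four compute models.
# "YOU" = customer-managed, "AWS" = AWS-managed.
RESPONSIBILITY = {
    "ec2":       {"app_code": "YOU", "runtime": "YOU", "os_patches": "YOU", "hardware": "AWS"},
    "fargate":   {"app_code": "YOU", "runtime": "YOU", "os_patches": "AWS", "hardware": "AWS"},
    "beanstalk": {"app_code": "YOU", "runtime": "AWS", "os_patches": "AWS", "hardware": "AWS"},
    "lambda":    {"app_code": "YOU", "runtime": "AWS", "os_patches": "AWS", "hardware": "AWS"},
}

def who_manages(service: str, layer: str) -> str:
    """Return 'YOU' or 'AWS' for a given service/layer pair."""
    return RESPONSIBILITY[service][layer]

if __name__ == "__main__":
    # On EC2 you patch the OS; on Lambda, AWS does.
    print(who_manages("ec2", "os_patches"))
    print(who_manages("lambda", "os_patches"))
```

One constant across every row: application code is always yours, which is exactly what the shared-responsibility model predicts.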
03
Chapter Three

How It Works

High-Level Flow
Deploy: You deploy application code, a Docker image, or a function to your chosen compute service.
Allocate: AWS allocates underlying resources (VMs, containers, or execution environments) from regional capacity.
Run: Your application handles requests, processes events, or executes batch work on the allocated compute.
Scale: Scaling happens automatically (Lambda, Fargate) or via defined Auto Scaling policies (EC2, ECS on EC2).
Bill: You are billed for actual usage: per-second for EC2 (most Linux instances, with a one-minute minimum), per-millisecond of execution for Lambda, and per vCPU- and memory-second for Fargate.
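The billing step can be made concrete for Lambda, which charges per request plus per GB-second of execution (memory × duration). A minimal estimator, using illustrative us-east-1 on-demand figures that may change and ignoring the free tier:

```python
def lambda_cost(requests: int, avg_ms: float, memory_mb: int,
                price_per_million_req: float = 0.20,
                price_per_gb_second: float = 0.0000166667) -> float:
    """Estimate a monthly Lambda bill (free tier ignored).

    Default prices are illustrative; always check the current
    AWS Lambda pricing page for your region.
    """
    # GB-seconds = requests * duration in seconds * memory in GB
    gb_seconds = requests * (avg_ms / 1000) * (memory_mb / 1024)
    request_cost = (requests / 1_000_000) * price_per_million_req
    return request_cost + gb_seconds * price_per_gb_second

# 1M requests/month at 100 ms on a 512 MB function:
# 50,000 GB-seconds of compute plus the per-request charge,
# roughly a dollar a month at these rates.
monthly = lambda_cost(1_000_000, 100, 512)
```

Halving memory or shaving milliseconds off execution cuts the bill linearly, which is why Lambda tuning focuses on memory size and duration.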
How Services Work Together

Compute services never run in isolation. A typical production architecture looks like this:

🌐 User Request (Browser · Mobile · API) → ⚖️ Load Balancer (ALB / API Gateway) → ⚡ Compute (EC2 · Lambda · ECS) → 💾 Storage / DB (S3 · RDS · DynamoDB)
βš–οΈ

Networking Layer

  • VPC provides the network boundary
  • ALB / NLB distributes traffic to compute
  • Route 53 resolves DNS to the load balancer
⚑

Compute Layer (you are here)

  • EC2, Lambda, ECS, or Beanstalk runs your application code
  • Auto Scaling matches capacity to demand
πŸ’Ύ

Storage / Data Layer

  • S3 stores objects (files, backups, assets)
  • RDS / DynamoDB stores structured application data
  • ElastiCache sits in front of the DB for hot reads
04
Chapter Four

Choosing & Comparing

Choosing the Right Compute Service
Use EC2 when…
  • You need full OS-level control
  • Running legacy or custom runtimes
  • Long-running persistent workloads
  • GPU or bare-metal requirements
  • Custom software at the OS layer
Use Lambda when…
  • Event-driven or on-demand execution
  • Zero server management desired
  • Short-lived functions (<15 min)
  • Spiky or unpredictable traffic
  • Scale-to-zero matters for cost
Use ECS / Fargate when…
  • You're already using Docker
  • Microservices needing independent scale
  • More runtime control than Lambda allows
  • Long-running containerised services
  • No EC2 to manage (Fargate)
Use Beanstalk when…
  • Deploy a standard web app fast
  • Don't want to configure infra
  • Standard runtimes (Node, Java, Python…)
  • Prototype or internal tool
  • Underlying EC2 instances remain accessible when required
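The four "Use X when…" cards above can be condensed into a rough first-pass decision helper. This is a teaching sketch, not a prescriptive rule: real decisions also weigh cost profile, team skills, and runtime limits:

```python
def suggest_compute(needs_os_control: bool, uses_docker: bool,
                    short_lived: bool, standard_web_app: bool) -> str:
    """First-pass compute recommendation mirroring the four cards.

    Checks run in priority order: OS control forces EC2; short-lived
    event-driven work suits Lambda; existing Docker workloads suit
    ECS/Fargate; a standard web app with no infra appetite suits
    Beanstalk.
    """
    if needs_os_control:
        return "EC2"
    if short_lived:
        return "Lambda"
    if uses_docker:
        return "ECS / Fargate"
    if standard_web_app:
        return "Elastic Beanstalk"
    # Containers are a reasonable modern default when nothing else decides.
    return "ECS / Fargate"
```

Note the ordering matters: a short-lived job that happens to be containerised could run on either Lambda or Fargate, and this sketch simply prefers the more managed option first.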
Key Design Considerations
Consideration | EC2 | Lambda | ECS / Fargate | Beanstalk
Scaling model | Auto Scaling groups | Automatic | Service auto scaling | Built-in ASG
Startup time | Minutes (AMI boot) | Milliseconds | Seconds (container pull) | Minutes (EC2)
Billing unit | Per second (running) | Per 1 ms (execution only) | vCPU + memory per second | Underlying EC2
Max runtime | Unlimited | 15 minutes max | Unlimited | Unlimited
Multi-AZ HA | Manual: configure AZs | Built-in | Configure placement | Built-in with ALB
OS control | Full: you choose AMI | None | Container image only | Limited
Common Misunderstandings
Myth: "EC2 is always the best option."
Reality: EC2 gives maximum flexibility but maximum management burden. For most modern workloads, Lambda or ECS is a better default.

Myth: "Serverless means no servers exist."
Reality: Servers still exist — AWS manages them. "Serverless" means you don't manage servers, not that they disappear.

Myth: "All compute services behave the same."
Reality: Billing, scaling, runtime limits, OS access, startup time — all differ significantly. The wrong model wastes money and adds complexity.

Myth: "Lambda is always cheaper."
Reality: Lambda is cheaper for spiky, infrequent workloads. High-volume steady-state compute is often cheaper on EC2 with Reserved Instances.

Myth: "Elastic Beanstalk is just EC2."
Reality: Beanstalk automates the entire environment — EC2 + ALB + Auto Scaling + CloudWatch. It's PaaS built on top of EC2, not just EC2.
Summary
πŸ“‹ Compute Services β€” Recap
  • Compute = running code β€” any active workload consumes compute.
  • AWS provides four primary models: EC2 (IaaS), ECS/Fargate (containers), Elastic Beanstalk (PaaS), Lambda (serverless).
  • The spectrum runs from maximum control (EC2) to maximum abstraction (Lambda) β€” more control = more responsibility.
  • Compute integrates with networking (VPC, ALB), storage (S3, RDS), and security (IAM, Security Groups).
  • Pick based on workload needs: OS control, runtime length, traffic patterns, and cost profile.
  • Most production architectures combine multiple compute models β€” one service rarely covers all needs.
πŸ‘‰ Key Takeaway

Compute is where your application actually runs β€” AWS gives you four distinct ways to run it, each trading control for convenience. Pick the model that minimises management burden without sacrificing the control your workload actually needs.