
Serverless Architecture: Patterns, Limits, and Economics

#serverless #cloud #architecture #aws #gcp

Serverless computing shifts operational responsibility to the cloud provider, letting teams focus on business logic. But "serverless" is not a universal solution -- understanding its patterns, constraints, and cost dynamics is critical to making good architectural decisions.

Core Serverless Patterns

API Backend

The most common pattern: HTTP requests routed through an API Gateway to functions.

  • API Gateway + Lambda / Cloud Functions
  • Automatic scaling from zero to thousands of concurrent requests
  • Pay-per-invocation pricing
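The pattern above can be sketched as a single handler behind an API Gateway proxy integration. This is a minimal illustration, not production code: the `/hello` route and the response shape follow the API Gateway HTTP API (v2) payload convention, and both are assumptions for the example.

```python
import json

def handler(event, context):
    """Minimal Lambda handler behind an API Gateway HTTP API proxy
    integration. `event` follows the v2 payload shape; the `/hello`
    route is illustrative only."""
    path = event.get("rawPath", "/")
    if path == "/hello":
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": "hello from Lambda"}),
        }
    return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
```

API Gateway maps the returned dict directly onto the HTTP response, so the function never touches sockets or servers; scaling and routing are the platform's problem.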

Event Processing

Functions triggered by events from queues, streams, or storage changes.

  • S3 upload triggers image processing
  • SQS/SNS message triggers order fulfillment
  • Kinesis/Pub-Sub stream triggers real-time analytics
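Taking the S3-upload trigger as an example, the handler's job is mostly unpacking the event envelope. A minimal sketch, assuming the standard S3 `ObjectCreated` event shape, with the actual image-processing step left as a stub:

```python
def handle_s3_upload(event, context=None):
    """Sketch of a Lambda triggered by S3 ObjectCreated events.
    Extracts (bucket, key) pairs from the standard S3 event envelope;
    the real processing step is deliberately stubbed out."""
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Real code would fetch the object here and, say, generate a thumbnail.
        processed.append((bucket, key))
    return processed
```

One event can carry multiple records, which is why the loop matters: batching behavior differs between triggers (S3 vs. SQS vs. Kinesis), and handlers should not assume a single record per invocation.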

Scheduled Jobs

Replacing cron servers with scheduled function invocations.

  • EventBridge Scheduler + Lambda
  • Cloud Scheduler + Cloud Functions
  • Ideal for periodic data syncs, report generation, cleanup tasks
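A scheduled invocation carries almost no payload; the handler typically just records when it ran and does its work. A sketch of a cleanup handler, assuming EventBridge's `time` field in the event envelope (the fallback to "now" is only there to keep the example runnable anywhere):

```python
from datetime import datetime, timezone

def nightly_cleanup(event, context=None):
    """Handler for a scheduled trigger (EventBridge Scheduler or
    Cloud Scheduler). The `time` field mirrors EventBridge's event
    envelope; its presence is an assumption of this sketch."""
    run_at = event.get("time")  # e.g. "2024-05-01T03:00:00Z"
    if run_at is None:
        run_at = datetime.now(timezone.utc).isoformat()
    # Real code would delete expired rows, sync data, or emit a report here.
    return {"status": "ok", "ran_at": run_at}
```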

Orchestration Workflows

Composing multiple functions into complex workflows.

  • AWS Step Functions / GCP Workflows
  • Error handling, retries, and parallel execution built in
  • Visual workflow definition
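With Step Functions, the retries and sequencing live in the state machine definition rather than in function code. A minimal Amazon States Language sketch of a two-step order workflow (the function names and ARNs are placeholders, not real resources):

```json
{
  "StartAt": "ValidateOrder",
  "States": {
    "ValidateOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:validate-order",
      "Retry": [{
        "ErrorEquals": ["States.TaskFailed"],
        "IntervalSeconds": 2,
        "MaxAttempts": 3,
        "BackoffRate": 2.0
      }],
      "Next": "FulfillOrder"
    },
    "FulfillOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:fulfill-order",
      "End": true
    }
  }
}
```

Because retry policy is declared here, the individual functions can stay stateless and oblivious to each other, which is the main point of the pattern.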

Serverless Services Landscape

Category      | AWS                   | GCP                           | Azure
Compute       | Lambda                | Cloud Functions / Cloud Run   | Azure Functions
API           | API Gateway           | API Gateway / Cloud Endpoints | API Management
Orchestration | Step Functions        | Workflows                     | Durable Functions
Database      | DynamoDB              | Firestore                     | Cosmos DB (serverless)
Storage       | S3                    | Cloud Storage                 | Blob Storage
Messaging     | SQS, SNS, EventBridge | Pub/Sub, Eventarc             | Event Grid, Service Bus

Known Limits and Constraints

Constraint         | Impact                                                | Mitigation
Cold starts        | 100ms to several seconds latency on first invocation  | Provisioned concurrency, keep-warm patterns
Execution time     | Lambda: 15 min, Cloud Functions: 60 min               | Break into smaller steps, use Step Functions
Payload size       | API Gateway: 10MB, Lambda: 6MB sync                   | Use S3 presigned URLs for large payloads
Concurrency limits | Default 1000 concurrent per region                    | Request quota increases, implement throttling
Vendor lock-in     | Deep integration with provider services               | Portable business logic, adapter patterns
Debugging          | Distributed tracing is harder than monolith debugging | X-Ray, structured logging, local emulators
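On the debugging point: structured (JSON) log lines are what make it practical to follow one request across many function invocations, since an aggregator can filter on a shared correlation field. A minimal stdlib-only sketch; the field names (`request_id`, `function`) are illustrative conventions, not a standard:

```python
import json
import logging
import sys

def log_event(logger, level, message, **fields):
    """Emit one JSON log line so a log aggregator can filter on
    shared fields (e.g. request_id) across function invocations.
    Returns the serialized line for convenience."""
    line = json.dumps({"message": message, **fields})
    logger.log(level, line)
    return line

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
logger = logging.getLogger("orders")

log_event(logger, logging.INFO, "order fulfilled",
          request_id="req-123", function="fulfill-order", duration_ms=42)
```

Passing the same `request_id` through every hop of a pipeline is the cheap, provider-neutral half of distributed tracing; X-Ray and friends add the timing graph on top.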

When Serverless Saves Money

  • Variable traffic -- pay nothing at zero, scale automatically at peak
  • Infrequent workloads -- scheduled jobs that run minutes per day
  • Prototyping -- no infrastructure cost until real traffic arrives
  • Event-driven pipelines -- bursty, unpredictable processing loads

When Serverless Costs More

  • Steady high-throughput -- constant load is cheaper on reserved containers/VMs
  • Long-running processes -- per-millisecond billing adds up for 10+ minute executions
  • High memory workloads -- Lambda pricing scales linearly with memory allocation
  • Chatty architectures -- many small inter-function calls multiply invocation costs
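To make the steady-versus-spiky trade-off concrete, here is a rough break-even sketch. The default prices approximate Lambda's published us-east-1 list rates ($0.20 per million requests, about $0.0000167 per GB-second), but treat them as assumptions: free tiers, data transfer, and API Gateway charges are all ignored.

```python
def lambda_monthly_cost(requests, avg_ms, memory_gb,
                        per_million=0.20, per_gb_second=0.0000166667):
    """Rough monthly Lambda bill: request charge plus compute
    (GB-second) charge. Default prices approximate published
    us-east-1 rates and are illustrative only; free tiers ignored."""
    gb_seconds = requests * (avg_ms / 1000) * memory_gb
    return requests / 1_000_000 * per_million + gb_seconds * per_gb_second

# Steady load: 100 req/s for a 30-day month (~259M requests), 200 ms at 512 MB.
steady = lambda_monthly_cost(100 * 86_400 * 30, 200, 0.5)

# Spiky load: the same function averaging 1 req/s.
spiky = lambda_monthly_cost(1 * 86_400 * 30, 200, 0.5)
```

Under these assumptions the steady profile lands near $480/month, which is well above a comparable small reserved instance, while the spiky profile costs under $5; the same function flips from the cheapest option to the most expensive one purely on traffic shape.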

Cost Comparison Framework

Workload Profile       | Serverless | Containers (Fargate) | VMs (Reserved)
Spiky, low baseline    | Best       | Good                 | Worst
Steady, predictable    | Worst      | Good                 | Best
Short burst processing | Best       | Good                 | Over-provisioned
24/7 high throughput   | Expensive  | Moderate             | Cheapest

Resources