
Edge Computing: Architecture Patterns and Trade-offs

#edge-computing #iot #cloud #architecture #cdn

Edge computing moves computation closer to the data source or end user. It is not a replacement for the cloud but a complement -- the right architecture places workloads where they deliver the most value based on latency, bandwidth, and data sovereignty requirements.

Edge Computing Tiers

| Tier | Location | Latency | Examples |
| --- | --- | --- | --- |
| Device edge | On the device itself | < 1 ms | Smartphones, sensors, gateways |
| Near edge | Local network / on-premises | 1-10 ms | Factory floor servers, retail stores |
| Far edge / MEC | Telco edge, regional PoP | 5-20 ms | 5G MEC, ISP edge nodes |
| CDN edge | Distributed global PoPs | 10-50 ms | Cloudflare, CloudFront, Fastly |
| Cloud region | Centralized data center | 50-200 ms | AWS, GCP, Azure regions |

Architecture Patterns

CDN Edge Computing

Run code at CDN points of presence, closest to end users.

  • Use cases: A/B testing, personalization, auth token validation, geo-routing
  • Platforms: Cloudflare Workers, Lambda@Edge, Vercel Edge Functions, Deno Deploy
  • Constraints: limited execution time, no persistent state, restricted APIs
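A typical CDN edge task is geo-routing at the request layer. As a sketch, here is a Python handler in the style of a Lambda@Edge viewer-request trigger; the country set, origin domain names, and routing rule are illustrative assumptions, not a real deployment:

```python
# Sketch of a geo-routing edge function in the style of a Lambda@Edge
# viewer-request handler. Country set and origin names are illustrative.

EU_COUNTRIES = {"DE", "FR", "NL", "ES", "IT"}

def route_by_country(request):
    """Rewrite the request origin based on the CloudFront-Viewer-Country header."""
    headers = request.get("headers", {})
    country = headers.get("cloudfront-viewer-country", [{}])[0].get("value", "US")
    # Send EU viewers to an EU origin: lower latency, data stays in-region.
    if country in EU_COUNTRIES:
        request["origin"] = {"custom": {"domainName": "eu.example.com"}}
    else:
        request["origin"] = {"custom": {"domainName": "us.example.com"}}
    return request

def handler(event, context):
    # Lambda@Edge delivers the request under event["Records"][0]["cf"]["request"].
    return route_by_country(event["Records"][0]["cf"]["request"])
```

Because the decision is pure header inspection, it fits comfortably inside the tight execution limits noted above.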

IoT Edge

Process data on-premises or on gateway devices before sending to cloud.

  • Use cases: manufacturing quality inspection, predictive maintenance, video analytics
  • Platforms: AWS IoT Greengrass, Azure IoT Edge, Google Distributed Cloud Edge
  • Constraints: limited compute, unreliable connectivity, device management complexity
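The core pattern here is reducing data before it crosses the network. A minimal sketch of gateway-side preprocessing, assuming a simple 2-sigma anomaly rule and a hypothetical upload payload shape:

```python
# Sketch of gateway-side preprocessing: aggregate a window of sensor
# readings locally and forward only a summary plus anomalous samples,
# instead of streaming every raw reading to the cloud.
# The 2-sigma anomaly rule and payload shape are illustrative assumptions.

from statistics import mean, pstdev

def summarize_window(readings, sigma=2.0):
    """Return an upload payload: window statistics plus out-of-band readings."""
    avg = mean(readings)
    sd = pstdev(readings)
    anomalies = [r for r in readings if sd > 0 and abs(r - avg) > sigma * sd]
    return {
        "count": len(readings),
        "mean": avg,
        "min": min(readings),
        "max": max(readings),
        "anomalies": anomalies,  # only these raw values leave the site
    }
```

A window of hundreds of readings collapses to a handful of fields, which is what makes the bandwidth-cost row in the trade-off table below tilt toward the edge.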

Multi-Access Edge Computing (MEC)

Compute at the telecom network edge, enabled by 5G.

  • Use cases: autonomous vehicles, AR/VR, real-time gaming, industrial automation
  • Platforms: AWS Wavelength, Google Distributed Cloud, Azure Edge Zones
  • Constraints: carrier partnerships required, limited geographic coverage

Edge vs Cloud Trade-offs

| Dimension | Edge | Cloud |
| --- | --- | --- |
| Latency | Very low | Higher (network round trip) |
| Bandwidth cost | Reduced (process locally) | Higher (transfer all data) |
| Compute capacity | Limited | Virtually unlimited |
| Data freshness | Real-time | Near-real-time to batch |
| Management complexity | High (many distributed nodes) | Lower (centralized) |
| Cost model | Hardware + maintenance | Pay-per-use |
| Security perimeter | Physically distributed | Centralized controls |
| Update deployment | Complex (fleet management) | Simple (centralized) |

Edge AI/ML Inference

Running ML models at the edge enables real-time decisions without cloud round trips:

  • Model optimization -- TensorFlow Lite, ONNX Runtime, TensorRT for constrained devices
  • Hardware acceleration -- NVIDIA Jetson, Google Coral, Intel Neural Compute Stick
  • Model management -- versioning, A/B testing, and rollback across thousands of devices
  • Federated learning -- train on edge data without sending it to the cloud
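Federated learning is the clearest case of "compute moves, data stays": devices train locally and upload only model updates. A toy sketch of federated averaging (FedAvg) over flat weight vectors; the update format (weights plus sample count) and shapes are illustrative:

```python
# Toy sketch of federated averaging (FedAvg): each edge device trains
# locally and uploads only its model weights and sample count; the
# server computes a sample-weighted average. Raw training data never
# leaves the device. Weight format and protocol are illustrative.

def fed_avg(updates):
    """updates: list of (weights, n_samples) tuples; returns merged weights."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    merged = [0.0] * dim
    for weights, n in updates:
        # Devices with more local samples contribute proportionally more.
        for i, w in enumerate(weights):
            merged[i] += w * (n / total)
    return merged
```

Real deployments layer secure aggregation and differential privacy on top, since raw weight updates can still leak information about local data.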

When to Infer at the Edge

| Factor | Edge Inference | Cloud Inference |
| --- | --- | --- |
| Latency requirement | < 100 ms | Seconds acceptable |
| Data sensitivity | Cannot leave premises | Can be sent to cloud |
| Connectivity | Intermittent or absent | Reliable |
| Model complexity | Simple to medium | Complex, large models |
| Update frequency | Infrequent | Frequent iteration |

Data Synchronization Challenges

Edge architectures must handle data that exists in multiple locations:

  • Conflict resolution -- what happens when edge and cloud disagree? Last-write-wins, CRDTs, or manual resolution
  • Eventual consistency -- edge nodes may be offline; design for eventual sync
  • Data filtering -- send only aggregated or anomalous data to cloud, not raw streams
  • Bandwidth management -- prioritize critical data when connectivity is limited
  • State management -- where is the source of truth? Cloud, edge, or both?
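To make the conflict-resolution bullet concrete, here is a sketch of a last-write-wins merge keyed on an update timestamp. This is deliberately simplified: it ignores clock skew, which is why production systems reach for hybrid logical clocks, vector clocks, or CRDTs; the record shape is hypothetical:

```python
# Sketch of last-write-wins (LWW) merge between an edge replica and the
# cloud copy of the same record set. Each value carries the timestamp of
# its last update; on conflict, the newer write wins. This ignores clock
# skew between nodes -- real systems use hybrid logical clocks, vector
# clocks, or CRDTs. The {key: (value, timestamp)} shape is illustrative.

def lww_merge(edge, cloud):
    """Merge two {key: (value, timestamp)} maps, newest write winning."""
    merged = dict(cloud)
    for key, (value, ts) in edge.items():
        if key not in merged or ts > merged[key][1]:
            merged[key] = (value, ts)
    return merged
```

LWW is attractive because it needs no coordination while a node is offline, but it silently discards the losing write, so it suits telemetry and caches better than financial state.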

CDN Edge Platforms Comparison

| Platform | Runtime | Execution Limit | Global PoPs | Persistent Storage |
| --- | --- | --- | --- | --- |
| Cloudflare Workers | V8 isolates | 30s CPU (paid) | 300+ | KV, R2, D1, Durable Objects |
| Lambda@Edge | Node.js, Python | 5s (viewer), 30s (origin) | CloudFront PoPs | Limited (via S3/DynamoDB) |
| Vercel Edge Functions | V8 isolates | 30s | Vercel network | Via external stores |
| Fly.io | Full containers | No limit | 30+ regions | Volumes, LiteFS |
| Deno Deploy | V8 isolates | 50ms CPU | 35+ regions | Deno KV |

Resources