Step 3: Understand IoT Data Flow - Smart Car Park Walkthrough
Learn how sensor readings become real-time dashboard insights
Great! You've deployed the demo. Now let's walk through what you just deployed and see it in action.
Generate Evidence Pack
Create your business case documentation with what you've learned.
Walkthrough progress
Step 3 of 4 • 3 minutes
Understand the IoT Data Flow
See how sensor readings travel from parking spaces to your dashboard in real-time.
Screenshot updating - please check back soon
Expected outcome
- You understand how IoT sensors detect vehicles
- You see how AWS IoT Core receives sensor messages
- You know how Lambda processes and stores data
- You recognize CloudWatch's role in visualization
From sensor to dashboard: The complete journey
Every parking space in the car park has an ultrasonic sensor that continuously monitors vehicle presence. A single reading travels: sensor → MQTT batch message → AWS IoT Core → IoT Rules Engine → Lambda → DynamoDB and CloudWatch → dashboard. The sections below explain each hop.
AWS services explained
This walkthrough uses several AWS services working together. Here's what each one does:
AWS IoT Core
Role: Message broker for IoT devices
What it does: Receives a batch MQTT message every minute containing all 50 sensor readings on the carpark/sensors/batch topic. The IoT Rules Engine routes the message to Lambda for processing. IoT Core can handle millions of messages per second, so this workload is trivial by comparison.
Why it's useful: IoT Core manages device authentication, message routing, and topic subscriptions. You do not build a message broker yourself - AWS provides enterprise-grade infrastructure.
Cost: £1.00 per million messages. With ~43,200 batch messages/month (1 msg/minute × 60/hour × 24h × 30 days), that's about £0.04/month.
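To make the batch message concrete, here is a minimal sketch of what the simulator might publish. The topic name comes from the walkthrough; the payload shape and field names are assumptions, chosen to match the attributes the DynamoDB section lists (sensor_id, timestamp, zone, occupied, confidence, battery_level).

```python
import time
import random

def build_batch_payload(num_sensors=50, zones=("A", "B", "C")):
    """Build one batch message as the simulator might publish it to the
    carpark/sensors/batch topic (field names are assumptions)."""
    now = int(time.time())
    readings = []
    for i in range(num_sensors):
        readings.append({
            "sensor_id": f"sensor-{i:03d}",
            "zone": zones[i % len(zones)],
            "timestamp": now,
            "occupied": random.random() < 0.6,          # ~60% occupancy
            "confidence": round(random.uniform(0.8, 1.0), 2),
            "battery_level": random.randint(60, 100),
        })
    return {"readings": readings}

payload = build_batch_payload()
# The real SimulatorFunction would then publish via the IoT data plane, e.g.:
#   boto3.client("iot-data").publish(topic="carpark/sensors/batch",
#                                    qos=1, payload=json.dumps(payload))
print(len(payload["readings"]))  # 50
```

One batch per minute rather than 50 individual messages keeps the message count (and cost) low while still delivering every sensor's state each cycle.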
AWS Lambda
Role: Process sensor readings
What it does: Two Lambda functions work together. The SimulatorFunction generates realistic sensor data every minute via EventBridge. The ProcessorFunction receives each batch from IoT Core, writes items to DynamoDB, and publishes CloudWatch custom metrics (occupied spaces per zone, total occupancy, sensors reporting count).
Why it's useful: Lambda runs your code without servers. You pay only for compute time used (milliseconds per execution). No idle servers, no capacity planning.
Cost: £0.20 per million invocations. Processing ~43,200 batch messages/month costs less than £0.01/month.
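The core of the ProcessorFunction's metric step can be sketched as a pure reduction over one batch. This is an illustrative sketch, not the deployed code; the output keys mirror the three metrics the walkthrough names.

```python
from collections import Counter

def summarise_batch(readings):
    """Reduce one batch of sensor readings to the values the processor
    publishes as metrics: occupied spaces per zone, total occupancy,
    and how many sensors reported this cycle."""
    per_zone = Counter()
    for r in readings:
        if r["occupied"]:
            per_zone[r["zone"]] += 1
    return {
        "occupied_per_zone": dict(per_zone),
        "total_occupied": sum(per_zone.values()),
        "sensors_reporting": len(readings),
    }

readings = [
    {"sensor_id": "sensor-000", "zone": "A", "occupied": True},
    {"sensor_id": "sensor-001", "zone": "A", "occupied": False},
    {"sensor_id": "sensor-002", "zone": "B", "occupied": True},
]
print(summarise_batch(readings))
# {'occupied_per_zone': {'A': 1, 'B': 1}, 'total_occupied': 2, 'sensors_reporting': 3}
```

Because the whole batch arrives in one event, one invocation per minute is enough - there is no per-sensor fan-out to pay for.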
Amazon DynamoDB
Role: Store sensor readings with automatic expiry
What it does: Stores every sensor reading as an item with sensor ID, timestamp, zone, occupancy state, confidence level, and battery level. A global secondary index on zone + timestamp enables efficient queries by zone. Items automatically expire after 7 days via TTL.
Why it's useful: Fully managed NoSQL database with on-demand capacity. No servers to manage, no capacity planning for this demo scale. Queries like "current state by zone" and "24-hour history" run in milliseconds via the GSI.
Cost: ~£3.50/month at this scale with on-demand pricing. Pay only for reads and writes you actually use. TTL deletes are free.
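The 7-day expiry works by stamping each item with an epoch-seconds TTL attribute that DynamoDB deletes automatically. A minimal sketch, assuming the TTL attribute is named expires_at (the actual attribute name is whatever the table's TTL setting specifies):

```python
import time

TTL_DAYS = 7

def to_item(reading, now=None):
    """Shape one sensor reading into a DynamoDB item with a TTL attribute.
    DynamoDB's TTL feature deletes items once the epoch-seconds value in
    the configured attribute has passed."""
    now = int(now if now is not None else time.time())
    item = dict(reading)
    item["expires_at"] = now + TTL_DAYS * 24 * 60 * 60  # epoch seconds
    return item

item = to_item({"sensor_id": "sensor-007", "zone": "B",
                "timestamp": 1_700_000_000, "occupied": True},
               now=1_700_000_000)
print(item["expires_at"] - item["timestamp"])  # 604800 seconds = 7 days
# Zone queries ("current state by zone", "24-hour history") would hit the
# zone + timestamp GSI via table.query(IndexName=..., KeyConditionExpression=...).
```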
Amazon CloudWatch
Role: Dashboard visualization and alarms
What it does: Lambda publishes custom metrics (occupied spaces per zone, total occupancy, sensors reporting count) to CloudWatch under the NDXTry/SmartCarPark namespace. The dashboard queries these metrics and renders four widgets. Two alarms monitor high occupancy (>45/50) and sensor health (<45 reporting).
Why it's useful: CloudWatch handles all the complexity of time-series metric storage, aggregation, and visualization. You define widgets, CloudWatch does the heavy lifting.
Cost: Custom metrics + 1 dashboard + 2 alarms = ~£5/month. Includes 15-month metric retention.
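For illustration, here is roughly how the processor could shape its put_metric_data payload. The NDXTry/SmartCarPark namespace is from the walkthrough; the metric and dimension names are assumptions.

```python
def build_metric_data(occupied_per_zone, total_occupied, sensors_reporting):
    """Build the MetricData list for CloudWatch's put_metric_data call.
    Metric and dimension names here are illustrative assumptions."""
    data = [
        {"MetricName": "TotalOccupied", "Value": total_occupied, "Unit": "Count"},
        {"MetricName": "SensorsReporting", "Value": sensors_reporting, "Unit": "Count"},
    ]
    for zone, count in occupied_per_zone.items():
        data.append({
            "MetricName": "OccupiedSpaces",
            "Dimensions": [{"Name": "Zone", "Value": zone}],
            "Value": count,
            "Unit": "Count",
        })
    return data

metrics = build_metric_data({"A": 12, "B": 9, "C": 15}, 36, 50)
# The real call would be:
#   boto3.client("cloudwatch").put_metric_data(
#       Namespace="NDXTry/SmartCarPark", MetricData=metrics)
print(len(metrics))  # 5 datums: 2 totals + 3 zones
```

The two alarms then watch these same metrics: one on total occupancy above 45 of 50 spaces, one on the reporting count dropping below 45.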
What makes this "wow"?
Compare this to manual car park management:
Before: Manual
- 2 staff walk all 3 floors every 2 hours
- Count occupied spaces with clickers
- Write counts on clipboard
- Back in office, enter into Excel
- Create charts manually
- Data is 30-120 minutes old by the time it's visible
- No after-hours monitoring (staff work 9-5)
- Cost: 1.5 hours/day × 260 days × £25/hour = £9,750/year
After: IoT + AWS
- 50 sensors monitor continuously (24/7)
- Occupancy detected automatically via ultrasonic distance
- Data published to AWS IoT Core instantly
- Lambda processes and stores in 2-5 seconds
- Dashboard auto-generates charts in real-time
- Data is never more than about a minute old (one batch interval plus a few seconds of processing)
- Works overnight, weekends, holidays without human intervention
- Cost: £145/year for AWS services
Impact: What took 1.5 hours of manual work daily now happens automatically in real time. Staff are freed to focus on customer service and complex problems. Data quality improves (no human counting errors). Staff cost drops by £9,750/year - a net saving of roughly £9,600/year after the ~£145 AWS bill.
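The savings figure works out directly from the numbers in the comparison above:

```python
# Worked arithmetic behind the before/after comparison (figures from the text).
manual_cost = 1.5 * 260 * 25   # staff-hours/day × working days/year × £/hour
aws_cost = 145                 # £/year for the AWS services
saving = manual_cost - aws_cost
print(f"manual £{manual_cost:.0f}/yr, net saving £{saving:.0f}/yr")
# manual £9750/yr, net saving £9605/yr
```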
Something went wrong? Troubleshooting help
I don't see the dashboard updating
Possible solutions:
- Check auto-refresh is enabled - Look for "Auto refresh" toggle (top-right), ensure ON and set to 10 seconds
- Verify IoT simulator is running - Lambda console → find the simulator function → Monitor tab should show invocations every minute
- Check CloudWatch Logs - Lambda console → find the processor function → Logs tab → Look for "Published metric to CloudWatch" messages
How can I verify data is flowing?
Three checkpoints:
- DynamoDB table: Open DynamoDB console, find the table matching your stack name, click the Items tab, and click "Scan" — should see recent sensor items with sensor_id, timestamp, zone, occupied, confidence, and battery_level
- Lambda invocations: Open Lambda console, click the processor function, Monitor tab shows recent invocations graph
- CloudWatch metrics: On your dashboard, click any widget's 3-dot menu → "View in metrics" → See raw metric data and timestamps