Building Infrasight: A Real-Time IoT Dashboard with Next.js, MongoDB Timeseries, and Claude Code
Reading time: 6 minutes
I wanted to build something that felt like a real production system — not another to-do app, not a clone of something that already exists. Infrasight is what came out of that: a real-time IoT sensor monitoring dashboard for building management. It tracks hundreds of environmental sensors across floors and rooms, surfaces anomalies, predicts maintenance needs, and pushes live data to the browser over WebSockets.
Here's how I built it, what I learned, and how Claude Code changed my workflow along the way.
What Infrasight Actually Does
Imagine you manage a large commercial building. You've got temperature sensors, humidity monitors, CO2 detectors, power meters, motion sensors — hundreds of devices spread across multiple floors. Infrasight gives you a single pane of glass to monitor all of it.
The dashboard includes:
- Live sensor feeds — readings stream in via Pusher WebSockets and update the UI in real time
- Device health scoring — a composite score based on uptime, error count, battery level, signal strength, and last-seen time
- Anomaly detection — flagging unusual readings with anomaly scores and trend analysis
- Predictive maintenance — forecasting which devices will need servicing, categorized as critical, warning, or watch
- Floor plan visualization — an interactive, collapsible floor-by-floor view of every device and its current status
- Full audit trails — every device mutation is tracked with who changed what and when
- Role-based access control — admins get full CRUD, members get read-only, enforced both server-side and in the UI
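The health-scoring feature above is a good candidate for a concrete sketch. This is a hypothetical version of the composite score: the input fields come from the factors listed (uptime, error count, battery, signal, last-seen time), but the weights and penalty values are my illustrative assumptions, not Infrasight's actual formula.

```typescript
// Hypothetical sketch of a composite device health score.
// Weights and penalties are illustrative assumptions, not the project's values.
interface DeviceHealthInput {
  uptimePct: number;        // 0-100
  errorCount: number;       // errors in the scoring window
  batteryPct: number;       // 0-100
  signalPct: number;        // 0-100
  minutesSinceSeen: number; // time since last reading
}

function healthScore(d: DeviceHealthInput): number {
  const uptime = d.uptimePct;
  const errors = Math.max(0, 100 - d.errorCount * 10);       // each error costs 10 points
  const freshness = Math.max(0, 100 - d.minutesSinceSeen);   // stale devices decay fast
  // Weighted average across the five factors named in the article.
  const score =
    0.3 * uptime +
    0.2 * errors +
    0.2 * d.batteryPct +
    0.15 * d.signalPct +
    0.15 * freshness;
  return Math.round(score);
}

console.log(healthScore({
  uptimePct: 99, errorCount: 1, batteryPct: 80, signalPct: 70, minutesSinceSeen: 5,
})); // a healthy device scores in the high 80s with these weights
```

A single 0-100 number like this is easy to sort, threshold, and chart, which is why composite scores work well on dashboards.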
It supports 15 device types (temperature, humidity, CO2, power, occupancy, pressure, light, motion, air quality, and more) and 35 reading units. The V2 API has 19 endpoints covering device management, reading ingestion, analytics, auditing, and system monitoring.
The Tech Stack
I was deliberate about every choice here. This isn't a stack assembled from a tutorial — each piece solves a specific problem.
Frontend:
- Next.js 15 with App Router and Turbopack
- React 19 + TypeScript 5.9 (strict mode)
- Tailwind CSS 4 + shadcn/ui for components
- Recharts for data visualizations
- TanStack React Query for server state management
- Pusher.js for real-time WebSocket connections
Backend:
- Next.js API Routes (server-side)
- MongoDB with Mongoose ODM
- Zod for request validation
- Clerk for authentication and organization-based RBAC
Infrastructure:
- Redis (Upstash) for caching and rate limiting
- Sentry for error tracking and performance monitoring
- Prometheus-compatible metrics endpoint
Testing:
- Jest for unit and integration tests
- Playwright for E2E tests
- MongoDB Memory Server for test databases
Interesting Technical Decisions
MongoDB Timeseries Collections
This was probably the most impactful architectural decision. Sensor readings are time-series data by nature — high-volume, append-only, queried by time ranges. MongoDB's native timeseries collections gave me automatic bucketing, compression, and a 90-day TTL without managing any of that at the application level.
The catch is that your metaField (the bucketing key) needs to stay low-cardinality, and you can't change the timeseries schema after the collection is created. I keep metadata lean:
```typescript
metadata: {
  device_id: 'device_001',
  type: 'temperature',
  unit: 'celsius',
  source: 'sensor'
}
```

Four fields. That's it. Everything else lives on the reading document itself or gets joined at query time. This keeps bucket sizes efficient and aggregation queries fast.
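For reference, here is roughly what creating such a collection looks like with the Node MongoDB driver. The database and collection names are assumptions; the timeseries options (`timeField`, `metaField`, `expireAfterSeconds`) are the real driver API, and the granularity shown is a guess.

```typescript
// Sketch: creating a timeseries collection with a 90-day TTL.
// 'infrasight' and 'readings_v2' are assumed names for illustration.
import { MongoClient } from 'mongodb';

async function createReadingsCollection(client: MongoClient) {
  const db = client.db('infrasight');
  await db.createCollection('readings_v2', {
    timeseries: {
      timeField: 'timestamp',  // when the reading was taken
      metaField: 'metadata',   // the low-cardinality bucketing key shown above
      granularity: 'minutes',  // bucketing hint; assumption on my part
    },
    expireAfterSeconds: 90 * 24 * 60 * 60, // the 90-day TTL mentioned above
  });
}
```

Because these options cannot be changed after creation, they belong in a migration script rather than application startup code.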
Zod-First API Design
Every V2 endpoint validates inputs through Zod schemas before anything else happens. The pattern looks like this across the entire API:
```typescript
export async function GET(request: NextRequest) {
  return withErrorHandler(async () => {
    const authContext = await requireOrgMembership();
    const { searchParams } = new URL(request.url);
    const params = validateQuery(searchParams, listDevicesQuerySchema);
    if (!params.success) throw new ApiError(400, 'VALIDATION_ERROR', params.error);
    const devices = await DeviceV2.findActive(params.data);
    return jsonPaginated(devices, pagination, meta);
  });
}
```

withErrorHandler catches everything — validation errors, database errors, auth failures — and normalizes them into a consistent response format. No raw try/catch blocks scattered across routes. No inconsistent error shapes. One pattern, everywhere.
Soft Deletes with Audit Trails
Devices are never truly deleted. A soft delete sets audit.deleted_at and audit.deleted_by, and findActive() filters them out automatically. Admins can view deleted devices, restore them, or see the full audit history of who created, updated, and deleted any device and when. Every mutation records the authenticated user's email.
This was a deliberate choice — in a building management context, losing device history is unacceptable.
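The soft-delete pattern can be sketched as a Mongoose schema with an audit subdocument and a findActive static. The audit field names mirror those mentioned above; everything else here is an illustrative assumption, not Infrasight's actual model.

```typescript
// Sketch of soft deletes with an audit trail in Mongoose.
// Schema shape is an assumption beyond the audit fields named in the article.
import { Schema, model } from 'mongoose';

const deviceSchema = new Schema({
  name: { type: String, required: true },
  audit: {
    created_at: { type: Date, default: Date.now },
    created_by: String,  // authenticated user's email
    updated_at: Date,
    updated_by: String,
    deleted_at: Date,    // set on delete instead of removing the document
    deleted_by: String,
  },
});

// findActive() hides soft-deleted devices from every normal query path,
// while admins can still query the collection directly to view or restore them.
deviceSchema.statics.findActive = function (filter = {}) {
  return this.find({ ...filter, 'audit.deleted_at': { $exists: false } });
};

export const DeviceV2 = model('DeviceV2', deviceSchema);
```

Centralizing the filter in one static means no individual route can accidentally leak deleted devices by forgetting a where-clause.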
Building with Claude Code
I used Claude Code throughout the development of Infrasight, and it genuinely changed how I work.
The biggest shift wasn't speed (though that helped). It was scope. There are features in this project I wouldn't have attempted as a solo developer — not because I couldn't figure them out, but because the time investment wouldn't have been worth it for a personal project. The anomaly detection pipeline, the predictive maintenance analytics, the comprehensive Zod validation layer across 19 endpoints, the full RBAC system — Claude Code made these feasible to build and iterate on.
A few things that stood out:
- Boilerplate elimination. Setting up Zod schemas, API route handlers, React Query hooks, and TypeScript types for each new endpoint involves a lot of repetitive-but-precise code. Claude Code handled that reliably, letting me focus on the actual logic.
- Architecture conversations. When I was deciding between timeseries metadata approaches or evaluating cache invalidation strategies, I could think through the tradeoffs in conversation rather than just reading docs in isolation.
- Refactoring confidence. The V1-to-V2 migration touched nearly every file. Having an AI assistant that could hold the full context of the codebase while making coordinated changes across models, routes, validations, and client code was invaluable.
It's not magic — I still made every architectural decision, reviewed every change, and debugged plenty of things manually. But Claude Code removed enough friction that I could focus on building a system I'm actually proud of instead of getting bogged down in the mechanical parts.
What I'd Do Differently
- Start with timeseries from day one. The V1-to-V2 migration happened because V1 used regular collections for readings. Migrating to timeseries required recreating the collection and rethinking the data model. If I'd committed to timeseries upfront, I'd have saved significant effort.
- Design the RBAC system earlier. Bolting on Clerk and organization-based roles after the API was built meant touching every route handler. Planning for auth from the start would have been cleaner.
- More granular React Query cache invalidation. Some mutations invalidate broader query caches than they need to. With more upfront thought, I could have been more surgical about what gets refetched.
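The cache-invalidation point above is usually solved with a query-key factory, so a mutation can target exactly one key instead of a whole prefix. This is a generic sketch; the key shapes are illustrative, not the project's actual keys.

```typescript
// A query-key factory for TanStack React Query. Invalidating
// deviceKeys.detail(id) refetches one device; invalidating deviceKeys.all
// refetches every device query. Key shapes are illustrative assumptions.
export const deviceKeys = {
  all: ['devices'] as const,
  lists: () => [...deviceKeys.all, 'list'] as const,
  list: (filters: Record<string, unknown>) => [...deviceKeys.lists(), filters] as const,
  details: () => [...deviceKeys.all, 'detail'] as const,
  detail: (id: string) => [...deviceKeys.details(), id] as const,
};

// After a mutation that touches one device, be surgical:
//   queryClient.invalidateQueries({ queryKey: deviceKeys.detail('device_001') });
// rather than the broad:
//   queryClient.invalidateQueries({ queryKey: deviceKeys.all });
```

Because React Query matches keys by prefix, the hierarchy lets you choose the blast radius per mutation instead of always refetching everything.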
The Numbers
- 19 API endpoints across 6 categories
- 15 sensor device types, 35 reading units
- 500 seeded test devices for development
- 10,000 readings per bulk ingest request
- 90-day automatic TTL on reading data
- Full unit, integration, and E2E test coverage with Jest and Playwright
Wrapping Up
Infrasight started as a way to build something non-trivial with modern tools and ended up being a system I'd genuinely be comfortable deploying. The combination of Next.js 15, MongoDB timeseries, real-time WebSockets, and enterprise-grade auth and monitoring makes it feel like more than a side project.
If you're interested in the code, the repository is public. If you're evaluating my work — this is how I build software: deliberate architecture, comprehensive validation, proper error handling, audit trails, and clean separation of concerns. No shortcuts on the things that matter.
© 2026 Amadou Seck. Published on aseck.dev