
MongoDB .local Stockholm

From Hours to Milliseconds

Why Great Experiences Run on Live Data

01

Why Latency Matters

86,400,000 ms

One full day: the gap between batch and real time

Most enterprise data pipelines still rely on batch ETL — extract overnight, transform in the morning, load by lunch. For internal dashboards, that cadence is fine. But the moment that same data powers a customer-facing experience, the gap between “fresh enough” and “stale” becomes the gap between conversion and abandonment.

The latency spectrum runs from hours (classic batch) through minutes (micro-batch) to true milliseconds (stream processing). Each step down that spectrum unlocks new categories of experience: real-time recommendations, fraud detection before a transaction settles, pricing that reflects the market right now rather than the market at last night’s close.

Every experience lives on a latency spectrum

Personalisation: < 200 ms
Risk & Fraud Checks: 1–2 s
Operational Dashboards: < 15 s
Strategic Analytics: minutes–hours

When everything runs at batch speed, the consequences compound. Personalisation can’t adapt, fraud slips through, and dashboards only ever show you the past. The entire stack converges on the latency of the slowest system.

When everything runs at the speed of the slowest system

Personalisation feels generic

Dashboards tell you what already happened

Fraud is caught too late

Experiences feel disconnected


Research consistently shows that every additional 100 ms of latency carries a measurable cost in conversion. The question is not whether low latency matters — it’s whether your architecture can deliver it without a full rewrite.

02

The Live Data Plane

The shift from batch to live is not about replacing your data warehouse. It’s about adding an operational data plane that sits between your systems of record and your systems of engagement. MongoDB Atlas, combined with Atlas Stream Processing, acts as that plane — ingesting change streams, applying windowed transformations, and materializing results into collections that your application reads with single-digit-millisecond latency.

From systems of record to application in milliseconds

Systems of Record → CDC → Stream Processing → Materialized Views → Application
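
As a minimal sketch of the CDC leg of this flow (assuming a Node.js service using the official mongodb driver, with illustrative connection string and namespace), a change stream can tail the system of record:

    import { MongoClient } from "mongodb";

    // Illustrative connection string and namespace; adjust to your deployment.
    const client = new MongoClient("mongodb+srv://<cluster-uri>");

    async function tailOrders(): Promise<void> {
      await client.connect();
      const orders = client.db("erp").collection("orders");

      // Every insert, update, or delete on the source collection arrives
      // here as a change event, typically within milliseconds.
      const stream = orders.watch([], { fullDocument: "updateLookup" });

      for await (const change of stream) {
        // Hand the event to the next stage: a stream processor, a queue,
        // or a direct write into a materialized collection.
        console.log(change.operationType);
      }
    }

    tailOrders().catch(console.error);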

Atlas Stream Processing lets you define continuous queries over change streams using a familiar aggregation pipeline syntax. You don’t need a separate streaming platform or a new query language. The data stays within the MongoDB ecosystem, and the output lands in collections your application already knows how to read.
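
As an illustrative sketch (connection, database, and field names are all placeholders), such a continuous query might look like the pipeline below, registered against a stream processing instance, e.g. via sp.createStreamProcessor in mongosh:

    // Sketch of an Atlas Stream Processing pipeline; names are illustrative.
    const pipeline = [
      // Read the change stream of the source collection.
      { $source: { connectionName: "atlasCluster", db: "erp", coll: "orders" } },

      // Aggregate order amounts per customer over 10-second tumbling windows.
      {
        $tumblingWindow: {
          interval: { size: 10, unit: "second" },
          pipeline: [
            {
              $group: {
                _id: "$fullDocument.customerId",
                total: { $sum: "$fullDocument.amount" },
              },
            },
          ],
        },
      },

      // Materialize the windowed results into a collection the app reads.
      { $merge: { into: { connectionName: "atlasCluster", db: "live", coll: "order_totals" } } },
    ];

Because the transformation is expressed as an aggregation pipeline, the same skills and tooling apply on both the streaming side and the serving side.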

The result is an architecture where the operational database is not just a persistence layer — it becomes the live data plane that powers every downstream experience, from search to personalisation to AI inference.

03

Patterns You Can Apply

Three patterns emerged repeatedly across the use cases we explored in Stockholm. Each one is independently valuable and can be adopted incrementally — you don’t need to rebuild your stack to start seeing results.

Forward Cache

< 10 ms reads

Project system-of-record data into MongoDB via CDC, creating a resilient read cache that serves traffic even when upstream systems are unavailable.
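
A minimal sketch of this pattern, assuming a Node.js worker and illustrative namespaces (products in the system of record, products_cache in Atlas):

    import { MongoClient } from "mongodb";

    // Illustrative deployments: the upstream system of record and the cache.
    const source = new MongoClient("mongodb://sor-host:27017");
    const cache = new MongoClient("mongodb+srv://<atlas-uri>");

    async function runForwardCache(): Promise<void> {
      await Promise.all([source.connect(), cache.connect()]);
      const products = source.db("erp").collection("products");
      const cached = cache.db("live").collection("products_cache");

      // Replay every upstream change into the cache collection, so reads
      // keep being served even when the system of record is unavailable.
      for await (const ev of products.watch([], { fullDocument: "updateLookup" })) {
        if (ev.operationType === "delete") {
          await cached.deleteOne({ _id: ev.documentKey._id });
        } else if ("fullDocument" in ev && ev.fullDocument) {
          await cached.replaceOne(
            { _id: ev.fullDocument._id },
            ev.fullDocument,
            { upsert: true },
          );
        }
      }
    }

    runForwardCache().catch(console.error);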

Real-Time Feature Store

< 50 ms features

Stream user behaviour events, compute rolling aggregates via Atlas Stream Processing, and serve feature vectors directly to recommendation engines.
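
On the serving side, the pattern reduces to an indexed lookup. A sketch, assuming a user_features collection kept fresh by a stream processor (all names and fields illustrative):

    import { MongoClient } from "mongodb";

    const client = new MongoClient("mongodb+srv://<atlas-uri>");

    // Shape of a pre-computed feature document (illustrative fields).
    interface UserFeatures {
      _id: string;            // userId
      clicks5m: number;       // rolling 5-minute click count
      avgOrderValue: number;  // rolling average order value
      updatedAt: Date;
    }

    // The recommendation engine reads a fresh feature vector with a single
    // _id lookup, well inside a 50 ms budget.
    async function getFeatures(userId: string) {
      return client
        .db("live")
        .collection<UserFeatures>("user_features")
        .findOne({ _id: userId });
    }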

Live Data Toolbox

configurable, ms to s

A composable set of building blocks — change streams, triggers, and stream processing pipelines — assembled to fit your specific latency requirements.
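
One such building block, sketched with illustrative names: a change stream filtered by an aggregation pipeline, so the application only wakes for the events it cares about.

    import { MongoClient } from "mongodb";

    const client = new MongoClient("mongodb+srv://<atlas-uri>");

    async function watchHighValueOrders(): Promise<void> {
      // The same aggregation syntax used everywhere else in MongoDB acts
      // as the filter: only high-value insert events reach the app.
      const stream = client
        .db("erp")
        .collection("orders")
        .watch([
          {
            $match: {
              operationType: "insert",
              "fullDocument.amount": { $gt: 1000 },
            },
          },
        ]);

      for await (const ev of stream) {
        // React immediately: notify, score, or route the order.
        console.log("high-value order:", ev.operationType);
      }
    }

    watchHighValueOrders().catch(console.error);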

04

The Future: AI & Autonomy

Every serious AI deployment eventually hits the same wall: the model is only as good as the data it can access at inference time. Retrieval-augmented generation, real-time scoring, and agentic workflows all demand a data plane that returns results in milliseconds, not minutes.

The live data plane we built throughout this talk is exactly that foundation. When your operational database already serves fresh, pre-computed features alongside vector embeddings, adding an AI layer becomes an incremental step rather than a separate infrastructure project.
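
A sketch of the retrieval step in such an AI layer, assuming an Atlas Vector Search index named embedding_index over an embedding field (index, namespace, and sizes are illustrative):

    import { MongoClient } from "mongodb";

    const client = new MongoClient("mongodb+srv://<atlas-uri>");

    // Retrieval step of a RAG flow: nearest-neighbour search over vector
    // embeddings stored alongside the operational data.
    async function retrieveContext(queryVector: number[]) {
      return client
        .db("live")
        .collection("documents")
        .aggregate([
          {
            $vectorSearch: {
              index: "embedding_index",
              path: "embedding",
              queryVector,
              numCandidates: 200,
              limit: 5,
            },
          },
          // Keep only the text and the similarity score.
          { $project: { text: 1, score: { $meta: "vectorSearchScore" } } },
        ])
        .toArray();
    }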

The practical advice: start small.

01

Pick one experience

02

Move it from batch to live

03

Measure, then expand