The world's oil chokepoint,
watched in real time by AI.
HormuzWatch tracks every vessel, news headline, and market signal at the Strait of Hormuz and runs them through a live Apache Flink + Claude AI pipeline that generates intelligence briefings, trajectory forecasts, risk heatmaps, and throughput estimates updated in seconds.
33 kilometres.
One fifth of world supply.
The Strait of Hormuz sits between Iran and Oman at the mouth of the Persian Gulf. At its narrowest point it is 33 kilometres wide, and roughly 20% of the world's daily oil supply passes through it.
Any disruption — an Iranian vessel seizure, a naval confrontation, a closure threat — sends commodity markets moving within hours. In periods of elevated US-Iran tension, what is physically happening in the strait is directly relevant to energy traders, shipping analysts, and anyone tracking how a narrow waterway shapes the global economy.
News headlines report these events with a delay of hours, and without the underlying vessel data. HormuzWatch removes the delay and makes the connection between maritime patterns and market signals visible in real time.
Track Brent/WTI spreads against real-time tanker routing and throughput data. A drop in westbound tankers appears in HormuzWatch days before EIA weekly inventory data.
Monitor tanker utilisation, shadow fleet activity, and STS transfer patterns as vessels navigate sanctions pressure and conflict risk.
The 2019 tanker attacks, 2012 sanctions, and 2011 closure threat all had visible AIS signatures before they became news. HormuzWatch surfaces those patterns live.
What the system
actually does.
Two things: a streaming data pipeline and a monitoring dashboard. The pipeline ingests AIS transponder data, news feeds, commodity prices, and prediction market odds, processes them through twelve Flink stream processors on Ververica Cloud, and synthesises the results with Claude. The dashboard makes all of it visible and queryable.
Live vessel tracking
Every commercial ship in the strait is tracked in real time from three AIS data sources: AISStream.io WebSocket, MarineTraffic, and AISHub. Ships render as directional icons colour-coded by type — orange tankers, red military, cyan LNG — with 12-position trail lines and a mouseover detail panel.
Sanctioned vessel detection
The MMSIs of known sanctioned vessels — Iranian Revolutionary Guard craft and shadow fleet tankers — are matched against the live AIS feed. Sanctioned vessels glow red on the map and trigger immediate CRITICAL intelligence events.
AI situation reports
Claude consumes the intelligence event stream and generates plain-language briefings citing historical precedents: the 2019 tanker attacks, 2012 sanctions, 2011 closure threat. A four-layer cost control system (significance scoring, delta triggering, Haiku/Sonnet tiering, 2-minute minimum interval) keeps daily AI spend in the cents.
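As a rough illustration of that four-layer gate — with the thresholds, event fields, and `choose_model` helper all invented for this sketch rather than taken from the production pipeline:

```python
import time
from typing import Optional

# Illustrative values only — not the production thresholds.
MIN_INTERVAL_S = 120        # layer 4: at least 2 minutes between briefings
SIGNIFICANCE_FLOOR = 0.3    # layer 1: ignore low-significance events
SONNET_THRESHOLD = 0.7      # layer 3: escalate big events to Sonnet

_last_call = 0.0
_last_fingerprint = None

def choose_model(event: dict) -> Optional[str]:
    """Return 'haiku', 'sonnet', or None (skip the API call entirely)."""
    global _last_call, _last_fingerprint
    score = event["significance"]
    if score < SIGNIFICANCE_FLOOR:          # layer 1: significance scoring
        return None
    fingerprint = (event["kind"], round(score, 1))
    if fingerprint == _last_fingerprint:    # layer 2: delta triggering —
        return None                         # nothing changed, no new briefing
    if time.time() - _last_call < MIN_INTERVAL_S:   # layer 4: rate floor
        return None
    _last_call = time.time()
    _last_fingerprint = fingerprint
    # layer 3: model tiering — cheap Haiku by default, Sonnet for big events
    return "sonnet" if score >= SONNET_THRESHOLD else "haiku"
```

Most events never reach the API at all; the few that do mostly hit the cheaper model, which is what keeps the daily bill trivial.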
Market correlation
Live prices updated every 30 seconds: WTI and Brent crude, LNG futures, tanker stocks (Frontline, Scorpio, DHT Holdings), energy majors, and Polymarket/Kalshi prediction market odds on Iranian military action. The dashboard shows commodity and maritime intelligence on the same screen.
Trajectory forecasting
Dashed prediction lines extend from every moving vessel, showing estimated positions at 15, 30, 45, 60, 90, and 120 minutes. A Flink MapFunction runs haversine dead-reckoning on every vessel with speed ≥ 1.0 kt. Analysts can see whether a military vessel's course intersects tanker lanes before it arrives.
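The dead-reckoning itself is a standard great-circle projection. A minimal Python sketch of the idea (the real version runs as a Flink MapFunction in Java; function names here are illustrative):

```python
import math

EARTH_RADIUS_M = 6_371_000
HORIZONS_MIN = (15, 30, 45, 60, 90, 120)   # the six horizons from the text

def dead_reckon(lat, lon, speed_kt, course_deg, minutes):
    """Project a position forward along the current course at the
    current speed, following a great circle on a spherical Earth."""
    d = speed_kt * 1852 * minutes / 60      # metres travelled
    ang = d / EARTH_RADIUS_M                # angular distance, radians
    lat1, lon1 = math.radians(lat), math.radians(lon)
    brg = math.radians(course_deg)
    lat2 = math.asin(math.sin(lat1) * math.cos(ang)
                     + math.cos(lat1) * math.sin(ang) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def trajectory(lat, lon, speed_kt, course_deg):
    """One predicted point per horizon, only for vessels under way."""
    if speed_kt < 1.0:                      # matches the >= 1.0 kt cutoff
        return []
    return [dead_reckon(lat, lon, speed_kt, course_deg, m)
            for m in HORIZONS_MIN]
```

A tanker doing 10 kt due east near the strait drifts about 0.19° of longitude per hour at that latitude — enough that the 120-minute point lands in a visibly different part of the map.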
Natural language queries
A chat panel in the ANALYTICS tab accepts plain-English questions grounded in live Flink-processed state. "Which vessels near Fujairah should I be watching?" "How does today's risk score compare to the 2019 tanker attacks?" Answers stream token-by-token via SSE.
Risk heatmap
A Mapbox fill layer shades 0.2° geographic grid cells from transparent green through amber to glowing red. A 5-minute tumbling window in Flink aggregates intelligence events by grid cell. During chokepoint events, risk concentrates at the strait mouth near Qeshm Island — visible immediately.
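The keying behind that aggregation is simple: snap each event to a 0.2° cell and a 5-minute window, then sum severity per key. A hedged Python sketch of what the Flink window does (field names and the severity weighting are illustrative):

```python
import math
from collections import Counter

CELL_DEG = 0.2     # grid resolution from the text
WINDOW_S = 300     # 5-minute tumbling window

def cell_key(lat, lon):
    """Snap a position to its 0.2-degree grid cell."""
    return (math.floor(lat / CELL_DEG), math.floor(lon / CELL_DEG))

def window_key(ts):
    """Tumbling-window start: every event in the same 5 minutes shares it."""
    return int(ts // WINDOW_S) * WINDOW_S

def aggregate(events):
    """events: (ts, lat, lon, severity) -> {(window, cell): total severity}."""
    heat = Counter()
    for ts, lat, lon, severity in events:
        heat[(window_key(ts), cell_key(lat, lon))] += severity
    return heat
```

Two events a few kilometres apart near Qeshm Island fall into the same cell and stack, which is exactly why risk visibly concentrates there during chokepoint events.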
Analyst geofence studio
Draw any polygon on the map; it becomes a live monitoring zone. The Flink DynamicGeofenceFilter uses a BroadcastProcessFunction to receive geofence updates and JTS polygon containment to test every AIS position as it arrives. Zones persist across sessions via the API.
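The containment test at the heart of that filter is classic ray casting. A plain-Python stand-in for the JTS call the Flink job actually makes:

```python
def contains(polygon, lat, lon):
    """Ray-casting point-in-polygon test over (lat, lon) vertex pairs.
    Casts a ray east from the point and counts edge crossings:
    an odd count means the point is inside the zone."""
    inside = False
    n = len(polygon)
    for i in range(n):
        lat1, lon1 = polygon[i]
        lat2, lon2 = polygon[(i + 1) % n]
        # Only edges that straddle the point's latitude can be crossed.
        if (lat1 > lat) != (lat2 > lat):
            crossing = lon1 + (lat - lat1) * (lon2 - lon1) / (lat2 - lat1)
            if lon < crossing:
                inside = not inside
    return inside
```

Each AIS position is one cheap loop over the zone's vertices, so even a busy strait feed tests every vessel against every analyst-drawn zone without strain.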
How data flows
through the system.
Twelve stream
processors, live.
Six original Flink detectors built in the first sprint. Six new ones added in the Intelligence Overhaul, including a CEP multi-signal correlator, a trajectory predictor, a fleet proximity graph, a dynamic geofence engine, and a strait throughput estimator.
Each detector is a Java KeyedProcessFunction, BroadcastProcessFunction, or CEP pattern job running continuously on Ververica Cloud. They consume from Kafka, process in real time, and publish back to Kafka. The frontend never touches a Flink job directly.
The foundation
- Live AIS vessel tracking with sanctioned MMSI matching
- Six KeyedProcessFunction anomaly detectors on the AIS stream
- Claude Haiku/Sonnet tiered briefing synthesis with cost control
- Yahoo Finance market data + Polymarket prediction markets
- React + Mapbox GL JS dashboard with live Intel feed
Intelligence Overhaul
- CEP multi-signal correlator: two detectors firing within 30 min = CRITICAL event
- Risk heatmap: 5-min tumbling window over 0.2° geographic grid cells
- Trajectory prediction: haversine dead-reckoning at 6 time horizons
- Fleet proximity graph: D3 force-directed, keyed by 0.2° sector
- Strait throughput estimator: westbound tanker crossings at 58°E
- Natural language query panel: Claude Opus over live Flink state via SSE
- Streaming briefing panel: word-by-word SSE with a live synthesis-latency readout
- Historical AIS replay: asyncio + JSONL at up to 20× speed
- Analyst geofence studio: Mapbox Draw + Flink BroadcastProcessFunction + JTS
- Pipeline telemetry panel: live metrics from in-memory AppState
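The multi-signal correlator at the top of that list reduces to a small amount of state. A hypothetical Python stand-in for the Flink CEP pattern, with detector names invented for illustration:

```python
from collections import deque

CORRELATION_WINDOW_S = 30 * 60   # the 30-minute correlation window

class MultiSignalCorrelator:
    """Two *different* detectors firing within 30 minutes escalate
    to a single CRITICAL event; repeats of one detector do not."""
    def __init__(self):
        self.recent = deque()    # (ts, detector_name), oldest first

    def on_event(self, ts, detector):
        # Expire signals that have fallen out of the window.
        while self.recent and ts - self.recent[0][0] > CORRELATION_WINDOW_S:
            self.recent.popleft()
        correlated = any(d != detector for _, d in self.recent)
        self.recent.append((ts, detector))
        if correlated:
            return {"severity": "CRITICAL",
                    "detectors": sorted({d for _, d in self.recent})}
        return None
```

A news spike alone stays quiet; a news spike followed ten minutes later by a deceleration cluster fires immediately — the same escalation logic the Flink CEP job expresses as a pattern.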
The technology
I work with every day.
I work at Ververica, the company that created Apache Flink. My job is helping organizations build real-time data systems. HormuzWatch is what happens when you point that same technology at something you want to understand about the world.
I love using stream processing to explain complex situations to people. Geopolitical events, commodity markets, vessel behaviour under sanctions pressure: most people follow these through delayed news coverage. Real-time data removes that delay and makes the underlying patterns legible to anyone paying attention. That is what this system does, and it is why I built it.
Ververica Cloud handles deployment, scaling, job restarts, and monitoring. I handle the detection logic. That division of responsibility is what makes a system this complex buildable by one person.
- Managed Flink deployment — no cluster administration
- Automatic failover and job restart on failure
- Real-time job metrics, backpressure monitoring, savepoints
- Native Kafka source/sink connectors
- Built by the team that wrote Apache Flink
A news report about Iranian naval activity and a cluster of tankers decelerating near Qeshm Island are two independent signals. Without stream processing you notice them separately, hours apart. With a CEP correlator they become one CRITICAL intelligence event the moment they co-occur. That is the gap this system closes.
Technology that
explains the world.
The Strait of Hormuz carries roughly one-fifth of the world's daily oil supply through a passage 33 kilometres wide at its narrowest point. In April 2025, with US-Iran tensions at a decade high, I kept watching news headlines but couldn't find anything that showed me what was actually happening in the strait, in real time.
I'd built AISGuardian for Baltic Sea infrastructure protection and knew the data existed. AIS transponders broadcast from every commercial vessel. News feeds were parseable. Commodity prices were accessible. What didn't exist was a system that connected all three streams, found the patterns, and explained what they meant.
HormuzWatch started as a Kafka + Flink + React stack that flagged AIS anomalies. Then I added Claude to turn the anomaly stream into intelligible briefings. Then market data, prediction markets, precedent-aware context, and a second sprint that added ten new capabilities including real-time NL queries, trajectory forecasting, and an analyst geofence studio.
The result is a system I actually use. When something happens at Hormuz, I open the dashboard.
AISGuardian
Maritime vessel tracking for Baltic Sea critical infrastructure. The predecessor to HormuzWatch, built during the Aiven Kafka Challenge. Same stack, different mission: cable protection instead of commodity market intelligence.
Let's build something
worth talking about.
I take on a limited number of advisory and fractional engagements. Only projects where I can make a real difference. If you're navigating growth, AI, or revenue challenges in a technical B2B environment, let's talk.