Starlink Performance Globe

2026

Starlink Performance Globe is a full-stack network visualization project that turns raw internet telemetry, live orbital data, and demand-side traffic estimates into a single interactive 3D map. The goal was a fast, globe-scale view of how the network behaves: where latency is low, where satellites are overhead, and where modeled congestion likely builds. The system spans 31 million latency measurements, roughly 9,615 live satellites, and about 100 geographic regions.

Dataset construction

I built a reproducible global latency dataset from M-Lab NDT7 data in BigQuery, filtering tests down to SpaceX ASN traffic and aggregating per-test minimum RTT into spatial grid cells. That pipeline compressed 31 million real-world Starlink measurements into a globe-ready dataset that could be rendered interactively instead of treated as a static research artifact.
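The per-cell aggregation step can be sketched as below. The grid resolution (`CELL_DEG`) and the choice of median as the per-cell statistic are illustrative assumptions, not the project's actual parameters:

```python
from collections import defaultdict
from statistics import median

# Hypothetical grid resolution; the real pipeline's cell size isn't specified.
CELL_DEG = 2.0

def cell_id(lat, lon):
    """Snap a coordinate onto a fixed lat/lon grid."""
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

def aggregate_min_rtt(tests):
    """tests: iterable of (lat, lon, min_rtt_ms), one per NDT7 test.
    Groups per-test minimum RTTs into cells and summarizes each cell."""
    cells = defaultdict(list)
    for lat, lon, rtt in tests:
        cells[cell_id(lat, lon)].append(rtt)
    return {c: {"median_ms": median(v), "count": len(v)}
            for c, v in cells.items()}
```

In the real pipeline the grouping happens in BigQuery SQL; the sketch just shows the shape of the reduction from raw tests to globe-ready cells.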

On the frontend, those normalized latency and congestion layers are served from a FastAPI backend and drawn as WebGL hex tiles on a 3D globe. I also wrote the geospatial utilities needed to keep the tiles visually consistent across latitudes, including hex polygon generation, tile-radius calculation, and antimeridian and polar edge-case handling.
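The latitude-consistency and antimeridian handling can be illustrated with a minimal hex-vertex generator. This is a Python sketch of frontend geometry, with an assumed spherical-Earth approximation; the vertex layout and the pole clamp are my own simplifications:

```python
import math

EARTH_RADIUS_KM = 6371.0

def hex_polygon(lat, lon, radius_km):
    """Approximate hexagon vertices (lat, lon) around a center point.
    Scales the longitude step by 1/cos(latitude) so tiles keep a
    consistent on-screen size away from the equator, and wraps
    longitudes across the antimeridian."""
    dlat = math.degrees(radius_km / EARTH_RADIUS_KM)
    coslat = max(math.cos(math.radians(lat)), 1e-6)  # avoid blowup at poles
    dlon = dlat / coslat
    verts = []
    for i in range(6):
        ang = i * math.pi / 3
        vlat = lat + dlat * math.sin(ang)
        vlon = lon + dlon * math.cos(ang)
        vlon = (vlon + 180.0) % 360.0 - 180.0  # wrap across the antimeridian
        verts.append((vlat, vlon))
    return verts
```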

Real-time orbital simulation

The live satellite layer was built to handle the real scale of the constellation rather than a toy sample. I ingested current TLE data, parsed it into in-browser satellite records, and used orbital propagation to simulate about 9,615 Starlink satellites directly in the client. Positions update continuously without per-frame server calls, which keeps the globe responsive while still feeling live.
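The actual propagation runs in the browser from parsed TLEs; as a rough intuition for what "propagate from orbital elements" means, here is a heavily simplified circular-orbit toy in Python. It ignores eccentricity, drag, and perturbations that real SGP4 propagation handles, so it is an illustration of the idea, not the project's math:

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def propagate_circular(mean_motion_rev_day, inclination_deg, raan_deg,
                       mean_anomaly_deg, seconds_since_epoch):
    """Toy circular-orbit propagator; returns an ECI position in km.
    A real implementation would use full SGP4 on the TLE."""
    n = mean_motion_rev_day * 2 * math.pi / 86400.0   # rad/s
    a = (MU / n ** 2) ** (1.0 / 3.0)                  # semi-major axis, km
    theta = math.radians(mean_anomaly_deg) + n * seconds_since_epoch
    inc, raan = math.radians(inclination_deg), math.radians(raan_deg)
    # Position in the orbital plane, then rotate by inclination and RAAN.
    xo, yo = a * math.cos(theta), a * math.sin(theta)
    x = xo * math.cos(raan) - yo * math.cos(inc) * math.sin(raan)
    y = xo * math.sin(raan) + yo * math.cos(inc) * math.cos(raan)
    z = yo * math.sin(inc)
    return x, y, z
```

For a Starlink-like mean motion of about 15 revolutions per day, this yields an orbital radius near 6,900 km, i.e. roughly 550 km altitude.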

To keep that layer reliable, I added an hourly TLE refresh job, cached parsed satellite records to avoid repeated orbit work, and dropped invalid propagation outputs before they could poison the render loop. The result is a satellite visualization that feels real-time but is still operationally manageable.
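The caching and validity-filtering logic can be sketched as follows. The TTL, the range bounds, and the `SatCache`/`valid_position` names are illustrative, not the project's actual identifiers:

```python
import math
import time

class SatCache:
    """Caches parsed satellite records with a TTL, mirroring an hourly
    TLE refresh, so orbits aren't re-parsed on every request."""
    def __init__(self, fetch, ttl_s=3600.0):
        self.fetch = fetch          # callable returning parsed records
        self.ttl_s = ttl_s
        self._records, self._at = None, 0.0

    def records(self):
        if self._records is None or time.monotonic() - self._at > self.ttl_s:
            self._records = self.fetch()
            self._at = time.monotonic()
        return self._records

def valid_position(pos):
    """Reject NaN/inf or wildly out-of-range propagation outputs
    before they reach the render loop (bounds in km, LEO..beyond-GEO)."""
    return (pos is not None
            and all(math.isfinite(c) for c in pos)
            and 6400.0 < math.dist(pos, (0.0, 0.0, 0.0)) < 60000.0)
```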

Congestion modeling

Beyond plotting satellites and latency, I wanted the project to say something about network load. I prototyped multiple congestion-modeling approaches before settling on a regional pipeline that combines Cloudflare Radar traffic distributions, global usage assumptions, diurnal demand shaping, and local orbital density. That produces a comparable demand metric for each of about 100 regions instead of a vague “busy” score.
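One way to combine those inputs is sketched below. The cosine diurnal curve, the 0.5–1.5 multiplier range, and dividing by orbital density are all my own illustrative assumptions about how such factors could be composed, not the project's actual weights:

```python
import math

def regional_demand(traffic_share, global_users, peak_hour,
                    local_hour, orbital_density):
    """Hypothetical demand score: a Radar-style traffic share scaled by an
    assumed global user count, shaped by a cosine diurnal curve, and
    normalized by local orbital density."""
    # Diurnal multiplier in [0.5, 1.5], peaking at peak_hour local time.
    phase = 2.0 * math.pi * (local_hour - peak_hour) / 24.0
    diurnal = 1.0 + 0.5 * math.cos(phase)
    base = traffic_share * global_users * diurnal
    return base / max(orbital_density, 1e-9)
```

The point of the composition is comparability: every region's score is built from the same factors, so evening peaks in one timezone can be compared against off-peak hours in another.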

I then estimated available supply by counting visible satellites within a computed service radius and summing per-satellite capacity based on altitude-derived generation assumptions. Paired with timezone-aware demand multipliers and cached geo-enrichment lookups, that made it possible to compare demand versus capacity across about 100 regions in a way that is transparent enough to explain directly in the UI.
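The supply side can be sketched like this. The generation names, capacity figures, and the 540 km altitude cutoff are placeholder assumptions for illustration; only the overall shape (count visible satellites within a service radius, sum altitude-derived capacity, compare to demand) follows the text:

```python
import math

# Hypothetical per-generation capacity (Gbps), keyed by altitude band.
GEN_CAPACITY_GBPS = {"gen_low": 80.0, "gen_high": 20.0}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points on Earth's surface."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    h = (math.sin(dlat / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2)
    return 2.0 * 6371.0 * math.asin(math.sqrt(h))

def generation_from_altitude(alt_km):
    """Assumed mapping from shell altitude to satellite generation."""
    return "gen_low" if alt_km < 540.0 else "gen_high"

def regional_supply_gbps(sats, region_lat, region_lon, service_radius_km):
    """Sum capacity of satellites whose ground point lies within the
    service radius of a region's centroid. sats: (lat, lon, alt_km)."""
    total = 0.0
    for lat, lon, alt_km in sats:
        if haversine_km(region_lat, region_lon, lat, lon) <= service_radius_km:
            total += GEN_CAPACITY_GBPS[generation_from_altitude(alt_km)]
    return total

def load_ratio(demand_gbps, supply_gbps):
    """Demand versus capacity; values above 1.0 suggest modeled congestion."""
    return demand_gbps / supply_gbps if supply_gbps else float("inf")
```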

Production and operations

I containerized the Python backend, deployed the stack across Render and Vercel, and set up background jobs to keep the data pipeline moving without manual intervention. That included managed refresh work inside the FastAPI app lifecycle and a second scheduled keep-alive flow so free-tier services would not spin down between visits, keeping the live demo available without babysitting the infrastructure.
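The lifecycle-managed refresh pattern can be sketched with plain asyncio. FastAPI accepts an async context manager as its `lifespan`; the sketch below adapts that shape (the real FastAPI signature takes the app instance), and the interval and function names are illustrative:

```python
import asyncio
from contextlib import asynccontextmanager

async def periodic(interval_s, job, stop):
    """Run `job`, then wait `interval_s` seconds, until `stop` is set."""
    while not stop.is_set():
        await job()
        try:
            await asyncio.wait_for(stop.wait(), timeout=interval_s)
        except asyncio.TimeoutError:
            pass  # interval elapsed; run the job again

@asynccontextmanager
async def refresh_lifespan(job, interval_s=3600.0):
    """Start the refresh loop on startup and stop it cleanly on shutdown,
    mirroring FastAPI's lifespan context-manager pattern."""
    stop = asyncio.Event()
    task = asyncio.create_task(periodic(interval_s, job, stop))
    try:
        yield
    finally:
        stop.set()
        await task
```

Using an event rather than `task.cancel()` lets an in-flight refresh finish before shutdown, which keeps partially written caches out of the pipeline.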

I also spent time on the product details that make a technical demo feel finished: preserving the last-known overlay state to prevent flicker during async layer changes, rendering the model equations and assumptions in the interface for interpretability, and keeping the frontend build hardened without slowing down local development.
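The flicker-prevention pattern is simple enough to sketch; the real version lives in the frontend, but the idea translates directly. The class name and shape are hypothetical:

```python
class OverlayState:
    """Keeps the last successfully loaded overlay on screen while a new
    layer loads, so async layer switches never flash an empty globe."""
    def __init__(self):
        self._current = None

    def render_value(self, pending):
        """pending: freshly loaded layer data, or None while a fetch is
        in flight. Returns whatever should actually be drawn."""
        if pending is not None:
            self._current = pending
        return self._current
```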