A few weeks ago I was watching a cricket match on my phone. The stream dropped to what looked like 480p mid-over.
I cursed my wifi. Then I started wondering whether it actually was my wifi.
So I spent three weeks running technical audits across five OTT streaming platforms. Standard browser developer tools, signed in as a paying or registered user. No DRM bypass, no unauthorized access, no clever exploits. Just the network panel, the Performance API, and a careful eye on what each platform's player was actually doing on the wire.
What I found was less about whose stream is "best." It was about how differently platforms make architectural choices when solving the same problem: get video to a paying user reliably.
Same technical problem. Five completely different answers.
This piece pulls together what I observed. Platforms are anonymized A through E. The methodology section at the bottom explains what was measured and what wasn't.
The cache TTL finding that surprised me most
Streaming video works by chopping content into small segments (2 to 10 seconds each) and delivering them on demand. The CDN caches these segments at edge locations close to viewers. How long a segment stays in cache is set by a Cache-Control: max-age header.
Long cache: origin server gets hit rarely, costs are low. Short cache: origin server gets hit constantly, costs scale linearly with traffic.
Across the five platforms, segment cache TTLs ranged from 5 minutes to nearly a year for the same kind of asset.
| Platform | Manifest TTL | Segment TTL |
|---|---|---|
| A (global hyperscale) | Signed, ~1 hr expiry | Signed, ~1 hr expiry |
| B (Indian market leader) | 37 minutes | ~1 year |
| C (Indian, mid-market) | 2 minutes | 5 minutes |
| D (Indian, regional) | ~3 months | ~3 months |
| E (global hyperscale) | Signed via private protocol | Signed |
Read that table again.
Platform B caches each video segment for nearly a year. Platform C caches the same kind of object for five minutes. Both serve Indian users. Both run on commercial CDNs.
The difference is a deliberate engineering choice with massive cost implications.
A segment cached for a year hits origin once and serves from edge for everyone forever. A segment cached for 5 minutes hits origin every five minutes per edge node, multiplied by every edge node serving traffic. At scale, this is the difference between a CDN bill that works and one that doesn't.
The reason Platform B can cache aggressively: they treat segments as immutable. Once packaged, never changed. Platform C re-validates them constantly, probably out of caution about content updates, but the caution is unnecessary if your packaging pipeline is right.
This choice doesn't show up on any architecture diagram. But it separates teams that have thought hard about CDN economics from teams that haven't.
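For a sense of how small the difference is in practice, here's a minimal sketch of that asymmetry at the origin, written as a hypothetical Express handler. The paths, TTL values, and stub loaders are illustrative assumptions, not any audited platform's actual configuration.

```typescript
import express from "express";

const app = express();

// Stubs standing in for real storage lookups.
const loadSegment = (assetId: string, segment: string): Buffer => Buffer.alloc(0);
const loadManifest = (assetId: string): string => "<MPD/>";

// Segments are immutable once packaged: cache them at the edge effectively forever.
app.get("/video/:assetId/segments/:segment", (req, res) => {
  res.set("Cache-Control", "public, max-age=31536000, immutable"); // ~1 year
  res.type("video/mp4").send(loadSegment(req.params.assetId, req.params.segment));
});

// Manifests can change (new renditions, takedowns, ad markers): keep their TTL short.
app.get("/video/:assetId/manifest.mpd", (req, res) => {
  res.set("Cache-Control", "public, max-age=120"); // 2 minutes
  res.type("application/dash+xml").send(loadManifest(req.params.assetId));
});

app.listen(8080);
```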
URL signing: the security layer most platforms skip
When you watch a video, your player pulls the manifest, then fetches the individual segments from the CDN using the URLs it lists. Whether those URLs are signed determines whether anyone else can reuse them.
Platform B signs every segment URL with an HMAC token that expires in about an hour. The URL is bound to a session. Try to use it from a different IP or after expiry, and you get a 403.
Platforms C and D ship plain, unsigned URLs.
Anyone who pulls a URL from their browser's network panel can paste it into another browser, on another network, and stream the content directly. With Platform D's months-long cache TTL, a leaked URL stays valid for an absurdly long time.
The DRM on the segment bytes still protects against re-distribution of decrypted content. But unsigned URLs eliminate the first layer of defense. They make scraping easier. They make casual sharing trivially possible. They turn the CDN into a public file server with extra steps.
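What signing actually involves is small. Here's a minimal generic sketch: an HMAC over the path plus an expiry. The query-parameter names and the exact token format Platform B uses are assumptions, and every commercial CDN ships its own variant of this scheme, usually configured at the edge rather than hand-rolled.

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = process.env.URL_SIGNING_SECRET ?? "dev-only-secret"; // shared with the edge

// Issued by the platform's backend alongside the manifest.
export function signSegmentUrl(path: string, ttlSeconds = 3600): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const token = createHmac("sha256", SECRET).update(`${path}:${expires}`).digest("hex");
  return `${path}?expires=${expires}&token=${token}`;
}

// Re-checked at the CDN edge: expired or tampered URLs get a 403.
// Real schemes usually also bind the token to a session or client IP.
export function verifySegmentUrl(path: string, expires: number, token: string): boolean {
  if (Math.floor(Date.now() / 1000) > expires) return false;
  const expected = createHmac("sha256", SECRET).update(`${path}:${expires}`).digest("hex");
  return token.length === expected.length &&
    timingSafeEqual(Buffer.from(token), Buffer.from(expected));
}
```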
Most platforms that skip URL signing aren't doing it deliberately. They inherited a CDN config that didn't include token authorization, and nobody went back to fix it.
Where auth tokens live
This is the finding that surprised me least but matters most.
Every modern web platform stores a session token somewhere on the client. The two common options: a cookie marked httpOnly (JavaScript on the page cannot read it), or localStorage (any JavaScript on the page can read it).
The pattern was striking:
| Platform | Auth storage |
|---|---|
| A | httpOnly cookies only |
| B | httpOnly cookies only |
| C | Tokens duplicated across cookies and localStorage |
| D | OAuth2 access and refresh tokens in localStorage |
| E | httpOnly cookies + private protocol |
Why does this matter?
If anyone successfully injects JavaScript into the platform's pages, through stored XSS, a compromised third-party SDK, or a malicious browser extension, they can read whatever's in localStorage and exfiltrate it. They cannot read httpOnly cookies. The cookie can still make requests on the user's behalf, but the raw token never leaves the browser.
Refresh tokens are the highest-stakes case. An access token is usually short-lived. A refresh token might be valid for days or weeks. An attacker who exfiltrates a refresh token can mint new access tokens long after the user has logged out and gone to bed.
Platforms that get this wrong usually have an architectural reason. A third-party SDK or a legacy OAuth flow that needed JavaScript access at some point. The fix is well-documented. The cost of not fixing it scales with your XSS exposure, which scales with your third-party JS footprint.
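The fix itself, in the same hypothetical Express style as above (the cookie name, lifetime, and session helper are placeholders):

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();

// Placeholder: create and persist a server-side session, return its opaque id.
const issueSessionToken = (): string => randomUUID();

// Vulnerable pattern, for contrast:
//   localStorage.setItem("refresh_token", token); // readable by any script on the page

app.post("/api/login", (_req, res) => {
  res.cookie("session", issueSessionToken(), {
    httpOnly: true,  // invisible to document.cookie and to injected scripts
    secure: true,    // sent over HTTPS only
    sameSite: "lax", // basic CSRF hardening
    maxAge: 60 * 60 * 1000,
  });
  res.sendStatus(204); // the raw token never appears in the response body
});
```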
This is one of those "the cost is invisible until something goes wrong, and then the cost is enormous" patterns.
Player choices: build, buy, or wrap
Three strategies for getting a video player on your platform.
Build it yourself. Platform A built Cadmium, an entirely proprietary player that talks to its CDN over a private protocol. Platform E went the same route. Multi-year investment, dedicated player team, only justified at hyperscale.
Buy a vendor. Platform D uses a commercial player engine bundled into their app. The vendor handles the player, the DRM integration, the ABR controller. The platform handles UI and CMS.
Wrap an open-source player. Platform B uses Shaka Player (Google maintains it) under their own branded wrapper with custom telemetry, DRM orchestration, and UI. Platform C does the same with Video.js.
For the longest time I assumed the "best" platforms wrote their own players. The audit data corrected me.
Platform B is widely considered best-in-class for its market. They use off-the-shelf Shaka with a thin wrapper. They wrote the parts that matter (telemetry, ABR memory, DRM caching) and let Google maintain the player engine.
If you're building an OTT at any scale below Netflix, you almost certainly don't need to write a player from scratch. Pick an open-source engine, wrap it well, ship it.
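As a rough illustration of how small that wrapper can be, here's a sketch around Shaka Player. The manifest URL, license server, and telemetry endpoint are placeholders, not Platform B's actual integration.

```typescript
import shaka from "shaka-player";

// Placeholder telemetry hook: the part of the stack worth owning yourself.
const sendBeacon = (payload: unknown) =>
  navigator.sendBeacon("/telemetry", JSON.stringify(payload));

export async function createPlayer(video: HTMLVideoElement, manifestUri: string) {
  shaka.polyfill.installAll(); // browser shims Shaka needs

  const player = new shaka.Player();
  await player.attach(video);

  player.configure({
    drm: { servers: { "com.widevine.alpha": "https://license.example.com/widevine" } },
    streaming: { bufferingGoal: 30 }, // seconds of forward buffer to target
  });

  // The wrapper's job: route player errors into your own telemetry.
  player.addEventListener("error", (event: any) => {
    sendBeacon({ type: "player_error", detail: event.detail });
  });

  await player.load(manifestUri);
  return player;
}
```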
CDN topology: owning vs renting the wire
This is where Platform A is in a class of its own.
Most platforms (B, C, D) use commercial CDNs. Akamai, CloudFront, Cloudflare. Their video segments live on the CDN's edge servers, which are geographically distributed but run by the CDN, not the platform.
Platform A built and operates Open Connect Appliances. Physical servers shipped to ISPs, who install them inside their own networks.
When you watch Platform A's content from a major Indian ISP, your video doesn't traverse the public internet. It comes from a Platform A appliance physically located inside the ISP's data center, on the ISP's own network, often with zero transit cost.
The hostnames told the story. I observed segments served from clusters in two different Indian cities, inside two different ISPs, simultaneously, on a single playback session. The platform's client was steering between four different appliances mid-playback based on conditions I couldn't see.
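For what it's worth, that steering is visible from the page itself via the Resource Timing API, with no privileged access. A rough sketch (the `.m4s` filter is just an assumption about segment naming; adjust for `.ts` or byte-range requests as needed):

```typescript
// Count which edge hosts are serving media segments during playback.
const hostCounts = new Map<string, number>();

new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.name.includes(".m4s")) continue;  // crude media-segment filter
    const host = new URL(entry.name).hostname;   // which edge served this request
    hostCounts.set(host, (hostCounts.get(host) ?? 0) + 1);
  }
  console.table(Object.fromEntries(hostCounts));
}).observe({ type: "resource", buffered: true });
```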
This is a 10+ year capital investment that no other platform in my audit comes close to matching. It's not replicable at small scale, and it's not even strictly necessary at small scale.
But it explains why Platform A's streams feel different. They're physically closer to the user than anyone else's, by a wide margin.
Telemetry: centralized vs federated
How does each platform know what's happening with your stream? They send telemetry beacons.
Platform A: small number of beacons per session, all to its own first-party endpoint, in JSON, with an outbox pattern (failed sends queued in localStorage and retried). Telemetry treated as a first-class engineering concern.
Platform B: beacons in Protobuf (a binary wire format) to a single first-party endpoint. Response acknowledgment is two bytes. Beacons are 5 to 12 KB. Under surge conditions, this matters. Telemetry itself becomes a load source if you're not careful.
Platforms C, D, and others: beacons fanned out to multiple third-party SDKs simultaneously. Mixpanel, CleverTap, NPAW Youbora, Branch.io, Facebook, Google Analytics, Comscore, Conviva, AppsFlyer. One platform's watch page made requests to over 30 distinct hosts.
There's a cost to this federation.
During my audit, one platform's video QoE telemetry endpoint was returning HTTP 503 errors. Their pipeline was broken at the moment I measured it, and presumably had been for some time without detection.
Centralized telemetry has fewer moving parts to fail than federated telemetry, and it's far easier to notice when something does break.
The pattern is consistent. Platforms that take observability seriously consolidate. Platforms that treat telemetry as a checkbox spray it across vendors.
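Of the three approaches, Platform A's outbox is the piece most worth copying. A minimal sketch (the endpoint, storage key, and retry trigger are assumptions, not Platform A's actual client; note this is telemetry in localStorage, not auth tokens):

```typescript
const OUTBOX_KEY = "telemetry_outbox";

// Queue an event that failed to send, so the measurement isn't lost.
function enqueue(event: object): void {
  const outbox: object[] = JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? "[]");
  outbox.push(event);
  localStorage.setItem(OUTBOX_KEY, JSON.stringify(outbox));
}

export async function sendEvent(event: object): Promise<void> {
  try {
    const res = await fetch("/telemetry", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(event),
      keepalive: true, // lets the request survive page unload, like sendBeacon
    });
    if (!res.ok) throw new Error(`telemetry endpoint returned ${res.status}`);
  } catch {
    enqueue(event);
  }
}

// On the next page load (or a timer), retry whatever failed last time.
export async function drainOutbox(): Promise<void> {
  const outbox: object[] = JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? "[]");
  localStorage.removeItem(OUTBOX_KEY);
  for (const event of outbox) await sendEvent(event);
}
```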
Accessibility: the largest gap I observed
I expected to find architectural differences. I didn't expect the gap on accessibility to be this stark.
For a single drama series episode:
| Platform | Audio tracks | Subtitle tracks | Audio descriptions |
|---|---|---|---|
| A | 35 across 23 languages | 42 across 33 languages | 14 tracks |
| B (Indian leader) | 1 (English) | 1 (English) | None |
| C (Indian) | 1 (regional language) | 1 (regional language) | None |
| D (Indian regional) | 1 (English, on a regional drama) | 1 (English) | None |
| E | Multiple | Multiple | Not measurable |
Platform A's catalog has been built for a global multi-language audience for over a decade, and it shows.
Platform D, which positions itself as a regional Indian OTT, shipped English-only audio on a regional-language drama series. That's either a packaging mistake on the title I watched, or a capability gap, or a cost choice. Whichever it is, it directly contradicts the platform's stated regional positioning.
Audio descriptions (narration tracks for visually impaired viewers) are present on exactly one of the five platforms: fourteen tracks across multiple languages on Platform A, zero on the others.
Accessibility is the dimension where the gap between "platform that takes its users seriously" and "platform that ships the minimum" is most visible.
It's not a hard problem. It's a question of priorities.
What this means if you're building a streaming platform
A few patterns worth taking seriously.
Cache asymmetry is your friend. Manifests should be cached briefly, if at all. Segments should be cached forever, or close to it. They have completely different lifecycles and need completely different cache strategies.
Sign your segment URLs. Every CDN supports it. There's no good reason to ship plain URLs in 2026.
Keep auth out of localStorage. httpOnly cookies have been the right answer for fifteen years. The exceptions are vanishingly rare and almost always trace back to a third-party SDK someone forgot to question.
Don't write a player from scratch unless you're at hyperscale. Wrap Shaka or hls.js. Spend your engineering on the parts users actually feel: telemetry, ABR memory, DRM caching, UI.
Centralize your telemetry. If you're sending the same events to five vendors, you're paying five times for the same insight, debugging five integrations, and giving five third parties access to your user data. Pick one. Build the rest yourself.
Treat accessibility as core, not as an add-on. Multi-language audio and subtitles aren't extras for a global platform. They're the product.
Methodology
All observations were made via standard browser developer tools while signed in as a paying or registered user. No DRM was bypassed. No access controls were circumvented. No license server payloads were captured beyond noting that requests fired and to which endpoints.
Platform identities are anonymized. Findings that could uniquely identify a platform have been described in general terms or omitted.
Single VOD title per platform, on desktop Chrome, on a residential Indian connection. Network throttling and mobile network behavior were not in scope.
If you're building or scaling an OTT, talk to us
The wire tells stories the marketing doesn't. If you recognized your platform in the audit above (good or bad), or if you're building one and want a second set of engineering eyes on your architecture, that is exactly what MatrixGard does.
We do read-only infrastructure audits across cloud, security, and delivery layers. Same methodology as the audit above, but applied to your own stack with full access and a written report at the end. See how a MatrixGard audit works or start with the free 2-minute readiness checklist.
Avinash S is the founder of MatrixGard, a fractional DevSecOps practice helping founder-led teams ship cloud infrastructure that holds up under audit, scale, and incident pressure. Eight-plus years across enterprise and startup cloud environments. M.Tech Cyber Security at SRMIST.