Engineering Around Network Failures Since 1995
A Technical Brief on Why Video Streaming Requires 15 Different Protocols
WINK Streaming Technical Documentation • November 2023
"If every network (or at least bottleneck) in the world implemented FQ-CoDel (or some other Smart Queuing technique) with Explicit Congestion Notification, all TCP connections were end-to-end with no middleboxes, and every sender used TCP_NOTSENT_LOWAT and made the decision on what to send next at the very last possible moment, none of this fancy stuff would be necessary. But that's not the internet we live in today."
This perfectly encapsulates why we can't have nice things in network engineering. Let's explore why video streaming in 2023 still feels like performing surgery with a chainsaw while wearing boxing gloves.
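For anyone who hasn't met it, TCP_NOTSENT_LOWAT is a real socket option (on Linux, and with a different constant on macOS) that caps how much unsent data the kernel will accept, so the application decides what goes out next instead of a bloated socket buffer. A minimal Python sketch of the sender behaviour the quote is asking for, assuming Linux (the fallback value 25 is the Linux constant; the host and threshold are placeholders):

import socket

# Cap unsent data queued in the kernel so the application, not a deep
# socket buffer, decides which bytes go out next.
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)  # 25 on Linux

sock = socket.create_connection(("example.com", 443))
sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 16 * 1024)

# With the low-water mark set, select()/poll() only reports the socket as
# writable once the unsent backlog drops below 16 KiB, so a video sender
# can pick the freshest frame at the last possible moment instead of
# queueing seconds of stale ones behind it.

The quote's point stands: this only helps if the sender opts in, and almost nobody does.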
CDNs don't exist because the internet is good at moving data. They exist because the internet is catastrophically bad at it. After 30 years of "improvements," we're still engineering around the same fundamental failures: bufferbloat, middleboxes, and networks run by people who think QoS means "Quite Obviously Stupid."
If networks actually worked, we'd need exactly one protocol: RTP over TCP with proper congestion control. Instead, we have HLS, DASH, RTSP, RTMP, SRT, QUIC, WebRTC, and 47 proprietary protocols because each one works around a different flavor of network stupidity.
Bufferbloat is when network equipment has excessive buffers, causing scenarios like these (a probe you can run yourself follows the examples):
User: "I have gigabit internet!" Reality: 3 seconds of buffering in their router Result: 3000ms ping under load Video streaming: Completely broken
Marketing: "Business-grade connection!" Reality: DOCSIS cable modem with 500ms buffer Result: Video calls become slideshows Solution: "Have you tried rebooting?"
Advertised: "5G Ultra Wideband!" Reality: Tower buffers 2 seconds of packets Result: Live streaming impossible Carrier response: "Working as designed"
Each protocol exists to work around specific network failures:
HLS (HTTP Live Streaming)
Why it exists: Firewalls only allow HTTP
Workaround: Pretend video is a series of files (see the segment math below)
Downside: 3-10 second latency
Reality: It works everywhere, so we're stuck with it
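To make the "series of files" trick concrete: the encoder chops the stream into short segments, publishes them on an ordinary web server, and the player polls a playlist to learn which files exist. The latency floor is then just arithmetic, as in this back-of-the-envelope sketch (segment length and buffer depth are illustrative, not requirements):

# Rough glass-to-glass latency for classic (non-low-latency) HLS.
segment_duration_s = 2.0       # how long each "file" of video is
player_buffer_segments = 3     # players typically hold a few segments before playing
encode_and_publish_s = 1.0     # time to finish, package, and upload a segment

latency_s = (segment_duration_s                               # the current segment must complete
             + encode_and_publish_s                           # ...and reach the web server
             + segment_duration_s * player_buffer_segments)   # ...and the player keeps a cushion

print(f"expected latency: ~{latency_s:.0f} s")                # ~9 s with these numbers

Shorter segments buy latency at the cost of more requests; longer segments make CDNs happy and viewers late. Every HLS deployment is just choosing where on that curve to suffer.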
DASH (Dynamic Adaptive Streaming over HTTP)
Why it exists: HLS is Apple-controlled
Workaround: Same idea, different standard
Downside: Now we support two things that do the same thing
Reality: Because standards committees need jobs
RTSP (Real Time Streaming Protocol)
Why it exists: Designed when networks worked properly
Problem: Requires multiple ports, killed by NAT (see the exchange below)
Still used: Security cameras on local networks
Reality: Great protocol murdered by middleboxes
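What "killed by NAT" looks like in practice: in a classic RTSP session the client asks the server to fire RTP and RTCP back at two UDP ports it chose itself, as in this abridged exchange (the URL and port numbers are made up):

SETUP rtsp://camera.example/stream1/track1 RTSP/1.0
CSeq: 3
Transport: RTP/AVP;unicast;client_port=50000-50001

RTSP/1.0 200 OK
CSeq: 3
Transport: RTP/AVP;unicast;client_port=50000-50001;server_port=6970-6971

The camera then sends UDP to ports 50000-50001 of whatever address the request came from. Behind NAT that address is the router, the router never opened those inbound ports, and the media dies on arrival, which is why RTSP mostly survives on flat local networks or gets tunnelled over the TCP control connection instead.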
RTMP (Real-Time Messaging Protocol)
Why it exists: Flash needed streaming
Problem: Flash is dead
Still used: Because changing things is hard
Reality: Zombie protocol that won't die
SRT (Secure Reliable Transport)
Why it exists: Broadcasting needs reliability
Workaround: Add FEC and retransmission to UDP (toy sketch below)
Actually good: One of the few modern protocols
Reality: Most firewalls block it anyway
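The idea behind "FEC and retransmission over UDP" is simple even if SRT's real wire format is not: number every datagram, keep them around so they can be resent on request, and mix in parity packets so a single loss can be repaired without waiting a full round trip. A toy Python sender to show the shape of it (this illustrates the concept only; the packet layout, group size, and "type" byte are invented for the sketch, nothing here is SRT's actual format):

import socket
import struct

GROUP = 4  # one XOR parity packet per 4 data packets (made-up ratio)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR two byte strings, padding the shorter one with zeros.
    n = max(len(a), len(b))
    a, b = a.ljust(n, b"\0"), b.ljust(n, b"\0")
    return bytes(x ^ y for x, y in zip(a, b))

def send_with_parity(sock, addr, chunks):
    # Send numbered data packets plus one parity packet per group. A receiver
    # missing exactly one packet in a group rebuilds it by XOR-ing the parity
    # packet with the packets it did get; anything worse means asking for a
    # retransmission from the sender's buffer (not shown).
    seq, parity, in_group = 0, b"", 0
    for chunk in chunks:
        sock.sendto(struct.pack("!IB", seq, 0) + chunk, addr)              # type 0 = data
        parity = xor_bytes(parity, chunk)
        seq, in_group = seq + 1, in_group + 1
        if in_group == GROUP:
            sock.sendto(struct.pack("!IB", seq - GROUP, 1) + parity, addr)  # type 1 = parity
            parity, in_group = b"", 0

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_with_parity(sock, ("127.0.0.1", 9000), [b"frame %d" % i for i in range(8)])

Real SRT adds timestamps, encryption, and a fixed latency window the receiver is allowed to wait inside; that window is the whole trick, turning "reliable" into "reliable within a bounded delay," which is what live broadcasting actually needs.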
QUIC
Why it exists: Google gave up on fixing TCP
Workaround: Rebuild TCP over UDP
Smart: Bypasses middlebox interference
Reality: Some networks now block UDP port 443
WebRTC
Why it exists: Browsers need real-time video
Complexity: ICE, STUN, TURN, SDP negotiation (sketched below)
Works: Eventually, after 47 connection attempts
Reality: So complex it has its own complexity acronyms
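For a taste of the ICE/STUN/TURN/SDP dance, here is roughly what just the first step looks like in Python using the aiortc library (the STUN/TURN URLs and credentials are placeholders, and the signalling channel that must carry the offer to the other peer is left out, because WebRTC famously refuses to define one):

import asyncio
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

config = RTCConfiguration(iceServers=[
    RTCIceServer(urls="stun:stun.example.com:3478"),        # learn our public address
    RTCIceServer(urls="turn:turn.example.com:3478",         # relay of last resort when
                 username="demo", credential="secret"),     # everything else is blocked
])

async def make_offer() -> str:
    pc = RTCPeerConnection(configuration=config)
    pc.addTransceiver("video")               # declare we want to exchange video
    offer = await pc.createOffer()           # build the SDP blob
    await pc.setLocalDescription(offer)      # kick off ICE candidate gathering
    sdp = pc.localDescription.sdp            # this now has to reach the other peer somehow
    await pc.close()
    return sdp

print(asyncio.run(make_offer()))

And that is the easy half: the answer has to come back over whatever signalling you invented, the candidate pairs have to actually connect, and when both ends sit behind hostile NATs the media gets relayed through the TURN server whose bandwidth bill you are now paying.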
WebTransport
Why it exists: WebRTC is too complex
Problem: Only Chrome supports it
Safari: "We'll think about it" (they won't)
Reality: Dead on arrival
CDNs aren't about "accelerating" content. They're about working around network failures:
Without CDNs:
├── Direct connections would be faster (if they worked)
├── Lower latency (if packets arrived)
├── Simpler architecture (if NAT didn't exist)
└── Cheaper infrastructure (if networks were reliable)

With CDNs:
├── It actually works
└── Users can watch video
Setup: Fortune 500 company, "enterprise-grade" network
Problem: Video streaming broken for 5,000 employees
Investigation:
Root Cause: Firewall configured to buffer 30 seconds of packets "for deep inspection"
Solution: Bypass firewall, blame "compatibility"
Lesson: Enterprise equipment is enterprise-grade stupid
Setup: Major cellular provider, "optimizing" video delivery
Problem: Live streams randomly freezing
Investigation:
Root Cause: Carrier's "video optimizer" transcoding on the fly, adding an 8-second delay
Solution: Encrypt everything, make it look like HTTPS
Lesson: ISPs will break anything they can see
Setup: Major tech conference at luxury hotel
Problem: 2,000 attendees, video demos all failing
Investigation:
Root Cause:
Solution: Everyone used phone hotspots
Lesson: Never trust venue internet
Option one: throw away the entire internet and start over.
Option two: continue building increasingly complex workarounds.
We don't build for the network we want. We build for the dumpster fire we have.
The quote that started this document describes a network that will never exist. We don't have FQ-CoDel everywhere. We don't have working ECN. We have middleboxes destroying everything. Senders that actually use TCP_NOTSENT_LOWAT are a fantasy.
Instead, we have:
This is why CDNs exist. This is why we need 15 protocols. This is why video streaming is still hard in 2023.
Our job isn't to fix it - our job is to make video work despite it being broken.
Welcome to the reality of network engineering: We're not building technology, we're building workarounds for everyone else's failures.
This technical brief was written by WINK Streaming engineers who have spent decades working around network failures. Every horror story is real. Every problem still exists. Every solution is a compromise.
For more information about how WINK products handle the reality of broken networks, visit wink.co or contact our support team who have seen it all and fixed none of it, because the problems aren't on our end.
Copyright 2023 WINK Streaming. All rights reserved. The networks aren't.