CDN Reality Check

Engineering Around Network Failures Since 1995

A Technical Brief on Why Video Streaming Requires 15 Different Protocols

WINK Streaming Technical Documentation • November 2023

The Quote That Started This Document

"If every network (or at least bottleneck) in the world implemented FQ-CoDel (or some other Smart Queuing technique) with Explicit Congestion Notification, all TCP connections were end-to-end with no middleboxes, and every sender used TCP_NOTSENT_LOWAT and made the decision on what to send next at the very last possible moment, none of this fancy stuff would be necessary. But that's not the internet we live in today."

This perfectly encapsulates why we can't have nice things in network engineering. Let's explore why video streaming in 2023 still feels like performing surgery with a chainsaw while wearing boxing gloves.

Executive Summary: Welcome to the Dumpster Fire

CDNs don't exist because the internet is good at moving data. They exist because the internet is catastrophically bad at it. After 30 years of "improvements," we're still engineering around the same fundamental failures: bufferbloat, middleboxes, and networks run by people who think QoS means "Quite Obviously Stupid."

If networks actually worked, we'd need exactly one protocol: RTP over TCP with proper congestion control. Instead, we have HLS, DASH, RTSP, RTMP, SRT, QUIC, WebRTC, and 47 proprietary protocols because each one works around a different flavor of network stupidity.

Part 1: The Network Utopia That Will Never Exist

What We Actually Need

  1. FQ-CoDel (Fair Queuing Controlled Delay) Everywhere
    • Fair bandwidth sharing between flows
    • Low latency under load
    • Active queue management
    • Prevents bufferbloat
  2. Explicit Congestion Notification (ECN)
    • Signal congestion before packet loss
    • Allow TCP to back off gracefully
    • Maintain low latency
    • Actually deployed and not blocked
  3. End-to-End TCP Connections
    • No NAT breaking everything
    • No proxies "optimizing" traffic
    • No middleboxes rewriting packets
    • No corporate firewalls dropping anything unusual
  4. Smart Sending with TCP_NOTSENT_LOWAT
    • Minimal kernel buffering
    • Last-moment sending decisions
    • Adaptive to network conditions
    • Actually supported by applications
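
As a point of reference, items 2 and 4 above are the only ones an application can even ask for. Below is a minimal Python sketch of what that looks like on a Linux sender; the fallback constant 25 is the Linux value of TCP_NOTSENT_LOWAT, and the ECN sysctl appears only as a comment because ECN negotiation is a kernel decision, not a per-socket one.

    import socket

    # Linux value of TCP_NOTSENT_LOWAT, used if this Python build doesn't export it.
    TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 25)

    def make_low_latency_sender(host: str, port: int) -> socket.socket:
        """Connect a TCP socket tuned to keep unsent data out of the kernel."""
        sock = socket.create_connection((host, port))

        # Item 4: keep at most ~16 KB of not-yet-sent data queued in the kernel,
        # so the application decides what to send at the last possible moment.
        sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, 16 * 1024)

        # Disable Nagle so those last-moment decisions actually hit the wire.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

        # Item 2 (ECN) is negotiated by the kernel, not set per socket; on Linux:
        #   sysctl -w net.ipv4.tcp_ecn=1
        # and it only helps if the queues along the path actually mark packets.
        return sock

None of this does anything useful unless the sender also waits for writability before choosing the next frame to send; the low-water mark is what makes that check meaningful.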

What We Actually Have

  • Cable modems with 2GB of RAM used entirely for packet buffering
  • ISP routers running firmware from 2009 with tail-drop queuing
  • Corporate firewalls that block ECN because "it looks suspicious"
  • Mobile carriers doing "TCP optimization" that breaks everything
  • NAT devices that forget UDP mappings after 30 seconds
  • Hotel WiFi with 90% packet loss marketed as "High Speed Internet"
  • Satellite links with 600ms RTT and random 5-second delays
  • That one Windows XP machine still running critical infrastructure

Part 2: The Bufferbloat Apocalypse

What Is Bufferbloat?

Bufferbloat is when network equipment has excessive buffers, causing:

  • Multi-second latency on "fast" connections
  • Broken congestion control (TCP can't detect congestion)
  • Horrible user experience (video freezing while speedtest shows 100Mbps)
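
Bufferbloat is also easy to see for yourself: sample round-trip time while the link is idle, then again while something saturates it. The sketch below times TCP handshakes as a crude RTT probe (the target host and port are placeholders); start a large upload or a speed test halfway through and watch tens of milliseconds turn into seconds.

    import socket
    import time

    HOST, PORT = "example.com", 443   # placeholder target; any reachable TCP port works

    def handshake_rtt(host: str, port: int, timeout: float = 5.0) -> float:
        """Time a TCP three-way handshake as a rough RTT sample."""
        start = time.monotonic()
        sock = socket.create_connection((host, port), timeout=timeout)
        rtt = time.monotonic() - start
        sock.close()
        return rtt

    if __name__ == "__main__":
        # Sample once per second; saturate the link mid-run and a bloated
        # buffer will turn ~30 ms of RTT into multiple seconds.
        for _ in range(30):
            print(f"RTT: {handshake_rtt(HOST, PORT) * 1000:.0f} ms")
            time.sleep(1)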

The Horror Stories

Home Router Reality:

User: "I have gigabit internet!"
Reality: 3 seconds of buffering in their router
Result: 3000ms ping under load
Video streaming: Completely broken

ISP Equipment:

Marketing: "Business-grade connection!"
Reality: DOCSIS cable modem with 500ms buffer
Result: Video calls become slideshows
Solution: "Have you tried rebooting?"

Mobile Networks:

Advertised: "5G Ultra Wideband!"
Reality: Tower buffers 2 seconds of packets
Result: Live streaming impossible
Carrier response: "Working as designed"

Part 3: The Middlebox Nightmare

The Seven Circles of Middlebox Hell

1. NAT (Network Address Translation)

  • Breaks peer-to-peer connections
  • Forgets UDP mappings randomly
  • Some implementations limited to 64K connections total
  • Double NAT, triple NAT, NAT behind NAT
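
The "forgets UDP mappings" problem has exactly one practical workaround: send something, anything, often enough that the binding never goes idle. A minimal sketch follows; the peer address is a placeholder and the 15-second interval is just a common safe choice, given that some NATs expire idle UDP bindings after 30 seconds.

    import socket
    import threading
    import time

    def keep_nat_binding_alive(sock: socket.socket, peer, interval: float = 15.0):
        """Periodically send a tiny datagram so the NAT keeps the UDP mapping warm."""
        def loop():
            while True:
                try:
                    sock.sendto(b"\x00", peer)   # 1-byte keepalive; the peer ignores it
                except OSError:
                    return                       # socket closed, stop quietly
                time.sleep(interval)
        threading.Thread(target=loop, daemon=True).start()

    # Usage (placeholder address): the same socket carries media and keepalives,
    # so the 5-tuple the NAT tracks never sits idle long enough to be forgotten.
    media_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    keep_nat_binding_alive(media_sock, ("203.0.113.10", 5004))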

2. Corporate Proxies

  • Only understand HTTP/1.0
  • Buffer entire responses before forwarding
  • Add 500ms latency "for security scanning"
  • Block WebSocket because "it's not HTTP"

3. "Security" Appliances

  • Deep packet inspection breaking encrypted streams
  • Rate limiting that triggers on video streams
  • Blocking anything that "looks unusual"
  • Resetting connections after arbitrary timeouts

4. Carrier Grade NAT (CGNAT)

  • Shared IP addresses between thousands of users
  • Port exhaustion
  • Breaking anything that needs incoming connections
  • No control or visibility

5. TCP "Optimizers"

  • Mobile carriers rewriting TCP parameters
  • Transparent proxies splitting connections
  • "Helpful" bandwidth saving that destroys video
  • Compression that makes things worse

6. Hotel/Conference WiFi

  • Captive portals breaking everything
  • Rate limiting per device
  • DNS hijacking
  • Random packet loss because "oversubscribed"

7. ISP Traffic Shaping

  • Throttling video to 1.5Mbps
  • But only after 60 seconds
  • Different rules for different protocols
  • "Network management" = customer frustration

Part 4: Why We Have 15 Different Protocols

The Protocol Proliferation Problem

Each protocol exists to work around specific network failures:

HLS (HTTP Live Streaming)

Why it exists: Firewalls only allow HTTP

Workaround: Pretend video is a series of files

Downside: 3-10 second latency

Reality: It works everywhere, so we're stuck with it
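
"Pretend video is a series of files" is literal: the player polls a playlist over plain HTTP and downloads whatever new segments appear, which is also where the 3-10 seconds of latency comes from, since a few segment durations always sit between encoder and screen. A stripped-down sketch of that loop (the playlist URL is a placeholder, and "parsing" here is just "non-comment lines are segment URIs"):

    import time
    import urllib.parse
    import urllib.request

    PLAYLIST_URL = "https://cdn.example.com/live/stream.m3u8"   # placeholder

    def poll_hls(playlist_url: str, poll_interval: float = 2.0):
        """Fetch the media playlist, then download any segments not seen before."""
        seen = set()
        while True:
            playlist = urllib.request.urlopen(playlist_url).read().decode()
            for line in playlist.splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue                    # tags and comments; only URIs matter here
                seg_url = urllib.parse.urljoin(playlist_url, line)
                if seg_url in seen:
                    continue
                seen.add(seg_url)
                data = urllib.request.urlopen(seg_url).read()
                print(f"got segment {line}: {len(data)} bytes")
            time.sleep(poll_interval)           # polling delay + buffered segments = latency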

DASH (Dynamic Adaptive Streaming over HTTP)

Why it exists: HLS is Apple-controlled

Workaround: Same idea, different standard

Downside: Now we support two things that do the same thing

Reality: Because standards committees need jobs

RTSP (Real Time Streaming Protocol)

Why it exists: Designed when networks worked properly

Problem: Requires multiple ports, killed by NAT

Still used: Security cameras on local networks

Reality: Great protocol murdered by middleboxes

RTMP (Real Time Messaging Protocol)

Why it exists: Flash needed streaming

Problem: Flash is dead

Still used: Because changing things is hard

Reality: Zombie protocol that won't die

SRT (Secure Reliable Transport)

Why it exists: Broadcasting needs reliability

Workaround: Add FEC and retransmission to UDP

Actually good: One of the few modern protocols

Reality: Most firewalls block it anyway

QUIC (Quick UDP Internet Connections)

Why it exists: Google gave up on fixing TCP

Workaround: Rebuild TCP over UDP

Smart: Bypasses middlebox interference

Reality: Some networks now block UDP port 443

WebRTC

Why it exists: Browsers need real-time video

Complexity: ICE, STUN, TURN, SDP negotiation

Works: Eventually, after 47 connection attempts

Reality: So complex it has its own complexity acronyms
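
Most of that alphabet soup exists to answer one question: "what address does the outside world see me as?" That is the STUN part of ICE, and on its own it fits in a few lines. A minimal sketch of an RFC 5389 Binding request is below (IPv4 only); the server address is an assumption, any public STUN server will do, and real ICE then repeats this dance for every candidate pair, which is where the 47 attempts come from.

    import os
    import socket
    import struct

    STUN_SERVER = ("stun.l.google.com", 19302)   # assumption: any public STUN server
    MAGIC_COOKIE = 0x2112A442

    def discover_mapped_address(server=STUN_SERVER):
        """Send a STUN Binding request and decode the XOR-MAPPED-ADDRESS answer."""
        txn_id = os.urandom(12)
        # 20-byte header: type 0x0001 (Binding request), length 0, magic cookie, txn id.
        request = struct.pack("!HHI", 0x0001, 0, MAGIC_COOKIE) + txn_id

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(3.0)
        try:
            sock.sendto(request, server)
            data, _ = sock.recvfrom(2048)
        except socket.timeout:
            return None                          # UDP blocked or eaten; welcome to hotel WiFi

        # Walk the attributes after the header, looking for XOR-MAPPED-ADDRESS (0x0020).
        pos = 20
        while pos + 4 <= len(data):
            attr_type, attr_len = struct.unpack_from("!HH", data, pos)
            if attr_type == 0x0020:
                port = struct.unpack_from("!H", data, pos + 6)[0] ^ (MAGIC_COOKIE >> 16)
                raw_ip = struct.unpack_from("!I", data, pos + 8)[0] ^ MAGIC_COOKIE
                return socket.inet_ntoa(struct.pack("!I", raw_ip)), port
            pos += 4 + attr_len + (-attr_len % 4)   # attributes are padded to 32 bits
        return None

    print(discover_mapped_address())   # the (IP, port) your NAT shows the world, or None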

WebTransport

Why it exists: WebRTC is too complex

Problem: Only Chrome supports it

Safari: "We'll think about it" (they won't)

Reality: Dead on arrival

Part 5: CDN Architecture - Accepting Defeat

Why CDNs Exist

CDNs aren't about "accelerating" content. They're about working around network failures:

  1. Geographic Distribution
    • Not for speed, but for avoiding broken paths
    • Multiple routes = better chance one works
    • Closer servers = fewer middleboxes to traverse
  2. Protocol Translation
    • Accept RTMP, output HLS
    • Accept SRT, output DASH
    • Accept anything, output whatever works
    • The ultimate protocol adapter (see the remux sketch after this list)
  3. Buffering and Caching
    • Hide network instability
    • Smooth out packet loss
    • Pretend the internet works
    • Lie to users about quality
  4. Connection Multiplexing
    • One good connection to origin
    • Thousands of bad connections to users
    • Shield origin from the chaos
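
The protocol-translation item above is, in practice, a remux: take whatever arrives on ingest and repackage it without touching the codecs. A minimal sketch using ffmpeg as the workhorse (the ffmpeg binary is assumed to be installed; the input URL and output path are placeholders):

    import subprocess

    # Placeholder ingest URL; in practice it's whatever the encoder pushes at you.
    RTMP_IN = "rtmp://ingest.example.com/live/streamkey"

    def rtmp_to_hls(rtmp_url: str, out_playlist: str = "/var/www/live/index.m3u8"):
        """Remux an incoming RTMP stream into HLS segments without re-encoding."""
        cmd = [
            "ffmpeg",
            "-i", rtmp_url,        # accept the zombie protocol on ingest
            "-c", "copy",          # remux only: same codecs, new container
            "-f", "hls",
            "-hls_time", "4",      # ~4-second segments, i.e. the latency floor
            out_playlist,
        ]
        return subprocess.run(cmd, check=True)

    # rtmp_to_hls(RTMP_IN)   # blocks for as long as the stream stays up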

The CDN Paradox

Without CDNs:
├── Direct connections would be faster (if they worked)
├── Lower latency (if packets arrived)
├── Simpler architecture (if NAT didn't exist)
└── Cheaper infrastructure (if networks were reliable)

With CDNs:
├── It actually works
└── Users can watch video

Part 6: Real-World Horror Stories

The Enterprise Client

Setup: Fortune 500 company, "enterprise-grade" network

Problem: Video streaming broken for 5,000 employees

Investigation:

  • Gigabit connection to internet ✓
  • Modern firewall ✓
  • Professional IT team ✓

Root Cause: Firewall configured to buffer 30 seconds of packets "for deep inspection"

Solution: Bypass firewall, blame "compatibility"

Lesson: Enterprise equipment is enterprise-grade stupid

The Mobile Carrier

Setup: Major cellular provider, "optimizing" video delivery

Problem: Live streams randomly freezing

Investigation:

  • Strong signal ✓
  • High bandwidth ✓
  • Low packet loss ✓

Root Cause: Carrier's "video optimizer" transcoding on the fly, adding 8-second delay

Solution: Encrypt everything, make it look like HTTPS

Lesson: ISPs will break anything they can see

The Hotel Conference

Setup: Major tech conference at luxury hotel

Problem: 2,000 attendees, video demos all failing

Investigation:

  • "1Gbps" WiFi advertised ✓
  • Professional AV team ✓
  • Dedicated bandwidth ✓

Root Cause:

  • Access points limited to 50 clients each
  • DHCP pool of 254 addresses
  • Single DSL uplink shared with guest rooms

Solution: Everyone used phone hotspots

Lesson: Never trust venue internet

Part 7: The Protocols We Actually Need

For the Network We Have, Not the Network We Want

Adaptive Bitrate Everything

  • Assume bandwidth will randomly change
  • Assume latency will spike to infinity
  • Assume 10% packet loss is "good"
  • Have 5 quality levels ready
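
A minimal version of "have 5 quality levels ready" is just a ladder and a safety margin: measure recent throughput, discount it because the measurement will be wrong by the time you act on it, and pick the highest rung that still fits. The bitrates below are illustrative, not a recommendation.

    # Illustrative bitrate ladder (kbps); real ladders come from the encoder config.
    LADDER_KBPS = [400, 800, 1500, 3000, 6000]

    def pick_bitrate(measured_kbps: float, safety: float = 0.7) -> int:
        """Pick the highest rendition that fits inside a safety margin of
        recently measured throughput."""
        budget = measured_kbps * safety
        usable = [b for b in LADDER_KBPS if b <= budget]
        return usable[-1] if usable else LADDER_KBPS[0]

    # pick_bitrate(2400) -> 1500: leave headroom for the dip that is coming.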

Multiple Protocol Support

  • Try QUIC first (might work)
  • Fall back to WebRTC (probably works)
  • Fall back to HLS (always works, badly)
  • Keep RTMP running (for that one client)
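
The fallback chain itself is boring code: an ordered list of attempts with short timeouts, where the last entry is the one that always works and always disappoints. The probe functions below are placeholders for whatever your actual QUIC/WebRTC/HLS setup calls are.

    from typing import Callable, Optional

    def try_quic() -> bool:
        """Placeholder: attempt a QUIC/HTTP-3 session here (UDP 443 may be blocked)."""
        return False

    def try_webrtc() -> bool:
        """Placeholder: run ICE and bring up a WebRTC media channel here."""
        return False

    def connect_somehow(probes: list[tuple[str, Callable[[], bool]]]) -> Optional[str]:
        """Try each transport in order; return the name of the first that comes up."""
        for name, probe in probes:
            try:
                if probe():
                    return name
            except Exception:
                pass               # a blocked port and a broken proxy look identical
        return None

    # Ordered by hope, terminated by HLS because HLS always "works".
    transport = connect_somehow([
        ("quic", try_quic),
        ("webrtc", try_webrtc),
        ("hls", lambda: True),     # degrade to segments-over-HTTP and accept the latency
    ])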

Client-Side Buffering

  • Buffer 30 seconds because networks are garbage
  • But pretend it's 2 seconds for user experience
  • Lie about live streaming (it's always delayed)
  • Hide the horror from users
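
The buffering logic itself is a couple of thresholds: don't start until there is enough runway, keep far more than you admit to, and stall quietly when it drains. The numbers below are the kind of thing you tune per deployment, not constants from any spec.

    class PlaybackBuffer:
        """Toy model of a client buffer: big target, small admitted startup delay."""

        TARGET_SECS = 30.0   # what we actually keep, because networks are garbage
        START_SECS = 2.0     # what we admit to, so playback feels "instant"
        PANIC_SECS = 0.5     # below this, stall and call it "buffering"

        def __init__(self):
            self.buffered = 0.0

        def on_segment(self, duration_secs: float):
            self.buffered = min(self.buffered + duration_secs, self.TARGET_SECS)

        def can_start(self) -> bool:
            return self.buffered >= self.START_SECS

        def on_played(self, elapsed_secs: float) -> bool:
            """Drain the buffer; return True if playback has to stall."""
            self.buffered = max(self.buffered - elapsed_secs, 0.0)
            return self.buffered < self.PANIC_SECS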

FEC (Forward Error Correction)

  • Send every packet twice
  • No wait, three times
  • Actually, just use 200% overhead
  • Bandwidth is cheaper than reliability
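
Real FEC schemes (the kind SRT and WebRTC use) are cleverer than "send everything twice", but the simplest useful version is one XOR parity packet per group: lose any single packet in the group and it can be rebuilt. A sketch, with the group size, padding, and length handling left deliberately naive:

    from functools import reduce
    from typing import Optional

    def xor_parity(packets: list[bytes]) -> bytes:
        """One parity packet per group: the XOR of all packets, zero-padded to max length."""
        size = max(len(p) for p in packets)
        padded = [p.ljust(size, b"\x00") for p in packets]
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), padded)

    def recover(received: list[Optional[bytes]], parity: bytes) -> list[Optional[bytes]]:
        """Rebuild at most one missing packet in the group from the parity packet."""
        missing = [i for i, p in enumerate(received) if p is None]
        if len(missing) != 1:
            return received        # zero losses: fine; two or more: this FEC can't help
        acc = bytearray(parity)
        for p in received:
            if p is not None:
                for i, byte in enumerate(p.ljust(len(parity), b"\x00")):
                    acc[i] ^= byte
        received[missing[0]] = bytes(acc)
        return received

    group = [b"pkt0", b"pkt1!", b"pkt2"]
    parity = xor_parity(group)
    print(recover([b"pkt0", None, b"pkt2"], parity))   # the middle packet comes back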

Part 8: The Uncomfortable Truths

Things That Will Never Get Fixed

  1. Bufferbloat - Router manufacturers don't care
  2. NAT - IPv6 adoption is still at 40% after 20 years
  3. Middleboxes - Security theater requires packet inspection
  4. ECN - Blocked by default because "security"
  5. Corporate networks - IT departments fear change
  6. Mobile carriers - "Optimization" = profit
  7. Hotel WiFi - Lowest bidder always wins

What This Means for Engineering

Stop Optimizing for Good Networks

  • They don't exist
  • They never will exist
  • Design for the worst case
  • It's always the worst case

Protocol Support Is Forever

  • Once you support it, you can't remove it
  • Some customer depends on it
  • Their network only allows that protocol
  • You're stuck with it forever

Complexity Is Mandatory

  • Simple solutions don't work
  • Complex solutions barely work
  • Everything is a workaround
  • Embrace the chaos

Part 9: What Would Actually Fix This?

The Nuclear Option

Throw away the entire internet and start over with:

  • Mandatory FQ-CoDel on every device
  • ECN required and respected
  • No middleboxes allowed
  • End-to-end principle enforced
  • TCP_NOTSENT_LOWAT by default
  • Criminal penalties for bufferbloat

The Realistic Option

Continue building increasingly complex workarounds:

  • More protocols
  • Bigger buffers
  • Smarter clients
  • More CDN locations
  • Machine learning to predict failures
  • Acceptance that it's all terrible

Part 10: WINK's Approach - Embrace the Chaos

Our Philosophy

We don't build for the network we want. We build for the dumpster fire we have.

Multi-Protocol Support

  • RTSP for local networks that work
  • HLS for networks that don't
  • SRT for broadcast customers
  • QUIC for the brave
  • WebRTC for browsers
  • RTMP because it won't die

Adaptive Everything

  • Bitrate adaptation
  • Protocol switching
  • Path selection
  • Buffer sizing
  • FEC levels
  • Resignation levels

CDN Integration

  • We accept CDNs are mandatory
  • Build for edge delivery
  • Cache everything
  • Trust nothing
  • Verify everything
  • Cry quietly

Conclusion: It's Not Getting Better

The internet is broken. It has been broken since 1995. It will remain broken.

The quote that started this document describes a network that will never exist. We don't have FQ-CoDel everywhere. We don't have working ECN. We have middleboxes destroying everything. TCP_NOTSENT_LOWAT is a fantasy.

Instead, we have:

  • Bufferbloat making "fast" connections slow
  • Middleboxes breaking every protocol
  • NAT destroying peer-to-peer
  • Carriers "optimizing" traffic into uselessness
  • Firewalls blocking anything interesting
  • Networks that are actively hostile to real-time video

This is why CDNs exist. This is why we need 15 protocols. This is why video streaming is still hard in 2023.

Our job isn't to fix it - our job is to make video work despite it being broken.

Welcome to the reality of network engineering: We're not building technology, we're building workarounds for everyone else's failures.

Appendix: Recommended Reading

  • "Bufferbloat: Dark Buffers in the Internet" - Jim Gettys
  • "The Middlebox Morass" - Every network engineer's autobiography
  • "ECN: A Protocol Everyone Blocks" - A tragedy in three acts
  • "Why We Can't Have Nice Things" - The Internet Engineering Task Force

About This Document

This technical brief was written by WINK Streaming engineers who have spent decades working around network failures. Every horror story is real. Every problem still exists. Every solution is a compromise.

For more information about how WINK products handle the reality of broken networks, visit wink.co or contact our support team, who have seen it all and fixed none of it, because the problems aren't on our end.

Copyright 2023 WINK Streaming. All rights reserved. The networks aren't.

Need help navigating the network chaos? Contact our support team.
