Transport Layer Foundation

TCP turns unreliable packets into a dependable conversation.

The Transmission Control Protocol sits above IP and gives applications something raw packets do not: ordered delivery, retransmissions, flow control, congestion control, and a consistent byte stream that survives loss and reordering.

Reliable: Lost data is retransmitted until it arrives or the connection fails.
Ordered: Applications read bytes in sequence, even if IP delivers packets out of order.
Adaptive: Send rate changes as receivers and networks expose their limits.
Core idea

Why TCP exists

IP is a best-effort packet delivery system. Routers may drop, duplicate, delay, or reorder packets, and IP itself does not repair any of that. TCP adds the missing transport semantics so applications can treat the network as a long-lived stream of bytes rather than a gamble of independent datagrams.

Mechanics

The pieces that make TCP dependable

TCP is not a single feature. It is a set of tightly linked mechanisms: segment headers for state, acknowledgments for feedback, windows for pacing, timers for recovery, and a connection model that allows both endpoints to reason about the same byte stream.

Ports

Source and destination ports identify which applications should receive the byte stream.

Sequence numbers

Every byte is numbered so receivers can reassemble the stream, detect loss, and discard duplicates.

Acknowledgments

The ACK number says, “I have everything up to byte N-1; send me byte N next.”
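The cumulative ACK rule can be sketched as a small function: given the byte ranges that have arrived, the receiver acknowledges the first byte it is still missing. This is a simplified illustration of the bookkeeping, not kernel code.

```python
def cumulative_ack(received, isn=0):
    """Return the ACK number: the first byte not yet received.

    received: iterable of (start, length) byte ranges that have arrived.
    isn: sequence number of the first expected byte.
    Simplified model of TCP's cumulative acknowledgment.
    """
    next_expected = isn
    for start, length in sorted(received):
        if start > next_expected:      # a hole in the stream: stop at the gap
            break
        next_expected = max(next_expected, start + length)
    return next_expected

# Bytes 0-99 and 200-299 arrived; 100-199 is missing, so ACK = 100.
print(cumulative_ack([(0, 100), (200, 100)]))  # 100
```

Note that the ACK cannot advance past the first gap, even though later bytes have arrived; that limitation is what the SACK option (described below) addresses.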

Flags

SYN, ACK, FIN, RST, PSH, and URG control state changes and delivery semantics.

Window size

The advertised receive window limits how much unacknowledged data the sender may keep in flight.

Options

MSS, window scaling, timestamps, and SACK extend TCP for higher throughput and better loss recovery.

Lifecycle

From SYN to FIN

A TCP connection is stateful. Each side keeps track of sequence space, send and receive windows, timers, and which transitions are legal next. That shared state is why a connection can recover from loss without the application rebuilding context on every packet.

01
Setup

Three-way handshake

Client sends SYN with an initial sequence number, server replies with SYN-ACK, client confirms with ACK. This synchronizes state and negotiates options before application data flows.
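From an application's point of view the handshake is hidden inside the sockets API: `connect()` blocks until the three-way exchange completes, and `accept()` returns a socket for an already-established connection. A minimal loopback sketch using Python's standard library:

```python
import socket
import threading

# The kernel performs SYN / SYN-ACK / ACK; the application only sees
# connect() and accept() succeed once the handshake is done.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_one():
    conn, _ = server.accept()          # returns after the handshake completes
    conn.sendall(b"hello")
    conn.close()

t = threading.Thread(target=serve_one)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))    # blocks until SYN, SYN-ACK, ACK finish

data = b""
while chunk := client.recv(1024):      # read until the server's FIN (EOF)
    data += chunk
print(data)                            # b'hello'

client.close()
t.join()
server.close()
```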

02
Transfer

Reliable byte stream

Applications write bytes, TCP breaks them into segments, receivers ACK what arrived, and senders retransmit missing data when timers or duplicate ACKs reveal loss.

03
Shutdown

Graceful close

Each side sends FIN when it has no more data to send. TCP supports half-close, so one direction may end while the other still delivers remaining bytes.
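In the sockets API, half-close corresponds to `shutdown(SHUT_WR)`: the caller sends FIN and promises no more data, but can still read everything the peer has yet to send. A small loopback sketch:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

def read_then_reply():
    conn, _ = srv.accept()
    data = b""
    while chunk := conn.recv(1024):    # read until the client's FIN (EOF)
        data += chunk
    conn.sendall(b"got " + data)       # our direction is still open
    conn.close()

t = threading.Thread(target=read_then_reply)
t.start()

c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
c.connect(srv.getsockname())
c.sendall(b"bye")
c.shutdown(socket.SHUT_WR)             # half-close: send FIN, keep reading

reply = b""
while chunk := c.recv(1024):           # the reverse direction still delivers
    reply += chunk
print(reply)                           # b'got bye'

c.close()
t.join()
srv.close()
```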

Control Loops

TCP balances two kinds of pressure

Sending too fast causes drops. Sending too slowly wastes capacity. TCP continuously estimates both receiver capacity and network capacity, then chooses the safer limit.

Flow control

Protects the receiver. The receive window stops a fast sender from overwhelming a slow reader or a full kernel buffer.

Congestion control

Protects the network. The congestion window estimates what the path can sustain without building harmful queues.

Effective send window

A sender can generally transmit up to the smaller of the advertised receive window (`rwnd`) and the congestion window (`cwnd`). That single rule captures why TCP is cooperative with both endpoints and the network path itself.
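That rule reduces to a one-line calculation. The sketch below is a simplified model (real stacks track flight size per segment and handle several edge cases):

```python
def effective_window(rwnd, cwnd, bytes_in_flight):
    """Bytes the sender may still transmit right now.

    rwnd: receiver's advertised window (protects the receiver).
    cwnd: sender's congestion window estimate (protects the network).
    bytes_in_flight: data sent but not yet acknowledged.
    """
    return max(0, min(rwnd, cwnd) - bytes_in_flight)

# Receiver allows 64 KiB, the network estimate is 16 KiB, 4 KiB is unacked:
print(effective_window(65536, 16384, 4096))  # 12288
```

Whichever window is smaller wins: a slow reader throttles the sender even on an idle network, and a congested path throttles it even when the receiver has ample buffer space.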

Loss Recovery

How TCP reacts when the network misbehaves

Acknowledgments are TCP’s feedback channel. They reveal progress, expose stalls, and hint at missing data. Recovery depends on reading those hints quickly enough to keep throughput high without causing collapse.

Timeout-based retransmission

If an ACK does not arrive within the retransmission timeout, TCP assumes the segment or its ACK was lost and sends the data again. The timeout is derived from measured round-trip time plus a safety margin, so TCP can adapt to both fast LANs and unpredictable WAN paths.
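The adaptive timeout is commonly computed from a smoothed RTT and its variance, in the style standardized by RFC 6298. A simplified sketch (the real algorithm also handles clock granularity, backoff after timeouts, and Karn's rule for retransmitted segments):

```python
class RtoEstimator:
    """Simplified RFC 6298-style retransmission timeout estimator."""
    ALPHA, BETA = 1 / 8, 1 / 4   # standard smoothing gains
    K, MIN_RTO = 4, 1.0          # variance multiplier; RFC 6298's 1 s floor

    def __init__(self):
        self.srtt = None         # smoothed round-trip time
        self.rttvar = None       # round-trip time variance

    def sample(self, rtt):
        if self.srtt is None:    # first measurement seeds both estimates
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            self.rttvar = (1 - self.BETA) * self.rttvar \
                + self.BETA * abs(self.srtt - rtt)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * rtt

    @property
    def rto(self):
        return max(self.MIN_RTO, self.srtt + self.K * self.rttvar)

est = RtoEstimator()
for r in (0.100, 0.120, 0.080):   # RTT samples in seconds
    est.sample(r)
print(est.rto)                     # 1.0: the floor dominates on this fast path
```

On a LAN-fast path the 1-second floor dominates; on a high-variance WAN path the `srtt + 4 * rttvar` term takes over, which is the safety margin the text describes.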

Selective repair

With SACK enabled, receivers can report exactly which byte ranges arrived. That lets senders repair holes precisely instead of blindly resending large chunks that might already be buffered successfully.
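The sender-side consequence can be sketched as hole detection: combine the cumulative ACK with the reported SACK ranges and retransmit only the gaps. A simplified illustration (real scoreboards also track retransmission state and reneging):

```python
def sack_holes(next_ack, sacked):
    """Byte ranges the sender should consider retransmitting.

    next_ack: cumulative ACK (first byte the receiver is missing).
    sacked: list of (start, end) ranges the receiver reports as arrived,
            with end exclusive.
    Returns (start, end) holes below the highest SACKed byte.
    """
    holes, cursor = [], next_ack
    for start, end in sorted(sacked):
        if start > cursor:
            holes.append((cursor, start))   # gap before this SACK block
        cursor = max(cursor, end)
    return holes

# Receiver has everything up to 1000, plus 2000-3000 and 4000-5000:
print(sack_holes(1000, [(2000, 3000), (4000, 5000)]))
# [(1000, 2000), (3000, 4000)]
```

Without SACK, the sender would only know "everything before 1000 arrived" and might resend 4000 bytes; with it, the repair is exactly the two missing ranges.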

Congestion control algorithms

  • Slow start quickly probes available capacity by doubling the congestion window each round-trip time.
  • Congestion avoidance grows more carefully once TCP nears the path limit, usually adding about one MSS per RTT.
  • Fast retransmit resends likely-lost data after duplicate ACKs instead of waiting for a long timeout.
  • Fast recovery reduces the sending rate without dropping all the way back to one segment.
  • Modern variants such as CUBIC and BBR estimate bandwidth and delay differently, but they still operate within TCP’s reliability model.
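How these phases interact can be shown with a toy Reno-style trace: the window is counted in segments, loss events are supplied by the caller, and real implementations are far more involved. A sketch:

```python
def simulate_reno(rounds, loss_rounds, ssthresh=64, cwnd=1):
    """Toy per-RTT trace of a Reno-like congestion window (in segments)."""
    trace = []
    for rtt in range(rounds):
        if rtt in loss_rounds:            # loss detected via duplicate ACKs
            ssthresh = max(cwnd // 2, 2)  # multiplicative decrease
            cwnd = ssthresh               # fast recovery: not back to 1
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: double each RTT
        else:
            cwnd += 1                     # congestion avoidance: +1 MSS per RTT
        trace.append(cwnd)
    return trace

# Exponential growth, a loss at round 5, then linear probing:
print(simulate_reno(8, loss_rounds={5}))
# [2, 4, 8, 16, 32, 16, 17, 18]
```

The characteristic sawtooth is visible even in this toy: rapid probing, a halving on loss, then cautious additive growth.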

Modern Usage

Where TCP still dominates

TCP remains the default transport for systems where correctness beats raw immediacy. Even when applications speak HTTP, PostgreSQL, SSH, or SMTP, TCP usually provides the delivery contract under the hood.

Web traffic

HTTP/1.1 and HTTP/2 usually run over TLS over TCP. TCP handles in-order delivery; TLS adds confidentiality and authentication.

Databases and APIs

Stateful services prefer TCP because a missing byte in a query or response is unacceptable.

Files and replication

Bulk transfers, backups, and stream replication benefit from retransmissions, backpressure, and congestion adaptation.

When not to use it

Real-time voice, gaming, and some media systems may prefer UDP or QUIC when lower latency matters more than perfect ordering.

Subtle but important

TCP is stream-oriented, not message-oriented

If an application writes 10 bytes and then 20 bytes, the peer might read 30 at once, 15 and 15, or any other split. TCP preserves order, not application message boundaries. Protocols built on top of TCP must define their own framing, such as content lengths, delimiters, or fixed-size headers.
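One common fix is length-prefix framing: each message is preceded by its size, so the reader can rebuild boundaries no matter how TCP splits the stream. A minimal sketch (the 4-byte big-endian prefix is one conventional choice, not something TCP itself defines):

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its 4-byte big-endian length."""
    return struct.pack("!I", len(payload)) + payload

def read_messages(stream: bytes):
    """Recover complete messages from a raw byte stream; leftover
    partial data simply waits for more bytes to arrive."""
    msgs, i = [], 0
    while i + 4 <= len(stream):
        (length,) = struct.unpack_from("!I", stream, i)
        if i + 4 + length > len(stream):  # partial message at the tail
            break
        msgs.append(stream[i + 4 : i + 4 + length])
        i += 4 + length
    return msgs

# Two application writes may arrive as one read; framing restores them.
wire = frame(b"0123456789") + frame(b"01234567890123456789")
print(read_messages(wire))  # [b'0123456789', b'01234567890123456789']
```

Delimiter-based framing (as in HTTP/1.1 headers) and fixed-size records are the other two strategies the text mentions; all three solve the same boundary problem.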

Performance tradeoff

Head-of-line blocking is both a feature and a cost

Because TCP guarantees in-order delivery, later bytes wait behind earlier missing bytes. That is excellent for correctness, but it can increase latency for multiplexed traffic on lossy paths. QUIC was designed in part to reduce that pain at the application transport level.

Troubleshooting

What to look for when TCP feels slow or fragile

Packet captures, socket statistics, and latency measurements all become easier to interpret once you know which control loop is unhappy: setup, delivery, receiver pacing, or network congestion.

01 Repeated retransmissions often mean packet loss, reordering, or a middlebox problem.

02 A tiny receive window points to application slowness or receiver buffer pressure.

03 High RTT with a large congestion window can indicate bufferbloat.

04 Frequent resets suggest abrupt application termination, protocol mismatch, or policy enforcement.

05 Handshake failures can come from blocked ports, SYN filtering, or asymmetric routing.

Bottom line

TCP is a reliability engine with feedback at every step.

It starts with synchronized state, numbers every byte, learns from acknowledgments, limits flight with windows, retransmits what is missing, and slows down when the path is under stress. That combination is why so much of the internet still relies on TCP decades after it was defined.