The Fundamental Trade-off
TCP and UDP both operate at Layer 4 (Transport layer) of the OSI model and both use port numbers to direct traffic to the correct application. The difference is reliability versus speed. TCP guarantees that every segment arrives at the destination in the correct order, with corrupted segments retransmitted automatically. UDP sends packets and doesn't check whether they arrived — it's fire-and-forget. This reliability comes at a cost: TCP's handshake, acknowledgements, and retransmissions add overhead and latency. UDP's lack of overhead makes it faster and lower-latency, at the expense of reliability.
TCP — Reliability Through Connection
TCP establishes a connection before any data is sent using the three-way handshake: the client sends a SYN (synchronise) segment → the server responds with SYN-ACK (synchronise-acknowledge) → the client sends ACK (acknowledge). Only after this handshake is complete does data flow. The handshake establishes sequence numbers that both sides use to track which segments have been received and which need retransmission. When the connection ends, a four-way FIN-ACK sequence tears it down gracefully.
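The handshake is invisible to applications — the OS performs it inside the connect()/accept() calls. A minimal loopback sketch (address, port choice, and payload are illustrative) shows that data only moves once both calls return:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
srv.listen(1)                     # passive open: ready to answer SYNs
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()        # returns only after SYN/SYN-ACK/ACK completes
    conn.sendall(conn.recv(1024)) # echo the client's data back
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))  # active open: SYN -> SYN-ACK -> ACK
cli.sendall(b"hello")             # data flows only after the handshake
reply = cli.recv(1024)
cli.close()
t.join()
srv.close()
print(reply)                      # b'hello'
```

Closing each socket triggers the FIN-ACK teardown sequence described above, again handled entirely by the OS.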
TCP reliability features:
- Sequencing — each segment is numbered so the receiver can reassemble them in order even if they arrive out-of-order (common in networks where packets take different paths).
- Acknowledgements — the receiver sends ACKs confirming received segments; if an ACK isn't received within a timeout, the sender retransmits.
- Flow control — the receiver advertises a window size (how much data it can buffer) to prevent the sender from overwhelming it.
- Congestion control — TCP reduces its send rate when it detects network congestion (dropped packets), preventing network meltdown.

These features make TCP ideal for applications where data integrity is paramount — downloading a file, sending email, browsing a website. A missing byte in an HTML page or a corrupted block in a downloaded file would break the application.
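Sequencing means the application sees a clean, in-order byte stream no matter how the network fragmented it. A hedged sketch (loopback transfer, sizes illustrative): the sender pushes 1 MiB in one call, the receiver loops on recv() — each call may return a partial chunk, but the bytes always arrive complete and in order:

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]
payload = bytes(range(256)) * 4096     # 1 MiB with a predictable pattern

def serve():
    conn, _ = srv.accept()
    conn.sendall(payload)              # TCP splits this into many segments
    conn.close()

t = threading.Thread(target=serve)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
chunks = []
while True:
    chunk = cli.recv(65536)            # each recv() returns in-order bytes
    if not chunk:
        break                          # peer closed: the stream is complete
    chunks.append(chunk)
received = b"".join(chunks)
cli.close()
t.join()
srv.close()
print(received == payload)             # True: every byte, in order
```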
The TCP SYN Flood Attack
The TCP handshake creates a vulnerability: during a SYN flood attack, an attacker sends thousands of SYN packets to a server with spoofed source IP addresses. The server responds with SYN-ACK to each and waits for the final ACK — but the ACK never comes (the source IP is fake). The server's half-open connection table fills up, and it can no longer accept legitimate connections. This is a DoS (Denial of Service) attack that exploits TCP's reliability mechanism. Mitigation: SYN cookies (the server doesn't allocate state until the ACK arrives), rate limiting SYN packets, and firewall rules detecting abnormal SYN rates. SYN flood attacks appear directly in Security+ and Network+ exam scenarios.
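The SYN-cookie idea can be sketched conceptually (this is not the kernel's exact encoding — the secret, field layout, and HMAC choice here are illustrative): the server derives its initial sequence number from the client's address plus a secret, then recomputes it when the final ACK arrives instead of storing half-open state:

```python
import hmac
import hashlib

SECRET = b"rotate-me-periodically"     # hypothetical server-side secret

def syn_cookie(src_ip: str, src_port: int, dst_port: int) -> int:
    """Derive a 32-bit initial sequence number from the connection tuple."""
    msg = f"{src_ip}:{src_port}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def ack_is_valid(src_ip: str, src_port: int, dst_port: int, ack_num: int) -> bool:
    # The final ACK acknowledges our ISN + 1; recompute it rather than
    # looking up stored half-open state.
    return ack_num == (syn_cookie(src_ip, src_port, dst_port) + 1) & 0xFFFFFFFF

cookie = syn_cookie("203.0.113.7", 51515, 443)
print(ack_is_valid("203.0.113.7", 51515, 443, (cookie + 1) & 0xFFFFFFFF))  # True
print(ack_is_valid("198.51.100.9", 40000, 443, 12345))                     # False
```

Because the spoofed sources never send the final ACK, the server allocates nothing for them — the flood consumes bandwidth but not the connection table.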
UDP — Speed Through Simplicity
UDP has no handshake, no sequencing, no acknowledgements, and no retransmission. A UDP datagram is sent and immediately forgotten by the sender. If a packet is lost, dropped by a congested router, or arrives out of order, UDP doesn't know and doesn't care. This sounds like a weakness — and for some applications it is — but for many applications UDP's speed and simplicity are exactly what's needed.
DNS queries use UDP port 53. A DNS query is a single small packet and its response — there's no need for the overhead of a three-way handshake for a sub-millisecond lookup. If the UDP query is lost, the DNS client simply retransmits after a short timeout — the application handles reliability at a higher level. VoIP and video conferencing use UDP because audio and video are delay-sensitive. A retransmitted voice packet that arrives 500ms late is useless — by the time it arrives, the conversation has moved on. A slightly choppy call is far better than a perfectly reliable call that's a second behind the speaker. Online gaming uses UDP for the same reason — position updates for a player must be current; an old position retransmitted after a delay is worse than no update at all. DHCP uses UDP (ports 67/68) because it operates before the client has an IP address — a TCP connection requires both parties to have IP addresses.
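The "application handles reliability" pattern is simple to sketch: send a datagram, wait with a timeout, and resend on silence. A hedged loopback example (the 1-second timeout, retry count, and payload are illustrative, not values any real DNS client mandates):

```python
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
srv.bind(("127.0.0.1", 0))
port = srv.getsockname()[1]

def responder():
    data, addr = srv.recvfrom(512)
    srv.sendto(b"answer:" + data, addr)   # one datagram back — no handshake

t = threading.Thread(target=responder)
t.start()

cli = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
cli.settimeout(1.0)                       # illustrative app-level timeout
reply = None
for attempt in range(3):                  # app-level reliability: retry on loss
    cli.sendto(b"query", ("127.0.0.1", port))
    try:
        reply, _ = cli.recvfrom(512)
        break
    except socket.timeout:
        continue                          # datagram lost: just resend it
cli.close()
t.join()
srv.close()
print(reply)                              # b'answer:query'
```

Note there is no connect/listen/accept anywhere — each datagram is independent, which is exactly why DHCP can use UDP before the client has an address.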
TCP vs UDP Protocol Reference
| Protocol | Transport | Port(s) | Why That Transport? |
|---|---|---|---|
| HTTP | TCP | 80 | Web pages must be complete and correct |
| HTTPS | TCP | 443 | Encrypted web — reliability required |
| FTP | TCP | 20/21 | File transfers — no bytes can be lost |
| SSH | TCP | 22 | Remote shell — every keystroke must arrive |
| SMTP | TCP | 25 | Email delivery — must not lose messages |
| DNS | UDP (primarily) | 53 | Small query/response — speed over reliability; TCP for zone transfers |
| DHCP | UDP | 67/68 | No IP address yet — can't establish TCP |
| TFTP | UDP | 69 | Simple file transfer — no auth, minimal overhead |
| SNMP | UDP | 161/162 | Monitoring — speed over guaranteed delivery |
| VoIP/RTP | UDP | Various | Real-time audio — latency-sensitive |
| NTP | UDP | 123 | Time sync — small periodic query/response; loss is tolerable |
| RDP | TCP | 3389 | Remote desktop — screen data must be reliable |
TCP vs UDP Summary
| Property | TCP | UDP |
|---|---|---|
| Connection | Connection-oriented (3-way handshake) | Connectionless — no handshake |
| Reliability | Guaranteed delivery and ordering | Best-effort — no guarantees |
| Error handling | Retransmits lost segments automatically | No retransmission — app must handle |
| Speed | Slower — handshake and ACK overhead | Faster — minimal overhead |
| Header size | 20 bytes minimum | 8 bytes |
| Use cases | HTTP/S, email, SSH, file transfer, RDP | DNS, DHCP, VoIP, streaming, gaming |
QoS and Traffic Priority
QoS (Quality of Service) is a network mechanism that prioritises certain traffic types over others when the network is congested. UDP-based real-time traffic (VoIP, video conferencing) is highly sensitive to delay and packet loss — even brief congestion adding 200ms of latency degrades voice quality noticeably. QoS marks VoIP packets with a higher priority using DSCP (Differentiated Services Code Point) values in the IP header, and network devices service higher-priority queues first during congestion. A properly configured QoS policy ensures that VoIP packets are forwarded ahead of email and file-transfer traffic, even when all are competing for the same link. This is why VoIP deployments require QoS configuration alongside the phones and call manager — without it, congested networks degrade voice quality unpredictably.
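Applications can request a DSCP marking themselves via the socket API. A Linux-specific sketch: DSCP occupies the upper six bits of the old IP ToS byte, so EF (Expedited Forwarding, value 46 — the class conventionally used for VoIP) becomes 46 << 2 = 0xB8:

```python
import socket

EF_DSCP = 46                    # Expedited Forwarding class
tos = EF_DSCP << 2              # shift DSCP into the DS field position (0xB8)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
tos_readback = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos_readback)             # 184 (0xB8) — outbound packets now carry EF
sock.close()
```

Marking alone does nothing, of course — the switches and routers along the path must be configured to honour the marking and service the EF queue first.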
For the Network+ exam: QoS is configured at Layer 3 (DSCP markings in the IP header) and enforced at Layer 2 (802.1p class of service in Ethernet frames). The most common exam scenario: "users report choppy VoIP calls during peak hours when network utilisation is high — what should the administrator configure?" → QoS with higher priority for VoIP traffic.
TCP Port States — netstat and Troubleshooting
The netstat command (or ss on modern Linux) displays active TCP connections and their states, which is essential for network troubleshooting. Key states:
- LISTEN — the application is waiting for incoming connections on that port (a web server listening on port 443).
- SYN_SENT — a connection attempt has been made and the three-way handshake is in progress.
- ESTABLISHED — an active connection exists between two endpoints.
- TIME_WAIT — the connection has ended but the OS is waiting to ensure all packets have been delivered before releasing the port.
- CLOSE_WAIT — the remote end has closed the connection but the local application hasn't yet.

A large number of TIME_WAIT connections is normal under load. A large number of CLOSE_WAIT connections may indicate an application that's not closing connections cleanly. Running netstat -an and identifying which ports are in LISTEN state confirms which services are running and accessible — a key step in port-based firewall troubleshooting.
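On Linux, the same state information netstat and ss display lives in /proc/net/tcp, where each connection's state appears as a hex code. A Linux-specific sketch that tallies states without shelling out to either tool (the dict covers the states named above; everything else is bucketed as OTHER):

```python
import socket

# Kernel TCP state codes (a subset; see the kernel's tcp_states definitions)
STATES = {
    "01": "ESTABLISHED", "02": "SYN_SENT", "06": "TIME_WAIT",
    "08": "CLOSE_WAIT", "0A": "LISTEN",
}

def tcp_state_counts(path="/proc/net/tcp"):
    """Tally TCP connection states from the kernel's connection table."""
    counts = {}
    with open(path) as f:
        next(f)                                # skip the header row
        for line in f:
            code = line.split()[3]             # 4th column is the state code
            name = STATES.get(code, "OTHER")
            counts[name] = counts.get(name, 0) + 1
    return counts

# Open a listening socket so at least one LISTEN entry exists.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
counts = tcp_state_counts()
print(counts)
srv.close()
```

This is the programmatic equivalent of eyeballing netstat -an output — useful when scripting a check for runaway CLOSE_WAIT counts.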