Architecture

This page describes the internal design and concurrency model of the HTTPServer library.

Request Lifecycle

When a client connects to the server, the request passes through several components:

  1. Acceptance: The Server loop accepts the new socket connection.

  2. Concurrency: The connection is wrapped in a ConnectionGuard (for rate limiting) and enqueued in the ThreadPool.

  3. Parsing: A worker thread picks up the task and uses the HttpParser to translate the raw bytes into an HttpRequest object.

  4. Routing: The Router matches the request method and path against registered routes.

  5. Handling: The matching callback (handler) is executed, returning an HttpResponse.

  6. Transmission: The response is serialized and sent back over the socket.

  7. Cleanup: RAII guards ensure resources (sockets, connection counts) are released even if an error occurs.
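The routing and handling steps (4–5) can be sketched as follows. This is a minimal illustration, not the library's actual API: the simplified HttpRequest/HttpResponse structs and the exact-match routing table are assumptions, and the real Router likely supports path parameters and wildcards.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <utility>

// Hypothetical simplified types; the library's real HttpRequest/HttpResponse
// also carry headers, bodies, and version information.
struct HttpRequest { std::string method; std::string path; };
struct HttpResponse { int status; std::string body; };

// A minimal routing table keyed on (method, path), exact matches only.
class Router {
public:
    using Handler = std::function<HttpResponse(const HttpRequest&)>;

    void addRoute(const std::string& method, const std::string& path, Handler h) {
        routes_[{method, path}] = std::move(h);
    }

    HttpResponse dispatch(const HttpRequest& req) const {
        auto it = routes_.find({req.method, req.path});
        if (it == routes_.end())
            return {404, "Not Found"};  // step 4: no route matched
        return it->second(req);          // step 5: run the handler
    }

private:
    std::map<std::pair<std::string, std::string>, Handler> routes_;
};
```

A handler registered for `GET /health` would then be dispatched whenever the parser produces a request with that method and path, while anything unregistered falls through to a 404.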

Concurrency Model

The server uses a Fixed Thread Pool architecture to handle multiple concurrent requests efficiently, without the overhead of spawning a new thread for every connection.

  • ThreadPool: Manages a set of worker threads and a task queue. If the queue is full (defined by max_queue_size in config), new connections are rejected to prevent memory exhaustion.

  • ConnectionGuard: Tracks the number of active connections per IP address. It uses a ConnectedIp map to enforce max_connections_per_ip limits.
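The bounded-queue behavior described above can be sketched as a minimal fixed thread pool whose enqueue operation rejects work once the queue reaches its capacity. This is an illustrative sketch under assumed names; the library's real ThreadPool interface and internals may differ.

```cpp
#include <atomic>
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical sketch: fixed worker set, bounded task queue.
class ThreadPool {
public:
    ThreadPool(size_t workers, size_t max_queue_size)
        : max_queue_(max_queue_size) {
        for (size_t i = 0; i < workers; ++i)
            threads_.emplace_back([this] { workerLoop(); });
    }

    ~ThreadPool() {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            stop_ = true;
        }
        cv_.notify_all();
        for (auto& t : threads_) t.join();  // workers drain the queue first
    }

    // Returns false (connection rejected) when the queue is already full,
    // mirroring the max_queue_size back-pressure described above.
    bool enqueue(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lk(mtx_);
            if (queue_.size() >= max_queue_) return false;
            queue_.push(std::move(task));
        }
        cv_.notify_one();
        return true;
    }

private:
    void workerLoop() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lk(mtx_);
                cv_.wait(lk, [this] { return stop_ || !queue_.empty(); });
                if (stop_ && queue_.empty()) return;  // graceful exit
                task = std::move(queue_.front());
                queue_.pop();
            }
            task();  // run outside the lock so other workers can proceed
        }
    }

    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> queue_;
    std::vector<std::thread> threads_;
    size_t max_queue_;
    bool stop_ = false;
};
```

Rejecting at enqueue time keeps memory bounded under load: the accept loop can immediately close (or 503) a connection that the pool refuses, instead of buffering it indefinitely.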

Internal Safety Mechanisms

Idle IP Cleanup

To prevent the connection-tracking map from growing without bound, the PeriodicIdleIpCleanup component runs a background thread that periodically purges records for IPs that haven’t connected within the idle_timeout_seconds window.
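The purge pass itself might look like the following sketch. The map layout (IP string to last-seen timestamp) and the function name are assumptions; the real component presumably runs this logic in a loop on its background thread, holding the same lock that protects the ConnectedIp map.

```cpp
#include <cassert>
#include <chrono>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

// Hypothetical purge pass: drop tracking records for IPs whose last
// connection is older than the configured idle timeout.
void purgeIdleIps(std::map<std::string, Clock::time_point>& last_seen,
                  std::chrono::seconds idle_timeout,
                  Clock::time_point now) {
    for (auto it = last_seen.begin(); it != last_seen.end();) {
        if (now - it->second > idle_timeout)
            it = last_seen.erase(it);  // stale: stop tracking this IP
        else
            ++it;                      // recently active: keep the record
    }
}
```

Passing `now` in explicitly keeps the function deterministic and easy to test; the background thread would supply `Clock::now()` on each pass.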

Signal Handling

The Server::installSignalHandlers() method installs handlers for SIGINT and SIGTERM. When either signal is received, the server initiates a graceful shutdown, allowing active requests to finish before stopping the worker threads and closing listeners.
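A common shape for this, sketched below under assumed names (the flag and the free-standing installSignalHandlers are illustrative, not the library's actual internals): the handler only sets an async-signal-safe flag, and the accept loop polls it to begin the graceful shutdown.

```cpp
#include <cassert>
#include <csignal>

// Hypothetical shutdown flag. Signal handlers may only touch
// volatile sig_atomic_t (or lock-free atomics), so the handler does
// nothing but set it; the accept loop checks it between accepts.
namespace {
volatile std::sig_atomic_t g_shutdown_requested = 0;

void onSignal(int) { g_shutdown_requested = 1; }
}  // namespace

void installSignalHandlers() {
    std::signal(SIGINT, onSignal);
    std::signal(SIGTERM, onSignal);
}
```

Deferring the actual shutdown to the main loop is what makes "allow active requests to finish" possible: the handler cannot safely join threads or close sockets itself, but the loop can, once it observes the flag.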