Why Use a CDN?
In the early days of the internet, website content was delivered to users from a single, centralised server. As the internet expanded and more users came online, this model proved insufficient: it suffered from long load times, network congestion, and the risk of a single point of failure. As online content grew more complex, expanding from text and images to audio, video, and interactive media, these problems only intensified.
To address these challenges and improve content delivery, the concept of a Content Delivery Network (CDN) was developed. The first CDN was introduced by Akamai Technologies in 1998.
Latency and the Need for CDNs
Latency is the time it takes for data to travel from one point to another in a network, commonly referred to as “delay.” In the context of the internet, it is the time taken for a request from a user’s computer (like loading a web page) to reach the server hosting the website and for the server’s response (the web page’s content) to return to the user’s computer.
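To make this concrete, latency can be observed by timing a full request/response cycle. The sketch below is a minimal illustration using Python's standard library; the function name and the placeholder URL are assumptions for the example, and the measured time includes DNS lookup, connection setup, and data transfer, not just raw network delay.

```python
import time
import urllib.request

def measure_latency(url: str) -> float:
    """Return the round-trip time, in milliseconds, for a single HTTP request."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read()  # wait until the full response has arrived
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    # Placeholder URL; substitute the site you want to measure.
    print(f"Round trip took {measure_latency('https://example.com'):.1f} ms")
```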
In the early days of the internet, when servers were often located in a single, centralised location, latency could be high, especially for users far from that server. For example, a user in Asia accessing a website hosted on a server in North America would experience higher latency than a user in North America, simply because the data must travel a much greater distance.
This latency depends not only on physical distance but also on the number of "hops" the data makes along the route as it passes through different networks and devices such as routers and switches. Each hop introduces a small delay, so the more hops the data makes, the higher the latency.
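As a rough back-of-the-envelope illustration, one-way latency can be approximated as propagation delay along the fibre path plus a small processing delay at each hop. The figures in the sketch below (fibre speed, hop counts, per-hop delay) are illustrative assumptions, not measurements.

```python
# Rough latency estimate: propagation delay plus per-hop processing delay.
# All figures are illustrative assumptions, not measured values.
SPEED_IN_FIBRE_KM_PER_MS = 200  # light travels roughly 200 km per millisecond in fibre

def estimated_one_way_latency_ms(distance_km: float, hops: int, per_hop_ms: float = 0.5) -> float:
    propagation = distance_km / SPEED_IN_FIBRE_KM_PER_MS
    processing = hops * per_hop_ms
    return propagation + processing

# Asia to North America (~12,000 km, ~20 hops) vs. a nearby edge server (~200 km, ~5 hops)
print(estimated_one_way_latency_ms(12_000, 20))  # ~70 ms one way
print(estimated_one_way_latency_ms(200, 5))      # ~3.5 ms one way
```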
CDNs and the Reduction of Latency
CDNs were developed in part to solve this latency problem. By distributing and storing content on numerous servers located in various geographical locations (edge servers), a CDN brings the content geographically closer to the end users.
When a user requests content (such as a web page), the request goes to the nearest edge server rather than all the way to the origin server. Because the edge server is geographically closer to the user, the content travels a shorter distance over fewer hops, which reduces latency. The edge server can respond directly because it keeps a stored copy of the content, a process known as caching.
For example, using the CDN model, a user in Asia might receive content from an edge server located in Asia, even if the original content is hosted on a server in North America.
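A highly simplified sketch of this edge-caching behaviour is shown below. The class name, the in-memory dictionary, and the stand-in origin fetcher are purely illustrative assumptions; real CDNs use far more sophisticated storage, expiry, and request-routing logic.

```python
import time
from typing import Callable, Dict, Tuple

class EdgeCache:
    """Toy model of an edge server cache: serve content locally if possible,
    otherwise fetch it from the origin and keep a copy for later requests."""

    def __init__(self, fetch_from_origin: Callable[[str], bytes], ttl_seconds: int = 300):
        self._fetch_from_origin = fetch_from_origin
        self._ttl = ttl_seconds
        self._store: Dict[str, Tuple[bytes, float]] = {}

    def get(self, path: str) -> bytes:
        entry = self._store.get(path)
        if entry is not None:
            content, stored_at = entry
            if time.time() - stored_at < self._ttl:
                return content  # cache hit: served from the edge, no trip to the origin
        # Cache miss (or expired copy): go to the origin, then cache the result locally.
        content = self._fetch_from_origin(path)
        self._store[path] = (content, time.time())
        return content

# Example usage with a stand-in origin fetcher.
cache = EdgeCache(lambda path: f"content of {path}".encode())
cache.get("/index.html")  # miss: fetched from the origin and cached at the edge
cache.get("/index.html")  # hit: served directly from the edge
```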
This reduction in latency leads to faster load times, providing a better user experience. It is especially important for content that requires high bandwidth or real-time delivery, such as video streaming, where high latency can lead to buffering and reduced video quality.
The Genesis of the CDN
In 1995, the newly discovered comet Hale-Bopp was expected to make a close pass by Earth (its closest approach came in 1997). MIT planned to photograph the event and share the images online, but soon realised that its servers would not be able to handle the anticipated traffic. This led to the development of algorithms for distributing content across multiple servers, a concept fundamental to CDN technology.
Akamai, co-founded by Tom Leighton, one of the MIT professors who worked on these algorithms, commercialised the idea and launched the first CDN in 1998. This initial CDN infrastructure dramatically improved the speed, reliability, and scalability of internet content delivery, helping websites cope with the explosive growth of online traffic and users.
Why Do We Use CDNs?
CDNs are now a critical part of the internet infrastructure and serve multiple purposes:
- Improved Performance: By caching content at strategically placed physical nodes around the world, CDNs can deliver content to users more quickly. This reduces latency and enhances user experience, especially for bandwidth-heavy content like video and gaming.
- Reliability and Redundancy: By distributing content across a wide network of servers, CDNs reduce the risk of interruptions in service. If one server fails or experiences heavy traffic, requests can be rerouted to another server, ensuring consistent availability of content.
- Scalability: CDNs can easily handle spikes in traffic, making them ideal for websites with highly variable traffic or special events (like online sales or product launches) that generate massive short-term demand.
- Security: CDNs can provide enhanced security features such as DDoS protection. By filtering out malicious traffic at the edge servers, CDNs can stop many attacks before they ever reach the origin server (see the sketch after this list).
- Cost-Effectiveness: CDNs can reduce hosting costs by minimising the traffic that reaches the origin server, thereby reducing bandwidth costs.
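To illustrate the security point above, the sketch below shows one simplified way an edge server might throttle abusive clients before their traffic ever reaches the origin. It is a toy fixed-window rate limiter under assumed limits and a documentation-range IP address; real CDN protections combine many more signals and techniques.

```python
import time
from collections import defaultdict

class EdgeRateLimiter:
    """Toy fixed-window rate limiter, illustrating how an edge server might drop
    excessive requests from a single client before they reach the origin."""

    def __init__(self, max_requests: int = 100, window_seconds: int = 60):
        self._max_requests = max_requests
        self._window = window_seconds
        self._counters = defaultdict(lambda: [0.0, 0])  # client_ip -> [window_start, count]

    def allow(self, client_ip: str) -> bool:
        now = time.time()
        window_start, count = self._counters[client_ip]
        if now - window_start >= self._window:
            # Start a new counting window for this client.
            self._counters[client_ip] = [now, 1]
            return True
        if count < self._max_requests:
            self._counters[client_ip][1] += 1
            return True
        return False  # over the limit: dropped at the edge, the origin never sees it

limiter = EdgeRateLimiter(max_requests=3, window_seconds=60)
print([limiter.allow("203.0.113.7") for _ in range(5)])  # [True, True, True, False, False]
```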
In summary, CDNs emerged as a solution to the performance, scalability, and reliability challenges posed by the growth of the internet and the increasing complexity of online content. They have evolved significantly since their inception and are now a fundamental part of internet infrastructure, delivering a vast portion of the content we consume online every day.