
Is the Cloud a CDN Killer?

Using CDNs and cloud together ensures a best-of-both-worlds combination for an optimal online user experience

In our Internet-driven world, both organizations and consumers have come to expect fast, always-on data access from any device. As a result, content providers are tasked with delivering massive files and streaming media to tablets and smartphones while simultaneously ensuring superior website performance. To meet the challenges of this digital data deluge, Content Delivery Networks (CDNs) are often used to efficiently distribute large amounts of content to online users.

The emergence of cloud computing has allowed companies to embrace new, cost-effective approaches to building out their IT infrastructure. The challenge of scaling is no longer prohibitively expensive, and the ability to do so in near-real time allows small and medium-sized businesses to more effectively compete with larger enterprises for market share.

While the cloud can deliver important agility benefits, it still requires careful, strategic planning to address potential challenges, including security and data bottlenecks. CDNs can help to address these issues, enabling cloud networks to meet today's content delivery demands while satisfying customers' expectations for an optimal online experience. When used together, the cloud and CDNs can actually offer the best of both worlds.

The Role of CDNs
A Content Delivery Network is a distributed network of servers and file storage devices deployed to place content and applications in geographic proximity to users, which reduces the load on origin site infrastructure and bandwidth. CDNs are highly flexible and address a wide range of needs - making it possible to simulate a broadcast video network over the Internet, cache large files for faster delivery or optimize entire websites. A CDN is a critical element of modern infrastructure deployment to create a satisfying online user experience.

One of the core functions of a CDN is to optimize media delivery, which involves the streaming of live events and prerecorded video and audio content. A CDN provides content creators and publishers with a robust infrastructure solution for online media distribution to geographically dispersed end users.

Another key benefit of a CDN is its ability to deliver large files and software without the capital expense of building a global network to achieve sufficient bandwidth. Static site caching (also known as reverse proxy caching) is also a prime feature that allows the CDN to point its distributed network at an origin server and cache the content of that origin in geographically diverse areas through a mechanism called GeoDNS. Though many factors, such as current load, contribute to this routing process, the main one is proximity: requests are answered from the location closest to the user. The result is much faster page loads and an improved end-user experience.
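
To make the GeoDNS mechanism concrete, here is a minimal sketch in Python. The POP coordinates and IP addresses are entirely hypothetical, and a real GeoDNS service also weighs load and network conditions as noted above; this shows only the proximity-based core of the idea.

```python
# Illustrative GeoDNS-style routing: answer each lookup with the IP of the
# POP geographically closest to the user. All POP data here is hypothetical.
from math import radians, sin, cos, asin, sqrt

# Hypothetical edge POPs: name -> (latitude, longitude, IP returned to clients)
POPS = {
    "us-east":  (40.7, -74.0, "203.0.113.10"),
    "eu-west":  (51.5,  -0.1, "203.0.113.20"),
    "ap-south": (1.35, 103.8, "203.0.113.30"),
}

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))

def resolve(user_lat, user_lon):
    """Return the name and IP of the nearest POP, as a GeoDNS service would."""
    name, (_, _, ip) = min(
        POPS.items(),
        key=lambda kv: haversine_km(user_lat, user_lon, kv[1][0], kv[1][1]),
    )
    return name, ip

print(resolve(48.9, 2.4))  # a user near Paris -> ('eu-west', '203.0.113.20')
```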

CDNs offer a multitude of ways to create a dependable, high-quality online user experience by addressing single-point-of-failure, global delivery and scalability concerns. This raises the question of whether the opportunities created by the cloud impact or even overlap with the capabilities of CDNs. If the cloud offers a more economically feasible way to bring content closer to end users, is the CDN still a useful delivery platform? In a word, yes.

Enter the Cloud
In the days before cloud, the main way to address issues regarding performance, availability and scale was by decreasing the physical distance between the origin servers and the end users. Existing infrastructure was optimized and then physically replicated in other geographic locations. Aside from the large capital investments required by this approach, there were other drawbacks and challenges - while companies controlled their infrastructure, they had no control over the network between their servers and the end user, and also had to determine how to replicate their data globally.

The cloud offers businesses a less costly way to expand infrastructure - the ability to scale virtually, on demand, without having to build or buy costly hardware. Businesses can now replicate their infrastructure in the desired geographic locations by purchasing a virtual machine with the required specifications - essentially, buying a "slice" of someone else's pre-built infrastructure. This is a more cost-effective way to scale and to reduce latency for users in geographically dispersed areas.

Both the cloud and CDNs have evolved into utility platforms, each designed for specialized purposes. The cloud is a utility computing platform consisting of large physical stacks of computational resources or multi-tenant slices of a pre-built mass computational array. This type of dynamic computing power is ideal for processing big data and business intelligence problems, and evolved from the concept of mainframes.

Conversely, a CDN is a utility delivery platform, specializing in one-to-many distribution as opposed to the two-way interactive exchange performed by utility compute platforms. In contrast to the cloud, CDNs are designed specifically to deliver content from servers to end users as part of a repeatable process. Dynamic content must be computed at the origin, but the large volume of static content a site serves does not: only the dynamic portion needs to come from the origin, while the rest can be cached and delivered at the edge.
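
A minimal sketch of that static/dynamic split, assuming a hypothetical edge cache keyed by URL path: static assets are served from the cache after the first origin fetch, while dynamic requests always go back to the origin.

```python
# Simplified sketch of the static/dynamic split described above. The paths,
# suffix list and cache are hypothetical; a real CDN edge also honors cache
# headers, TTLs and invalidation.
STATIC_SUFFIXES = (".css", ".js", ".png", ".jpg", ".mp4")
edge_cache = {}

def fetch_from_origin(path):
    """Stand-in for a request back to the origin server."""
    return f"origin response for {path}".encode()

def handle_request(path):
    if path.endswith(STATIC_SUFFIXES):
        # Static content: serve the cached copy; hit the origin only on a miss.
        if path not in edge_cache:
            edge_cache[path] = fetch_from_origin(path)
        return edge_cache[path]
    # Dynamic content: always computed at (and fetched from) the origin.
    return fetch_from_origin(path)

handle_request("/app.js")     # first request: origin fetch, then cached
handle_request("/app.js")     # served entirely from the edge cache
handle_request("/api/cart")   # dynamic: goes to the origin every time
```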

Using cloud and CDNs together creates a holistic system that meets the demands for content delivery as well as economical computing power.

CDN POPs and Cloud Availability Zones
Contrary to what the name implies, the cloud has a physical structure, and the proximity and placement of equipment have an impact on the results. Users at different latitudes and longitudes will have different online experiences depending on their physical distance from the point of origin. Most providers offer cloud availability by region rather than by state or metropolitan area. Architecturally, having multiple availability zones for cloud and CDNs is beneficial to localize transactions and reduce latency.

Using a CDN extends the reach of the origin server and places cached website content, multimedia or other large files in closer proximity to the end user. CDNs accomplish this by using origin and edge POPs (Points of Presence) that have storage, caching and transfer capabilities. Incoming requests for content are intercepted by the DNS service, which determines the user's location, and the content is then delivered from the closest POP. By distributing content via a one-to-many repeatable process, end users can consume content more efficiently without increasing the load on origin site infrastructure.

If a POP becomes overwhelmed, the request is routed to the next available POP, which then fulfills the request for content. In either scenario, the POP distributes the local copy via the most efficient route without placing any burden on the origin server. CDN POPs allow for scale during traffic spikes, whereas a server can become overwhelmed and vulnerable once a certain threshold of concurrent interactions is reached.
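
The failover behavior can be pictured with the simplified sketch below, which treats "overwhelmed" as a per-POP boolean; a real CDN would use live load metrics and health checks, and the POP names are hypothetical.

```python
# Sketch of POP selection with failover. POPs are listed in order of
# proximity to a given user (nearest first); an overwhelmed POP is skipped.
pops_by_proximity = [
    {"name": "pop-a", "overloaded": True},
    {"name": "pop-b", "overloaded": False},
    {"name": "pop-c", "overloaded": False},
]

def route_request(pops):
    """Return the nearest POP that still has capacity; the origin is never hit."""
    for pop in pops:
        if not pop["overloaded"]:
            return pop["name"]
    raise RuntimeError("no POP available")  # in practice, shed load or add capacity

print(route_request(pops_by_proximity))  # -> 'pop-b' (pop-a is overwhelmed)
```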

How CDNs Can Accelerate Cloud Deployments
Without CDNs, the cloud would not be able to meet the performance expectations of today's online users. In fact, CDNs can help alleviate many obstacles to cloud adoption by addressing several key concerns:

Security. A CDN can help ward off raw-volume DDoS attacks that can leave web servers inaccessible to users. CDNs essentially absorb the load and prevent the servers from becoming overwhelmed by abnormally high traffic volume. Without a CDN to act as a buffer against these attacks, cloud servers would be far more vulnerable. This is particularly important for eCommerce websites with servers that store personal data and account information.
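
One simplified way to picture how an edge layer absorbs a raw-volume flood is per-client rate limiting, sketched below with a token bucket; the rates are illustrative, and production CDNs use far more sophisticated mitigation than this.

```python
import time

class TokenBucket:
    """Per-client token bucket: requests beyond the sustained rate are dropped
    at the edge instead of ever reaching the origin. Rates are illustrative."""
    def __init__(self, rate_per_sec, burst):
        self.rate, self.capacity = rate_per_sec, burst
        self.tokens, self.last = burst, time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # request absorbed at the edge; the origin never sees it

bucket = TokenBucket(rate_per_sec=5, burst=10)
allowed = sum(bucket.allow() for _ in range(100))
print(f"{allowed} of 100 burst requests passed through")  # roughly the burst size
```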

Availability of service. If an eCommerce server goes down, the effect will not be immediately apparent if content is cached in CDN POPs. By setting Time to Live (TTL), content providers can control how long a piece of static content will remain cached. Determining TTL depends on the nature of the content and how often it needs to change. CDN edge POPs will continue to deliver the cached content for this duration and will check with the server after this time period expires to see if the content has changed.
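
A sketch of how TTLs might be assigned, using the standard Cache-Control response header; the content categories and durations are hypothetical, chosen to reflect how often each kind of content changes.

```python
# Illustrative TTL selection: the origin tags each response with a
# Cache-Control max-age, and edge POPs serve the cached copy until it
# expires, then revalidate with the origin.
TTL_BY_CONTENT = {
    "product-image": 86400,   # changes rarely: cache for a day
    "price-fragment": 300,    # changes often: cache for five minutes
    "live-inventory": 0,      # must always come from the origin
}

def cache_headers(content_kind):
    ttl = TTL_BY_CONTENT.get(content_kind, 60)  # default: one minute
    if ttl == 0:
        return {"Cache-Control": "no-cache"}
    return {"Cache-Control": f"max-age={ttl}"}

print(cache_headers("product-image"))  # {'Cache-Control': 'max-age=86400'}
```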

Data transfer bottlenecks. CDNs help prevent data transfer bottlenecks by efficiently delivering content through multiple egress points to distribute the load. By leveraging a CDN, businesses can scale the egress throughput, which allows the core infrastructure to use its bandwidth for the compute traffic.

Performance assurance. With the growing use of tablets, smartphones and other devices, content providers must be able to deliver streaming media and large amounts of data with minimal latency, or risk losing customers to the competition. Though smartphones and tablets depend heavily on the capabilities of the cloud, delivering a high-quality user experience would not be possible without a CDN. Once content is cached in a CDN POP, a repeatable one-to-many process delivers it, resulting in lower latency for end users and better server performance.

Scalable storage. CDN file storage offers flexible capacity that scales as needed. In contrast, cloud storage is typically provisioned in fixed amounts that can only be scaled up or down through the cloud storage provider. CDN storage can scale up based on the size of the content to be distributed, resulting in increased operational agility for the business.

Scaling. A CDN increases the capacity of infrastructure, which means servers won't get overwhelmed when video goes viral or when an eCommerce website experiences unexpected traffic spikes. The ability to offload rich media to the CDN allows the compute platform to run more efficiently, and by shouldering the load, the CDN reduces the risk of web servers becoming overwhelmed.

Cloud-CDN Use Cases
Organizations in most industries can benefit from using the cloud and CDNs together, particularly those with high-performance and low-latency requirements as well as a geographically dispersed user base. Below are several common use case examples:

Online Gaming. Low latency, high performance and scalability are mission-critical for multiplayer online games. If players experience downtime or delays, they will quickly abandon the game in search of a new one. A CDN can create a high-quality playing experience through more efficient content delivery and by helping to manage traffic spikes.

Media and entertainment/OTT content providers. More consumers are choosing to watch their favorite shows and movies via online channels instead of traditional distribution networks. As a result, the ability to efficiently and securely stream video to global locations is critical for the media and entertainment industry as well as OTT (over-the-top) video providers.

Online retail. Website performance, availability, security and scalability are critical factors for online retailers. If content is slow to load or unavailable, consumers will simply take their business elsewhere. For companies that generate most of their revenue online, even minimal downtime can have drastic effects on profits and consumer loyalty. CDNs improve website availability by allowing consumers to browse online catalogs with little server interaction. Offloading the rich media content onto the CDN allows the cloud servers to perform better, resulting in a more efficient purchasing process for consumers.

Cloud and CDN: Symbiotic Relationship
Even though the cloud revolutionized IT infrastructure from a cost perspective, cloud adoption has actually created an increased need for CDNs. The massive amount of computing power now available via the cloud requires efficient content distribution to meet user expectations. While the cloud allows companies to extend the reach of origin sites into new geographic areas, the result is greater demand for improved performance.

The line between the cloud and CDNs has indeed become blurry, and whether they will continue to exist as we know them today remains to be seen. If they eventually merge into a single platform for the deployment of global applications, the resulting combination of massive computing and delivery capabilities will fundamentally change the face of the Internet.

Regardless of the technology platform and changes that may occur, in today's global economy, high-performance content delivery is a must for any website or online application serving geographically dispersed end users. Using CDNs and cloud together - in whatever form this ultimately takes - ensures a best-of-both-worlds combination for an optimal online user experience.

More Stories By Pete Mastin

Pete Mastin is the vice president of Performance IP and Content Delivery Network services at Internap. He has more than 20 years of experience in software development, project management and operations, with a focus on digital content management and delivery since 1998. Prior to Internap, Pete served as CTO for MulticastMedia where he developed and deployed an Online Video Platform (Media Suite) and an award-winning transcoding platform. He has spoken at a number of industry events on the subject of digital content management and delivery, including Streaming Media, NAB, Digital Hollywood and TMC’s IT EXPO.

