Rethinking Enterprise Networks: Transformative Approach to Fuel the Cloud

Companies are transforming their Wide Area Networks to more quickly and efficiently fuel their move to the cloud

It's clear that cloud computing has transformed the enterprise IT landscape, from the computing infrastructure layer up through enterprise software, as companies move to leverage more efficient and cost-effective service-delivery models and bring new cloud-based products and services to the market. Perhaps less known is the innovation taking place at the network level and how leading companies are transforming their Wide Area Networks (WAN) to more quickly and efficiently fuel their move to the cloud.

Moving to the cloud requires network managers and IT shops to implement scalable solutions that ensure the reliability and performance of cloud-based applications across their extended enterprise. Cloud computing drives the need for more reliability across the WAN and ever-increasing amounts of highly available, secure and reliable bandwidth across all users, locations and geographies. However, many enterprises are constrained by their existing network infrastructure, both from a cost and performance perspective. They can't cost-effectively scale their networks; and latency, jitter and packet loss impact performance and reliability in the cloud.
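The impact of latency and loss on cloud application performance can be made concrete with the well-known Mathis approximation for steady-state TCP throughput, MSS / (RTT × √loss). The sketch below is illustrative only; the paths, RTTs and loss rates are assumed figures, not measurements from any network:

```python
import math

def tcp_throughput_mbps(mss_bytes, rtt_ms, loss_rate):
    """Approximate steady-state TCP throughput (Mathis model), in Mbps."""
    rtt_s = rtt_ms / 1000.0
    bps = (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))
    return bps / 1e6

# A clean 10 ms metro path vs. a longer, lossier 80 ms Internet path
print(round(tcp_throughput_mbps(1460, 10, 0.0001), 1))  # → 116.8
print(round(tcp_throughput_mbps(1460, 80, 0.001), 1))   # → 4.6
```

Even generous link capacity cannot help a single TCP flow once round-trip time and packet loss grow, which is why WAN design, not just raw bandwidth, governs cloud performance.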

Transforming to a next-generation WAN architecture plays a critical role in enabling enterprises to more easily and cost-effectively migrate to and better support public, private and hybrid cloud environments.

According to Forrester, "enterprise use of the cloud has arrived," with nearly half of all companies in North America and Europe setting aside budget for private cloud investments in 2013. Legitimate budgeting to integrate cloud services into existing platforms and deploy software apps to the cloud confirms that IT shops are "no longer denying it's happening in their company." Increasingly, enterprises are moving beyond their own data centers to leverage infrastructure and applications, choosing to host their own applications externally or leverage services from third-party providers.

Cloud infrastructure providers, such as Amazon and Rackspace, are well established in the enterprise IaaS market delivering compute, storage and hosting services to businesses of all sizes, from SMEs to large multi-national corporations. Since its launch in 2006, Amazon Web Services' (AWS) S3 offering has grown to encompass more than two trillion objects stored, and company revenues have grown past $2 billion.[1] AWS clearly dominates the cloud platform space, holding as much as 70 percent of the market, with its enterprise clients spending anywhere from $12,000 to $2.5 million per year on its infrastructure services. The push of traditional companies such as Microsoft, IBM and HP, as well as a host of other players, into this space further validates the arrival of cloud in the enterprise market.

On the software side, service providers like Salesforce.com have been offering cloud-based enterprise software for years, enabling companies to optimize their costs under a pay-per-use model while simplifying the delivery of reliable apps that scale more easily. According to the Aberdeen Group[2], SaaS is becoming an increasingly important deployment model for enterprise applications, with the highest adoption among CRM and ERP solutions. Nearly 80 percent of all companies currently use two or more SaaS applications, and many have reported decreased spending on application deployment resulting from SaaS usage.

Enterprises will continue to use a range of cloud solutions, developed internally and sourced from external providers, to more efficiently and effectively distribute mission-critical applications on a global scale. Most will need to move beyond their traditional legacy networks to ensure higher levels of performance, reliability and scalability of these applications across the WAN.

Traditional WAN Design and Optimization Approaches: Falling Short in the Cloud
Cloud is causing an explosion in enterprise bandwidth demand and making traditional WAN management obsolete. While the services themselves are the main attraction of the cloud, enterprises are finding that traditional Multiprotocol Label Switching (MPLS) networks and WAN acceleration technologies can't keep up with the WAN's ever-growing need for bandwidth.

Migrating to the cloud has put new pressures on WAN connectivity, from both a cost and performance perspective. Existing networks and optimization solutions cannot provide the capacity, reliability and scalability required across all users, locations and geographies. Work environments and application needs have changed, and will continue to change dramatically. In many cases, network design has become a limiting factor with reliance on traditional architectures that have not been optimized to support how applications are being hosted and accessed in public, private and hybrid cloud environments, or how and where people work.

Traditional WAN architecture is based on a hub-and-spoke model, with data distributed from headquarters to branch locations and across data centers (DC2DC) connected via public and private networks. At the branch or edge locations, sites are connected via low-bandwidth MPLS links, often over T1 to DS3 access links from the local telco. Connectivity across larger, more bandwidth-intensive sites, such as corporate HQs and data centers, uses expensive MPLS WAN links configured in a higher-bandwidth core, typically in the range of 100 Mbps.
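A rough back-of-the-envelope sketch shows the propagation cost this hub-and-spoke backhaul can impose compared with a direct path to a nearby aggregation point. The distances below are hypothetical, and light in fiber is assumed to travel at roughly 200,000 km/s:

```python
FIBER_KM_PER_MS = 200.0  # light in fiber: ~200,000 km/s, i.e. 200 km per ms

def one_way_ms(km):
    """One-way propagation delay over a fiber route of the given length."""
    return km / FIBER_KM_PER_MS

# Hypothetical distances: branch 300 km from HQ, HQ 1,200 km from the cloud
# region, but a carrier-neutral aggregation point only 400 km from the branch.
hub_and_spoke = one_way_ms(300) + one_way_ms(1200)  # backhauled via HQ
direct = one_way_ms(400)                            # straight to the node
print(hub_and_spoke, direct)  # → 7.5 2.0
```

Propagation delay is only one component of latency, but it is one that no appliance can optimize away; only topology changes can.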

The majority of enterprise WAN links are high cost, site-to-site private MPLS lines sourced from incumbent telcos like Verizon and AT&T. As enterprise bandwidth demands increase, the high cost of MPLS-based WAN connectivity and the complexity of underlying networks impact the enterprise's ability to cost-effectively scale their networks with the growth in demand.

Not only is the enterprise use of global MPLS for "backbone" traffic becoming less cost-competitive as scale increases, but it is increasingly challenging to control costs associated with real-time applications, distributed cloud services and rich media. Traditional network topologies can also limit an enterprise's ability to fully leverage infrastructure and server virtualization as a means to more effectively distribute enterprise applications across all locations and users, and application performance suffers over long-distance network paths. Furthermore, as enterprises seek to leverage solutions sourced from external providers, using MPLS as the connectivity method to SaaS and other public cloud locations is not agile enough and doesn't scale effectively given the high cost per bit.

MPLS is not the only factor driving the need for enterprises to rethink their networks. The public Internet is becoming an increasingly important distribution medium to reach customers and stakeholders, but managing performance is becoming critical.

While the Internet provides ease of access across a broad base of users, it often lacks the performance and reliability required to support mission-critical, cloud-based enterprise solutions. Packet loss and jitter are more common across the Internet than MPLS; and network congestion and latency vary across locations and geographies as no single provider can guarantee end-to-end performance. Nevertheless, accessing services via the Internet is a reality, and it is increasingly important for enterprises to architect network solutions that best optimize "public" access to cloud-based apps and services.

Another approach enterprises have used to optimize the performance of business-critical applications over the enterprise network is WAN optimization. Traditional WAN optimization techniques use appliances and hardware installed at corporate and remote locations to improve end-to-end application performance by increasing data-transfer efficiencies across wide area networks. These technologies are often application- or protocol-specific and seek to optimize how individual applications work over the WAN instead of making the WAN work better for all applications.
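As a simple illustration of one such data-reduction technique, the sketch below compresses a repetitive payload before it would cross the WAN. Python's zlib stands in for the proprietary algorithms real appliances use, and the endpoint name in the payload is invented:

```python
import zlib

# A repetitive application payload: 200 copies of the same request header.
# The hostname and path are hypothetical, for illustration only.
payload = b"GET /api/orders?status=open HTTP/1.1\r\nHost: erp.example.com\r\n" * 200
compressed = zlib.compress(payload, 6)

# Highly repetitive traffic shrinks to a small fraction of its original size.
print(len(payload), len(compressed))
```

Techniques like this reduce bytes on the wire for redundant traffic, but as the article notes, they ration bandwidth rather than add it.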

While these appliances have helped deliver better application performance, this approach tends to be more tactical in nature, rationing a limited supply of bandwidth instead of addressing the organization's more strategic need to add more bandwidth or capacity to support ever-increasing demands. As more applications and services are deployed to the cloud, and more bandwidth-intensive applications and real-time data are delivered across the extended enterprise, the enterprise's demand for bandwidth will continue to increase.

Furthermore, while traditional WAN optimization solutions are dual-sided, with one box at a data center and another at a branch office, optimizing cloud applications requires a single-sided solution, since an appliance cannot be placed in front of an application residing in the cloud. As such, traditional solutions can fall short in the cloud and are better suited to improving the performance of non-real-time applications, such as email, network backup and remote file access.

Rethinking Enterprise Networks: Next-Generation WAN Architecture
Enterprises that wish to leverage private, public or hybrid cloud solutions to distribute data and applications across a country or around the globe need to rethink their WAN architecture to achieve the required scale within existing budgets. Bandwidth economies of scale between highly connected network aggregation points offer exponential improvements in bandwidth availability at a fraction of the cost, but most enterprises are unaware of how to tap into these aggregation points or even that they exist.

The first step is connecting existing enterprise data centers and the WAN directly into carrier-neutral data centers that are "highly connected" and provide direct access to a wide array of high-capacity, high-bandwidth connectivity options, as well as a growing base of cloud infrastructure and application providers.

These carrier-neutral data centers, operated by providers such as Equinix and Telx, are well known for outsourced IT services, including data center colocation, managed hosting of external-facing websites and applications, proximity to public cloud services, and as secondary sites for disaster recovery and business continuity. However, many enterprises are less familiar with these facilities as a key enabler of a high performance, next-generation WAN architecture.

Integrating these facilities as "super nodes" in the WAN provides enterprises a long-term approach to increase control over performance, reliability and scalability for the cloud while providing a means to significantly drive down bandwidth costs.

Carrier-neutral facilities are centrally located and provide enterprises broad access to competitive carrier markets with a near-limitless supply of diverse, inexpensive bandwidth from Tier-1 and Tier-2 network carriers. By leveraging these facilities, enterprises are no longer constrained by the incumbent telcos and their legacy networks, and gain direct access to fiber and bandwidth from competitive providers at prices much lower than MPLS, along with a wider array of MPLS and similar services.

Re-architecting existing networks to a next-generation WAN provides a more cost-effective way to scale the WAN with the enterprise's demands than traditional MPLS-dense, hub-and-spoke networks. Additionally, bandwidth can easily be added at lower cost, secure hosting or rack space for new hardware or software can be deployed, and latency can be improved by connecting additional proximity locations.

Beyond the cost and scalability benefits of network transformation, by building out a higher performance core network integrating super nodes and direct fiber connectivity, enterprises can substantially improve performance and reliability of virtualized, networked and cloud-based solutions, both for intranet applications as well as SaaS and cloud-based services.

Carrier-neutral data centers often serve as network access points or public peering locations, close to the core of the Internet and public cloud services. Moving closer to the Internet core enables more reliable access to third-party SaaS, IaaS and other cloud-based services, even delivering close to "on-net" reliability for cloud services located at the same colocation facility. Furthermore, these facilities are often close, in terms of latency, to a large number of users and businesses connecting to the Internet, enabling more reliable access and service delivery to a broader base of users.
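Proximity, in terms of latency, follows directly from fiber distance. The sketch below uses the common rule of thumb that light in fiber covers roughly 200 km per millisecond; the route lengths are illustrative, not measured paths:

```python
def fiber_rtt_ms(route_km):
    """Approximate round-trip propagation delay over fiber, in milliseconds."""
    return 2 * route_km / 200.0  # ~200 km per millisecond, one way

# A cross-connect within a colocation facility vs. a long-haul route
print(fiber_rtt_ms(1))     # effectively "on-net" latency
print(fiber_rtt_ms(4000))  # → 40.0
```

Colocating next to a cloud provider collapses the propagation term to near zero, which is why same-facility access can approach "on-net" reliability.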

This architectural approach addresses several of the key WAN factors affecting application performance, delivering enhanced end-to-end speed and reliability. A next-generation WAN architecture sets the foundation for enterprises to better leverage the power of virtualization and gain the efficiencies of the cloud to more effectively distribute enterprise applications and services. A higher-performance core network connecting corporate data centers and third-party facilities with more robust WAN connectivity allows enterprises to take advantage of bandwidth cost and application performance benefits today, while providing the ability to cost-effectively scale to meet future demands.

This next-generation WAN architecture is the exact approach that today's leading companies are using to transform their global WAN architectures around highly connected aggregation points or "super nodes". Moving from legacy MPLS networks, these companies are building out their own high capacity, highly connected core backbones, and pushing MPLS to the edge. Once connected to the right network aggregation points, bandwidth costs begin to fall rapidly while bandwidth increases and access to cloud-based infrastructure and applications is streamlined and simplified.

CFN Services works with leading companies to map their legacy WAN to this new cloud world order. To learn more about CFN's network transformation solutions and how next-generation WAN architecture can improve business performance, please visit www.cfnservices.com.

Gain additional insights on how leading organizations are utilizing smarter networking strategies to improve network and application performance in the Aberdeen Group's "Building a Smarter Networking Strategy for the Modern Large Enterprise" white paper.

References:

  1. Cloudyn, AWS Client Research
  2. The Growing Importance of SaaS as an Application Deployment Model, Aberdeen Group, March 1, 2013

More Stories By Mark Casey

Mark Casey is President and CEO of CFN Services, a leading provider of high-performance network and application delivery solutions for real-time, mission-critical and distributed computing environments. Leveraging FiberSource®, CFN’s comprehensive network-optimization knowledge base and its proprietary Net Transform™ methodologies, the company analyzes, architects and implements network and cloud solutions.
