By Mark Casey
June 2, 2013 03:00 PM EDT
It's clear that cloud computing has transformed the enterprise IT landscape, from the computing infrastructure layer up through enterprise software, as companies move to leverage more efficient and cost-effective service-delivery models and bring new cloud-based products and services to market. Perhaps less well known is the innovation taking place at the network level, and how leading companies are transforming their wide area networks (WANs) to more quickly and efficiently fuel their move to the cloud.
Moving to the cloud requires network managers and IT shops to implement scalable solutions that ensure the reliability and performance of cloud-based applications across the extended enterprise. Cloud computing drives the need for greater reliability across the WAN and ever-increasing amounts of highly available, secure bandwidth across all users, locations and geographies. However, many enterprises are constrained by their existing network infrastructure from both a cost and a performance perspective: they can't cost-effectively scale their networks, and latency, jitter and packet loss degrade the performance and reliability of applications in the cloud.
Transforming to a next-generation WAN architecture plays a critical role in enabling enterprises to more easily and cost-effectively migrate to and better support public, private and hybrid cloud environments.
According to Forrester, "enterprise use of the cloud has arrived," with nearly half of all companies in North America and Europe setting aside budget for private cloud investments in 2013. Legitimate budgeting to integrate cloud services into existing platforms and to deploy software apps to the cloud confirms that IT shops are "no longer denying it's happening in their company." Increasingly, enterprises are moving beyond their own data centers to leverage infrastructure and applications, choosing to host their own applications externally or to consume services from third-party providers.
Cloud infrastructure providers, such as Amazon and Rackspace, are well established in the enterprise IaaS market, delivering compute, storage and hosting services to businesses of all sizes, from SMEs to large multinational corporations. Since its launch in 2006, Amazon Web Services' (AWS) S3 offering has grown to more than two trillion stored objects, and company revenues have grown past $2 billion. AWS clearly dominates the cloud platform space, holding as much as 70 percent of the market, with its enterprise clients spending anywhere from $12,000 to $2.5 million per year on its infrastructure services. The push of traditional companies such as Microsoft, IBM and HP, as well as a host of other players, into this space further validates the arrival of cloud in the enterprise market.
On the software side, service providers like Salesforce.com have been offering cloud-based enterprise software for years, enabling companies to optimize their costs under a pay-per-use model while simplifying the delivery of reliable apps that scale more easily. According to the Aberdeen Group, SaaS is becoming an increasingly important deployment model for enterprise applications, with the highest adoption among CRM and ERP solutions. Nearly 80 percent of all companies currently use two or more SaaS applications, and many report decreased spending on application deployment as a result.
Enterprises will continue to use a range of cloud solutions, developed internally and sourced from external providers, to more efficiently and effectively distribute mission-critical applications on a global scale. Most will need to move beyond their traditional legacy networks to ensure higher levels of performance, reliability and scalability of these applications across the WAN.
Traditional WAN Design and Optimization Approaches: Falling Short in the Cloud
Cloud is causing an explosion in enterprise bandwidth demand and making traditional WAN management obsolete, and those demands will only continue to grow. While the actual services delivered are the main attraction of the cloud, enterprises are finding that traditional Multiprotocol Label Switching (MPLS) networks and WAN acceleration technologies can't keep up.
Migrating to the cloud has put new pressures on WAN connectivity, from both a cost and performance perspective. Existing networks and optimization solutions cannot provide the capacity, reliability and scalability required across all users, locations and geographies. Work environments and application needs have changed, and will continue to change dramatically. In many cases, network design has become a limiting factor with reliance on traditional architectures that have not been optimized to support how applications are being hosted and accessed in public, private and hybrid cloud environments, or how and where people work.
Traditional WAN architecture is based on a hub-and-spoke model, with data distributed from headquarters to branch locations and across data centers (DC-to-DC) connected via public and private networks. At the branch or edge, sites are connected via low-bandwidth MPLS links, often over T1 to DS3 access circuits from the local telco. Larger, more bandwidth-intensive sites, such as corporate headquarters and data centers, are connected by expensive MPLS WAN links configured as a higher-bandwidth core, typically in the range of 100 Mbps.
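One consequence of the hub-and-spoke model is that branch-to-branch traffic must hairpin through the hub, paying the spoke latency twice. The sketch below illustrates the effect with hypothetical latency figures (the numbers are assumptions for illustration, not measurements from any real network):

```python
# Sketch: branch-to-branch round-trip time in a hub-and-spoke WAN versus a
# direct (meshed) path. All latency values are hypothetical.

SPOKE_RTT_MS = 30.0    # assumed branch <-> headquarters hub round trip
DIRECT_RTT_MS = 35.0   # assumed branch <-> branch round trip over a direct path

def hub_and_spoke_rtt(spoke_rtt_ms: float) -> float:
    """Branch A -> hub -> branch B: traffic traverses two spoke links,
    so the end-to-end round trip is roughly twice the spoke RTT."""
    return 2 * spoke_rtt_ms

print(hub_and_spoke_rtt(SPOKE_RTT_MS))  # 60.0 ms via the hub
print(DIRECT_RTT_MS)                    # 35.0 ms direct
```

For real-time traffic such as voice or virtual desktops, that doubling of path latency is often the difference between usable and unusable.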
The majority of enterprise WAN links are high cost, site-to-site private MPLS lines sourced from incumbent telcos like Verizon and AT&T. As enterprise bandwidth demands increase, the high cost of MPLS-based WAN connectivity and the complexity of underlying networks impact the enterprise's ability to cost-effectively scale their networks with the growth in demand.
Not only is the enterprise use of global MPLS for "backbone" traffic becoming less cost competitive as scale increases, but it is increasingly challenging to control costs associated with real-time applications, distributed cloud services and rich media. Traditional network topologies can also limit an enterprise's ability to fully leverage infrastructure and server virtualization as a means to more effectively distribute enterprise applications across all locations and users, and application performance suffers over long distance network paths. Furthermore, as enterprises seek to leverage solutions sourced from external providers, using MPLS as the connectivity method to SaaS and other public cloud locations is not agile enough and doesn't scale effectively given the high cost per bit.
MPLS is not the only factor driving the need for enterprises to rethink their networks. The public Internet is becoming an increasingly important distribution medium to reach customers and stakeholders, but managing performance is becoming critical.
While the Internet provides ease of access across a broad base of users, it often lacks the performance and reliability required to support mission-critical, cloud-based enterprise solutions. Packet loss and jitter are more common across the Internet than across MPLS, and network congestion and latency vary across locations and geographies, as no single provider can guarantee end-to-end performance. Nevertheless, accessing services via the Internet is a reality, and it is increasingly important for enterprises to architect network solutions that best optimize "public" access to cloud-based apps and services.
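The combined effect of latency and loss on a TCP flow can be sketched with the well-known Mathis approximation, throughput ≤ (MSS/RTT)·(1/√p). The path parameters below are hypothetical, chosen only to contrast a short, clean private link with a long, lossier Internet path:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Approximate steady-state TCP throughput (Mathis formula):
    throughput <= (MSS / RTT) * (1 / sqrt(p)), returned in Mbps."""
    rtt_s = rtt_ms / 1000.0
    return (mss_bytes * 8 / rtt_s) * (1 / math.sqrt(loss_rate)) / 1e6

# Hypothetical paths: a short, clean MPLS link vs. a long Internet path.
print(round(mathis_throughput_mbps(1460, 20, 0.0001), 1))   # 58.4 Mbps
print(round(mathis_throughput_mbps(1460, 120, 0.001), 1))   # 3.1 Mbps
```

Even modest increases in round-trip time and loss rate cut achievable per-flow throughput by an order of magnitude, which is why path quality, not just link capacity, governs cloud application performance.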
Another approach enterprises have used to optimize the performance of business-critical applications over the enterprise network has been through WAN optimization. Traditional WAN optimization techniques use appliances and hardware installed at corporate and remote locations to improve end-to-end application performance by increasing data-transfer efficiencies across wide-area-networks. These technologies are often application or protocol-specific and seek to optimize how individual applications work over the WAN instead of making the WAN work better for all applications.
While these appliances have helped deliver better application performance, this approach tends to be more tactical in nature, rationing a limited supply of bandwidth instead of addressing the organization's more strategic need to add more bandwidth or capacity to support ever-increasing demands. As more applications and services are deployed to the cloud, and more bandwidth-intensive applications and real-time data are delivered across the extended enterprise, the enterprise's demand for bandwidth will continue to increase.
Furthermore, while traditional WAN optimization solutions are dual-sided with one box at a data center and another at a branch office, this approach to optimize cloud applications can only be implemented with a single-sided solution since an appliance cannot be placed in front of an application residing in the cloud. As such, traditional solutions can fall short in the cloud, and are better suited for improving the performance of non-real-time applications, such as email, network backup and remote file access.
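The dual-sided constraint can be illustrated with shared-dictionary compression, a rough stand-in for the synchronized data stores that paired WAN optimization appliances maintain (the analogy and the payload are illustrative assumptions, not how any particular vendor's appliance works):

```python
import zlib

# Sketch: the sender compresses against a dictionary both ends already hold,
# so mostly-redundant traffic shrinks to short references on the wire. A
# receiver without the matching dictionary cannot reconstruct the data --
# analogous to a cloud-hosted app with no appliance in front of it.

shared_dict = b"GET /api/v1/report HTTP/1.1\r\nHost: example.internal\r\n" * 4
payload = b"GET /api/v1/report HTTP/1.1\r\nHost: example.internal\r\n"

comp = zlib.compressobj(zdict=shared_dict)
wire = comp.compress(payload) + comp.flush()

decomp = zlib.decompressobj(zdict=shared_dict)  # receiver holds the same dict
assert decomp.decompress(wire) == payload

try:
    zlib.decompressobj().decompress(wire)       # no dictionary on this side
except zlib.error:
    print("receiver without the shared dictionary cannot decode")
```

Both endpoints must hold the same state for the scheme to work at all, which is exactly the requirement a single-sided cloud deployment cannot satisfy.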
Rethinking Enterprise Networks: Next-Generation WAN Architecture
Enterprises that wish to leverage private, public or hybrid cloud solutions to distribute data and applications across a country or around the globe need to rethink their WAN architecture to achieve the required scale within existing budgets. Bandwidth economies of scale between highly connected network aggregation points offer exponential improvements in bandwidth availability at a fraction of the cost, but most enterprises are unaware of how to tap into these aggregation points or even that they exist.
The first step is connecting existing enterprise data centers and the WAN directly into carrier-neutral data centers that are "highly connected" and provide direct access to a wide array of high-capacity, high-bandwidth connectivity options, as well as a growing base of cloud infrastructure and application providers.
These carrier-neutral data centers, operated by providers such as Equinix and Telx, are well known for outsourced IT services, including data center colocation, managed hosting of external-facing websites and applications, proximity to public cloud services, and as secondary sites for disaster recovery and business continuity. However, many enterprises are less familiar with these facilities as a key enabler of a high performance, next-generation WAN architecture.
Integrating these facilities as "super nodes" in the WAN provides enterprises a long-term approach to increase control over performance, reliability and scalability for the cloud while providing a means to significantly drive down bandwidth costs.
Carrier-neutral facilities are centrally located and provide enterprises broad access to competitive carrier markets with a near limitless supply of diverse, inexpensive bandwidth from Tier-1 and Tier-2 network carriers. By leveraging these facilities, enterprises are no longer constrained by the incumbent telcos and their legacy networks and have direct access to fiber and bandwidth from competitive providers at prices much lower than MPLS along with a wider array of MPLS and similar services.
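The economics come down to cost per megabit. The figures below are hypothetical list prices, chosen only to illustrate the shape of the comparison (actual carrier pricing varies widely by market, volume and contract term):

```python
# Sketch of the cost-per-megabit argument, using hypothetical prices.

mpls_price_per_month = 4000.0     # assumed: 10 Mbps managed MPLS port at a branch
mpls_mbps = 10.0

transit_price_per_month = 2000.0  # assumed: 1 Gbps IP transit at a carrier-neutral hub
transit_mbps = 1000.0

mpls_cost_per_mbps = mpls_price_per_month / mpls_mbps           # 400.0 $/Mbps
transit_cost_per_mbps = transit_price_per_month / transit_mbps  # 2.0 $/Mbps

print(f"MPLS: ${mpls_cost_per_mbps:.2f}/Mbps, transit: ${transit_cost_per_mbps:.2f}/Mbps")
```

Under these illustrative numbers the per-megabit gap is two orders of magnitude, which is why aggregating bandwidth at highly connected hubs rather than buying it circuit-by-circuit changes the scaling economics of the WAN.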
Re-architecting existing networks to a next-generation WAN architecture provides a means to more cost-effectively scale the WAN to grow with the enterprise's demands than traditional MPLS-dense, hub-and-spoke networks. Additionally, bandwidth can easily be added at lower cost, secure hosting or rack space for new hardware or software can be deployed, and latency performance can be improved by connecting additional proximity locations.
Beyond the cost and scalability benefits of network transformation, by building out a higher performance core network integrating super nodes and direct fiber connectivity, enterprises can substantially improve performance and reliability of virtualized, networked and cloud-based solutions, both for intranet applications as well as SaaS and cloud-based services.
Carrier-neutral data centers often serve as network access points or public peering locations, close to the core of the Internet and to public cloud services. Moving closer to the Internet core enables more reliable access to third-party SaaS, IaaS and other cloud-based services, even delivering close to "on-net" reliability for cloud services located in the same colocation facility. Furthermore, these facilities are often close, in terms of latency, to a large number of users and businesses connecting to the Internet, enabling more reliable access and service delivery to a broader base of users.
This architectural approach can provide better performance and help to address several of the key WAN factors affecting application performance while delivering enhanced end-to-end network performance, speed and reliability. A next-generation WAN architecture sets the foundation to enable enterprises to better leverage the power of virtualization and gain the efficiencies of the cloud to more effectively distribute enterprise applications and services. A higher performance core network connecting corporate data centers and third-party facilities with more robust WAN connectivity allows enterprises to take advantage of bandwidth costs and application performance benefits today, while providing the ability to cost-effectively scale to meet future demands.
This next-generation WAN architecture is the exact approach that today's leading companies are using to transform their global WAN architectures around highly connected aggregation points or "super nodes". Moving from legacy MPLS networks, these companies are building out their own high capacity, highly connected core backbones, and pushing MPLS to the edge. Once connected to the right network aggregation points, bandwidth costs begin to fall rapidly while bandwidth increases and access to cloud-based infrastructure and applications is streamlined and simplified.
CFN Services works with leading companies to map their legacy WAN to this new cloud world order. To learn more about CFN's network transformation solutions and how next-generation WAN architecture can improve business performance, please visit www.cfnservices.com.
Gain additional insights on how leading organizations are utilizing smarter networking strategies to improve network and application performance in the Aberdeen Group's "Building a Smarter Networking Strategy for the Modern Large Enterprise" white paper.
- Cloudyn, "AWS Client Research"
- "The Growing Importance of SaaS as an Application Deployment Model," Aberdeen Group, March 1, 2013