Software Defined Networking – A Paradigm Shift

Now it's all about orchestrated service delivery

The networking industry has gone through different waves over the last 30+ years. In the '80s, the first wave was all about connecting and sharing: how to connect a computer to peripheral devices and to other computers. Many players developed technology and services to address that need, e.g., Novell, 3Com, Sun, IBM, DEC, Nortel. Across the industry, small islands of various protocols were created, with multiple gateways to bridge them.

In the '90s and '00s, Cisco dominated the industry and did a brilliant job of pushing it toward a common approach built on Ethernet. Cisco built a hugely successful business and ecosystem, and even created new markets like VoIP, on the proposition that networking should run on a common highway. We also saw the isolation of networks from the rest of the IT infrastructure, in the sense that software innovation continued in the server and storage environments independent of the network. The focus also remained on the individual components of the infrastructure rather than on the 'service' delivered by the combination of those components, i.e., server, storage and network.

Now it is all about orchestrated service delivery, which requires a standards-based, open approach. According to Gartner reports on Emerging Technology Analysis and Key Issues for Communications Strategies, a) over 50% of workloads will be virtualized by the end of 2012 thanks to cloud computing, and b) more than 80% of traffic will be server-to-server by 2014 due to federated applications and virtualization.

In this article, I attempt to highlight why we have reached the limits of current network technology, how Software Defined Networking will lead the next wave of innovation, and what its benefits are for the IT industry. Today, network elements like switches and routers have resident software in each box. The software in the box provides intelligence, using distributed algorithms to decide how each packet should be handled. For the entire network to function properly, the software in each box must work in coordination with the other boxes. This approach has served us well so far.

The coordinated distributed algorithms, however, make it difficult to introduce a change on the fly. We have to reconfigure the embedded software on all network components (often called boxes) to implement any change. On the other hand, the wave of virtualization demands flexible, adaptive and nimble networks. This wave exposes the limitations of the current networking approach, which is inflexible and protocol-heavy. Because distributed algorithms are used, no one box has a global view of the network. This results in over-provisioning at design time and guesswork during troubleshooting. For large cloud deployments, compute and storage environments can be virtualized and consumed easily, but because of the limitations of networks, their full potential is not realized.

Typically, a network administrator spends a lot of time planning and then configuring network components to keep up with changing business requirements and varying network traffic. Network administrators learn a lot by trial and error, and the resulting experience-based expertise remains limited to the experienced few.

OpenFlow History
Research students at Stanford, Berkeley and other universities found it hard to experiment with their networks, because the software is embedded in each switch or router and any change has to be coordinated among vendors to keep the distributed algorithms interoperable before delivering the functionality needed for research and experimentation. It was with this simple objective that the idea of OpenFlow was born. The first step these researchers took was to develop the ability to program switches from a remote controller. The OpenFlow protocol was developed to support communication between a switch and a controller. It allows external control software to control the data path of a switch, bypassing traditional L2 and L3 protocols and their associated configurations. The OpenFlow protocol defines messages such as packet-received, send-packet-out, modify-forwarding-table, and get-stats. The researchers added OpenFlow support to existing boxes and allowed an OpenFlow controller to program part of the flow-table entries for research and experimentation, while the rest of the box worked as before. This gave them control over switches from a controller running on a remote, industry-standard server. This was the start of OpenFlow, which essentially separated the data plane from the control plane.
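
To make the switch-controller interaction concrete, here is a minimal sketch of a controller application written with Ryu, an open-source Python OpenFlow framework (my choice for illustration; the article does not prescribe one). It exercises three of the message types named above: a packet-in (packet-received) handler that responds with a flow-mod (modify-forwarding-table) and a packet-out (send-packet-out). The flood-everything logic is a deliberate placeholder, not a recommendation.

    # Minimal Ryu controller app; run with: ryu-manager this_file.py
    from ryu.base import app_manager
    from ryu.controller import ofp_event
    from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
    from ryu.ofproto import ofproto_v1_3

    class PacketHandlerSketch(app_manager.RyuApp):
        OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

        @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
        def packet_in_handler(self, ev):
            msg = ev.msg                  # packet-received from the switch
            dp = msg.datapath             # the switch that sent it
            ofp, parser = dp.ofproto, dp.ofproto_parser
            in_port = msg.match['in_port']

            # modify-forwarding-table: install a rule so the switch itself
            # handles future packets from this port (placeholder: flood).
            actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(
                datapath=dp, priority=1,
                match=parser.OFPMatch(in_port=in_port),
                instructions=inst))

            # send-packet-out: tell the switch what to do with this packet.
            data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
            dp.send_msg(parser.OFPPacketOut(
                datapath=dp, buffer_id=msg.buffer_id,
                in_port=in_port, actions=actions, data=data))

A real controller would replace the flood placeholder with learned forwarding state; the point here is only the message flow: packet-in arrives at the controller, a flow-mod programs the switch's table, and a packet-out releases the waiting packet.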

ONF Background
OpenFlow and SDN became quite popular in the research community, and several service providers and some vendors started to see the value of this approach. Researchers from Stanford and Berkeley took the lead, but the Open Networking Foundation (ONF) was founded by leading providers (Google, Yahoo!, Microsoft, Facebook, Deutsche Telekom and Verizon). Some vendors, like HP, expressed their support from the beginning. ONF is the body that defines, standardizes and enhances the OpenFlow protocol, and it has a bigger charter with SDN that goes beyond OpenFlow: it promotes SDN and may standardize other parts of it. As a policy, vendors cannot join its board, but they can become members of ONF and lead some working groups. Vendors thus have influence over the emerging standard, though they don't set the overall agenda and they don't make the final decisions on what is standardized and what is not.

Another interesting point is that ONF wants to do as little standardization as possible, to encourage creativity. At first this sounded a bit contradictory, but ONF looks at the software industry and tries to adopt its best practices. The software industry has fewer standards than the network industry, yet it has created more innovation and more jobs. The network industry has too many protocols defined and standardized, resulting in more complexity and less innovation. Academics are influencing ONF and ensuring that we don't end up with another rigid, inflexible and protocol-heavy networking world. ONF has 66 members today, and membership costs $30k/year. This is relatively high compared to similar bodies; the reason may be to ensure that only genuinely interested parties become members. Breakthrough innovations often come from small start-ups, some of whom would find it difficult to spend that much on an annual membership. On the other hand, ONF ensures that everything developed within the body is made available to all members free of charges or royalties; one could easily spend more than $30k in lawyers' fees just sorting out royalty arrangements.

Early Adopters
Google, Amazon, Rackspace and others have already implemented OpenFlow-based networks, using proprietary hardware and in-house developed software. We also see many new start-ups focused on developing applications that leverage a virtualized network. Most cloud providers manage huge data centers. "Every day Amazon Web Services (AWS) adds enough new capacity to support all of Amazon.com's global infrastructure through the company's first 5 years, when it was a $2.76 billion annual revenue enterprise," according to Jim Hamilton, VP and Distinguished Engineer at AWS.

Google embraced OpenFlow very early on. Google's inter-datacenter production network, the largest in the world by traffic volume, runs on OpenFlow and SDN. Google proved that OpenFlow-based networks can scale and deliver on the promise. The biggest use case for central controllers, according to Google, is re-routing in anticipation of an event: if we know we are introducing a new service that will increase traffic load, we can pre-provision the network to make the best use of infrastructure resources. If a small business, say a flower shop, expects more traffic and needs more compute power on Valentine's Day, it is easy to make compute and storage capacity available with the standard virtualization technology we have today; making network resources available on demand is the hard part. This is where an OpenFlow controller, controlling the switches, can easily provide the necessary bandwidth and then tear it down or redirect the network resources to other requests. The Google example is impressive, but one could ask how many enterprise customers could afford, or would dare, to do what Google did. Moreover, just because there was a business case for Google does not mean there is one for everyone. Each customer will have to evaluate its network, future growth requirements, etc., and see whether there is a positive business case.
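
As a rough illustration of what "pre-provisioning the network" can mean at the controller, the sketch below (again using the Ryu Python framework purely for illustration) installs an OpenFlow 1.3 meter that caps best-effort web traffic ahead of an anticipated high-priority load. The rates, table numbers and the match on TCP port 80 are assumptions of mine, not details of Google's design.

    def preprovision_for_event(dp, cap_kbps=50000):
        """Cap best-effort web traffic, reserving headroom for the event."""
        ofp, parser = dp.ofproto, dp.ofproto_parser

        # Create a meter that drops best-effort traffic beyond cap_kbps.
        dp.send_msg(parser.OFPMeterMod(
            datapath=dp, command=ofp.OFPMC_ADD, flags=ofp.OFPMF_KBPS,
            meter_id=1,
            bands=[parser.OFPMeterBandDrop(rate=cap_kbps, burst_size=0)]))

        # Send HTTP (TCP port 80) flows through the meter, then on to the
        # normal forwarding table (table 1 here is an assumption).
        match = parser.OFPMatch(eth_type=0x0800, ip_proto=6, tcp_dst=80)
        inst = [parser.OFPInstructionMeter(1),
                parser.OFPInstructionGotoTable(1)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, table_id=0, priority=10,
                                      match=match, instructions=inst))

Tearing the reservation down afterwards is just the corresponding meter delete (OFPMC_DELETE) and flow removal, which is what makes this kind of temporary provisioning cheap compared with reconfiguring boxes one by one.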

Flexibility Galore
Software Defined Networking (SDN) can help you make the network ready for cloud-bursting as and when required. SDN opens up many possibilities. For example:

  1. Packet Flow Redirection: A lot of video traffic comes from sources we trust, and for some applications security services on that traffic are not required. Since security services are extremely infrastructure-hungry and CPU-intensive, passing all traffic through them leads to a sprawl of security devices (many IDS/IPS and DPI appliances) monitoring traffic. With OpenFlow we can easily steer trusted traffic away from those costly resources (see the sketch after this list).
  2. Policy Management: Because you now have a global view of the network and can control it with software running on the OpenFlow controller, defining and implementing business policies becomes easier. Better bandwidth management is one example: when unanticipated excess traffic appears, the controller can program the network so that higher-priority business traffic gets more resources than low-priority traffic.
  3. Virtual Application Networks: The OpenFlow controller lets us create virtual networks for different applications on one physical network, so that different applications can have different bandwidth and QoS based on their requirements, with auditable network isolation between applications and simpler compliance (a requirement in the financial industry). One can also give each customer a separate virtual domain to manage.
  4. Network Security: OpenFlow can be used to make networks more secure and agile. The OpenFlow controller allows us to monitor and manage network security and to:
    - Dynamically insert security services at any point in the network (an on-demand firewall or IDS/IPS, for example)
    - Monitor traffic and redirect suspect flows for full inspection
    - Combine per-flow QoS control with network management systems to leverage traffic and end-user identity information
    - Dynamically detect and mitigate attacks from infected PCs, using a signature/reputation database to create rules that address specific attacks
  5. Replacing Proprietary Appliances: It is very common today to deploy appliances in the network to deliver specific functions. These proprietary appliances can be replaced with an OpenFlow controller plus a software application delivering the specific functionality. Communications service providers have a significant number of network services that can take advantage of virtualization and industry-standard servers. Many application-specific appliances running on custom ASICs (WAN optimization, firewalls, DPI, spam/mail appliances, IDS, etc.) are good candidates for the SDN approach.
  6. Traffic Intelligence: As SDN matures, a couple of years down the road, a more futuristic use case is to monitor traffic patterns, generate intelligence, and then use that intelligence to anticipate traffic patterns and optimize available resources. With this kind of intelligence we can actually reduce power consumption, too. For example, if we know network usage is low during nights and early mornings, we can shut off parts of the network in a way that preserves full connectivity without keeping the complete network up.
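
The first use case (and the same mechanism underlying the policy and security items above) can be made concrete with a short sketch. As before, this assumes the open-source Ryu Python framework and an OpenFlow 1.3 switch; the port numbers and the trusted address range are hypothetical placeholders, not values from the article.

    TRUSTED_VIDEO_NET = ('203.0.113.0', '255.255.255.0')  # assumed source
    IDS_PORT = 3    # switch port facing the IDS/DPI appliance (assumed)
    FAST_PORT = 4   # direct path that bypasses inspection (assumed)

    def install_redirect_rules(dp):
        ofp, parser = dp.ofproto, dp.ofproto_parser

        def flow(priority, match, out_port):
            # Helper: install one forwarding rule on the switch.
            actions = [parser.OFPActionOutput(out_port)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                          match=match, instructions=inst))

        # High priority: trusted video traffic skips the costly appliance.
        flow(100, parser.OFPMatch(eth_type=0x0800,
                                  ipv4_src=TRUSTED_VIDEO_NET), FAST_PORT)

        # Default: all other IP traffic is steered through the IDS first.
        flow(1, parser.OFPMatch(eth_type=0x0800), IDS_PORT)

Because both rules live in the controller's software, redirecting a newly suspect flow into the IDS, or giving a business-critical flow a faster path, is a one-line policy change rather than a box-by-box reconfiguration.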

My Take
The list of use cases grows daily and will continue to grow even faster as the pace of innovation increases. The number of new start-ups in this area is rising rapidly. Finally, the networking field, which has been quite dull from the perspective of new innovation, is going to become more vibrant and exciting, with new possibilities. Moreover, if ONF succeeds in maintaining 'open standards', SDN will allow plug-and-play with multivendor products, empowering IT and network operators to be more cost-effective and adaptive to the agility requirements of the business. With SDN, the network industry will mirror the innovations and developments we have seen in the server and storage fields.

Some vendors want well-defined APIs for applications to leverage OpenFlow controllers, or want more protocols supported. It is prudent of ONF not to define and standardize too much, and to let the market decide what an acceptable standard is. It is important to keep the OpenFlow protocol unrestricted by standardizing no more than what is absolutely required. This will fuel innovation.

The OpenFlow protocol is in its infancy, but it has generated tremendous interest from customers, researchers and vendors alike. One can argue that it is not fully mature or ready for prime time, but most agree that it will change the network industry fundamentally, making it more flexible and nimble and driving more innovation. This train has left the station, even if some debate that its destination is not well defined or its ETA unknown. Hardware vendors will have to accept that networking hardware will be commoditized, just as servers and storage were. OpenFlow/SDN certainly opens up opportunities for network-based applications, and this is where current vendors will have to focus to continue playing a major role. Network administrators will no longer spend hours reconfiguring switches and routers; instead, they will need the skills to control, manage, test and implement changes from a central controller.

Although the OpenFlow protocol is defined, not many vendors in the market support its latest version, 1.3. Moreover, there is a lack of tools to test, monitor and manage this new environment. HP and other major vendors have openly embraced OpenFlow and are investing in it. HP was one of the first major network vendors to invest in this area, with 60+ deployments and 16 different switch models supporting OpenFlow. HP also leads one of the ONF task forces evolving the OpenFlow protocol. With its traditional strength in IT performance and operations management (test, monitor and manage) and in telecom OSS, HP is well positioned to deliver a complete, future-proof infrastructure solution (consisting of server, storage, networking, software, security and analytics) for enterprise IT as well as telecom service providers.

More Stories By Kapil Raval

Kapil Raval is an experienced technology solutions consultant with nearly 20 years in the telecom industry. He thinks in terms of 'the business' and focuses on linking business challenges to technology solutions. He currently works for HP and drives strategic solutions in the telecom vertical.
