
Software Defined Networking – A Paradigm Shift

Now it's all about orchestrated service delivery

The networking industry has gone through several waves over the last 30+ years. In the '80s, the first wave was all about connecting and sharing: how to connect a computer to peripheral devices and to other computers. Many players developed technology and services to address that need, e.g., Novell, 3Com, Sun, IBM, DEC and Nortel. Across the industry, small islands of disparate protocols emerged, bridged by a multitude of gateways.

In the '90s and '00s, Cisco dominated the industry and did a brilliant job of pushing it toward a common approach built on Ethernet. The company built a hugely successful business and ecosystem, and even created new markets such as VoIP, on the proposition that networking should run on a common highway. We also saw the isolation of networking from the rest of the IT infrastructure, in the sense that software innovation continued in the server and storage environments independently of the network. The focus also remained on the individual components of the infrastructure rather than on the 'service' delivered by the combination of those components, i.e., server, storage and network.

Now it is all about orchestrated service delivery, which requires a standards-based, open approach. According to Gartner's reports on Emerging Technology Analysis and Key Issues for Communications Strategies, (a) over 50% of workloads will be virtualized by the end of 2012 thanks to cloud computing, and (b) more than 80% of traffic will be server-to-server by 2014 due to federated applications and virtualization.

In this article, I attempt to highlight why we have reached the limits of current network technology, how Software Defined Networking (SDN) will lead the next wave of innovation, and what its benefits are for the IT industry. Today, network elements such as switches and routers run resident software in each box. That software provides the intelligence, using distributed algorithms, to decide how each packet should be handled. For the entire network to function properly, the software in each box must work in coordination with all the other boxes. This approach has served us well so far.

The coordinated distributed algorithms, however, make it difficult to introduce a change on the fly: to implement any change, we have to reconfigure the embedded software on every network component (often called a box). The wave of virtualization, on the other hand, demands flexible, adaptive and nimble networks, and it exposes the limitations of the current networking approach, which is inflexible and protocol-heavy. Because distributed algorithms are used, no single box has a global view of the network. The result is over-provisioning at design time and guesswork while troubleshooting. In large cloud deployments, compute and storage environments can be virtualized and consumed easily, but because of these network limitations, the deployment's full potential is not realized.

Typically, a network administrator spends a lot of time planning and then reconfiguring network components as business requirements change and network traffic varies. Network administrators learn largely by trial and error, and the resulting experience-based expertise remains limited to the experienced few.

OpenFlow History
Research students at Stanford, Berkeley and other universities found it hard to experiment with their networks because the software is embedded in each switch or router, and any change has to be coordinated among vendors to keep the distributed algorithms interoperable while providing the functionality needed for research and experimentation. It is with this simple objective that the idea of OpenFlow was born. The first step these researchers took was to develop the ability to program switches from a remote controller. The OpenFlow protocol was developed to support communication between a switch and a controller. It allows external control software to control the data path of a switch, bypassing the traditional L2 and L3 protocols and their associated configurations. The OpenFlow protocol defines messages such as packet-received, send-packet-out, modify-forwarding-table and get-stats. The researchers added OpenFlow support to existing boxes, allowing an OpenFlow controller to program a subset of the flow-table entries for research and experimentation while the rest of the box worked as before. This gave them control over switches from a controller running on a remote, industry-standard server. This was the start of OpenFlow, which in essence separated the data plane from the control plane.
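To make the controller/switch split concrete, here is a minimal sketch of a controller application written against the open-source Ryu framework, one concrete OpenFlow 1.3 controller chosen purely for illustration (the article does not prescribe one). It exercises the message types named above: a flow-mod (modify-forwarding-table) installs a table-miss entry, unmatched packets arrive as packet-in events (packet-received), and the app floods them back out with a packet-out (send-packet-out).

```python
# A minimal sketch, not the article's code: an OpenFlow 1.3 controller app
# using the open-source Ryu framework, with deliberately trivial behavior.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class HelloOpenFlow(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # "modify-forwarding-table": install a table-miss entry that sends
        # any unmatched packet to the controller.
        match = parser.OFPMatch()
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def on_packet_in(self, ev):
        # "packet-received": the switch handed us a packet it could not match.
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        # "send-packet-out": flood it, the simplest possible forwarding decision.
        dp.send_msg(parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id,
            in_port=msg.match['in_port'],
            actions=[parser.OFPActionOutput(ofp.OFPP_FLOOD)], data=data))
```

Running such an app amounts to launching it with ryu-manager and pointing the switches' OpenFlow channel at the controller's address; all forwarding logic then lives in software on a standard server, exactly the separation the researchers were after.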

ONF Background
OpenFlow and SDN became quite popular in the research community, and several service providers and some vendors started to see the value of this approach. Researchers from Stanford and Berkeley took the lead, but the Open Networking Foundation (ONF) was founded by leading providers (Google, Yahoo!, Microsoft, Facebook, Deutsche Telekom and Verizon). Some vendors, like HP, expressed their support from the beginning. ONF is the body that defines, standardizes and enhances the OpenFlow protocol, but it has a bigger charter with SDN that goes beyond OpenFlow: it promotes SDN and may standardize other parts of it. As a matter of policy, vendors cannot join its board, but they can become ONF members and lead some working groups. Vendors thus have influence over the emerging standard, though they do not set the overall agenda or make the final decisions on what is standardized and what is not.

Another interesting point is that ONF wants to do as little standardization as possible, to encourage creativity. At first this sounds contradictory, but ONF looks to the software industry and tries to adopt its best practices. The software industry has fewer standards than the network industry, yet it has created more innovation and more jobs. The network industry has too many protocols defined and standardized, resulting in more complexity and less innovation. Academics are influencing ONF to ensure we don't end up with another rigid, inflexible and protocol-heavy networking world. ONF has 66 members today, and membership costs $30k/year. That is relatively high compared to similar bodies; the reason may be to ensure that only genuinely interested parties become members. We know that breakthrough innovations often come from small start-ups, some of whom will find it difficult to spend that much on an annual membership. On the other hand, ONF ensures that work developed within the body is made available to all members free of charge and royalties; one could easily spend more than $30k in lawyers' fees just sorting out royalty arrangements.

Early Adopters
Google, Amazon, Rackspace and others have already implemented OpenFlow-based networks using proprietary hardware and in-house developed software. We see many new start-ups focused on this area, developing applications that leverage the virtualized network. Most cloud providers manage huge data centers. "Every day Amazon Web Services (AWS) adds enough new capacity to support all of Amazon.com's global infrastructure through the company's first 5 years, when it was a $2.76 billion annual revenue enterprise," according to James Hamilton, Vice President and Distinguished Engineer at Amazon.

Google embraced OpenFlow very early on. Google's inter-datacenter production network, the largest in the world by traffic volume, runs on OpenFlow and SDN, proving that OpenFlow-based networks can scale and deliver on their promise. According to Google, the biggest use case for central controllers is re-routing in anticipation of an event: if we know we are introducing a new service that will increase traffic load, we can pre-provision the network to make the best use of infrastructure resources. If a small business, say a flower shop, expects more traffic and needs more compute power on Valentine's Day, it is easy to make compute and storage capacity available with the standard virtualization technology of today; making network resources available on demand, however, is challenging. This is where an OpenFlow controller can direct switches to provide the necessary bandwidth, and then tear it down or redirect the network resources to other requests. The Google example is impressive, but one could ask how many enterprise customers could afford, or dare, to do what Google does. Moreover, just because the business case worked for Google does not mean it will work for everyone. Each customer will have to evaluate its network, future growth requirements and so on, and see if there is a positive business case.
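To ground the "provide bandwidth and then tear it down" idea, here is a hedged sketch, again assuming OpenFlow 1.3 and the Ryu framework (an illustrative choice, not Google's implementation, which is proprietary). It pre-provisions a rate cap on best-effort traffic with a meter, reserving headroom for the anticipated peak, and removes the cap afterwards. The meter ID, rate and output port are assumptions for illustration, and `dp` is a connected Ryu datapath handle as obtained in the earlier sketch.

```python
# A hedged sketch (OpenFlow 1.3 + Ryu): pre-provision bandwidth for an
# anticipated event by capping best-effort traffic with a meter, then
# tear the cap down. All IDs, rates and ports are illustrative.

def provision_bandwidth(dp, meter_id=1, rate_kbps=500000, out_port=1):
    """Cap best-effort IPv4 traffic at rate_kbps, keeping headroom free."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    # Drop whatever exceeds the cap; priority traffic keeps the headroom.
    band = parser.OFPMeterBandDrop(rate=rate_kbps, burst_size=0)
    dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_ADD,
                                   flags=ofp.OFPMF_KBPS,
                                   meter_id=meter_id, bands=[band]))
    # Bind a catch-all IPv4 flow to the meter, then forward as usual.
    inst = [parser.OFPInstructionMeter(meter_id),
            parser.OFPInstructionActions(
                ofp.OFPIT_APPLY_ACTIONS, [parser.OFPActionOutput(out_port)])]
    dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=10,
                                  match=parser.OFPMatch(eth_type=0x0800),
                                  instructions=inst))

def tear_down_bandwidth(dp, meter_id=1):
    """Remove the cap once the event (say, the Valentine's Day peak) ends."""
    ofp, parser = dp.ofproto, dp.ofproto_parser
    dp.send_msg(parser.OFPMeterMod(datapath=dp, command=ofp.OFPMC_DELETE,
                                   flags=ofp.OFPMF_KBPS,
                                   meter_id=meter_id, bands=[]))
```

The point is less the specific messages than the operational model: bandwidth becomes something a program allocates and releases, like compute and storage.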

Flexibility Galore
Software Defined Networking (SDN) can help you make the network ready for cloud-bursting as and when required. SDN opens up many possibilities. For example:

  1. Packet flow redirection: A lot of video traffic comes from sources we trust, and for some applications security services on that traffic are not required. Because security services are extremely infrastructure-hungry and CPU-intensive, passing all traffic through them leads to a sprawl of security devices (IDS/IPS and DPI appliances) just to monitor it. With OpenFlow we can easily steer trusted traffic away from these costly resources (see the sketch after this list).
  2. Policy management: Because you now have a global view of the network and can control it with software running on the OpenFlow controller, defining and implementing business policies becomes easier. Better bandwidth management is one example: when unanticipated excess traffic arrives, the controller can program the network so that higher-priority business traffic gets more resources than low-priority traffic.
  3. Virtual application networks: The OpenFlow controller lets us create virtual networks for different applications on one physical network, so that each application can get different bandwidth and QoS based on its requirements, with auditable network isolation between applications and simpler compliance (a requirement for the financial industry). One can also give each customer a separate virtual domain to manage.
  4. Network security: OpenFlow can be used to make networks more secure and agile. The OpenFlow controller allows us to monitor and manage network security and to:
    - Dynamically insert security services at any point in the network (an on-demand firewall or IDS/IPS, for example)
    - Monitor traffic and redirect suspect flows for full inspection
    - Combine per-flow QoS control with network management systems to leverage traffic and end-user identity information
    - Dynamically detect and mitigate attacks from infected PCs, using a signature/reputation database to create rules that address specific attacks
  5. Replacing proprietary appliances: It is very common today to deploy appliances in the network to deliver specific functions. These proprietary appliances can be replaced with an OpenFlow controller and a software application delivering the same functionality. Communication service providers have a significant number of network services that can take advantage of virtualization and industry-standard servers. Many application-specific appliances that run on custom ASICs (WAN optimization, firewalls, DPI, spam/mail appliances, IDS, etc.) are good candidates for the SDN approach.
  6. Traffic intelligence: As SDN matures, a couple of years down the road, a more futuristic use case is to monitor traffic patterns, generate intelligence, and then use that intelligence to anticipate traffic and optimize the available resources. With this kind of intelligence we can actually reduce power consumption, too. For example, if we know network usage is low during nights and early mornings, we can shut down parts of the network in a way that preserves full connectivity without keeping the entire network up.
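As promised in item 1, here is a minimal sketch of packet-flow redirection, once more assuming OpenFlow 1.3 and the Ryu framework; the port numbers and the "trusted" subnet are hypothetical. A high-priority flow sends traffic from a trusted video source straight to the egress port, while a low-priority catch-all steers everything else through the IDS.

```python
# A minimal sketch, assuming OpenFlow 1.3 and the Ryu framework; the port
# numbers and the "trusted" subnet are illustrative, not from the article.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

IDS_PORT = 2        # hypothetical port facing the IDS/DPI appliance
EGRESS_PORT = 3     # hypothetical port toward the core/egress
TRUSTED_SRC = ('203.0.113.0', '255.255.255.0')  # hypothetical trusted source


class TrustedBypass(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_ready(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser

        def add_flow(priority, match, out_port):
            actions = [parser.OFPActionOutput(out_port)]
            inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS,
                                                 actions)]
            dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=priority,
                                          match=match, instructions=inst))

        # High priority: trusted video sources bypass the security appliance.
        add_flow(100,
                 parser.OFPMatch(eth_type=0x0800, ipv4_src=TRUSTED_SRC),
                 EGRESS_PORT)
        # Low priority: everything else is steered through the IDS.
        add_flow(1, parser.OFPMatch(), IDS_PORT)
```

Two flow entries replace what would otherwise be a physical re-cabling or appliance-by-appliance configuration exercise, which is the flexibility the list above is describing.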

My Take
The list of use cases is growing daily and will grow even faster as the pace of innovation increases, and the number of new start-ups in this area is rising rapidly. Finally the networking field, which has been quite dull from an innovation perspective, is going to become more vibrant and exciting, with new possibilities. Moreover, if ONF succeeds in maintaining open standards, SDN will allow plug-and-play of multivendor products, empowering IT and network operators to be more cost-effective and adaptive to the agility requirements of the business. With SDN, the network industry will mirror the innovations and developments we have seen in the server and storage fields.

Some vendors want well-defined APIs for applications to leverage OpenFlow controllers, or want more protocols supported. It is prudent of ONF not to define and standardize too much, and to let the market decide what an acceptable standard is. Keeping the OpenFlow protocol unrestricted, by standardizing no more than what is absolutely required, will fuel innovation.

The OpenFlow protocol is in its infancy, but it has generated tremendous interest from customers and researchers as well as vendors. One can argue that it is not fully mature or ready for prime time, but most agree that it will change the network industry fundamentally, making it more flexible and nimble and driving more innovation. This train has left the station, even as some debate whether its destination is well defined or its ETA known. Hardware vendors will have to accept that networking hardware will be commoditized, just as servers and storage were. OpenFlow/SDN certainly opens up opportunities for network-based applications, and this is where current vendors will have to focus to continue playing a major role. Network administrators will no longer spend hours reconfiguring switches and routers; instead, they will have to learn how to control, manage, test and implement changes from a central controller.

Although the OpenFlow protocol is defined, not many vendors in the market support its latest version, 1.3. Moreover, there is a lack of tools to test, monitor and manage this new environment. HP and other major vendors have openly embraced OpenFlow and are investing in it. HP was one of the first major network vendors to invest in this area, with 60+ deployments and 16 different switch models supporting OpenFlow, and it leads one of the ONF task forces evolving the OpenFlow protocol. With its traditional strength in IT performance and operations management (test, monitor and manage) and telecom OSS, HP is well positioned to deliver a complete, future-proof infrastructure solution (spanning server, storage, networking, software, security and analytics) for enterprise IT as well as telecom service providers.

More Stories By Kapil Raval

Kapil Raval is an experienced technology solutions consultant with nearly 20 years in the telecom industry. He thinks in terms of 'the business' and focuses on linking business challenges to technology solutions. He currently works for HP, where he drives strategic solutions in the telecom vertical.
