By Jörg-Peter Elbers, Achim Autenrieth
February 1, 2013 10:30 AM EST
Using OpenFlow to extend software-defined networking (SDN) to the optical layer is a compelling prospect for enterprises seeking to achieve joint orchestration of information technology (IT) and network resources for cloud services, to virtualize the network and to more simply manage interconnections of distributed data centers that require synchronization.
Today's fragmented, specialized management and control approaches are fraught with proprietary protocols and management systems, limited scalability and configuration complexity. With an OpenFlow-enabled transport network, an enterprise could instead engage in a kind of "one-stop shopping" for control of cloud computing, storage and networking resources, all via one unified application programming interface (API). The benefits could include significantly simplified configuration, management and scaling of large-scale enterprise infrastructures through integration and automation.
That's a new role for OpenFlow, demanding strategic tailoring of the protocol for the optical transport domain. Demonstration and development of the capability are closely watched by enterprises that are under incessant pressure to cost-effectively meet ever-increasing demand for bandwidth and services.
Virtualization's New Frontier
Servers and storage have been virtualized in the enterprise; the next great frontier for virtualization is the network.
Because of the substantial cost savings and performance benefits it can deliver, SDN-based virtualization is of prime interest to enterprises for a wide range of applications. OpenFlow has emerged as one of the most popular SDN protocols, embraced in particular by Web 2.0 network operators and national research and education network (NREN) operators.
With OpenFlow, an abstraction of the network's packet switches can be generated and flow-forwarding behavior can be specified across an infrastructure via an external controller. Operations can be substantially automated and streamlined by breaking up the monolithically integrated control and forwarding paradigm of today's switches.
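The match/action abstraction behind this split can be sketched in a few lines. The following is a hypothetical, simplified model, not a real OpenFlow implementation: an external controller installs prioritized flow entries, and the switch forwards packets by matching header fields against them (class and field names here are illustrative).

```python
# Minimal sketch of the OpenFlow match/action abstraction (hypothetical):
# an external controller installs flow entries; the switch only matches
# packet headers against them and forwards accordingly.

class FlowTable:
    """A switch's flow table, populated remotely by a controller."""

    def __init__(self):
        self.entries = []  # list of (priority, match_fields, out_port)

    def install(self, priority, match, out_port):
        # Controller pushes a flow entry (cf. FLOW_MOD in real OpenFlow).
        self.entries.append((priority, match, out_port))
        self.entries.sort(key=lambda e: -e[0])  # highest priority first

    def forward(self, packet):
        # Switch matches packet header fields against installed entries.
        for _, match, out_port in self.entries:
            if all(packet.get(k) == v for k, v in match.items()):
                return out_port
        return None  # table miss: a real switch would ask the controller


# The controller specifies forwarding behavior without touching switch
# internals: steer HTTP traffic to port 2, everything else to port 1.
table = FlowTable()
table.install(priority=10, match={"tcp_dst": 80}, out_port=2)
table.install(priority=1, match={}, out_port=1)

print(table.forward({"tcp_dst": 80}))  # prints 2 (HTTP flow)
print(table.forward({"tcp_dst": 22}))  # prints 1 (default flow)
```

The point of the sketch is the separation of concerns: forwarding policy lives in the controller's `install` calls, while the switch merely executes table lookups.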
Using OpenFlow, could SDN be extended across layers and create a scenario in which - with a single instruction - the controller could jointly create virtual machines and enable enterprise network administrators to reserve computing, networking and storage resources in one stroke?
It is an obviously compelling notion for enterprise network staffs who desperately need to simplify operations. The problem, however, is that OpenFlow deployment and development have largely been limited to the electrical packet layer, whereas interconnection beyond the data center typically relies on optical transport technology. Furthermore, the optical domain is where things get hazy for many enterprise network administrators: their comfort zone tends to be packets, not wavelengths and optics.
The result is that cloud computing is currently decoupled from transport-network control and operation. In today's cloud implementations the network exists as a static, separate entity: cloud computing processes and the statically configured network do not interact and, in effect, speak different languages.
Converging cloud computing and networking requires a more dynamic mode of control and operation, but enterprises largely have judged integrating management of the optical network into the data-center environment to be too complex.
To extend OpenFlow from its established role in the electrical packet domain to the optical layer (and, thereby, extend SDN across multiple network layers), a range of optical-specific concerns must be tackled.
Crafting and Experimenting
Within the European Commission's FP7 ICT Work Programme is a collaborative project, "OpenFlow in Europe - Linking Infrastructure and Applications" (OFELIA), that provides researchers with a test bed in which to experiment with SDN applications and virtual multi-layer networks over shared network infrastructure.
Via standardized, secure interfaces through GÉANT, a high-bandwidth interconnection of European R&E networks, researchers develop, run and control experiments using packet switches and application servers at the University of Essex and seven other test-bed facilities throughout Europe.
OFELIA hosts a prototype implementation of dynamic control of wavelength-switched optical networks via OpenFlow. Bandwidth, latency and power consumption can be adjusted to meet the specific requirements of specific applications.
To make it happen, key additions to OpenFlow had to be engineered for the protocol to effectively control the optical domain, adapting it from the packet world. In an electrical switch, a packet can travel from any ingress port to any egress port, and in a time-division multiplexing (TDM) device a signal can be mapped to any time slot. The optical domain, however, introduces strict switching constraints with regard to wavelength continuity, optical impairments, optical power leveling on the line side and so on.
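Wavelength continuity is the simplest of these constraints to illustrate: without wavelength conversion, a transparent lightpath must use the same wavelength on every link it traverses. The sketch below is hypothetical (the data structure and function are illustrative, not part of any OpenFlow extension), but it shows why an optical-aware controller must intersect per-link availability rather than pick ports freely as in the packet world.

```python
# Hypothetical sketch of the wavelength-continuity constraint an
# optical-aware controller must respect: a transparent lightpath needs
# one wavelength that is free on EVERY link of its path.

def free_wavelengths(link_state, path):
    """Return the set of wavelengths available on all links of the path."""
    available = None
    for link in path:
        free = set(link_state[link]["all"]) - set(link_state[link]["used"])
        # Intersect availability across links (continuity constraint).
        available = free if available is None else available & free
    return available


# Example topology state: each link lists supported and occupied wavelengths.
links = {
    ("A", "B"): {"all": [1, 2, 3, 4], "used": [1]},
    ("B", "C"): {"all": [1, 2, 3, 4], "used": [2, 3]},
}

# A packet could take any free port hop by hop; a transparent lightpath
# from A to C may only use a wavelength free on BOTH links.
print(sorted(free_wavelengths(links, [("A", "B"), ("B", "C")])))  # prints [4]
```

A real controller would layer further checks (optical impairments, power leveling) on top of this availability computation before provisioning a lightpath.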
Augmenting OpenFlow to address those optical-specific concerns has resulted in an OFELIA prototype that demonstrates a truly transparent, wavelength-switched optical network. The research community is able to experiment with the capability via a flexible, Web-services approach; commercial enterprises, too, are interested in trialing the capability for their specific applications and environments.
OpenFlow is not sufficient in itself to enable the complete transformation that enterprise network administrators envision, to SDN-enable virtualization across all layers of their infrastructures. The additions to OpenFlow that were engineered for the OFELIA test bed provide only the bridge between the optical layer and packet layer and allow integration into a cloud operating system such as OpenStack.
But that is one very important bridge, and the promise for enterprise network administrators is considerable. The OpenFlow innovation could seamlessly integrate the optical transport network with an enterprise's routers and switches under a common management umbrella, all via one familiar interface. Managing the optical domain could become as simple as managing Ethernet boxes: virtual resources would be encapsulated so that administrators handle them with typical, familiar tools. That is a significant breakthrough. With many enterprises already considering OpenFlow-based control for their packet networks, extending the framework to the wavelength-switched optical layer would be a natural migration.
Virtualization has developed in phases in enterprise networking. First, resource virtualization inside data centers delivered economic savings through enhanced utilization, scalability and redundancy. Data-center virtualization then brought greater infrastructure flexibility, higher availability and better workload balancing. The next frontier, network virtualization, promises true platform agility and, with it, a host of long-sought enterprise capabilities: capacity on demand, adaptive infrastructure and dynamic service automation among them. Adapting OpenFlow and extending SDN to the optical transport domain is an important step toward that vision.