
Open Compute, Open Switch API and Open Network Install Environment

The focus of OCP has been mostly around hardware designs and specifications

Much has been published about the Open Compute Project (OCP). Initiated by Facebook, it has become an industry effort focused on standardizing many of the parts and components in the datacenter. Initially centered on rack, power and server design, it has since added storage and now networking to its fold. Its goal, quoted directly from its web site, is fairly straightforward: “how can we design the most efficient compute infrastructure possible”.

The focus of OCP has been mostly on hardware designs and specifications. Look at the networking arm of OCP and you will find several Top of Rack (ToR) Ethernet switch hardware designs donated by the likes of Broadcom, Mellanox and Intel. By creating open specifications for fairly standard Ethernet switches, the industry can standardize on these designs, and economies of scale should drive down the cost of creating and maintaining this hardware. It is a noble goal, and there are strong opinions on both sides of the effort; the switches are usually referred to as “bare metal” or “commodity”, and you can easily spend days reading up on the debate. Mike Bushong yesterday discussed the pricing implications for resellers in this blog post.

An interesting development, however, is that the networking group within OCP has added a few software projects to its scope. Mellanox has initiated an Open Switch API specification, which attempts to standardize the lowest layer of interaction with the actual Ethernet switching hardware. For a hardware vendor today, the choice of switching chipset matters well beyond the raw capabilities of the chipset. Each chip vendor with a portfolio of chipsets also provides a Software Development Kit (SDK), the initial layer of software that instructs the chip what to do. Ultimately, every chip is manipulated through a (rather large) set of registers, but that complexity is hidden inside the SDK, which presents a set of higher-level APIs that control the functioning of the chipset. This is what we (Plexxi) and all other hardware vendors write our software against; it is how we glue our software to the Ethernet switching hardware.
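To make that layering concrete, here is a minimal sketch, in Python and with entirely invented names (real chipset SDKs, such as Broadcom's, are large C libraries with far bigger surfaces), of the relationship between the raw register level and the SDK level:

```python
# Hypothetical sketch of the layering described above. All names are
# invented for illustration; they are not taken from any real SDK.

class VendorChipRegisters:
    """Lowest layer: the raw registers the SDK hides from us."""
    def write(self, offset: int, value: int) -> None:
        print(f"reg[{offset:#06x}] <- {value:#010x}")

class VendorSDK:
    """The chip vendor's SDK: higher-level calls that translate one
    logical operation into many register writes."""
    def __init__(self) -> None:
        self._regs = VendorChipRegisters()

    def port_enable(self, port: int, speed_gbps: int) -> None:
        # A single API call fans out into several register manipulations.
        self._regs.write(0x1000 + port * 4, 0x1)         # admin up
        self._regs.write(0x2000 + port * 4, speed_gbps)  # speed select

    def vlan_create(self, vlan_id: int) -> None:
        self._regs.write(0x3000 + vlan_id, 0x1)

# A switch software vendor writes against calls like port_enable(),
# never against the registers directly.
sdk = VendorSDK()
sdk.port_enable(port=1, speed_gbps=10)
sdk.vlan_create(vlan_id=100)
```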

Different chipset vendors of course have different SDKs, which makes changing chipsets a non-trivial step: your software has to adjust to a different SDK, with all of its functional and sometimes architectural differences. A common switch API would make life better for us. We would have a single API to develop against and could more easily support multiple chipsets, since the differences would be hidden behind that API, as sketched below. Given how much the chipsets differ functionally, even a common API will carry plenty of vendor-specific extensions, but as recipients of such standardization we can only welcome the effort.
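As a rough sketch of what such an API buys us (again with invented names, not anything from the actual OCP specification), picture one standardized interface with per-chipset implementations behind it:

```python
# Rough sketch of a common switch API: one interface, multiple chipset
# SDKs behind it. Names are invented, not from the OCP specification.
from abc import ABC, abstractmethod

class OpenSwitchAPI(ABC):
    """The standardized surface the switch software is written against."""
    @abstractmethod
    def port_enable(self, port: int, speed_gbps: int) -> None: ...
    @abstractmethod
    def vlan_create(self, vlan_id: int) -> None: ...

class ChipsetADriver(OpenSwitchAPI):
    def port_enable(self, port, speed_gbps):
        print(f"chipset-A SDK: enable port {port} at {speed_gbps}G")
    def vlan_create(self, vlan_id):
        print(f"chipset-A SDK: create VLAN {vlan_id}")

class ChipsetBDriver(OpenSwitchAPI):
    def port_enable(self, port, speed_gbps):
        print(f"chipset-B SDK: enable port {port} at {speed_gbps}G")
    def vlan_create(self, vlan_id):
        print(f"chipset-B SDK: create VLAN {vlan_id}")

def bring_up(switch: OpenSwitchAPI) -> None:
    # The same switch software runs unchanged on either chipset.
    switch.port_enable(port=1, speed_gbps=40)
    switch.vlan_create(vlan_id=100)

bring_up(ChipsetADriver())
bring_up(ChipsetBDriver())
```

The vendor-specific extensions mentioned above would show up as methods beyond the common interface; the more of the surface that stays common, the less work a chipset change becomes.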

The second software component the networking arm of OCP has taken on is the Open Network Install Environment (ONIE). ONIE is a boot loader and installer based on a complete but narrow Linux distribution. It was created as an open source, vendor-agnostic boot loader, with the goal of having commodity switches leave the factory with only ONIE installed. Driven by Cumulus, it provides a convenient, standard method for getting software installed and loaded onto bare metal switches from commodity hardware providers. It is also a necessary piece of Cumulus' business, ensuring easy installation of their software onto third-party hardware.

ONIE is pretty straightforward, and those who have network booted systems using DHCP, TFTP and a variety of other means will probably ask “what's the big deal?” At a high level it provides the same functionality, but with some small yet very relevant differences.

ONIE is built on a full Linux distribution. So out of the gate, a switch with ONIE installed boots to a complete Linux before it even attempts to find the actual switching software image it will ultimately run. Having a full Linux means you have full networking support: you can access the device using telnet, ssh, you name it. Once booted, ONIE follows a set of search rules to find an image to download and install. There is no preconceived notion of what that image is or what needs to be done to install it; the image itself contains the instructions for how it should be installed, how the internal disk should be formatted, which partition to install into, you name it. ONIE has none of that knowledge, it just provides the Linux tools to do the work. Once a switching software image has been installed, ONIE reboots the switch, which then boots straight through to the installed software image (with, of course, a chance to interrupt that).
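That search-then-delegate behavior is easy to sketch. Below is a simplified Python rendition of the discovery loop; the real ONIE is a set of shell scripts on its small Linux distribution, it tries more sources than shown here, and the URL and filename conventions below are illustrative assumptions to be checked against the ONIE documentation:

```python
# Simplified sketch of ONIE's image discovery loop. Real ONIE also tries
# sources this sketch omits (TFTP, local media, etc.), and the default
# filenames here are assumptions for illustration.
from typing import Optional
import urllib.request

def candidate_urls(dhcp_installer_url: Optional[str],
                   http_server: Optional[str], platform: str) -> list:
    """Build the ordered list of places to look for an installer image."""
    urls = []
    if dhcp_installer_url:
        # An exact URL handed out via DHCP takes priority.
        urls.append(dhcp_installer_url)
    if http_server:
        # Fall back to well-known default names, most specific first.
        for name in (f"onie-installer-{platform}", "onie-installer"):
            urls.append(f"http://{http_server}/{name}")
    return urls

def run_installer(image: bytes) -> None:
    # Placeholder: ONIE saves the image, marks it executable and runs it;
    # the image itself carries all partitioning and formatting logic.
    print(f"running self-extracting installer ({len(image)} bytes)")

def discover_and_install(urls: list) -> bool:
    for url in urls:
        try:
            image = urllib.request.urlopen(url, timeout=5).read()
        except OSError:
            continue  # unreachable source, try the next one
        run_installer(image)
        return True
    return False  # nothing found; real ONIE keeps retrying

discover_and_install(candidate_urls(None, "10.0.0.1", "x86_64-demo"))
```

The key design point survives the simplification: the environment only knows how to find and execute an image, while everything install-specific lives inside the image itself.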

Part boot loader, part installer, and (some may even want to say) part BIOS, all wrapped together in a Linux distribution, ONIE is not magic and it is not radically innovative. But it is a very convenient, open source way to get switches to boot, find, install and run software, with the safety net of Linux for getting into the switch remotely if the software loading, installation or execution goes really bad. I bet most of you have had switches stuck in crash-and-reboot loops before; they are hard to recover from without physical intervention.

At Plexxi we do not create standard switching hardware; that should be clear. We also do not create standard switching software for commodity switches. But we are implementing ONIE on our switches and packaging our software to be ONIE-installable, available a bit later this year.

For us, ONIE is a very convenient and increasingly industry-standard way to implement a boot loading and installation environment, one that fits well with our open source and Linux roots and beliefs. It is about providing convenience to our customers and logistics teams. We will have the option to ship our switches with ONIE so that a customer can simply power the switch on, plug in the management Ethernet connection, connect our LightRail optical cables, and the switch becomes part of a Plexxi ring, loaded with the right software and visible to our controller.

It’s not a huge innovative leap, but it is a significant convenience. And there is a lot of value in creating convenience and simplicity; they are significant drivers in our overall solution, which makes ONIE a nice fit.

The post Open Compute, Open Switch API and Open Network Install Environment appeared first on Plexxi.


More Stories By Marten Terpstra

Marten Terpstra is a Product Management Director at Plexxi Inc. Marten has extensive knowledge of the architecture, design, deployment and management of enterprise and carrier networks.
