DIY vs DIFY Networking

If you were to base your guess on industry chatter, you would have to conclude that DIY has the upper hand

There is probably never going to be a perfect balance in the industry between Do-it-yourself (DIY) and Do-it-for-you (DIFY) networking. It seems exceedingly unlikely that there is a one-size-fits-all type of solution out there. And so we will invariably end up with a bifurcated market that requires multiple solutions for its constituents. But if there is not a perfect balance, which one of these is likely to see the most action?

If you were to base your guess on industry chatter, you would have to conclude that DIY has the upper hand.

There is a ton of momentum right now with both SDN and bare metal switching. On the SDN front, it is all about orchestration and automation. The ability to streamline customized workflows is attractive, especially for the large IT shops that sink tens of millions of dollars into managing their monstrosities. Once you get into anything that is customized, there is a degree of DIY-ness that is required. No product is designed expressly for your particular environment, so you need the ability to customize what you buy to do what you want. Beyond that, there is an awful lot of talk about APIs and programmability.
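To make that DIY flavor concrete, here is a minimal sketch of what "programmability" tends to look like in practice: a short script that pushes a change through a controller's REST API instead of a box-by-box CLI session. The controller URL, token, and payload shape are hypothetical placeholders, standing in for whatever your particular SDN controller or switch OS actually exposes.

    # Minimal sketch of API-driven network automation.
    # The endpoint, token, and payload schema below are hypothetical placeholders.
    import requests

    CONTROLLER = "https://controller.example.net/api/v1"  # hypothetical controller
    TOKEN = "REPLACE_ME"                                   # API credential

    def provision_vlan(vlan_id, name, ports):
        """Create a VLAN and attach it to a set of ports via the controller API."""
        payload = {"vlan": vlan_id, "name": name, "ports": ports}
        resp = requests.post(
            f"{CONTROLLER}/vlans",
            json=payload,
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()  # fail loudly rather than misconfigure silently

    if __name__ == "__main__":
        # One workflow step that would otherwise be a manual, box-by-box exercise.
        provision_vlan(42, "build-farm", ["eth1", "eth2", "eth7"])

The point is not the specific call; it is that every such script is code someone on the network team now owns, tests, and maintains.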

Bare metal switching is a different initiative with different objectives that end up in a similar DIY framework. The move towards a more server-like environment allows users to customize their switching solution. There is great power in having absolute control over how a device behaves. It allows users to pick and choose tools they are already familiar with, extending their functionality into the networking realm.
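As a rough illustration of that server-like model, the sketch below assumes a bare metal switch running a Linux-based OS where front-panel ports show up as ordinary network interfaces (the swp* names are an assumption borrowed from common whitebox conventions), so the same iproute2 commands and scripting habits used on servers carry straight over.

    # Sketch: managing a bare metal switch with familiar server tooling.
    # Assumes a Linux-based switch OS; interface names are hypothetical.
    import subprocess

    def run(cmd):
        """Run a command and raise if it fails, so misconfigurations surface immediately."""
        subprocess.run(cmd, check=True)

    def bring_up_port(interface, mtu=9000):
        """Standard iproute2 invocations, exactly as on any Linux server."""
        run(["ip", "link", "set", interface, "mtu", str(mtu)])
        run(["ip", "link", "set", interface, "up"])

    if __name__ == "__main__":
        for port in ("swp1", "swp2"):  # hypothetical front-panel port names
            bring_up_port(port)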

However, the challenge in using industry dialogue to conclude where things will end up is that the chatter does not always match actual buying patterns. Indeed, public discourse typically leads broad deployment – sometimes by several years or more (think IPv6, the Internet of Things, or even electric vehicles).

The DIY movement in networking is real, but what is it about? The ability to tailor specific networking applications to the infrastructure is about eking out performance or customizing the experience. It is about modifying a base set of functionality to fit better into your specific context.

For this to matter, you have to be pushing the envelope in terms of performance or capability. But the truth is that the bulk of the networking space is simply not there. Their issues are not about customization. They want to spend less time with the network, not more. The problem they need solved is more about operating their infrastructure and less about creating substrates to connect it all together in some unique configuration.

But you don’t hear from these people in industry forums and on social media. They lack the interest, time, and sometimes the confidence to express a point of view that is less visionary and more functional. As a result, we only hear one side of the story. It plays out in blogs, on Twitter, in press articles, and on conference stages. And with every word and unapproachable idea, we collectively push the majority of users further into the background.

The solution here isn’t to retreat from change. But we need to make sure that new technology is usable for the legions of people for whom the network is primarily a means of enabling their business. We need to advance DIFY networking with equal enthusiasm.

So why don’t we do this naturally as an industry?

There are two major dynamics at play. First, incumbents tend to be capability-driven. Customer X needs something, so they build whatever widget is required. The focus is on the capability, not necessarily on how that capability is inserted into a widely consumable workflow. If there is any doubt here, ask yourself if networking workflows today are more arcane or intuitive. And then ask yourself why certifications are so important. The only way to validate that you have mastered the arcane is to produce your certificate as proof.

The second dynamic is that new initiatives (be they new companies or just new projects) tend to target the hot spots. Those hot spots are identified by the vocal minority. And networking’s vocal bunch consists of strong proponents for customization, primarily through tooling and development frameworks.

But even here, customization is rarely the outright goal. Unless your business requires differentiated network services (a la service or cloud providers), you likely don’t want to be customized for the sake of being customized. Rather, the customization trends are a response to a broad deficiency in the networking industry. More directly, if my vendor cannot give me what I need, at least give me the tools so I can do it myself.

Both SDN and white box switching are great movements, but they are responses to a long-time issue with legacy networks: the equipment is needlessly expensive, and networks are ridiculously hard to manage. When these issues go unaddressed for decades, what are customers to do? They stand up and collectively say “Screw it. I’ll do it myself.”

When DIY trends persist long enough, we end up fooling ourselves into thinking customization is the goal when all along it was merely a workaround. We replace intuitive networking with “There’s an API for that” networking. Essentially, we have shifted the cost from procurement (you can buy cheaper equipment) to development (but you have to customize everything around it).

This doesn’t seem right. I suspect the right outcome for the industry is to take the technological advances, develop them to completeness, and deliver an infrastructure that delivers. Such an infrastructure could still have the hooks for the DIYers, but it would be functional for the DIFYers as well.

[Today’s fun fact: If all of the oceans in the world evaporated, Hawaii would be the tallest mountain in the world. Take that, Everest!]

The post DIY vs DIFY networking appeared first on Plexxi.

More Stories By Michael Bushong

The best marketing efforts combine deep technology understanding with a highly approachable means of communicating. Plexxi's Vice President of Marketing Michael Bushong acquired these skills over 12 years at Juniper Networks, where he led the product management, product strategy and product marketing organizations for Juniper's flagship operating system, Junos. Michael spent his last several years at Juniper leading its SDN efforts across both service provider and enterprise markets. Prior to Juniper, Michael spent time at database supplier Sybase and ASIC design tool companies Synopsys and Magma Design Automation. Michael's undergraduate work in advanced fluid mechanics and heat transfer at the University of California, Berkeley lends new meaning to the marketing phrase "This isn't rocket science."
