When Programmability Extends Beyond the Platform

A programmable platform should be more than just lipstick on a CLI

The term programmability is, like every other term associated with hype-driven trends, used to describe a wide range of capabilities. In general, when applied to "the network", programmability implies one or both of two things: an API through which a network element can be configured and managed, and/or a scripting mechanism that enables direct interaction with the data path.

These two aspects of programmability provide integration between data center elements (control plane) and enable extensibility through the creation of new services on the platform (data plane). But they don't speak to a third capability: the invocation of external services as a reaction to some data center event.

For example, a data center "event" might be the depletion of capacity in the face of high demand. Elasticity (the automatic scaling up and down of a service or application) demands that action be taken, and ideally that action is automated. The overall system should understand when capacity limits are about to be breached (in both the upward and downward directions) and be able to take the appropriate action: provision additional capacity or decommission it.

Most "network" elements are not capable of taking this action directly. Load balancing platforms for the most part are the first service to recognize a need to change capacity, but are rarely able to take action other than sharing that information with an external system. For most load balancing platforms that's because they lack the programmability necessary to do so. They may be programmatic at the control plane and even on the data plane, but they are not imbued with the ability to execute logic on an event-driven basis.

Platforms that are so imbued (and they do exist) can take that action themselves.

Let me illustrate...

The vCloud API Programming Guide describes a robust set of interfaces through which a vApp (which in VMware jargon describes a complete "application") can be managed. You can, through this API, power on (provision, launch, etc.) and shut down (decommission, power off, etc.) a given vApp. A programmable platform can leverage those APIs to enable rapid elasticity.
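To make that concrete, here's a minimal sketch (in Python) of what invoking such an interface might look like from a programmable platform. The host, vApp identifier, token, and URL path are illustrative assumptions; the vCloud API Programming Guide remains the authority on the real endpoints, versions, and authentication flow.

```python
# A hedged sketch of driving a vApp power operation over HTTP from a programmable
# platform. The host, vApp identifier, session token, and URL path below are
# illustrative assumptions, not a verified vCloud deployment.

import requests

VCLOUD_HOST = "https://vcloud.example.com"              # assumed endpoint
VAPP_ID = "vapp-00000000-0000-0000-0000-000000000000"   # assumed identifier
SESSION_TOKEN = "REPLACE_WITH_SESSION_TOKEN"            # obtained from a prior login call


def power_on_vapp(host: str, vapp_id: str, token: str) -> bool:
    """Ask the cloud management layer to power on (provision) a vApp."""
    url = f"{host}/api/vApp/{vapp_id}/power/action/powerOn"   # illustrative path
    response = requests.post(
        url,
        headers={"x-vcloud-authorization": token},  # assumed vCloud-style session header
        timeout=10,
    )
    # The API typically returns a task the caller can poll until the vApp is up.
    return response.status_code in (200, 202)
```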

HERE COMES THE (COMPUTER) SCIENCE
Interestingly, a scalable application consists of at least two things: a load balancing service and a pool of application resources. How far you can take that to implement actual elasticity depends entirely on the availability of APIs through which you can manage application instances (virtual machines) and the programmability of the platform upon which the load balancing service is deployed.

Assuming you have what you need, your environment looks something like this:

[Figure: capacity limits reached]

Health monitoring enables the service platform to know when there's a problem; that's critical for high availability. The platform also understands the capacity limits of each virtual instance. Now, let's say demand is suddenly spiking (the source is irrelevant). Given the strategic location of the service platform, it is the first system in the data center able to recognize that capacity limits are about to be breached.
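As a rough sketch of how that recognition can work, assume the platform tracks health and connection counts per instance (the member data, limits, and warning margin below are illustrative, not any product's actual data model):

```python
# Illustrative capacity check: the service platform already tracks health and
# connection counts per instance, so it can compute remaining headroom directly.
# The member data, limits, and warning margin are assumptions for this sketch.

from dataclasses import dataclass


@dataclass
class Member:
    address: str
    healthy: bool
    active_connections: int
    max_connections: int


def nearing_capacity(members: list[Member], margin: float = 0.90) -> bool:
    """True when healthy instances are collectively above `margin` of their limits."""
    healthy = [m for m in members if m.healthy]
    if not healthy:
        return True  # no healthy capacity at all is the worst kind of breach
    used = sum(m.active_connections for m in healthy)
    limit = sum(m.max_connections for m in healthy)
    return used >= margin * limit


pool = [
    Member("10.0.0.10", True, 950, 1000),
    Member("10.0.0.11", True, 900, 1000),
    Member("10.0.0.12", False, 0, 1000),  # failed health checks; carries no load
]
print(nearing_capacity(pool))  # -> True: limits are about to be breached
```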

The question is, what does the service platform do about that?

Well, in most systems it's just going to share that information with an external orchestration/management platform. In most cases, in fact, it's going to share it passively; that is, the information won't reach the management platform until the management platform asks for it. It's a polling system, not a proactive push from the service platform.
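In rough terms, the status quo on the management side looks like the polling loop below. The statistics endpoint, threshold, and interval are assumptions for the sketch, but the key property is real: nothing happens until the poller asks.

```python
# Illustrative polling loop on the management platform's side: capacity data only
# arrives when the poller asks for it. The statistics endpoint, threshold, and
# interval are assumptions for the sketch.

import time

import requests

STATS_URL = "https://lb.example.com/stats/pool-capacity"  # assumed endpoint
POLL_INTERVAL_SECONDS = 60  # a breach can go unnoticed for up to this long


def poll_for_capacity() -> None:
    """Keep asking the service platform for utilization until a threshold is crossed."""
    while True:
        stats = requests.get(STATS_URL, timeout=5).json()
        if stats.get("utilization", 0.0) >= 0.80:
            print("capacity threshold crossed; begin provisioning")
            break
        time.sleep(POLL_INTERVAL_SECONDS)
```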

That means that it's possible that capacity will be reached and users will start experiencing delays or even time-outs before the management platform even knows there's a problem.

In order to realize elasticity that actually ensures high availability and responsiveness of applications, we need something more proactive. The service platform must participate, either by pushing a capacity-reached notification to the management platform or, if it's able to, by simply instructing the management platform to provision the additional resources needed to expand capacity before limits are breached.
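Here's a hedged sketch of that proactive participation, with endpoints and payload fields assumed purely for illustration: when the threshold event fires on the service platform, it either pushes the notification immediately or goes one step further and instructs provisioning directly.

```python
# Illustrative event handler running on the service platform itself: push the
# capacity event out immediately, or go one step further and instruct the
# management platform to provision. Endpoints and payload fields are assumptions.

import requests

MGMT_WEBHOOK = "https://mgmt.example.com/events/capacity"       # assumed
PROVISION_URL = "https://mgmt.example.com/api/pools/web/scale"  # assumed


def on_capacity_threshold(pool_name: str, utilization: float,
                          can_instruct: bool) -> None:
    """Invoked by the platform the moment a capacity threshold is crossed."""
    if can_instruct:
        # Directly instruct the management platform to provision more capacity.
        requests.post(PROVISION_URL,
                      json={"pool": pool_name, "add_instances": 1},
                      timeout=5)
    else:
        # At minimum, push the notification instead of waiting to be polled.
        requests.post(MGMT_WEBHOOK,
                      json={"pool": pool_name,
                            "utilization": utilization,
                            "action": "scale-up"},
                      timeout=5)
```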

[Figure: capacity limits expanded]

This requires that the service platform is programmable not only at the control and data planes, but also at the configuration layer. It must be able not only to recognize a threshold breach but also to act on it. In this case, acting means initiating a process that will result in the provisioning of the additional resources required to ensure continued availability and performance. As demand wanes, another threshold can trigger a reverse process in which an instance is decommissioned*.
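A minimal sketch of what that configuration-layer wiring could look like, assuming illustrative thresholds and callback names rather than any particular product's event or configuration schema:

```python
# Illustrative configuration-layer policy: a threshold in each direction mapped
# to an action, evaluated by the platform's event engine. Thresholds, names, and
# the callback wiring are assumptions for the sketch, not a real product schema.

def provision_instance() -> None:
    print("initiating provisioning workflow")       # e.g., the vApp power-on call above


def decommission_instance() -> None:
    print("initiating decommission (quiescence) workflow")


# Keep the two thresholds well apart (hysteresis) so the pool doesn't flap
# between scaling up and scaling down under bursty demand.
SCALE_UP_AT = 0.80
SCALE_DOWN_AT = 0.30


def on_utilization_sample(utilization: float) -> None:
    """Called by the platform's event engine each time pool utilization is sampled."""
    if utilization >= SCALE_UP_AT:
        provision_instance()
    elif utilization <= SCALE_DOWN_AT:
        decommission_instance()


on_utilization_sample(0.85)  # -> initiating provisioning workflow
```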

Now, you can certainly design your data center in myriad ways to deal with elasticity and availability. If you've got the right service platform - one that's programmable in more than just the traditional two-pronged approach - then you've got another option available to you. An option that's more proactive and leverages what is an existing strategic point of control in your network architecture.

* Provisioning is easier and less complex than decommissioning, as the latter requires careful attention to existing connections and essentially means the load balancing service must manage the process of stopping new connections while maintaining existing ones until users complete their session (quiescence).
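As a minimal sketch of that quiescence step (member representation, polling interval, and timeout all assumed for illustration): stop sending new connections to the instance, wait for existing sessions to drain, then hand it back to be powered off.

```python
# Illustrative connection draining (quiescence) ahead of decommissioning an
# instance. The member representation, polling interval, and timeout are
# assumptions for this sketch.

import time
from dataclasses import dataclass


@dataclass
class PoolMember:
    address: str
    accepting_new_connections: bool = True
    active_connections: int = 0


def quiesce(member: PoolMember, poll_seconds: float = 5.0,
            timeout_seconds: float = 300.0) -> bool:
    """Stop new connections, then wait for existing sessions to complete."""
    member.accepting_new_connections = False  # load balancer sends no new traffic
    deadline = time.monotonic() + timeout_seconds
    while member.active_connections > 0:
        if time.monotonic() >= deadline:
            return False  # sessions still active; hold off on decommissioning
        time.sleep(poll_seconds)
    return True  # fully drained; safe to power the instance off
```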


More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
