When Programmability Extends Beyond the Platform

A programmable platform should be more than just lipstick on a CLI

The term programmability is, like every other term associated with hype-driven trends, used to describe a wide range of capabilities. In general, when applied to "the network", programmability implies one or both of two things: an API through which a network element can be configured and managed, and a scripting mechanism that enables direct interaction with the data path.

These two aspects of programmability provide integration between data center elements (control plane) and enable extensibility through the creation of new services on the platform (data plane). But they don't speak to a third capability - the invocation of external services as a reaction to some data center event.

For example, a data center "event" might be the depletion of capacity in the face of high demand. Elasticity (the automatic scaling up and down of a service or application) demands that action be taken. That action, ideally, is automated. The overall system should understand when capacity limits are about to be breached (in both the upward and downward directions) and be able to take the appropriate action - provision additional capacity or decommission it.
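
To make that concrete, here's a minimal sketch of that decision logic. The thresholds, capacity numbers, and names are illustrative, not drawn from any particular product:

```python
# Illustrative only: decide whether to provision or decommission capacity
# based on how close current demand is to the pool's aggregate limit.

def elasticity_action(active_connections: int,
                      capacity_per_instance: int,
                      instance_count: int,
                      scale_up_at: float = 0.8,
                      scale_down_at: float = 0.3) -> str:
    """Return 'provision', 'decommission', or 'hold'."""
    total_capacity = capacity_per_instance * instance_count
    utilization = active_connections / total_capacity

    if utilization >= scale_up_at:
        return "provision"       # upward limit about to be breached
    if utilization <= scale_down_at and instance_count > 1:
        return "decommission"    # demand has waned; shrink the pool
    return "hold"
```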

Most "network" elements are not capable of taking this action directly. Load balancing platforms for the most part are the first service to recognize a need to change capacity, but are rarely able to take action other than sharing that information with an external system. For most load balancing platforms that's because they lack the programmability necessary to do so. They may be programmatic at the control plane and even on the data plane, but they are not imbued with the ability to execute logic on an event-driven basis.

Platforms that are so imbued (and they do exist) can take that action directly.

Let me illustrate...

The vCloud API Programming Guide describes a robust set of interfaces through which a vApp (which in VMware jargon describes a complete "application") can be managed. You can, through this API, power on (provision, launch, etc...) and shut down (decommission, power off, etc...) a given vApp. A programmable platform can leverage those APIs to enable rapid elasticity.
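
As a rough sketch of what that looks like in practice - the endpoint paths and version header follow vCloud REST API conventions, but the host, credentials, and vApp identifier below are placeholders - the platform (or a script it triggers) authenticates and then invokes the powerOn action:

```python
# Sketch only: power on a vApp through the vCloud REST API.
# Host, credentials, and the vApp ID are placeholders; error handling
# and task polling are omitted for brevity.
import requests

VCLOUD = "https://vcloud.example.com/api"
headers = {"Accept": "application/*+xml;version=5.5"}

# 1. Authenticate; the response carries an x-vcloud-authorization token.
session = requests.post(f"{VCLOUD}/sessions",
                        auth=("user@org", "password"),
                        headers=headers)
headers["x-vcloud-authorization"] = session.headers["x-vcloud-authorization"]

# 2. Power on (provision/launch) the vApp that backs the application.
vapp_id = "vapp-f0e1d2c3"  # placeholder identifier
task = requests.post(f"{VCLOUD}/vApp/{vapp_id}/power/action/powerOn",
                     headers=headers)
task.raise_for_status()    # the API returns a Task that can be polled for completion
```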

HERE COMES the (COMPUTER) SCIENCE
Interestingly, a scalable application consists of at least two things: a load balancing service and a pool of application resources. How far you can take that to implement actual elasticity depends entirely on the availability of APIs through which you can manage application instances (virtual machines) and the programmability of the platform upon which the load balancing service is deployed.

Assuming you have what you need, your environment looks something like this:

[Figure: capacity limits reached]
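
In code terms, a minimal model of that environment - a load balancing service fronting a pool of instances, with hooks into the infrastructure API - might look like this (all names here are hypothetical):

```python
# Hypothetical model of the environment pictured above: a pool of
# application instances behind a load balancing service, plus callables
# supplied by the infrastructure API (e.g., the vCloud calls earlier).
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Instance:
    address: str
    healthy: bool = True          # maintained by health monitoring
    max_connections: int = 500    # per-instance capacity limit

@dataclass
class LoadBalancedService:
    virtual_address: str
    pool: List[Instance] = field(default_factory=list)
    provision: Optional[Callable[[], Instance]] = None
    decommission: Optional[Callable[[Instance], None]] = None

    def total_capacity(self) -> int:
        return sum(i.max_connections for i in self.pool if i.healthy)

    def utilization(self, active_connections: int) -> float:
        return active_connections / self.total_capacity()
```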

Health monitoring enables the service platform to know when there's a problem, which is critical for high availability. The platform also understands the capacity limits of each virtual instance. Now, let's say demand suddenly spikes (the source is irrelevant). Given its strategic location, the service platform is the first system in the data center able to recognize that capacity limits are about to be breached.

The question is, what does the service platform do about that?

Well, in most systems it's just going to share that information with an external orchestration/management platform. In most cases, in fact, it's going to share it passively; that is, the information won't reach the management platform until the management platform asks for it. It's a polling system, not a proactive push from the service platform.
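
Here's what that passive pattern boils down to - a polling loop on the management side. The stats endpoint, response shape, and interval are invented for illustration:

```python
# Illustrative polling loop run by the management platform. Note the gap:
# demand can spike and breach capacity limits well inside one poll interval.
import time
import requests

POLL_INTERVAL = 60  # seconds

def poll_once() -> None:
    stats = requests.get("https://service-platform.example.com/stats/capacity",
                         timeout=5).json()
    if stats["utilization"] >= 0.8:
        print("threshold crossed - begin provisioning")  # stand-in for orchestration

while True:
    poll_once()
    time.sleep(POLL_INTERVAL)
```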

That means capacity may be reached - and users may start experiencing delays or even time-outs - before the management platform even knows there's a problem.

To realize elasticity that actually ensures high availability and responsiveness of applications, we need something more proactive. The service platform must participate, either by pushing a capacity-reached notification to the management platform or, if it's able to, by simply instructing the management platform to provision the additional resources necessary to ensure capacity is expanded before limits are breached.
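
Contrast that with a push: when the service platform itself detects the threshold crossing, it immediately notifies (or instructs) the orchestration platform. Again, the URL and payload here are illustrative:

```python
# Illustrative push: the service platform reacts to the capacity event the
# moment it occurs, rather than waiting to be polled.
import requests

def on_capacity_threshold(pool_name: str, utilization: float) -> None:
    event = {
        "event": "capacity-threshold-reached",
        "pool": pool_name,
        "utilization": utilization,
        "requested_action": "provision",  # or just notify and let the orchestrator decide
    }
    requests.post("https://orchestrator.example.com/events", json=event, timeout=5)
```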

[Figure: capacity limits expanded]

This requires that the service platform be programmable not only at the control and data path planes, but also at the configuration layer, on an event-driven basis: it must be able not only to recognize a breach of thresholds, but to act on that breach. In this case, acting means initiating a process that results in the provisioning of the additional resources required to ensure continued availability and performance. As demand wanes, another threshold can trigger the reverse process, in which an instance is decommissioned*.

Now, you can certainly design your data center in myriad ways to deal with elasticity and availability. But if you've got the right service platform - one that's programmable in more than just the traditional two-pronged approach - then you've got another option available to you: one that's more proactive and that leverages an existing strategic point of control in your network architecture.

* Provisioning is easier and less complex than decommissioning, as the latter requires careful attention to existing connections and essentially means the load balancing service must manage the process of stopping new connections while maintaining existing ones until users complete their session (quiescence).
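
For completeness, here's what that quiescence flow looks like in outline; the instance object, the connection-count callable, and the power-off callable are hypothetical stand-ins for whatever the platform and infrastructure API actually provide:

```python
# Illustrative quiescence flow: stop sending new connections to an instance,
# wait for existing sessions to drain, then power it off.
import time

def drain_and_decommission(instance, active_connections, power_off,
                           poll_seconds: int = 10, max_wait: int = 600) -> None:
    instance.accept_new = False                   # take it out of rotation
    waited = 0
    while active_connections(instance) > 0 and waited < max_wait:
        time.sleep(poll_seconds)                  # let existing sessions complete
        waited += poll_seconds
    power_off(instance)                           # e.g., the vCloud powerOff action
```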

Read the original blog entry...

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
