Licensed to Print Money (In the Cloud)

New approaches are emerging with the goal of minimizing upfront investments in both hardware and software

One of the major issues facing cloud service providers is the expense of building out infrastructure without knowing how or when revenues will follow. As a result, cloud providers are reevaluating their approach to hardware and software investments and engaging with technology and networking vendors to develop creative pricing models that are aligned with cloud business principles and engineered to reduce risks.

In a perfect world, cloud service providers would pay for infrastructure only after a customer has made a purchase - in order to maintain a tight correlation between revenues and expenses. In the real world, however, implementing this type of model is easier said than done. This was especially true during the 'iron age,' when hardware and software were tightly coupled and there were very few alternatives to the big vendors that focused on selling high-dollar, high-performance networking gear.

Today, however, new approaches are emerging with the goal of minimizing upfront investments in both hardware and software. One of the most compelling new approaches takes advantage of performance improvements in server virtualization. Leveraging virtualization has several benefits, one of which is the ability to maximize the use of compute resources. Although specialized hardware can provide some performance advantages, the downside is that unused compute resources cannot be utilized or shared by other functions. For many service providers operating on thin margins, idle resources that cannot be monetized can mean the difference between profit and loss.

In contrast, virtualized server resources can be spun up and spun down to perform a wide range of functions and enable a wide selection of services. While hardware will always remain a large expense, standardizing on server virtualization generates savings in the form of volume discounts and also provides the flexibility to ensure maximum ROI is being extracted from the infrastructure. In other words, a virtual machine that ran load balancing yesterday for one customer could just as easily run application services today for a different customer.
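
As a minimal sketch of this flexibility (all names and roles hypothetical), the same pooled virtual machine can be torn down from one function and redeployed to another, so no compute sits idle between paying workloads:

```python
# A sketch of repurposing a pooled VM; names are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VirtualMachine:
    vm_id: str
    role: str = "idle"                 # e.g., "load-balancer", "app-services"
    tenant: Optional[str] = None

def reassign(vm: VirtualMachine, role: str, tenant: str) -> VirtualMachine:
    """Spin the VM down from its old function and up in a new one."""
    vm.role, vm.tenant = role, tenant
    return vm

vm = VirtualMachine("vm-042")
reassign(vm, "load-balancer", "customer-a")   # yesterday's workload
reassign(vm, "app-services", "customer-b")    # today: same hardware, new revenue
print(vm)
```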

The other major benefit of server virtualization is the ability to decouple expensive software and networking functionality from the underlying hardware. While hardware must be purchased up front to provide the foundation for services, software need not be; it can be purchased on demand. On-demand software lets service providers act as real-time technology resellers - seamlessly making products from a range of technology vendors available to their customers - or purchase licenses in direct proportion to the scale and services their customers actually buy.

A notable example of this trend is software-based networking products - as exemplified by virtual load balancing, secure access and WAN optimization. Instead of focusing only on high-performance dedicated hardware, networking vendors now give cloud service providers the ability to run essential functions in virtualized environments on commodity servers. By enabling service providers to move application networking functions onto a common virtualized server infrastructure - and by focusing on integration with orchestration and cloud management systems - networking vendors are significantly lowering the cost of service creation.

Although significant, this shift is only the beginning. Service providers are demanding even greater creativity from technology and networking vendors. Although virtualized solutions lower capex and opex and provide the ability to respond quickly to customer needs, they still require service providers to invest up front in costly perpetual licensing. Even with the advent of lower-cost and time-bound subscription models, service providers are still required to purchase licenses up front on the expectation of turning a profit as they resell services.

In response, some technology and networking vendors are breaking new ground and turning traditional pricing models on their heads. For example, a vendor may forgo traditional licensing schemes and instead charge service providers a nominal amount to run software load balancing in a virtualized environment. In this model, service providers pay nothing for user licenses until they begin ramping customers.
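
To make the contrast concrete, here is a hedged illustration (all prices invented) comparing a provider's cumulative software outlay under a traditional perpetual license with a pay-as-you-ramp model, where fees accrue only as paying customers come on board:

```python
# Invented prices: a perpetual license paid before any revenue exists,
# versus per-customer fees owed only as customers actually ramp.

PERPETUAL_LICENSE = 25_000        # paid up front, on day one
FEE_PER_CUSTOMER_MONTH = 40       # owed only for active, paying customers

def cumulative_costs(active_customers_by_month: list) -> tuple:
    perpetual = PERPETUAL_LICENSE
    pay_as_you_ramp = FEE_PER_CUSTOMER_MONTH * sum(active_customers_by_month)
    return perpetual, pay_as_you_ramp

# A slow first year: no customers for a quarter, then a gradual ramp.
print(cumulative_costs([0, 0, 0, 5, 10, 20, 30, 40, 50, 60, 70, 80]))
# (25000, 14600) -- the usage model tracks revenue instead of preceding it
```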

Whether the load balancing is used "under-the-cloud" to support and scale software services, or made available "over-the-cloud" as infrastructure services, these new models offer service providers "nap-of-the-earth" pricing. In other words, the service provider pays the networking vendor in direct proportion to demand.

Customers simply purchase services on demand and are charged by the service provider according to metrics such as time or bandwidth. As customers pay, the cloud service provider and the application delivery networking vendor generate revenue simultaneously, based on a predetermined arrangement. Depending on the type of networking service being resold and the nature of the service provider's business model, billing could be driven by throughput, users, connections, transactions per second, or any other metric that provides the most compelling offering to customers and the greatest margin to the service provider.
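
A rough sketch of such metered billing (the rates and the revenue-share percentage are hypothetical) might look like the following, with the provider and the vendor booking revenue from the same usage record:

```python
# A hedged sketch of usage-based billing; rates and the vendor's share
# are invented. The provider charges the customer per unit of metered
# consumption, and the networking vendor is paid a predetermined share
# of that same metered revenue.

USAGE_RATES = {               # price per unit charged to the customer
    "gb_transferred": 0.08,
    "concurrent_users": 0.50,
    "transactions": 0.0002,
}
VENDOR_SHARE = 0.15           # vendor's predetermined cut

def bill(usage: dict) -> tuple:
    revenue = sum(USAGE_RATES[metric] * qty for metric, qty in usage.items())
    return round(revenue, 2), round(revenue * VENDOR_SHARE, 2)

provider_revenue, vendor_revenue = bill(
    {"gb_transferred": 1200, "concurrent_users": 85, "transactions": 3_000_000}
)
print(provider_revenue, vendor_revenue)  # both parties earn as the customer pays
```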

On the flip side, for these new business models to gain traction, billing and provisioning for customers, providers and vendors must be automated and integrated with service provider cloud management systems. For technology and networking vendors, that means putting as much effort into orchestration as they put into core capabilities - an effort that is well underway.
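
As a simple illustration of that integration (the event shape and field names are invented), the orchestration layer might emit a provisioning event that writes a single record driving both the customer's charges and the vendor's revenue share:

```python
# A sketch of billing/provisioning integration; the event shape and
# field names are invented. One orchestration callback records both
# what the customer consumes and which vendor SKU earns a share.

import json
import time

billing_ledger: list = []

def on_service_provisioned(event: dict) -> None:
    """Called by the cloud management system when a service spins up."""
    billing_ledger.append({
        "timestamp": time.time(),
        "tenant": event["tenant"],
        "service": event["service"],        # e.g., "virtual-load-balancer"
        "vendor_sku": event["vendor_sku"],  # drives the vendor revenue share
    })

on_service_provisioned({
    "tenant": "customer-a",
    "service": "virtual-load-balancer",
    "vendor_sku": "vlb-standard",
})
print(json.dumps(billing_ledger, indent=2))
```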

By standardizing on a common hardware architecture, leveraging virtualization to maximize ROI, decoupling software from hardware and evolving to new on-demand software business models, cloud providers dramatically improve their ability to turn a profit. With less risk and less outlay, service providers are better able to manage and grow their businesses, and a rising tide lifts both service providers and the technology and networking vendors that serve them. As the cloud provider's business increases, technology and networking vendors can also build automated volume discounts into the business model, triggered to encourage growth and further control costs.
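
One way such automated discounts could work - sketched here with invented tiers - is a stepped per-unit license fee that falls as the provider's monthly volume grows:

```python
# Invented tiers: as monthly volume grows, the per-unit license fee
# the provider pays the vendor steps down automatically.

DISCOUNT_TIERS = [            # (monthly-unit threshold, per-unit fee)
    (0, 1.00),
    (10_000, 0.85),
    (100_000, 0.70),
]

def per_unit_fee(monthly_units: int) -> float:
    fee = DISCOUNT_TIERS[0][1]
    for threshold, tier_fee in DISCOUNT_TIERS:
        if monthly_units >= threshold:
            fee = tier_fee
    return fee

print(per_unit_fee(5_000))    # 1.00 -- list price while volume is small
print(per_unit_fee(50_000))   # 0.85 -- discount triggered automatically
```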

As cloud computing and cloud services continue to accelerate, cloud providers - whether offering software, platform or infrastructure services - are well advised to seek out technology and networking vendors with a broad suite of virtualized offerings and a willingness to work with providers on pricing strategies and business models that enhance profitability for both parties.

More Stories By Paul Andersen

Paul Andersen is the Marketing Manager at Array Networks. He has over 15 years’ experience in networking, and has served in various marketing capacities for Cisco Systems, Tasman Networks and Sun Microsystems. Mr. Andersen holds a Bachelor’s Degree in Marketing from San Jose State University.
