Cloud Computing: Knowledge Leads to a Change in Thinking

An exclusive Q&A with Chetan Patwardhan, CEO of Stratogent

"The basic premise for any central computing system optimized for mass consumption is the 80/20 rule. It can be built only to serve 80% of the needs in an economized and optimized fashion," noted Chetan Patwardhan, CEO of Stratogent, in this exclusive Q&A with Cloud Expo Conference Chair Jeremy Geelan. "Having said that," Patwardhan continued, "the so-called cloud economics work only for a certain type of system and is outright prohibitively expensive for most enterprise setups where a typical three-year timeframe cost view is more dependent on human labor than on the infrastructure."

Cloud Computing Journal: Just having the enterprise data is good. Extracting meaningful information out of this data is priceless. Agree or disagree?

Chetan Patwardhan: Agree 100%. Let's look at the value creation process: data is nothing but innumerable floating points of reference. The gathering of data is the very first step. Creating useful information out of data is truly a daunting task because it's not based on the complexity of data, but the simplicity of information that leads to the creation of knowledge. For the CEO of a large company, a dozen key information sets presented in up/down or chart format can create knowledge of how the company is performing. Knowledge leads to a change in thinking, sometimes creating paradigm shifts in how companies approach challenges. Changes in thinking bring about decision making and changes in the behavior of the organization. This chain reaction finally leads to success.

One key here lies in the ability of the information system to let consumers of data effortlessly traverse from data sets to concise, simple information and vice versa. For example, if a simple graph shows the market share of a product worldwide, that's great information. There should then be the ability to click on that graph and keep drilling down to continents, regions, countries, states, cities, and finally stores. In other words, neither the data by itself nor the one-way extraction of information is an answer in itself without this ability to traverse back and forth, pivot, report, represent, and share with the ease of point and click.
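The drill-down idea can be sketched with a toy dataset. This is a minimal illustration of aggregating the same records at successive levels of a geographic hierarchy, not any particular BI tool; all region names and sales figures below are invented.

```python
# Toy drill-down: sum unit sales at any depth of a geographic
# hierarchy, mirroring the click-to-drill idea described above.
# All names and figures are invented for illustration.
from collections import defaultdict

# Each record: (continent, country, city, store, units_sold)
records = [
    ("Europe", "Germany", "Berlin", "Store 1", 120),
    ("Europe", "Germany", "Munich", "Store 2", 80),
    ("Europe", "France",  "Paris",  "Store 3", 150),
    ("Asia",   "Japan",   "Tokyo",  "Store 4", 200),
]

def drill_down(records, level):
    """Sum units at a hierarchy depth: 1=continent, 2=country, ..."""
    totals = defaultdict(int)
    for rec in records:
        key = rec[:level]      # prefix of the hierarchy path
        totals[key] += rec[-1]
    return dict(totals)

print(drill_down(records, 1))  # the worldwide "simple graph"
print(drill_down(records, 2))  # one click down: by country
```

Going "back and forth" is just changing the `level` argument; a real system would add pivoting on other dimensions (time, product line) the same way.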

Finally, let's revise the chain reaction: the collection of good data leads to meaningful information. Information leads to knowledge, which in turn leads to changes in behavior and critical decision making. Not just the success, but the survival of enterprises, more than ever before, will be dictated by their ability to collect data and convert it into precisely timed, good decision making!

Cloud Computing Journal: Forrester's James Staten: "Not everything will move to the cloud as there are many business processes, data sets and workflows that require specific hardware or proprietary solutions that can't take advantage of cloud economics. For this reason we'll likely still have mainframes 20 years from now." Agree or disagree?

Patwardhan: Well, define mainframe and cloud. I thought they were synonymous :). Before the concept of cloud, the mainframe was the cloud. Back in the day, we connected to that cloud via the so-called dumb terminals. For those old enough to have used IBM PROFS messaging, it was the first instant email and instant messenger system in one. And it worked really well! And the limitations of the cloud today and the mainframe then are the same.

The basic premise for any central computing system optimized for mass consumption is the 80/20 rule. It can be built only to serve 80% of the needs in an economized and optimized fashion. Having said that, so-called cloud economics works only for a certain type of system and is outright prohibitively expensive for most enterprise setups, where a typical three-year cost view depends more on human labor than on infrastructure.
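The labor-dominated cost claim can be made concrete with back-of-the-envelope arithmetic. The dollar figures below are invented assumptions chosen only to illustrate the shape of a three-year cost view, not numbers from the interview.

```python
# Hypothetical three-year cost view for an enterprise setup.
# All dollar figures are assumed for illustration only.
YEARS = 3

infra_per_year = 60_000       # servers, storage, network (assumed)
labor_per_year = 2 * 120_000  # two engineers at an assumed salary

infra_total = YEARS * infra_per_year  # 180,000
labor_total = YEARS * labor_per_year  # 720,000
total = infra_total + labor_total

labor_share = labor_total / total
print(f"3-year infrastructure: ${infra_total:,}")
print(f"3-year labor:          ${labor_total:,}")
print(f"labor share of total:  {labor_share:.0%}")  # 80%
```

Under these assumptions, moving only the infrastructure line to the cloud leaves the dominant labor line untouched, which is the point being made above.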

Now, to the point: can the cloud replace, say, 99% of all conventional computing? Certainly not any time in the near future. There are several reasons for this. First, let's admit that most applications were never designed and, as a matter of fact, can't be designed from scratch to run on the cloud. Why? Because fundamentally, there is no standardized, here-to-stay cloud infrastructure that enterprise applications can be written to. Second, as someone who has installed and managed systems for enterprises from startups to Fortune 500 companies, I can tell you that no two sets of information systems look alike, let alone 80% of them. Third, many enterprises need a level of security, customization, and back-end connections (XML, EDI, VPN) that can't be in the cloud without the cloud looking the same as the conventional system. Fourth, there is little transparency and accountability in the cloud when it comes to the ability to audit and maintain compliance levels. And last but not least, if the cloud leaves almost the same burden (minus keeping up the physical servers) on human engineers, where are the economies to be had?

From my perspective, consolidation of human resource talent pools, combined with the ability to leverage the most economical cloud options (IaaS, PaaS, and SaaS) as well as conventional datacenter setups - essentially a super-hybrid approach - will be the way to go.

Cloud Computing Journal: The price of cloud computing will go up - so will the demand. Agree or disagree or....?

Patwardhan: I don't understand why the price of cloud computing would go up. I expect it to remain flat over the next few years. While efficiencies in hardware will reduce the price and/or increase processing capabilities, the overhead of maintaining availability and the pressure to provide a human interface will also increase. As a result, prices will probably remain flat. As for the demand, it will increase, but one must first factor in the overall demand for computing, which is constantly on the rise. Since the cloud is one way to satiate that demand, cloud subscriptions will rise too. Of the three types of cloud, PaaS and SaaS should, to my mind, generate more demand than pure IaaS, because they both address the cost of human labor. As long as the PaaS and SaaS providers get it right in terms of addressing user needs, demand for those services should rise.

Cloud Computing Journal: Rackspace is reporting an 80% growth from cloud computing, Amazon continues to innovate and make great strides, and Microsoft, Dell and other big players are positioning themselves as big leaders. Are you expecting in the next 18 months to see the bottom fall out and scores of cloud providers failing or getting gobbled up by bigger players? Or what?

Patwardhan: The news of Rackspace reporting 80% growth in cloud computing needs a special lens for viewing! Rackspace's model from day one has been to lease hosted servers, networking gear, and storage. With their cloud solution, they are, hmmm, leasing hosted servers, networking gear and storage! Essentially, the only change in their offering (pre-cloud and cloud) is flexibility and elasticity. It's important to take into account the trajectory of their overall demand, then contrast that against how much of that demand was served by their conventional model versus their cloud model. The cloud model for Rackspace customers is nothing but a little cheaper way to get started.

As for the small and large companies setting up infrastructure farms, adding their differentiated layers of service, it's a phenomenon not unique to the cloud. It happens every time a new bandwagon arrives. For now, it seems that a variety of local, regional and national players are thriving. Again, this is a common phenomenon in any cycle.

What does the landscape look like five years from now? There are three big factors unique to cloud providers. First, it takes real infrastructure (datacenter, equipment) to create a cloud service. Second, infrastructure ages and becomes obsolete quickly in this industry. Third, smaller companies, once past their first or second installation, will struggle: either they will stagnate and die from attrition (easy in the cloud), or they will face cash-flow challenges if they find a way to grow.

Wait a minute, does that sound familiar? It's not that unique after all, is it? All infrastructure companies, from telecom to trucking, suffer the same fate. Ergo, some will die, and for some customers their cloud will disappear with little notice. Yet others will find bigger fish that will gobble them up at low prices. It's unlikely for an IaaS cloud provider to be bought by a big player for a handsome price unless it has great momentum, brand, and profitability. I don't expect dramatic events in the next 18 months, but I do expect the law of the jungle to prevail over the next five years.

Cloud Computing Journal: Please name one thing that - despite what we all may have heard or read - you are certain is not going to happen in the future, with Cloud and BigData? ;-)

Patwardhan: I find this question amusing because it tempts me to put things in here like "a telco or a bank will never use the cloud to provide their core telecom or banking service." I would have preferred to answer a question that mentioned a few things that are possible candidates for the cloud but, in my opinion, will not happen. Let me leave this thought behind: if there are things that you think will never happen in the cloud, think again. It is only a matter of time before the evolution of secure, virtualized, and orchestrated platforms, the ingenuity of service providers, and the shortage of qualified human engineers move things to the cloud in ways we are not willing to think about today.

More Stories By Jeremy Geelan

Jeremy Geelan is Chairman & CEO of the 21st Century Internet Group, Inc. and an Executive Academy Member of the International Academy of Digital Arts & Sciences. Formerly he was President & COO at Cloud Expo, Inc. and Conference Chair of the worldwide Cloud Expo series. He appears regularly at conferences and trade shows, speaking to technology audiences across six continents. You can follow him on Twitter: @jg21.


