Cloud Computing: Knowledge Leads to a Change in Thinking

An exclusive Q&A with Chetan Patwardhan, CEO of Stratogent

"The basic premise for any central computing system optimized for mass consumption is the 80/20 rule. It can be built only to serve 80% of the needs in an economized and optimized fashion," noted Chetan Patwardhan, CEO of Stratogent, in this exclusive Q&A with Cloud Expo Conference Chair Jeremy Geelan. "Having said that," Patwardhan continued, "the so-called cloud economics works only for a certain type of system and is outright prohibitively expensive for most enterprise setups, where a typical three-year cost view depends more on human labor than on the infrastructure."

Cloud Computing Journal: Just having the enterprise data is good. Extracting meaningful information out of this data is priceless. Agree or disagree?

Chetan Patwardhan: Agree 100%. Let's look at the value creation process: data is nothing but innumerable floating points of reference. The gathering of data is the very first step. Creating useful information out of data is truly a daunting task because it's not based on the complexity of data, but the simplicity of information that leads to the creation of knowledge. For the CEO of a large company, a dozen key information sets presented in up/down or chart format can create knowledge of how the company is performing. Knowledge leads to a change in thinking, sometimes creating paradigm shifts in how companies approach challenges. Changes in thinking bring about decision making and changes in the behavior of the organization. This chain reaction finally leads to success.

One key here lies in the ability of the information system to let consumers of data effortlessly traverse from data sets to concise, simple information and vice versa. For example, if a simple graph shows the market share of a product worldwide, that's great information. Then, there should be the ability to click on that graph and keep drilling down to continents, regions, countries, states, cities, and finally stores. In other words, neither the data by itself nor the one-way extraction of information is an answer in itself without this ability to traverse back and forth, pivot, report, represent, and share with the ease of point-and-click.
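The drill-down idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not any particular BI product's API: the records, field names, and the `drill_down` helper are all invented for the example. The point is that one set of raw records supports every level of the hierarchy, so a consumer can move from a worldwide summary down to individual stores and back.

```python
from collections import defaultdict

# Invented sample records: (continent, country, city, store, units sold).
SALES = [
    ("Europe", "France",  "Paris",  "Store-1", 120),
    ("Europe", "France",  "Paris",  "Store-2",  80),
    ("Europe", "Germany", "Berlin", "Store-3", 200),
    ("Asia",   "Japan",   "Tokyo",  "Store-4", 150),
]

def drill_down(records, depth):
    """Roll up the measure (last field) at the first `depth` hierarchy levels."""
    totals = defaultdict(int)
    for row in records:
        key = row[:depth]       # e.g. ("Europe", "France") at depth 2
        totals[key] += row[-1]  # last field is the measure (units)
    return dict(totals)

print(drill_down(SALES, 1))  # worldwide view, by continent
print(drill_down(SALES, 2))  # one click deeper: by country
print(drill_down(SALES, 4))  # all the way down to individual stores
```

Traversing "back and forth" is then just calling the same function with a smaller or larger `depth`; a real system would add pivoting and filtering on top of the same records.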

Finally, let's revise the chain reaction: collection of good data leads to meaningful information. Information leads to knowledge, which in turn leads to changes in behavior and critical decision making. Not just the success, but the survival of enterprises, more than ever before, will be dictated by their ability to collect and convert data into precisely timed good decision making!

Cloud Computing Journal: Forrester's James Staten: "Not everything will move to the cloud as there are many business processes, data sets and workflows that require specific hardware or proprietary solutions that can't take advantage of cloud economics. For this reason we'll likely still have mainframes 20 years from now." Agree or disagree?

Patwardhan: Well, define mainframe and cloud. I thought they were synonymous :). Before the concept of cloud, the mainframe was the cloud. Back in the day, we connected to that cloud via the so-called dumb terminals. For those old enough to have used IBM PROFS messaging, it was the first instant email and instant messenger system in one. And it worked really well! The limitations of the cloud today and the mainframe then are the same.

The basic premise for any central computing system optimized for mass consumption is the 80/20 rule. It can be built only to serve 80% of the needs in an economized and optimized fashion. Having said that, the so-called cloud economics works only for a certain type of system and is outright prohibitively expensive for most enterprise setups, where a typical three-year cost view depends more on human labor than on the infrastructure.

Now, to the point: can the cloud replace, say, 99% of all conventional computing? Certainly not any time in the near future. There are several reasons for this. First, let's admit that most applications were never designed and, as a matter of fact, can't be designed from scratch to run on the cloud. Why? Because fundamentally, there is no standardized, here-to-stay cloud infrastructure that enterprise applications can be written to. Second, as someone who has installed and managed systems for enterprises from startups to Fortune 500 companies, I can tell you that no two sets of information systems look alike, let alone 80% of them. Third, many enterprises need a level of security, customization, and back-end connections (XML, EDI, VPN) that can't be in the cloud without the cloud looking the same as the conventional system. Fourth, there is little transparency and accountability in the cloud when it comes to the ability to audit and maintain compliance levels. And last but not least, if the cloud finally places almost the same burden (minus keeping up the physical servers) on human engineers, where are the economies to be had?

From my perspective, consolidation of human resource talent pools, combined with the ability to leverage the most economical cloud options (IaaS, PaaS, and SaaS) as well as conventional datacenter setups - essentially a super-hybrid approach - will be the way to go.

Cloud Computing Journal: The price of cloud computing will go up - so will the demand. Agree or disagree or....?

Patwardhan: I don't understand why the price of cloud computing would go up. I expect it to remain flat over the next few years. While efficiencies in hardware will reduce the price and/or increase processing capabilities, the overhead of maintaining availability and the pressure to provide a human interface will also increase. As a result, prices will probably remain flat. As for demand, it will increase, but one must first factor in the overall demand for computing, which is constantly on the rise. Since the cloud is one way to satiate that demand, cloud subscriptions will rise too. Of the three types of cloud, to me PaaS and SaaS should generate more demand than pure IaaS, because they both address the cost of human labor. As long as the PaaS and SaaS providers get it right in terms of addressing user needs, demand for those services should rise.

Cloud Computing Journal: Rackspace is reporting an 80% growth from cloud computing, Amazon continues to innovate and make great strides, and Microsoft, Dell and other big players are positioning themselves as big leaders. Are you expecting in the next 18 months to see the bottom fall out and scores of cloud providers failing or getting gobbled up by bigger players? Or what?

Patwardhan: The news of Rackspace reporting 80% growth in cloud computing needs a special lens for viewing! Rackspace's model from day one has been to lease hosted servers, networking gear, and storage. With their cloud solution, they are, hmmm, leasing hosted servers, networking gear and storage! Essentially, the only change in their offering (pre-cloud and cloud) is flexibility and elasticity. It's important to take into account the trajectory of their overall demand, then contrast that against how much of that demand was served by their conventional model versus their cloud model. The cloud model for Rackspace customers is nothing but a little cheaper way to get started.

As for the small and large companies setting up infrastructure farms, adding their differentiated layers of service, it's a phenomenon not unique to the cloud. It happens every time a new bandwagon arrives. For now, it seems that a variety of local, regional and national players are thriving. Again, this is a common phenomenon in any cycle.

What does the landscape look like five years from now? There are three big factors unique to cloud providers. First, it takes real infrastructure (datacenter, equipment) to create a cloud service. Second, infrastructure ages and becomes obsolete quickly in this industry. Third, smaller companies, once past the first or second installation, will struggle: they will either stagnate and die from attrition (easy in the cloud) or die from cash-flow challenges if they find a way to grow.

Wait a minute, is that familiar to you? It's, after all, not that unique, is it? All infrastructure companies, from telecom to trucking, suffer the same fate. Ergo, some will die, and for some customers their cloud will disappear with little notice. Others will find bigger fish that will gobble them up at low prices. It's unlikely for an IaaS cloud provider to be bought by a big player for a handsome price unless they have great momentum, brand, and profitability. I don't expect dramatic events in the next 18 months, but do expect the law of the jungle to prevail over the next five years.

Cloud Computing Journal: Please name one thing that - despite what we all may have heard or read - you are certain is not going to happen in the future, with Cloud and BigData? ;-)

Patwardhan: I find this question amusing because it tempts me to put things in here like "a telco or a bank will never use the cloud to provide their core telecom or banking service." I would have preferred to answer a question that mentioned a few things that are possible candidates for the cloud but, in my opinion, will not happen. Let me leave this thought behind: if there are things that you think will never happen in the cloud, think again. It is only a matter of time before the evolution of secure virtualized and orchestrated platforms, the ingenuity of service providers, and the shortage of qualified human engineers move things to the cloud in a manner we are not willing to think about today.

More Stories By Jeremy Geelan

Jeremy Geelan is Chairman & CEO of the 21st Century Internet Group, Inc. and an Executive Academy Member of the International Academy of Digital Arts & Sciences. Formerly he was President & COO at Cloud Expo, Inc. and Conference Chair of the worldwide Cloud Expo series. He appears regularly at conferences and trade shows, speaking to technology audiences across six continents. You can follow him on Twitter: @jg21.


