
Cloud Computing: Creating a Generic (Internal) Cloud Architecture

Do Cloud-like architectures have to remain external to the enterprise? No.

Kenneth Oestriech's Blog

I've been taken aback lately by the tacit assumption that cloud-like (IaaS and PaaS) services have to be provided by folks like Amazon, Terremark and others. It's as if these providers do some black magic that enterprises can't touch or replicate.

However, history has taught the IT industry that what starts in the external domain eventually makes its way into the enterprise, and vice versa. Consider Google, which began with internet search and later offered an enterprise search appliance. Then there's the reverse: an application, say a CRM system, leaves the enterprise to be hosted externally as SaaS, such as Salesforce.com. But even in this case, the first example then recurs -- as Salesforce.com begins providing internal Salesforce.com appliances back to its large enterprise customers!

I am simply trying to challenge the belief that cloud-like architectures have to remain external to the enterprise. They don't. I believe it's inevitable that they will soon find their way into the enterprise, and become a revolutionary paradigm of how *internal* IT infrastructure is operated and managed.

With each IT management conversation I've had, the concept I recently put forward becomes clearer and more inevitable: that an "internal cloud" (call it a cloud architecture, or utility computing) will penetrate enterprise datacenters.

Limitations of "external" cloud computing architectures

Already, a number of authorities have clearly outlined the pros and cons of using external service providers as "cloud" providers. For reference, there is the excellent "10 reasons enterprises aren't ready to trust the cloud" by Stacey Higginbotham of GigaOM, as well as a piece by Mike Walker of MSDN regarding "Challenges of moving to the cloud". So it stands to reason that innovation will work around these limitations, borrowing the positive aspects of external service providers, omitting the negatives, and offering the result to IT Ops.

Is an "internal" cloud architecture possible and repeatable?

So here is my main thesis: that there are software IT management products available today (and more to come) that will operate *existing* infrastructure in a manner identical to the operation of IaaS and PaaS. Let me say that again -- you don't have to outsource to an "external" cloud provider as long as you already own legacy infrastructure that can be re-purposed for this new architecture.

This statement -- and the associated enabling software technologies -- spells the beginning of the final commoditization of compute hardware. (BTW, I find it amazing that some vendors continue to tout that their hardware is optimized for cloud computing. That is a real oxymoron.)

As time passes, cloud-computing infrastructures (OK, Utility Computing architectures if you must), coupled with the trend toward architecture standardization, will continue to push the importance of specialized HW out of the picture. Hardware margins will continue to be squeezed. (BTW, you can read about the "cheap revolution" in Forbes, featuring our CEO Bill Coleman.)

As the VINF blog also observed, regarding cloud-based architectures:

You can build your own cloud, and be choosy about what you give to others. Building your own cloud makes a lot of sense. It's not always cheap, but it's the kind of thing you can scale up (or down) with a bit of up-front investment. In this article I'll look at some of the practical, more infrastructure-focused ways in which you can do so.

Your “cloud platform” is essentially an internal shared-services system where you can actually and practically implement a “platform” team that operates and capacity-plans for the cloud platform; they manage its availability, day-to-day maintenance, and expansion/contraction.

Even back in February, Mike Nygard observed reasons and benefits for this trend:

Why should a company build its own cloud, instead of going to one of the providers?

On the positive side, an IT manager running a cloud can finally do real chargebacks to the business units that drive demand. Some do today, but on a larger-grained level... whole servers. With a private cloud, the IT manager could charge by the compute-hour, or by the megabit of bandwidth. He could charge for storage by the gigabyte, and with tiered rates for different availability/continuity guarantees. Even better, he could allow the business units to do the kind of self-service that I can do today with a credit card and The Planet. (OK, The Planet isn't a cloud provider, but I bet they're thinking about it. Plus, I like them.)
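The metered chargeback model Nygard describes can be sketched as a simple rate calculation. All rates, tier names, and the function below are hypothetical, chosen only to illustrate per-unit billing with tiered storage guarantees:

```python
# Illustrative chargeback calculator for an internal cloud.
# Every rate and tier multiplier here is made up for illustration.

COMPUTE_RATE_PER_HOUR = 0.08      # $ per compute-hour
BANDWIDTH_RATE_PER_MBIT = 0.02    # $ per megabit transferred
STORAGE_RATE_PER_GB = 0.15        # $ per GB-month, base tier

# Tiered multipliers for different availability/continuity guarantees.
STORAGE_TIERS = {"basic": 1.0, "replicated": 1.5, "continuous": 2.5}

def monthly_chargeback(compute_hours, bandwidth_mbits, storage_gb, tier="basic"):
    """Return one business unit's bill for the month, in dollars."""
    compute = compute_hours * COMPUTE_RATE_PER_HOUR
    bandwidth = bandwidth_mbits * BANDWIDTH_RATE_PER_MBIT
    storage = storage_gb * STORAGE_RATE_PER_GB * STORAGE_TIERS[tier]
    return round(compute + bandwidth + storage, 2)

# A business unit that ran 500 compute-hours, moved 1,000 Mbit,
# and kept 200 GB on the replicated tier:
bill = monthly_chargeback(500, 1000, 200, tier="replicated")
print(bill)  # 40 + 20 + 45 = 105.0
```

The point of the sketch is the granularity: instead of charging for whole servers, the internal provider meters each resource dimension separately and prices availability guarantees as tiers.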

We are seeing the beginning of an inflection point in the way IT is managed, brought on by (1) the interest (though not yet adoption) of cloud architectures, (2) the increasing willingness to accept shared IT assets (thanks to VMware and others), and (3) the budding availability of software that allows “cloud-like” operation of existing infrastructure, but in a whole new way.

How might these "internal clouds" first be used?

Let's be real: there are precious few green-field opportunities where enterprises will simply decide to change their entire IT architecture and operations into this "internal cloud" -- i.e. implement a Utility Computing model out-of-the-gate. But there are some interesting starting points that are beginning to emerge:

  • Creating a single-service utility: by this I mean that an entire service tier (such as a web farm or application server farm) moves to being managed in a "cloud" infrastructure, where resources ebb and flow as needed by user demand.
  • Power-managing servers: using utility computing IT management automation to control power states of machines that are temporarily idle, but NOT actually dynamically provisioning software onto servers. Firms are getting used to the idea of using policy-governed control to save on IT power consumption as they get comfortable with utility-computing principles. They can then selectively activate the dynamic provisioning features as they see fit.
  • Using utility computing management/automation to govern virtualized environments: it's clear that once firms virtualize/consolidate, they later realize there are more objects to manage (virtual sprawl), rather than fewer; plus, they've created "virtual silos", distinct from the non-virtualized infrastructure they own. Firms will migrate toward an automated management approach to virtualization where -- on the fly -- applications are virtualized, hosts are created, apps are deployed/scaled, failed hosts are automatically re-created, and so on. Essentially a services cloud.
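The common thread in these starting points is a policy-governed control loop: automation compares demand against active capacity and takes the least invasive action the firm has authorized. The sketch below is hypothetical (the function name, thresholds, and action strings are not any vendor's API); it shows how power management can be adopted first, with dynamic provisioning as an opt-in flag:

```python
# Hypothetical policy-governed control loop for an internal cloud.
# Power-down of idle hosts is always allowed; provisioning new hosts
# happens only if the firm has opted in to dynamic provisioning.

def reconcile(demand, active_hosts, capacity_per_host, dynamic_provisioning=False):
    """Return the list of actions the automation layer would take."""
    # Hosts needed to serve current demand (ceiling division).
    needed = -(-demand // capacity_per_host)
    actions = []
    if needed < active_hosts:
        # Policy: power off machines that are temporarily idle.
        actions += ["power_off"] * (active_hosts - needed)
    elif needed > active_hosts:
        if dynamic_provisioning:
            # Opt-in feature: provision software onto spare servers.
            actions += ["provision_host"] * (needed - active_hosts)
        else:
            # Conservative default: tell a human instead of acting.
            actions.append("alert_operator")
    return actions

# Demand has ebbed: 3 of 10 hosts can be powered off.
print(reconcile(demand=700, active_hosts=10, capacity_per_host=100))
```

This mirrors the adoption path described above: a firm runs the loop with `dynamic_provisioning=False` until it trusts the policy engine, then selectively enables the provisioning branch.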

It is inevitable that the simplicity, economics, and scalability of externally-provided "clouds" will make their way into the enterprise. The question isn't if, but when.

More Stories By Kenneth Oestreich

Ken Oestreich is VP of Product Marketing with Egenera, and has spent over 20 years developing and introducing new products into new markets. Recently, he’s been involved in bringing utility- and cloud-computing technologies to market. Previously, Ken was with Cassatt, and held a number of developer, alliance and strategy positions with Sun Microsystems.

Most Recent Comments
sajai krishnan 08/26/08 09:53:44 PM EDT

Ken
Very much on topic. In our parallel area of cloud storage, we see as much interest in internal/private storage clouds as in external/public storage clouds. Bandwidth and security are clearly reasons to go with a private cloud, whereas getting offsite copies is certainly one reason to consider a public cloud. There is the additional reason that by building your own storage cloud you can tune its performance characteristics by having, for example, beefy, high-performing nodes for streaming, or inexpensive nodes with a lot of disks for archival applications.

As for service providers -- I think we will see service providers delivering typical public services like S3, but they could also provide "insourcing" services, i.e., a service provider managing a dedicated internal cloud for a Fortune 100 data center in a colo model. I think AT&T's recent Synaptic Hosting is probably headed in that direction.

There are a few different ways to skin this cat in terms of implementation. The key is that the technology matures, and that customers get familiar with the commodity scale-out economics and easy management model at the core of this approach.

Regards,
Sajai Krishnan, CEO ParaScale

amuletc 08/25/08 08:14:58 PM EDT

By Dan D. Gutierrez
CEO of HostedDatabase.com

I really like your concept of an "internal cloud"! When my firm launched the web's first Database-as-a-Service offering in 1999, we had a sales option to create a special instance of our product for an enterprise that wanted the convenience of SaaS but was concerned about privacy and security issues. Bringing in our service as an internal cloud solved these issues. Fast-forward nearly 10 years, and it is great to see this concept mentioned in this timely article.
