Containers Expo Blog: Article

2013 Predictions: Private Cloud Is Really "Cloud-Washed Virtualization"

Private Cloud exposed as a fraud

If you're an IT manager calling your internal VMware or other virtualization farm a "Private Cloud" in an attempt to convince your leadership that "public cloud is insecure" or that "I built the same thing as Amazon Web Services (AWS)," get ready for a dose of reality in the coming year.

Server-huggers, beware: you might have been able to get away with it until now, but 2013 will mark a turning point in which the term Private Cloud is permanently exposed for what it is... a capital-intensive, server-stacking virtualization game.

Just because you have the flexibility to decide how much RAM to assign to a VM doesn't give you the right to "cloud-wash" your internal IT operation and call it something it's not. It may be Private (can someone remind me why it's important to be able to touch your servers?), but it's certainly not Cloud.

Not that there's anything wrong with that...

As Jerry Seinfeld so famously said: I'm not saying there's anything wrong with running an IT shop where you still spend lump sums of capital (CapEx) on physical resources, especially if you're working to make those resources flexible and reliable by optimizing your data center, virtualizing, and adopting best practices like continuous monitoring and agile development.

Just don't use the word "Cloud," because your business users and C-level leadership are getting smarter every day about the incredible economic advantages, real security story, and global scalability benefits of public cloud.

In short, selling them a story like "my private cloud is the same as AWS, but more secure because it's on-premise" is going to begin to look childish. Worse, it will discount the credibility of the (probably pretty good and still very useful) internal IT environment that you've worked so hard to build.

If you physically touched it, estimated your peak demand before buying, and/or don't pay a recurring OpEx fee... IT'S NOT CLOUD.

Tightening definitions

The definition of "Cloud" will also tighten further in 2013, reserved only for systems that allow you to:

• transform your IT into only operational expenditures (OpEx)

• go global in minutes

• never have to guess your initial or future capacity

Despite all the marketing from old-guard IT and large virtualization software companies claiming that building your own Cloud is the best way to go, your Private Cloud still:

• is a large capital expense (CapEx)

• rarely allows even the largest installs to go global in minutes

• makes you commit to an upfront minimum and requires you to predict future capacity
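The capacity-guessing point above can be made concrete with a back-of-the-envelope comparison. The numbers below are purely hypothetical and for illustration only: one model provisions hardware for a predicted peak up front (CapEx), the other pays only for the server-hours actually consumed (OpEx).

```python
# Hypothetical, illustrative comparison of CapEx vs. OpEx provisioning.
# CapEx: buy enough hardware to cover the predicted peak, used or not.
# OpEx: pay per server-hour actually consumed (pay-as-you-go).

def capex_cost(peak_servers, cost_per_server):
    """Up-front purchase sized to the estimated peak demand."""
    return peak_servers * cost_per_server

def opex_cost(hourly_demand, price_per_server_hour):
    """Total bill for the server-hours actually used."""
    return sum(hourly_demand) * price_per_server_hour

# A spiky monthly workload: mostly 10 servers, with a brief spike to 100.
demand = [10] * 700 + [100] * 20  # 720 hours in a 30-day month

capex = capex_cost(peak_servers=100, cost_per_server=2000)  # sized for the spike
opex = opex_cost(demand, price_per_server_hour=0.50)        # billed per use

print(f"CapEx (provision for peak): ${capex:,}")
print(f"OpEx  (pay per use):        ${opex:,.0f}")
```

With these made-up numbers, the peak-provisioned estate costs $200,000 up front while the pay-per-use bill for the same workload is $4,500 for the month. Real pricing varies wildly, but the structural point stands: the spikier the demand, the worse guessing your peak in advance works out.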

In his recent keynote at the Amazon Web Services re:Invent conference, SVP Andy Jassy put it in the best perspective I've heard yet, giving six simple items that differentiate the burden of private from the value of public. You can watch his keynote on YouTube; check out around minute 32 for the best Private Cloud bashing.

It's okay, just try a little bit... it won't hurt you.

Remember those drug prevention classes in middle school (was it called D.A.R.E. everywhere or was that just an Ohio thing?) where the police officers would come and tell you the dangers of drugs and how they get you hooked by getting you to just try a little bit?

"Don't even do it once," they would say, "Because if you try it once, you'll be hooked for life!"

Well, it seems the private cloud-loving internal IT folks were all sitting in the front row during those officer presentations, because they took the advice a little too seriously and applied it to public cloud adoption too.

"The best thing about public cloud is it's cheaper to fail than belabor conversations about whether to try it or not." - Me

Internal IT will remain greatly relevant

Don't worry internal IT, you'll still be greatly needed by your company in 2013 and well beyond because there absolutely is a place for flexible, private infrastructure in today's IT.

Organizations that have invested millions of capital in IT hardware, software, networking, and human resources would be completely insane to throw it all away today and move everything to public cloud tomorrow. In the same breath, though, I would also call these organizations insane to keep piling investment into more private resources, given the extreme economic, scalability, and functionality advantages of public cloud.

Over the coming years, even very large internal IT groups simply won't be able to keep up with the rate of innovation, security, and scale that public cloud operators will achieve.

Internal IT will also face tough competition from rogue business users going outside internal IT to get what they need from public cloud with something as simple as a credit card swipe. Internal IT may think the best weapon against this is a strict lock-down policy where business users get punished for going rogue, but a moratorium on public cloud only hampers corporate innovation and creates animosity between teams. I suggest there is another answer for internal IT... Embrace, broker, and support.

Although easier said than executed correctly, brokering both public and private IT services, while supporting business users on both, will be the key function for internal IT groups that want to stay relevant to the business, and even thrive, in 2013 and beyond.

Disclaimer: These predictions assume the world does not end on December 21, 2012, as the Mayan calendar predicts. If we never reach 2013, I reserve all rights to drastically modify these predictions.

More Stories By Ryan Hughes

Ryan Hughes, blogging at www.RyHug.com, is the co-founder and Chief Strategy Officer (CSO) of Skygone (www.skygoneinc.com), a cloud computing solution provider to SIs, ISVs, commercial, and government customers. Education: MBA in Project Management from Penn State University; BS in GIS from Bowling Green State University. Ryan has 10 years of experience in enterprise-level IT program management and operations management, as well as vast experience in enterprise system design and cloud implementation methodology.
