Cloud Computing: The Next Generation of Computing & Sustainable IT

The next generation of cloud computing will be the increase in clouds for vertical markets

I have been asked to moderate a cloud computing discussion at Green Gov 2012. The title of the session is “Cloud Computing: The Next Generation of Computing and Sustainable IT”. It is a great honor to be selected to participate as moderator; I believe this is my second go-around. As National Director of Cloud Services with Core BTS, Inc., it is my job to articulate the value of cloud computing. I have been pondering the title a bit, and to actually discuss the next generation of cloud, we first have to assess the current situation. The cloud has gone way beyond Google Mail and Salesforce (CRM), into other areas like Cloud Security, Cloud Storage, and Cloud Backup. Furthermore, we must actually define our idea of cloud computing and sustainable IT, because not everyone is on the same page.

What Is Cloud Computing?
NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. My own definition is more to the point: I consider cloud computing to be Information Technology as a Utility Service. To be clear, I find cloud computing no different than managed services. It doesn’t matter whether you utilize software as a service, platform as a service, or infrastructure as a service; the idea is to treat IT as a utility service to save overall costs.

What Is Sustainable IT?
I define Sustainable IT as energy efficient computing from the desktop to the data center, from hardware to software, from the network to the virtual cloud. Today I will focus mainly on Cloud Computing. For all intents and purposes, Cloud Computing is Sustainable IT. How can I say that? It’s simple math. Cloud computing, done right, can save an organization 50% to 80% in TCO. The timing could not be better. With a struggling economy, corporations are looking for ways to cut costs. When you get past the internal politics, the cloud hype cycle, and take a deep dive into the total cost of running an IT shop, you will be enlightened.

A remarkable shift has occurred in the past four years where sustainability and IT intersect: CEOs and CFOs have been getting involved with IT budgets. Server sprawl and data center energy costs have become a major factor in the cost of doing business. A big mistake C-level execs make is the fuzzy math used to calculate TCO for the enterprise. There is a strong tendency to calculate hardware and software costs only. To get an accurate TCO, you must take the following items into consideration:

  • Hardware
  • Software
  • Maintenance
  • People
  • Facilities
  • Power & Cooling
  • Redundancy
  • Storage
  • Bandwidth
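
To see how badly the fuzzy math understates the real number, here is a minimal sketch of the calculation. All of the dollar figures are hypothetical placeholders for illustration only; plug in your own organization's numbers.

```python
# Hypothetical annual cost figures for illustration only -- the point is
# that hardware + software is a small fraction of the true TCO.
cost_items = {
    "hardware": 120_000,
    "software": 80_000,
    "maintenance": 40_000,
    "people": 300_000,
    "facilities": 60_000,
    "power_and_cooling": 45_000,
    "redundancy": 35_000,
    "storage": 25_000,
    "bandwidth": 15_000,
}

# The "fuzzy math" most C-level execs use: hardware and software only.
fuzzy_tco = cost_items["hardware"] + cost_items["software"]

# The accurate TCO: every line item, all nine categories.
true_tco = sum(cost_items.values())

print(f"Fuzzy TCO (hardware + software only): ${fuzzy_tco:,}")
print(f"True TCO (all nine items):            ${true_tco:,}")
print(f"Understated by: {1 - fuzzy_tco / true_tco:.0%}")
```

With these placeholder figures, counting only hardware and software misses more than two-thirds of the true cost, which is exactly why cloud comparisons look unconvincing until you do the full accounting.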

When all is said and done, you may pay only a third of the cost of running your own IT shop. A classic example is Google saving the General Services Administration (GSA) $15M over a five-year period. GSA had 17,000 employees using Lotus Notes. Imagine the upgrade path if they had not considered going with Gmail: a logistical nightmare requiring several skill sets that are, most likely, obsolete. Nevertheless, they managed to cut their budget for email in half across the entire agency. Because of the new technology Google offers, they were able to integrate video chat and document-sharing capabilities, as well as mobile devices. The USDA reduced its per-user cost for email from $150 to $100. The Department of Homeland Security (DHS) cut its per-user cost for email from $300 to $100.
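
The per-user figures above translate into agency-wide savings quickly. A quick sketch, using the per-user costs cited in this article but hypothetical user counts (substitute your agency's actual headcount):

```python
# Per-user annual email costs are the figures cited above; the "users"
# counts are hypothetical placeholders for illustration.
agencies = {
    "USDA": {"before": 150, "after": 100, "users": 120_000},
    "DHS":  {"before": 300, "after": 100, "users": 240_000},
}

for name, a in agencies.items():
    annual_savings = (a["before"] - a["after"]) * a["users"]
    pct_cut = 1 - a["after"] / a["before"]
    print(f"{name}: ${annual_savings:,}/year saved "
          f"({pct_cut:.0%} per-user reduction)")
```

Even modest per-user reductions compound into millions per year at agency scale, which is why email was the natural first workload to move.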

Just with email we start to see significant savings in the cloud. So what's next?

Next Generation Cloud Computing
We are currently seeing industry-specific applications going to the cloud. Cloud commoditization is creeping up and down the stack and into different industries, driving a great deal of collaboration. Forrester Research predicts all cloud markets will continue to grow, and that the total cloud market will reach about $61B by the end of 2012. With this continual increase in cloud usage, we will run into cloud sprawl. This has gotten me excited about my position here at Core BTS. We specialize in two key areas that every organization on the planet will need in order to meet compliance: one being security, the other being disaster recovery. Cyber-attacks are a fact of life in today's world. Natural disasters, terrorist attacks, and system failures are commonplace.

Cloud Security
What are the biggest predictions for information security? We will need more of it. Just think about all the areas that prompt a call to action: cloud sprawl, mobile devices, social media, malware, wireless. Information security is no longer a niche market; it is a must-have. It has to go mainstream because the market demands it. Larger organizations will purchase boutique firms to shore up their share of the market. We partner with Trustwave, which allows us to offer four compelling solutions:

  1. Compliance
  2. Managed Security Services
  3. Spiderlabs
  4. Unified Security

Just keeping up with compliance is a monumental task. Our partnership allows us to help our clients build a strong strategy to address their regulatory requirements, such as PCI, HIPAA, SOX, GLBA, FISMA, ISO, and DLP. The demand for information security governance has prompted a document called 20 Critical Security Controls for Effective Cyber Defense: Consensus Audit Guidelines. This guideline alone should be all the more reason to put your security in the cloud. The cost to manage information security and the 20 Critical Security Controls below is staggering: you would need specialized hardware, software, people, and infrastructure.

20 Critical Security Controls – Version 3.1

  • Critical Control 1: Inventory of Authorized and Unauthorized Devices
  • Critical Control 2: Inventory of Authorized and Unauthorized Software
  • Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
  • Critical Control 4: Continuous Vulnerability Assessment and Remediation
  • Critical Control 5: Malware Defenses
  • Critical Control 6: Application Software Security
  • Critical Control 7: Wireless Device Control
  • Critical Control 8: Data Recovery Capability
  • Critical Control 9: Security Skills Assessment and Appropriate Training to Fill Gaps
  • Critical Control 10: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
  • Critical Control 11: Limitation and Control of Network Ports, Protocols, and Services
  • Critical Control 12: Controlled Use of Administrative Privileges
  • Critical Control 13: Boundary Defense
  • Critical Control 14: Maintenance, Monitoring, and Analysis of Security Audit Logs
  • Critical Control 15: Controlled Access Based on the Need to Know
  • Critical Control 16: Account Monitoring and Control
  • Critical Control 17: Data Loss Prevention
  • Critical Control 18: Incident Response Capability
  • Critical Control 19: Secure Network Engineering
  • Critical Control 20: Penetration Tests and Red Team Exercises

According to National Defense Magazine, we may be on the verge of a cyber-war in 2012. There have been numerous, almost daily, reports about China and other adversaries penetrating U.S. networks. Indeed, cyber security has been gaining lots of media attention. Targeted, zero-day attacks will be the norm. Cybercriminals will adapt to the new cloud-based protections, looking for new ways to exploit networks. It’s a never-ending battle. Smartphones will be a target, simply because they are connected. Rogue Android and iPhone apps are just the beginning. Cyber security is here to stay.

Cloud Back Up & Disaster Recovery
If you have sat around a computer in a corporate atmosphere as long as I have, chances are you have suffered panic or frustration with systems going down, wondering whether you lost customer information or whether that draft document you were working on was saved. It doesn’t have to be an event brought on by Mother Nature; it can be something simple like a server crashing. Disaster recovery is changing to adapt to the overall changes in IT. IT as a commodity is fast becoming the de facto standard, so merely backing up data is not enough: we need to secure it and make it readily available, and we have to do that in the most secure and effective way. In the past, DR was a very costly measure to keep systems up and running. We had to duplicate existing hardware, which is expensive, and we had to test the DR plan, which was time-consuming.

Our partnership with EVault helps us help our clients back up data to the DR site without violating standards for privacy and security. The HIPAA regulations regarding the security of digitally stored information are complex and difficult to follow. Outsourcing this function to the cloud helps you meet compliance, while saving on cost.

In summary, the next generation of cloud computing will be the increase in clouds for vertical markets, increase in cloud services up and down the stack, and the market demand for Cloud Security and Cloud Disaster Recovery.

More Stories By Terell Jones

Mr. Jones is the National Director of Cloud Services with Core BTS, Inc., a $180M corporation. He is based out of Fairfax, VA and handles the eastern region for cloud computing. After serving in the first Gulf War in the U.S. Navy, Mr. Jones entered the IT field in 1995. He has over 17 years in Information Technology in the fields of Green IT, Cloud Computing, Virtualization, and Managed Services. He is internationally known as “the Green IT Guy,” specializing in energy-efficient computing from the desktop to the data center, from hardware to software, from the network to the virtual cloud. He has served as the Deputy Director at the Green IT Council since 2010.
