Cloud Computing: The Next Generation of Computing & Sustainable IT

The next generation of cloud computing will be the increase in clouds for vertical markets

I have been asked to moderate a cloud computing discussion at Green Gov 2012. The title of the session is “Cloud Computing: The Next Generation of Computing and Sustainable IT”. It is a great honor to be selected as moderator; I believe this is my second go-around. As National Director of Cloud Services with Core BTS, Inc., it is my job to articulate the value of cloud computing. I have been pondering the title, and to actually discuss the next generation of cloud, we first have to identify the current situation. The cloud has gone well beyond Google Mail and Salesforce (CRM) into other areas like Cloud Security, Cloud Storage, and Cloud Backup. Furthermore, we must define what we mean by cloud computing and sustainable IT, because not everyone is on the same page.

What Is Cloud Computing?
NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. My own definition is slightly more to the point: I consider cloud computing to be Information Technology as a Utility Service. To be clear, I find cloud computing no different from managed services. It doesn’t matter whether you utilize software as a service, platform as a service, or infrastructure as a service; the idea is to treat IT as a utility service to save overall costs.

What Is Sustainable IT?
I define Sustainable IT as energy-efficient computing from the desktop to the data center, from hardware to software, from the network to the virtual cloud. Today I will focus mainly on cloud computing. For all intents and purposes, cloud computing is Sustainable IT. How can I say that? It’s simple math: cloud computing, done right, can save an organization 50% to 80% in total cost of ownership (TCO). The timing could not be better. With a struggling economy, corporations are looking for ways to cut costs. When you get past the internal politics and the cloud hype cycle and take a deep dive into the total cost of running an IT shop, you will be enlightened.

Something remarkable has occurred in the past four years at the intersection of sustainability and IT: CEOs and CFOs have been getting involved with IT budgets. Server sprawl and data center energy costs have become a major factor in the cost of doing business. A big mistake C-level execs make is the fuzzy math used to calculate TCO for the enterprise; there is a strong tendency to count hardware and software costs only. To get an accurate TCO, you must take into consideration the following items (a rough calculation sketch follows the list):

  • Hardware
  • Software
  • Maintenance
  • People
  • Facilities
  • Power & Cooling
  • Redundancy
  • Storage
  • Bandwidth
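
To make the math concrete, here is a minimal sketch of a TCO comparison using the line items above. All dollar figures are hypothetical placeholders, not data from any real IT shop; plug in your own numbers.

```python
# Hypothetical TCO comparison: every figure below is an illustrative placeholder.
on_prem_annual = {
    "hardware": 120_000,
    "software": 80_000,
    "maintenance": 40_000,
    "people": 300_000,
    "facilities": 50_000,
    "power_and_cooling": 60_000,
    "redundancy": 45_000,
    "storage": 35_000,
    "bandwidth": 20_000,
}

# Assume a cloud provider bundles most of those line items into a subscription.
cloud_annual = {
    "subscription": 220_000,
    "bandwidth": 25_000,
    "people": 100_000,  # a smaller ops team is still needed
}

on_prem_total = sum(on_prem_annual.values())
cloud_total = sum(cloud_annual.values())
savings_pct = 100 * (on_prem_total - cloud_total) / on_prem_total

print(f"On-premises TCO: ${on_prem_total:,}/yr")
print(f"Cloud TCO:       ${cloud_total:,}/yr")
print(f"Savings:         {savings_pct:.0f}%")
```

With these placeholder figures the cloud side comes in at roughly half the on-premises total, in line with the 50% to 80% range above. The point is that the gap only appears once people, facilities, and power are counted, not just hardware and software.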

When all is said and done, you may pay only a third of the cost of running your own IT shop. A classic example is Google saving the General Services Administration (GSA) $15M over a five-year period. GSA had 17,000 employees using Lotus Notes; imagine the upgrade path if they had not considered going with Gmail. It would have been a logistical nightmare, requiring several skill sets that are, most likely, obsolete. Nevertheless, they managed to cut their email budget in half across the entire agency. Because of the new technology Google offers, they were able to integrate video chat and document-sharing capabilities, as well as mobile devices. The USDA reduced its per-user cost for email from $150 to $100, and the Department of Homeland Security (DHS) cut its per-user cost for email from $300 to $100.
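
As a quick sanity check on those figures, here is the back-of-the-envelope arithmetic. The five-year horizon, head count, and per-user costs come from the paragraph above; everything else is derived.

```python
# Back-of-the-envelope check on the agency email savings quoted above.
gsa_total_savings = 15_000_000   # $15M over five years
gsa_users = 17_000
gsa_years = 5

per_user_per_year = gsa_total_savings / (gsa_users * gsa_years)
print(f"GSA: ~${per_user_per_year:.0f} saved per user per year")

# Per-user email costs quoted for USDA and DHS.
for agency, before, after in [("USDA", 150, 100), ("DHS", 300, 100)]:
    pct = 100 * (before - after) / before
    print(f"{agency}: ${before} -> ${after} per user ({pct:.0f}% reduction)")
```

That works out to roughly $176 saved per user per year at GSA, which is consistent with the per-user reductions USDA and DHS report.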

Just with email, we start to see significant savings in the cloud. So what’s next?

Next Generation Cloud Computing
We are currently seeing industry-specific applications going to the cloud. Cloud commoditization is creeping up and down the stack and into different industries, driving a great deal of collaboration. Forrester Research predicts all cloud markets will continue to grow, and that the total cloud market will reach about $61B by the end of 2012. With this continual increase in cloud usage, we will run into cloud sprawl. This is what excites me about my position here at Core BTS. We specialize in two key areas that every organization on the planet will need in order to meet compliance: security and disaster recovery. Cyber-attacks are a fact of life today, and natural disasters, terrorist attacks, and system failures are commonplace.

Cloud Security
What are the biggest predictions for information security? We will need more of it. Just think about all the areas that prompt a call to action: cloud sprawl, mobile devices, social media, malware, wireless. Information security is no longer a niche market; it is a must-have. It has to go mainstream because the market demands it, and larger organizations will purchase boutique firms to shore up their share of the market. We partner with Trustwave, which allows us to offer four compelling solutions:

  1. Compliance
  2. Managed Security Services
  3. SpiderLabs
  4. Unified Security

Just keeping up with compliance is a monumental task. Our partnership allows us to help our clients build a strong strategy to address their regulatory requirements, such as PCI, HIPAA, SOX, GLBA, FISMA, ISO, and DLP. The demand for information security governance has prompted a document called 20 Critical Security Controls for Effective Cyber Defense: Consensus Audit Guidelines. This guideline alone should be all the more reason to put your security in the cloud: the cost to manage information security and the 20 Critical Security Controls listed below is staggering. You would need specialized hardware, software, people, and infrastructure (a small sketch of just the first control follows the list).

20 Critical Security Controls – Version 3.1

  • Critical Control 1: Inventory of Authorized and Unauthorized Devices
  • Critical Control 2: Inventory of Authorized and Unauthorized Software
  • Critical Control 3: Secure Configurations for Hardware and Software on Laptops, Workstations, and Servers
  • Critical Control 4: Continuous Vulnerability Assessment and Remediation
  • Critical Control 5: Malware Defenses
  • Critical Control 6: Application Software Security
  • Critical Control 7: Wireless Device Control
  • Critical Control 8: Data Recovery Capability
  • Critical Control 9: Security Skills Assessment and Appropriate Training to Fill Gaps
  • Critical Control 10: Secure Configurations for Network Devices such as Firewalls, Routers, and Switches
  • Critical Control 11: Limitation and Control of Network Ports, Protocols, and Services
  • Critical Control 12: Controlled Use of Administrative Privileges
  • Critical Control 13: Boundary Defense
  • Critical Control 14: Maintenance, Monitoring, and Analysis of Security Audit Logs
  • Critical Control 15: Controlled Access Based on the Need to Know
  • Critical Control 16: Account Monitoring and Control
  • Critical Control 17: Data Loss Prevention
  • Critical Control 18: Incident Response Capability
  • Critical Control 19: Secure Network Engineering
  • Critical Control 20: Penetration Tests and Red Team Exercises
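
To give a feel for the effort behind even one of these, here is a minimal sketch of Critical Control 1, inventory of authorized and unauthorized devices. The `authorized.txt` file name and the scan data are hypothetical placeholders; a real implementation would feed in results from an actual network scanner.

```python
# Minimal sketch of Critical Control 1: flag devices seen on the network
# that are not in the authorized inventory. File name and scan results
# below are hypothetical placeholders, not output from any real scanner.

def load_authorized(path: str) -> set[str]:
    """Read one authorized MAC address per line, ignoring blanks and comments."""
    with open(path) as f:
        return {
            line.strip().lower()
            for line in f
            if line.strip() and not line.startswith("#")
        }

def audit(discovered: list[str], authorized: set[str]) -> list[str]:
    """Return discovered MACs that are not in the authorized inventory."""
    return [mac for mac in discovered if mac.lower() not in authorized]

if __name__ == "__main__":
    # Inline stand-ins so the sketch runs end to end; in practice, call
    # load_authorized("authorized.txt") and feed in a real scan.
    authorized = {"00:1a:2b:3c:4d:5e"}
    discovered = ["00:1a:2b:3c:4d:5e", "66:77:88:99:aa:bb"]
    for rogue in audit(discovered, authorized):
        print(f"UNAUTHORIZED DEVICE: {rogue}")
```

Multiply that by twenty controls, plus continuous scanning, reporting, and remediation, and the staffing argument for a managed cloud security service becomes clear.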

According to National Defense Magazine, we may be on the verge of a cyber-war in 2012. There have been numerous, almost daily, reports about China and other adversaries penetrating U.S. networks. Indeed, cyber security has been gaining lots of media attention. Targeted, zero-day attacks will be the norm, and cybercriminals will adapt to the new cloud-based protections, looking for new ways to exploit networks. It’s a never-ending battle. Smartphones will be a target simply because they are connected; rogue Android and iPhone apps are just the beginning. Cyber security is here to stay.

Cloud Backup & Disaster Recovery
If you have sat around a computer in a corporate atmosphere as long as I have, chances are you have suffered panic or frustration over systems going down, wondering whether you lost customer information or whether that draft document you were working on was saved. It doesn’t have to be an event brought on by Mother Nature; it can be something as simple as a server crashing. Disaster recovery (DR) is changing to adapt to the overall changes in IT, and IT as a commodity is fast becoming the de facto standard. So merely backing up data is not enough: we need to secure it and make it readily available, and we have to do that in the most secure, cost-effective way (a sketch of the “secure it” half follows). In the past, DR was a very costly measure to keep systems up and running. We had to duplicate existing hardware, which was expensive, and we had to test the DR plan, which was time-consuming.
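
As one illustration of securing data before it leaves your site, here is a minimal sketch of client-side encryption ahead of a cloud backup. It uses the third-party `cryptography` package (`pip install cryptography`); the file names are hypothetical, and a real backup service such as EVault handles this, along with key management, for you.

```python
# Minimal sketch: encrypt a file locally before shipping it to cloud backup.
# Requires the third-party 'cryptography' package; file names are placeholders.
from cryptography.fernet import Fernet

def encrypt_for_backup(src: str, dst: str, key: bytes) -> None:
    """Write an encrypted copy of src to dst; only dst leaves the building."""
    with open(src, "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())
    with open(dst, "wb") as f:
        f.write(ciphertext)

# Create a stand-in file so the sketch runs end to end.
with open("patient_records.db", "wb") as f:
    f.write(b"demo data")

key = Fernet.generate_key()  # in practice, keep this in a key vault, not on disk
encrypt_for_backup("patient_records.db", "patient_records.db.enc", key)
# upload("patient_records.db.enc")  # hypothetical call to your backup provider
```

Because only ciphertext leaves the premises, the backup provider never sees the plaintext, which is exactly the property privacy regulations like HIPAA care about.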

Our partnership with EVault helps our clients back up data to the DR site without violating standards for privacy and security. The HIPAA regulations regarding the security of digitally stored information are complex and difficult to follow; outsourcing this function to the cloud helps you meet compliance while saving on cost.

In summary, the next generation of cloud computing will be an increase in clouds for vertical markets, an increase in cloud services up and down the stack, and growing market demand for Cloud Security and Cloud Disaster Recovery.

More Stories By Terell Jones

Mr. Jones is the National Director of Cloud Services with Core BTS, Inc., a $180M corporation. He is based out of Fairfax, VA, and handles the eastern region for cloud computing. After serving in the first Gulf War in the U.S. Navy, Mr. Jones entered the IT field in 1995. He has over 17 years in Information Technology in the fields of Green IT, Cloud Computing, Virtualization, and Managed Services. He is internationally known as “the Green IT Guy,” specializing in energy-efficient computing from the desktop to the data center, from hardware to software, from the network to the virtual cloud. He has served as the Deputy Director of the Green IT Council since 2010.
