The Human Body and @Cisco's #DataCenter Automation | @CloudExpo #AI #ML

How the self-defense and self-healing capabilities of the human body are similar to firewalls and intelligent monitoring capabilities

Disclaimer: I am an IT guy, and my knowledge of the human body is limited to my daughter's high school biology textbook and information obtained from search engines. So, excuse me if any of the information below is not represented accurately!

The human body is the most complex machine ever created. With a complex network of interconnected organs, trillions of cells and the most advanced processor, the human body is the most automated system on this planet. In this article, we will draw comparisons between the working of a human body and that of a data center. We will learn how the self-defense and self-healing capabilities of our human body are similar to the firewalls and intelligent monitoring capabilities in our data centers. We will draw parallels between human body automation and data center automation, and explain the different levels of automation we need to drive in data centers. This article is divided into four parts, each covering one of the body's main functions and drawing parallels on automation.

Have you ever felt sick? How do you figure out that you are getting sick and need to call it a day? Can you control how fast your heart beats, or control your breath at will? The human body is the most automated system we have in the entire universe. It's the most advanced machine, with the fastest microprocessor and a lightning-fast network that powers us every day. There is a lot to learn from how the architect of our body has designed it, and how, using the same design principles, we should automate the data center of the future.

[Image: Human body comparison]

The fundamental principle of automation is to use data to do intelligent analytics that enable us to take action. When we are about to fall sick, our body gives us indicators (alerts) which tell us that things are not going according to plan and we need to take action. Such indicators can be in the form of a fever or chills, feeling cold, or pain. Once we get these alerts, we either take action, i.e., take medication, or we let our body self-heal if the alert is nothing to worry about, e.g., a small cut.
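To make that alert-then-act loop concrete, here is a minimal Python sketch. The metric names, healthy ranges, and remediation actions are all hypothetical stand-ins for illustration, not any particular monitoring product's API.

```python
# A minimal sketch of the alert-then-act loop: read "vitals", raise alerts
# when a metric leaves its healthy range, then self-heal or escalate.

def check_vitals(metrics: dict) -> list:
    """Return alerts for any metric outside its healthy range."""
    # Hypothetical healthy ranges, analogous to body temperature or pulse.
    healthy_ranges = {
        "cpu_percent": (0, 85),
        "disk_free_gb": (50, float("inf")),
    }
    alerts = []
    for name, (low, high) in healthy_ranges.items():
        value = metrics.get(name)
        if value is not None and not (low <= value <= high):
            alerts.append((name, value))
    return alerts

def respond(alerts: list) -> None:
    for name, value in alerts:
        if name == "disk_free_gb":
            # Minor issue: self-heal, like the body closing a small cut.
            print(f"self-healing: purging old logs (free space = {value} GB)")
        else:
            # Serious issue: escalate, like going to the doctor.
            print(f"escalating to on-call engineer: {name} = {value}")

respond(check_vitals({"cpu_percent": 97, "disk_free_gb": 12}))
```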

Our body, like our systems (compute, network, etc.), has a way to read these alerts and take appropriate action. In addition, our body has a tremendously advanced security system always working to defend us from various malicious attacks! For example, when a virus strikes the human body, it attacks the body's cellular structure and begins to destroy it. Our body's defense mechanism immediately sends white blood cells to attack the invading virus and destroy it. All this happens 24x7, without us telling our body to do so! If the body fails to defend itself on its own, it gives signals asking for help, and that is when we go to a doctor for medicine or take some other external remedy to help our body. Now imagine if we could develop a similarly advanced security system to defend our data centers from all these attacks. There are several things we can learn from how our body works and incorporate into creating the highly automated data center of the future. Let's examine each of the body's systems and how we can leverage it for our benefit. While this is not a biology lesson, it is time to go back to your school days.

The Immune System
This is perhaps the most intelligent and automated system in our body, and the most relevant to the way we should automate our data center security. Our immune (security) system is a collection of structures and processes whose job is to protect the body against disease and other potentially damaging foreign bodies. These diseases and foreign bodies are the equivalent of the viruses, malware and other types of security threats we see in our data centers. Our immune system consists of various parts (hardware) and systems (software) which allow our body to self-defend and self-heal against attacks, 24x7.

[Image: The immune system. Image courtesy: Flexablog.com]

There are six main components of our immune system.

  1. Lymph Nodes: These are small bean-shaped structures that produce and store cells to fight infection and disease. Lymph nodes contain lymph, a clear liquid that carries those cells to various parts of the body.
  2. Spleen: Located on the left side of your body, under your ribs and above your stomach, the spleen contains white blood cells that fight infection.
  3. Bone Marrow: The yellow tissue in the center of bones that produces white blood cells.
  4. Lymphocytes: These small white blood cells play a large role in defending the body against disease. The two types of lymphocytes are B-cells, which make antibodies that attack bacteria and toxins, and T-cells, which help destroy infected or cancerous cells.
  5. Thymus: Responsible for triggering and maintaining the production of antibodies.
  6. Leukocytes: These are disease-fighting white blood cells that identify and eliminate pathogens.

Together, all the above components make up our immune system. Think of these as the various security devices we deploy in our data center: physical access card readers, firewalls, anti-virus software, anti-spam and other security mechanisms. The immune system can be further divided into two systems.

The Innate Immune System
The innate immune response is the first step in protecting our bodies from foreign particles. It is an immediate response that's "hard-wired" into our immune system. It's a generalized system that protects against any type of attack and is not tied to a specific threat. For example, general barriers to infection include:

  • Physical (skin, mucous, tears, saliva, and stomach acid)
  • Chemical (specific proteins found in tears or saliva that attack foreign particles)
  • Biological (microbiota or good bacteria in the gut that prevents overgrowth of bad bacteria)

The innate immune system is general, i.e., anything that is identified as foreign or non-self becomes a target for the innate immune system.

The Adaptive Immune Response
The innate immune response leads to the pathogen-specific adaptive immune response. While this response is more effective, it takes time to develop, generally about a week after the infection has occurred. This system is called adaptive because it's a self-learning system which adapts itself to new threats and creates a self-defense mechanism to neutralize such threats much faster in the future. A good example we all know is vaccination: we are injected with a weakened or dead virus to teach our body how to defend against a particular type of virus. Our body then remembers this for the rest of its life and protects us 24x7 from that particular virus.

Thus, the immune system is both reactive and adaptive. It reacts when a pathogen enters our body and neutralizes it, and it is also constantly learning and adapting to new threats. It is intelligent enough to distinguish self (anything naturally present in the body, e.g., our own cells) from non-self (anything that is not naturally present in the body). The system reacts quickly and has an inbuilt messaging system which passes signals from one cell to another to act on an incoming threat, all at lightning speed. In addition, it is a layered security system, with multiple types of cells each playing a particular role in defense. Some cells are located at the entry points of our body, like the mouth, nose and ears, and act as security guards; others are located in our circulatory system or in our bone marrow and are released as and when required.

Enough of biology. Let's get into our IT world. Imagine our data center having similar innate and adaptive capabilities. The innate or generalized security systems are our firewalls, email scanners, etc., which can neutralize generalized threats in our data center. They are not tied to specific threats like a DoS attack or a Dirty COW-type OS vulnerability. These systems continuously watch for threats and neutralize known and familiar ones once they find them, e.g., email spam filters, anti-virus software, etc. Much like our body has physical, chemical and biological defense layers, our data center needs different security layers to protect us from various types of attacks. At a minimum, we need four levels of security in our data center: physical security (access card readers, security guards), network security (DNS, DMZ/internal, firewalls), component-level security (compute, storage) and application-level security (email, OS, Java, Oracle, etc.). There are a lot of technologies available today that provide these various layers of security, including those provided by industry leaders like Cisco.
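As a rough illustration of those four levels, the sketch below chains them as checks an event must clear in order, much as a pathogen must get past skin, then chemical, then biological defenses. The layer names follow the article; the check functions and event fields are hypothetical stubs.

```python
# A sketch of the four security layers as a chain of checks.
from typing import Callable

def physical_ok(event: dict) -> bool:
    return event.get("badge_verified", False)          # access card readers

def network_ok(event: dict) -> bool:
    return event.get("source_zone") != "untrusted"     # DMZ / firewall rules

def component_ok(event: dict) -> bool:
    return event.get("host_patched", False)            # compute / storage hygiene

def application_ok(event: dict) -> bool:
    return event.get("app_authenticated", False)       # OS / email / app auth

LAYERS: list[tuple[str, Callable[[dict], bool]]] = [
    ("physical", physical_ok),
    ("network", network_ok),
    ("component", component_ok),
    ("application", application_ok),
]

def admit(event: dict) -> bool:
    """An event must clear every layer to be admitted."""
    for name, check in LAYERS:
        if not check(event):
            print(f"blocked at {name} layer")
            return False
    return True

print(admit({"badge_verified": True, "source_zone": "dmz",
             "host_patched": True, "app_authenticated": False}))
# -> blocked at application layer, then False
```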

While we have innate defense capabilities, what we need to protect us against the increasing sophistication of attacks is adaptive self-defense capability. The system should learn signatures and patterns from past attacks and automatically create self-healing code (white blood cells) to defend against new threats. In other words, systems should be able to heal themselves. Such a system will create new defense signatures based on previous attacks and adapt to new types of attacks.
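A toy Python sketch of such adaptive signature learning follows: it remembers payload patterns from past attacks, flags close variants by fuzzy matching, and learns each new variant. The signatures and the similarity threshold are invented for illustration; real intrusion detection is far more sophisticated.

```python
# Toy "adaptive" signature learning: flag payloads similar to known attacks
# and remember each new variant, like the adaptive immune system
# remembering a new strain.
import difflib

known_signatures: set[str] = {"GET /admin.php?cmd=rm", "payload=0xdeadbeef"}

def is_threat(payload: str, threshold: float = 0.8) -> bool:
    for sig in list(known_signatures):   # iterate a copy; we may grow the set
        if difflib.SequenceMatcher(None, sig, payload).ratio() >= threshold:
            known_signatures.add(payload)   # learn the mutated variant
            return True
    return False

print(is_threat("GET /admin.php?cmd=rm -rf"))  # variant of a known attack -> True
print(is_threat("GET /index.html"))            # benign -> False
```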

Humans intervene only when the system fails to do its job. Let's take an example. Assume a new type of virus is released; it's an enhanced version of a previously known virus, so the signature is different. If the virus pattern is not known, humans have to develop anti-virus signatures and then update the anti-virus software to fix the exposure. This is like taking an external dose of antibiotics to heal your body. It can take days if not weeks to get the updated software from the vendor and apply it across all vulnerable systems. Now what if we had systems that could create the required antibiotics on their own and try to fix the exposure? Such systems, much like our body, would learn from previous attacks, modify their current software to adapt to the new threat and try to defend themselves, all without human intervention! Seems unreal. Our body is capable of doing this with a 75% or higher success rate. Can we aim for 80%?
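The "humans intervene only on failure" workflow might look something like the sketch below, where remediate() and the paging step are hypothetical stand-ins for whatever tooling an enterprise actually runs.

```python
# Try automatic remediation first; page a human only when it fails.

def remediate(threat: str) -> bool:
    """Try known fixes; return True if the threat was neutralized."""
    known_fixes = {"worm-x": "quarantine_host", "spam-burst": "tighten_filter"}
    fix = known_fixes.get(threat)
    if fix:
        print(f"applying {fix} for {threat}")
        return True
    return False

def handle(threat: str) -> None:
    if not remediate(threat):
        # The body's equivalent of visiting a doctor: ask for outside help.
        print(f"auto-remediation failed, paging on-call engineer for {threat}")

handle("worm-x")        # self-healed
handle("novel-virus")   # escalated to a human
```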

Another capability we need in our data center is self-healing. Much like the human body detects abnormalities and attacks the problem without asking for your permission, data center security mechanisms as well as fault detection systems should work the same way. Imagine your body waiting for your instruction to defend against an invading virus! What if you were sleeping? When an abnormality is detected in the data center, we need to act immediately. Today, while many data center security products are designed to detect malicious attacks and take appropriate action without human intervention, we need to extend this inside every component (compute/storage/network) in the data center. We should have intelligence at every layer to protect against increasing forms of attack, and everything needs to be connected together. An endpoint device that detects a threat can alert the security components at all layers about the incoming threat. Each system notifies the other systems of the threat's status, and there is constant communication between firewalls, compute and storage systems based on the type and level of attack.
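A minimal in-process publish/subscribe sketch of that common-messaging idea follows: any component can publish a threat alert, and every subscriber (firewall, storage, compute) reacts at once. The component handlers and alert fields are illustrative only; a real deployment would use a proper message bus.

```python
# A toy pub/sub "threat bus": one detection fans out to every layer.
from typing import Callable

class ThreatBus:
    def __init__(self) -> None:
        self.subscribers: list[Callable[[dict], None]] = []

    def subscribe(self, handler: Callable[[dict], None]) -> None:
        self.subscribers.append(handler)

    def publish(self, alert: dict) -> None:
        for handler in self.subscribers:
            handler(alert)   # every layer hears about the threat at once

bus = ThreatBus()
bus.subscribe(lambda a: print(f"firewall: blocking {a['source_ip']}"))
bus.subscribe(lambda a: print(f"storage: snapshotting volumes ({a['type']})"))
bus.subscribe(lambda a: print(f"compute: isolating host {a['host']}"))

# An endpoint detects a threat and alerts every layer simultaneously.
bus.publish({"type": "ransomware", "source_ip": "203.0.113.9", "host": "web-42"})
```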

As an example, imagine we discover a new super-critical vulnerability in our operating system that allows an authorized user to gain root privileges. Today, in most enterprises, it takes days if not weeks to detect and remediate the vulnerability. In tomorrow's world, the system should be smart enough to detect such gaps and apply the fix immediately. Why wait, when we know waiting can have an adverse impact on our business? And yes, did I mention it should be done without downtime to the business? After all, your body does not need downtime to fix YOU.
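One well-known way to patch without downtime is a rolling update behind a load balancer, sketched below. The drain(), patch() and restore() functions are hypothetical stubs for whatever your environment actually uses; the point is that patching one host at a time keeps the service available throughout.

```python
# Zero-downtime remediation sketch: patch hosts one at a time behind a
# load balancer, so capacity is always available while fixes roll out.

hosts = ["app-1", "app-2", "app-3"]

def drain(host: str) -> None:
    print(f"removing {host} from the load balancer")

def patch(host: str) -> None:
    print(f"applying security fix to {host}")

def restore(host: str) -> None:
    print(f"returning {host} to the load balancer")

def rolling_patch(fleet: list[str]) -> None:
    for host in fleet:   # one host at a time keeps the service up
        drain(host)
        patch(host)
        restore(host)

rolling_patch(hosts)
```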

To summarize, we need the following capabilities for our data center security:

  1. A multi-layered, inter-connected security system. There should be a common messaging bus between the different infrastructure components to detect threats and notify their status
  2. Both innate and adaptive defenses to react to different types of threats
  3. Self-learning with self-healing capabilities. The system should continuously learn and adapt to new threats
  4. The ability to react at the speed of light

In the next article, we will focus on the body's nervous system, which is the most complex but also the most intelligent sensor system on the planet.

Until next time....

More Stories By Ashish Nanjiani

Ashish Nanjiani is a Senior IT Manager within Cisco IT, managing Cisco's worldwide IT data centers as an operations manager. With 20 years of IT experience, he is an expert in data center operations and automation. He has spoken at many inter-company events on data center automation and helps IT professionals digitize their IT operations. He is also an entrepreneur and has been successfully running a website business for 10+ years.

Ashish holds a Bachelor of Science degree in Electrical and Electronics and a Master's in Business Administration. He is a certified PMP and Scrum Master. He is married and has two lovely daughters. He enjoys playing with technology in his free time. [email protected]
