Move over Reliability, Resilience has arrived

[This article was originally written as a guest post for Puppet Labs and published at their blog on January 9th, 2014.]

If you haven’t yet noticed that the prioritization of non-functional requirements (NFRs) is changing among your user base, you will soon. For decades, we have held to the same familiar set of NFRs. Every team has its own definition and particular spin on NFRs, but the usual suspects are accessibility, availability, extensibility, interoperability, maintainability, performance, reliability, scalability, security, and usability.

But new priorities have surfaced, as IT has experienced a sea change over the past few years. Some organizations have even adopted completely new NFRs. The rise of DevOps has coincided with these changes, and the movement’s principles enable IT teams to more readily adapt to rapidly changing requirements.

Your grandfather’s mainframe was very reliable

Historically, IT system designs were praised for reliability. Robust and stable systems could “take a licking and keep on ticking.” As computing became more pervasive, scalability became the watchword. Systems should be able to grow and expand to meet increasing demands.

Scalability as an NFR priority represents just a slight shift from reliability. Both operate from the mindset that the original system design was valid: reliability ensures that the system continues to provide the stated functionality over time, and scalability ensures that it can do so for an increasing demand set.

Roughly 10 years ago, things began to shift as more and more organizations embraced methodologies like agile and Extreme Programming (XP), along with architectural models like Service-Oriented Architecture (SOA). These initiatives promoted adaptation and response to change as desirable system qualities. Next, cloud computing introduced us to the notion of elasticity, further promoting the values of flexibility and responsiveness to change.

A resilient system is a happy system

The state of the art for system design is always evolving, and we see noticeable leaps forward every few years. The current phase of evolution is toward resilient systems.

Legacy system designs relied upon expensive infrastructure with multiple-redundant-hot-swappable-live-backup-standby-continuity-generators (or whatever vendors are peddling lately). In contrast, resilient system designs embrace failure and promote the use of cheap, commodity hardware, coupled with distributed data management, parallel processing, eventual consistency, and self-healing operational nodes.

Some portion of your system is likely to go down at some point, and resilient systems are designed with that expectation. Resilient systems and resilient processes are able to continue operation (albeit at diminished capacity) in the face of failure.
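
To make that concrete, here is a minimal sketch of the kind of graceful-degradation logic a resilient service might use. It is an illustration only; the primary store, the cache, and the failure rate are hypothetical stand-ins:

```python
import random
import time

class PrimaryUnavailable(Exception):
    """Raised when the primary data store cannot be reached."""

def fetch_from_primary(key):
    # Hypothetical primary store; intermittent failure is simulated here.
    if random.random() < 0.3:
        raise PrimaryUnavailable(key)
    return f"fresh value for {key}"

# A stale-but-usable local cache is the "diminished capacity" fallback.
local_cache = {"user:42": "cached value for user:42"}

def resilient_fetch(key, retries=2, backoff=0.1):
    """Try the primary store, retry briefly, then degrade to cached data."""
    for attempt in range(retries + 1):
        try:
            value = fetch_from_primary(key)
            local_cache[key] = value              # refresh the fallback copy
            return value
        except PrimaryUnavailable:
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    # Degraded mode: serve stale data rather than fail outright.
    return local_cache.get(key, "service temporarily degraded")

print(resilient_fetch("user:42"))
```

The caller always gets an answer; when the primary store is down, the answer is merely staler than usual.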

The prioritization of resilience over reliability as an NFR can be seen within the DevOps movement, the development of the Netflix Simian Army, and the rise of NoSQL data management solutions.

DevOps and resiliency

DevOps is a multi-headed beast, more a movement guided by a set of principles than a tangible and well-defined construct. While organizations are free to adopt aspects of DevOps that suit their needs, one common thread is that of resilience. Failure is seen as an opportunity to improve processes and communication, rather than as a threat.

The principles of continuous integration and continuous delivery that are core to most DevOps practices exemplify a resilient mindset. Where the classic waterfall model relies upon detailed up-front design and planning, an all-or-nothing development phase, and late-stage testing, DevOps teams are more agile, embracing a “fail early, fail often” model. This approach results in more resilient and adaptable applications.
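
As a toy illustration of “fail early,” consider the kind of smoke test a continuous integration server might run on every commit; the function and values here are invented for the example:

```python
import unittest

def price_with_tax(price, tax_rate=0.08):
    """A bit of business logic under continuous integration."""
    return round(price * (1 + tax_rate), 2)

class SmokeTests(unittest.TestCase):
    """Runs on every commit, so a defect turns the build red within
    minutes, not months later in a dedicated testing phase."""

    def test_known_value(self):
        self.assertEqual(price_with_tax(100.00), 108.00)

    def test_zero_price(self):
        self.assertEqual(price_with_tax(0.00), 0.00)

if __name__ == "__main__":
    unittest.main()  # a non-zero exit code fails the build immediately
```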

Netflix Simian Army

Netflix gained world renown when the company broadcast details of its Simian Army work in 2010 and 2011. Through the automated efforts of Chaos Monkey, Chaos Gorilla, and a slew of similar utilities, failure is deliberately injected into running systems in order to develop more resilient processes, tools, and capabilities.

John Ciancutti of Netflix writes, “If we aren’t constantly testing our ability to succeed despite failure, then it isn’t likely to work when it matters most — in the event of an unexpected outage.”
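
The core idea is simple enough to sketch. The toy below is in no way Netflix’s actual implementation; the fleet, instance IDs, and kill probability are invented for illustration:

```python
import random

class Instance:
    """A stand-in for a virtual machine in a service fleet."""
    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.running = True

    def terminate(self):
        self.running = False
        print(f"Terminated {self.instance_id}")

def chaos_monkey(fleet, kill_probability=0.1):
    """Randomly kill running instances; the survivors must absorb the load."""
    for instance in fleet:
        if instance.running and random.random() < kill_probability:
            instance.terminate()

fleet = [Instance(f"i-{n:04d}") for n in range(20)]
chaos_monkey(fleet)
survivors = sum(1 for i in fleet if i.running)
print(f"{survivors}/{len(fleet)} instances survived this round")
```

If the application keeps serving traffic while rounds like this run continuously, the team has evidence, not hope, that it can survive a real outage.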

NoSQL

A third illustration of the growing fascination with resilient, self-healing systems is the transformation now underway in the data realm. Data and metadata management have evolved considerably from the relational databases of yore. Modern data management strategies tend to be distributed and fault-tolerant, and in some cases they even self-heal by spawning new nodes as needed. Examples include the Google File System and Bigtable, in-memory data stores like Hazelcast and SAP HANA, and distributed data management solutions like Apache Cassandra.
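
Many of these systems trade strict consistency for availability via quorum replication: with N replicas, writing to W of them and reading from R of them guarantees that the read and write sets overlap whenever R + W > N. Here is a minimal sketch of that arithmetic (my own illustration, not any particular product’s implementation):

```python
import random

N, W, R = 3, 2, 2   # replicas, write quorum, read quorum
assert R + W > N    # guarantees every read set overlaps the last write set

# Each replica stores a (version, value) pair per key.
replicas = [dict() for _ in range(N)]

def quorum_write(key, value, version):
    """The write succeeds once W replicas acknowledge it."""
    for replica in random.sample(replicas, W):
        replica[key] = (version, value)

def quorum_read(key):
    """Read from R replicas and keep the freshest version seen."""
    responses = [r.get(key, (0, None)) for r in random.sample(replicas, R)]
    return max(responses)  # the highest version wins

quorum_write("user:42", "alice", version=1)
quorum_write("user:42", "alicia", version=2)
print(quorum_read("user:42"))  # always reports version 2, because R + W > N
```

Cassandra’s tunable consistency levels follow this same pattern, though the sketch glosses over details like hinted handoff and read repair.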

Miko Matsumura of Hazelcast notes, “Virtualization and scale-out power new ways of thinking about system stability, including a shift away from ‘reliability,’ where giant expensive systems never fail (until they do, catastrophically), and towards ‘resiliency,’ where thousands of inexpensive systems constantly fail—but in ways that don’t materially impact running applications.”

Keeping pace with the cool kids

It’s often said that the only constant is change. The DevOps movement positions organizations to embrace change, rather than fear it. Continuous integration, continuous delivery, and continuous feedback loops between dev teams and ops teams facilitate an enhanced degree of agility and responsiveness.

As business and society evolve, our system design priorities must adapt in parallel. The cool kids will change the game again at some point, but for right now, “change” means designing systems and supporting processes that are responsive and adaptable by prioritizing resilience over reliability.



Kyle Gabhart is a subject matter expert specializing in strategic planning and tactical delivery of enterprise technology solutions, blending EA, BPM, SOA, Cloud Computing, and other emerging technologies. Kyle currently serves as a director for Web Age Solutions, a premier provider of technology education and mentoring. Since 2001 he has contributed extensively to the IT community as an author, speaker, consultant, and open source contributor.
