BASE Jumping in the Cloud

Rethinking data consistency

Your CIO is all fired up about moving your legacy inventory management app to the Cloud. Lower capital costs! Dynamic provisioning! Outsourced infrastructure! So you get out your shoehorn, provision some storage and virtual machine instances, and forklift the whole mess into the stratosphere. (OK, there's more to it than that, but bear with me.)

Everything seems to work at first. But then the real test comes: the Holiday season, when you do most of your online business. You breathe a sigh of relief as your Cloud provider seamlessly scales up to meet the spikes in demand. But then your boss calls, irate. Turns out customers are swamping the call center with complaints of failed transactions.

You frantically dive into the log files and diagnostic reports to see what the problem is. Apparently, the database has not been keeping an accurate count of your inventory, which is pretty much what an inventory management system is all about. You check the SQL, and you can't find the problem. Now you're really beginning to sweat.

You dig deeper, and you find the database is frequently in an inconsistent state. When the app processes orders, it decrements the product count. When the count for a product drops to zero, it's supposed to show customers that you've run out. But sometimes, the count is off. Not always, and not for every product. And the problem only seems to occur in the afternoons, when you normally experience your heaviest transaction volume.

The Problem: Consistency in the Cloud
The problem is that while it may appear that your database is running in a single storage partition, in reality the Cloud provider is provisioning multiple physical partitions as needed to provide elastic capacity. But when you look at the fine print in your contract with the Cloud provider, you realize they offer eventual consistency, not immediate consistency. In other words, your data may be inconsistent for short periods of time, especially when your app is experiencing peak load. It may only be a matter of seconds for the issue to resolve, but in the meantime, customers are placing orders for products that aren't available. You're charging their credit cards and all they get for their money is an error page.
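To make that failure mode concrete, here's a toy sketch in Python. It isn't any real provider's storage API; the replica model is invented for illustration. Writes are acknowledged by a single replica and propagate later, so a read served by a lagging replica can return a stale count.

```python
import random

class EventuallyConsistentStore:
    """Toy model of eventual consistency: writes are acknowledged by one
    replica and propagate later, so reads can observe stale values."""

    def __init__(self, num_replicas=3):
        self.replicas = [dict() for _ in range(num_replicas)]

    def write(self, key, value):
        # The write is acknowledged as soon as one replica has it...
        self.replicas[0][key] = value

    def replicate(self):
        # ...and reaches the other replicas only when replication catches up.
        for replica in self.replicas[1:]:
            replica.update(self.replicas[0])

    def read(self, key):
        # Reads may be served by any replica, current or lagging.
        return random.choice(self.replicas).get(key)

store = EventuallyConsistentStore()
store.write("widget_count", 5)
store.replicate()                    # all replicas now agree: 5 in stock
store.write("widget_count", 0)       # the last widgets just sold out
print(store.read("widget_count"))    # may print 0, or the stale 5
store.replicate()
print(store.read("widget_count"))    # now consistently 0
```

In the window between the second write and the next replication pass, your app happily sells widgets it no longer has.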

From the perspective of the Cloud provider, however, nothing is broken. Eventual consistency is inherent in the nature of Cloud computing, a consequence of the principle known as the CAP Theorem: no distributed computing system can guarantee (immediate) consistency, availability, and partition tolerance all at the same time. You can get any two of these, but never all three at once.

Of these three characteristics, partition tolerance is the least familiar. In essence, a distributed system is partition tolerant when it will continue working even in the case of a partial network failure. In other words, bits and pieces of the system can fail or otherwise stop communicating with the other bits and pieces, and the overall system will continue to function.

With on-premise distributed computing, we're not particularly interested in partition tolerance: transactional environments run in a single partition. If we want ACID transactionality (atomic, consistent, isolated, and durable transactions), then we should stick with a partition intolerant approach like a two-phase commit infrastructure. In essence, ACID implies that a transaction runs in a single partition.
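For contrast, here's a minimal sketch of that ACID behavior on a single node, using SQLite from Python (the schema and product name are invented for illustration): the stock check and the decrement happen atomically inside one transaction, so the count can never go negative.

```python
import sqlite3

# Single-node ACID sketch: check and decrement in one atomic transaction.
conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transactions
conn.execute("CREATE TABLE inventory (product TEXT PRIMARY KEY, count INTEGER)")
conn.execute("INSERT INTO inventory VALUES ('widget', 1)")

def place_order(product):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    try:
        row = conn.execute(
            "SELECT count FROM inventory WHERE product = ?", (product,)
        ).fetchone()
        if row is None or row[0] < 1:
            conn.execute("ROLLBACK")
            return False  # sold out: reject before charging anyone
        conn.execute(
            "UPDATE inventory SET count = count - 1 WHERE product = ?", (product,)
        )
        conn.execute("COMMIT")
        return True
    except sqlite3.Error:
        conn.execute("ROLLBACK")
        raise

print(place_order("widget"))  # True: the last widget is sold
print(place_order("widget"))  # False: the count never goes negative
```

Note that this guarantee holds precisely because everything lives in one partition; spread the data across replicas that can stop talking to each other, and the lock no longer protects you.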

But in the Cloud, we require partition tolerance, because the Cloud provider must allow for the possibility that physical instances cannot always communicate with one another, and that any physical instance may go down unpredictably. And if your underlying physical instances aren't communicating or working properly, then you have either an availability or a consistency issue. But since the Cloud is architected for high availability, consistency will necessarily suffer.

The Solution: Rethink Your Priorities
The kneejerk reaction might be that since consistency is nonnegotiable, we need to force the Cloud providers to give up partition tolerance. But in reality, that's entirely the wrong way to think about the problem. Instead, we must rethink our priorities.

As any data specialist will tell you, there are always performance vs. flexibility tradeoffs in the world of data. Every generation of technology suffers from this tradeoff, and the Cloud is no different. What is different about the Cloud is that we want virtualization-based elasticity, which requires partition tolerance.

If we want ACID transactionality, then we should stick with an on-premise, partition intolerant approach. But in the Cloud, ACID is the wrong priority. We need a different way of thinking about consistency and reliability. Instead of ACID, we need BASE (catchy, eh?).

BASE stands for Basic Availability (the system supports partial failures without suffering a total system failure), Soft state (any state must be maintained through periodic refreshment, or it expires), and Eventual consistency (once updates stop arriving, the data will become consistent given enough time). BASE has been around for several years and actually predates the notion of Cloud computing; in fact, it underlies the telco world's notion of "best effort" reliability that applies to the mobile phone infrastructure. But today, understanding the principles of BASE is essential to understanding how to architect applications for the Cloud.

Thinking in a BASE Way
Let's put the BASE principles in simple terms.

Basic availability: stuff happens. We're using commodity hardware in the Cloud. We're expecting and planning for failure. But hey, we've got it covered.

Soft state: the squeaky wheel gets the grease. If you don't keep telling me where you are or what you're doing, I'll assume you're not there anymore or you're done doing whatever it is you were doing. So if any part of the infrastructure crashes and reboots, it can bootstrap itself without any worries about it being in the wrong state.
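A minimal sketch of soft state, assuming a hypothetical node registry (the TTL value and names are arbitrary): entries survive only as long as they keep getting refreshed, so a crashed node simply ages out, and a rebooted one just starts heartbeating again with no cleanup protocol.

```python
import time

TTL_SECONDS = 10.0  # arbitrary: how long unrefreshed state is trusted

class SoftStateRegistry:
    def __init__(self):
        self._last_seen = {}  # node id -> last heartbeat timestamp

    def heartbeat(self, node_id):
        # The squeaky wheel: a node must keep announcing itself.
        self._last_seen[node_id] = time.monotonic()

    def live_nodes(self):
        # Anything that hasn't refreshed within the TTL is presumed gone;
        # no explicit deregistration is ever required.
        cutoff = time.monotonic() - TTL_SECONDS
        return [n for n, t in self._last_seen.items() if t >= cutoff]

registry = SoftStateRegistry()
registry.heartbeat("worker-1")
print(registry.live_nodes())  # ['worker-1'], until the heartbeats stop
```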

Eventual consistency: It's OK to use stale data some of the time. It'll all come clean eventually. Accountants have followed this principle since Babylonian times. It's called "closing the books."
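In code, "closing the books" might look like a periodic reconciliation pass, sketched here with invented data structures: a fast but approximate cache is allowed to drift during the day, and an authoritative recount from the order log periodically replaces it.

```python
from collections import Counter

# Invented for the sketch: an append-only log of (product, qty) sale events,
# the opening stock, and a cached count that is allowed to drift.
order_log = [("widget", 1), ("gadget", 2), ("widget", 1)]
starting_stock = {"widget": 5, "gadget": 3}
approximate_counts = {"widget": 4, "gadget": 3}  # stale: missed some sales

def close_the_books():
    # Recompute the authoritative counts from the full event log.
    sold = Counter()
    for product, qty in order_log:
        sold[product] += qty
    return {p: starting_stock[p] - sold[p] for p in starting_stock}

approximate_counts = close_the_books()  # replace the drift with the truth
print(approximate_counts)  # {'widget': 3, 'gadget': 1}
```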

So, how would you address your inventory app following BASE best effort principles? First, assume that any product quantity is approximate. If the quantity isn't near zero, you don't have much of a problem. If it is near zero, set the proper expectation with the customer. Don't charge their credit card in a synchronous fashion. Instead, let them know that their purchase has probably completed successfully. Once the dust settles, let them know whether they got the item or not.
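Here's one way that flow might look, as a hedged Python sketch (the in-process queue stands in for whatever messaging layer you'd actually use): the synchronous path only acknowledges the order, while the final inventory decision, the charge, and the customer notification all happen asynchronously.

```python
import queue
import threading

orders = queue.Queue()  # stand-in for a real message queue

def accept_order(customer, product):
    # Synchronous path: no charge yet, just an optimistic acknowledgement.
    orders.put((customer, product))
    return "Thanks! Your order has probably completed; we'll confirm by email."

def fulfillment_worker(inventory):
    # Asynchronous path: settle the truth once the dust settles.
    while True:
        customer, product = orders.get()
        if inventory.get(product, 0) > 0:
            inventory[product] -= 1
            # Charge the card here, only once we know we have the item.
            notify(customer, f"Confirmed: your {product} is on its way.")
        else:
            notify(customer, f"Sorry, {product} sold out; you were not charged.")
        orders.task_done()

def notify(customer, message):
    print(f"to {customer}: {message}")

inventory = {"widget": 1}
threading.Thread(target=fulfillment_worker, args=(inventory,), daemon=True).start()
print(accept_order("alice", "widget"))
print(accept_order("bob", "widget"))
orders.join()  # alice gets the last widget; bob gets an apology, not a charge
```

The key design choice is that nothing in the synchronous path depends on an exact count; the authoritative decision is deferred until the system has converged.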

Of course, this inventory example is an oversimplification, and every situation is different. The bottom line is that you can't expect the same kind of transactionality in the Cloud as you could in a partition intolerant on-premise environment. If you erroneously assume that you can move your app to the Cloud without reworking how it handles transactionality, then you are in for an unpleasant surprise. On the other hand, rearchitecting your app for the Cloud will improve it overall.

The ZapThink Take
Intermittently stale data? Unpredictable counts? States that expire? Your computer science profs must be rolling over in their graves. That's no way to write a computer program! Data are data, counts are counts, and states are states! How could anything work properly if we get all loosey-goosey about such basics?

Welcome to the twenty-first century, folks. Bank account balances, search engine results, instant messaging buddy lists: if you think about it, all of these everyday elements of our wired lives follow BASE principles in one way or another.

And now we have Cloud computing, where we're bundling together several different modern distributed computing trends into one neat package. But if we mistake the Cloud for being nothing more than a collection of existing trends then we're likely to fall into the "horseless carriage" trap, where we fail to recognize what's special about the Cloud.

The Cloud is much more than a virtual server in the sky. You can't simply migrate an existing app into the Cloud and expect it to work properly, let alone take advantage of the power of the Cloud. Instead, application migration and application modernization necessarily go hand in hand, and architecting your app for the Cloud is more important than ever.

More Stories By Jason Bloomberg

Jason Bloomberg is the leading expert on architecting agility for the enterprise. As president of Intellyx, Mr. Bloomberg brings his years of thought leadership in the areas of Cloud Computing, Enterprise Architecture, and Service-Oriented Architecture to a global clientele of business executives, architects, software vendors, and Cloud service providers looking to achieve technology-enabled business agility across their organizations and for their customers. His latest book, The Agile Architecture Revolution (John Wiley & Sons, 2013), sets the stage for Mr. Bloomberg’s groundbreaking Agile Architecture vision.

Mr. Bloomberg is perhaps best known for his twelve years at ZapThink, where he created and delivered the Licensed ZapThink Architect (LZA) SOA course and associated credential, certifying over 1,700 professionals worldwide. He is one of the original Managing Partners of ZapThink LLC, the leading SOA advisory and analysis firm, which was acquired by Dovel Technologies in 2011. He now runs the successor to the LZA program, the Bloomberg Agile Architecture Course, around the world.

Mr. Bloomberg is a frequent conference speaker and prolific writer. He has published over 500 articles, spoken at over 300 conferences, Webinars, and other events, and has been quoted in the press over 1,400 times as the leading expert on agile approaches to architecture in the enterprise.

Mr. Bloomberg’s previous book, Service Orient or Be Doomed! How Service Orientation Will Change Your Business (John Wiley & Sons, 2006, coauthored with Ron Schmelzer), is recognized as the leading business book on Service Orientation. He also co-authored the books XML and Web Services Unleashed (SAMS Publishing, 2002), and Web Page Scripting Techniques (Hayden Books, 1996).

Prior to ZapThink, Mr. Bloomberg built a diverse background in eBusiness technology management and industry analysis, including serving as a senior analyst in IDC’s eBusiness Advisory group, as well as holding eBusiness management positions at USWeb/CKS (later marchFIRST) and WaveBend Solutions (now Hitachi Consulting).
