MBaaS, Cloud Computing and Architectures for Enterprise Mobility

What your mother never told you

My friend and Cognizant colleague, the ever-opinionated Peter Rogers, shares more of his insights into the world of IoT (Internet of Things), enterprise mobility, geekdom and how they really work under the covers. By "they" I mean technology, not geeks.
____

I believe the trend away from mainframes to the Cloud will have a large impact on enterprise mobility architecture. If we believe that, going forward, enterprise mobility architectures will be closely tied to the Cloud, then we need to take a serious look at architectural design. I have written before about MBaaS (Mobile Back End as a Service), a new form of Cloud offering, but today I want to share my opinions on best practices.

I have been working on an MBaaS project recently, and we ran into some interesting challenges when it came to submitting the App to the Apple App Store. In the middle of the night there was some server maintenance going on, which was obviously considered out of hours in the UK. The point I reminded everyone of was that Apple's corner of California is actually GMT-7 (GMT-8 outside daylight saving), so what is considered out of hours in the UK is not out of hours where Apple does its testing.

We then got onto some interesting questions:

  • "Do we have availability monitoring?"
  • "How do you get the Node service working again if it falls over?"
  • "Do we have High Availability (HA) and Disaster Recovery (DR)?"

This led me to take a deep look at the architecture of building Cloud native applications on top of Amazon Web Services (AWS). MBaaS does abstract a lot of the underlying detail away from you, but at the end of the day, if the underlying Cloud provider is Amazon EC2 (which is a very common option), then it is worth understanding exactly how AWS works. Abstraction is a great concept for simplification, but it works far better if you start with a core understanding of what is being abstracted. Furthermore, it used to be the case that most server-side development was performed by server-side specialists, but the popularity of Node means that client-side JavaScript developers are now building server-side applications that run on the Cloud for the first time.

One of the best articles I found on architectural design for Cloud native applications on AWS was written back in 2011 (but is still referenced today), and it genuinely changed my architectural philosophy on the matter.

In a nutshell, Amazon Web Services uses a UDP-cloud model because it doesn't guarantee reliability at the infrastructure level.

This is a very interesting concept so I want to take the rest of the Blog to explain it, starting with a quick reminder of TCP and UDP.

  • TCP is a reliable connection oriented protocol with segment sequencing and acknowledgments
  • UDP is an unreliable connectionless protocol with no sequencing or acknowledgments
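
For client-side JavaScript developers meeting this on the server for the first time, a minimal Node sketch makes the difference concrete. This is an illustrative sketch only: the ports are arbitrary and nothing need be listening on them.

    // UDP: connectionless, fire-and-forget, no delivery guarantee.
    var dgram = require('dgram');
    var udpClient = dgram.createSocket('udp4');
    var message = new Buffer('ping');
    udpClient.send(message, 0, message.length, 41234, 'localhost', function () {
      // The callback confirms the datagram left this host, not that it arrived.
      udpClient.close();
    });

    // TCP: a connection is established, and the stack handles sequencing,
    // acknowledgments and retransmission for us.
    var net = require('net');
    var tcpClient = net.connect({ port: 41235, host: 'localhost' }, function () {
      tcpClient.write('ping'); // delivered in order, or the connection errors out
      tcpClient.end();
    });
    tcpClient.on('error', function (err) {
      console.error('TCP connection failed: ' + err.message);
    });

Notice that if nothing is listening, the UDP send "succeeds" silently while the TCP connection fails loudly, which is exactly the distinction the cloud analogy is built on.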

During a few large AWS outages, a number of Bloggers (such as George Reese) outlined the differences between the "design for fail" model and the "traditional" model. The traditional model, among other things, has high availability (HA) and disaster recovery (DR) characteristics built right into the infrastructure, and these features are typically application-agnostic. An alternative view, therefore, is that the "traditional" model is a TCP-cloud and the "design for fail" model is a UDP-cloud.

  • A TCP Cloud has the application in the consumer space and the HA/DR policies and Cloud Compute in the provider space.
  • A UDP Cloud has the application and the HA/DR policies in the consumer space and only the Cloud Compute in the provider space.

This is obviously a vast oversimplification, and AWS offers far more than just cloud computing, but these are the key components to focus on. AWS doesn't have high availability built into the EC2 service; instead, it suggests deploying in multiple "Availability Zones" simply to avoid concurrent failures. In other words, if you deploy your application in a given Availability Zone, there is nothing that will fail it over to another Availability Zone.
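
A hedged sketch of what that means in practice, using the AWS SDK for Node (aws-sdk v2); the AMI ID, region and zone names below are placeholders, and credentials are assumed to be configured in the environment:

    var AWS = require('aws-sdk');
    var ec2 = new AWS.EC2({ region: 'us-east-1' });

    // EC2 will not spread or fail over instances between zones for you,
    // so you launch into more than one Availability Zone yourself.
    ['us-east-1a', 'us-east-1b'].forEach(function (zone) {
      ec2.runInstances({
        ImageId: 'ami-xxxxxxxx', // placeholder AMI
        InstanceType: 't2.micro',
        MinCount: 1,
        MaxCount: 1,
        Placement: { AvailabilityZone: zone }
      }, function (err, data) {
        if (err) { return console.error('Launch failed in ' + zone + ': ' + err.message); }
        console.log('Launched ' + data.Instances[0].InstanceId + ' in ' + zone);
      });
    });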

Some AWS customers therefore developed tools to test the resiliency of their applications, such as a Chaos Monkey tool (Netflix's being the most famous). These are software programs designed to break things at random. In a TCP-cloud it is the cloud provider that runs traditional tests to make sure the infrastructure can self-recover. In a UDP-cloud it is the developer who must run a Chaos Monkey to test whether the application can self-recover, since it has been designed for fail.
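
A toy version of the idea in Node might look like the sketch below. This is illustrative only, not Netflix's actual tool, and the instance IDs are placeholders:

    var AWS = require('aws-sdk');
    var ec2 = new AWS.EC2({ region: 'us-east-1' });

    // Pick one instance at random from a pool and terminate it, then
    // watch whether the application recovers on its own.
    var pool = ['i-0aaa1111', 'i-0bbb2222', 'i-0ccc3333']; // placeholder IDs
    var victim = pool[Math.floor(Math.random() * pool.length)];

    ec2.terminateInstances({ InstanceIds: [victim] }, function (err) {
      if (err) { return console.error('Termination failed: ' + err.message); }
      console.log('Terminated ' + victim + ' - did your service survive?');
    });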

A different view on this is cattle and pets.

vSphere servers are likened to pets:

  • They are given names (such as pussinboots.cern.ch)
  • Uniquely hand raised and cared for
  • Nursed back to health when sick

OpenStack servers are likened to cattle:

  • They get random identification numbers (vm0042.cern.ch)
  • They are almost identical to each other
  • They are cared for as a group
  • They are basically just replaced when ill

The conclusion is that "Future application architectures should use Cattle, but Pets with strong configuration management are viable and still needed". If you haven't made the connection yet, Cattle are the UDP Clouds and Pets are the TCP Clouds.

To my colleagues I have always classed MBaaS as somewhere between Cloud PaaS and Cloud SaaS, but I have been quite wrong in this regard. I want to update that definition to the following:

"MBaaS is the combination of Cloud SaaS and EITHER Cloud PaaS or Cloud IaaS, which depends on both the underlying Cloud provider and the supporting service model".

That means that if your underlying Cloud provider is AWS, and your MBaaS vendor isn't giving you additional support in HA/DR, availability monitoring or Chaos Monkey tools, then you are basically sitting on a Cloud IaaS that is acting as a UDP Cloud. That is an important thing to be aware of in terms of what you need to bring to the party, and it is the potential danger of not really understanding the underlying Cloud model you are working with.

When we finally move away from mainframes and fully embrace the Cloud, we need to look at how we architect Cloud native applications. That means accepting that your Node service tier could fall over, and looking at tools like Node-Forever and PM2 (http://devo.ps/blog/2013/06/26/goodbye-node-forever-hello-pm2.html). Node-Forever is a popular option for bringing Node services back to life (Keep Alive) and it also supports CoffeeScript. PM2 adds the following: log aggregation; an API; terminal monitoring (including CPU usage and memory consumption by cluster); native clustering; and JSON configuration.
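
A minimal sketch of the pattern both supervisors assume: the service crashes fast on unexpected errors and relies on forever or PM2 to restart it. The file name and port here are illustrative:

    // server.js - a minimal Node service written to be supervised
    var http = require('http');

    http.createServer(function (req, res) {
      res.writeHead(200, { 'Content-Type': 'application/json' });
      res.end(JSON.stringify({ status: 'ok' }));
    }).listen(process.env.PORT || 3000);

    process.on('uncaughtException', function (err) {
      console.error('Fatal error, exiting so the supervisor restarts us: ' + err);
      process.exit(1); // forever / PM2 will bring up a fresh process
    });

    // Keep alive (run from the shell, not from this file):
    //   forever start server.js
    //   pm2 start server.js -i max   (-i max uses PM2's native clustering)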

There are also plenty of ways to monitor the availability of your Cloud instance. You could subscribe to a Twitter feed for your particular Cloud, and there are quite a few services that offer a ping service to check availability. If you are using Appcelerator Cloud Services then there is a great tool called Relic available on their Marketplace.
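
If you want to roll your own, a simple availability probe is only a few lines of Node; the health URL and the one-minute interval below are assumptions for illustration:

    var https = require('https');

    function checkAvailability() {
      var req = https.get('https://api.example.com/health', function (res) {
        console.log(new Date().toISOString() + ' status ' + res.statusCode);
        res.resume(); // drain the response so the socket is freed
      });
      req.on('error', function (err) {
        // This is where you would page someone or post an alert.
        console.error(new Date().toISOString() + ' DOWN: ' + err.message);
      });
      req.setTimeout(5000, function () {
        req.abort(); // treat a slow answer as an outage
      });
    }

    setInterval(checkAvailability, 60 * 1000); // probe once a minute
    checkAvailability();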

In terms of HA, you need to look into deploying a High Availability Proxy. HAProxy (High Availability Proxy) is an open source load balancer that can load balance any TCP service. It is particularly suited to HTTP load balancing because it supports session persistence and Layer 7 processing. I am not sure how many Cloud developers actually use Chaos Monkey tools to test DR, but the option is certainly there. You should certainly be designing your applications to be as stateless as possible and looking into NoSQL databases.
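
As a sketch of what "stateless" means behind a load balancer, the example below keeps session data in a shared Redis store (using the classic node_redis callback API) so that any instance HAProxy picks can serve any request; the header and key names are illustrative:

    var http = require('http');
    var redis = require('redis');
    var client = redis.createClient(); // assumes Redis reachable on localhost

    http.createServer(function (req, res) {
      var sessionId = req.headers['x-session-id'];
      if (!sessionId) {
        res.writeHead(400);
        return res.end('missing x-session-id header');
      }
      // The session lives in Redis rather than in this process, so a
      // failed instance loses nothing and no sticky sessions are needed.
      client.get('session:' + sessionId, function (err, data) {
        if (err) { res.writeHead(500); return res.end('store unavailable'); }
        res.writeHead(200, { 'Content-Type': 'application/json' });
        res.end(data || '{}');
      });
    }).listen(process.env.PORT || 3000);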

I hope this article has helped you to understand that you cannot just assume your MBaaS vendor is providing a full Cloud PaaS and that all of this comes out of the box. I hope you will also design your Cloud services with a general consideration of the underlying infrastructure. You should have this discussion early in the project and work out which tools you need to provide and which enterprise architectural principles need to be applied.

Of course, there is nothing to stop you having two or three different underlying Cloud providers, or running just the mission-critical features on a private local Cloud. It is an important point to remember, though: Amazon EC2 and other Cloud providers can go down for 48 hours. It is very rare, but it is not unheard of in the history of the Cloud.

"Design for failure and you won't ever be surprised"

I would like to thank Massimo and Douglas Lin for their exceptional Blogs, which I have referenced throughout this article.

***Full Disclosure: These are my personal opinions. No company is silly enough to claim them. I am a mobility and digital transformation analyst, consultant and writer. I work with and have worked with many of the companies mentioned in my articles.

