Understanding Serverless Cloud and Clear
By Martijn van Dongen

Serverless is often described as the successor to containers. And while it's heavily promoted as the next great thing, it's not the best fit for every use case. Understanding the pitfalls and disadvantages of serverless makes it much easier to identify the use cases that are a good fit. This post offers some technology perspectives on the maturity of serverless today.

First, note how we use the word serverless here. Serverless covers both "Function as a Service" (FaaS) and "Platform as a Service" (PaaS): the categories of cloud services where you don't know what the servers look like. For example, RDS and Beanstalk are "servers managed by AWS," where you still see the context of the server(s). DynamoDB and S3, by contrast, are just a NoSQL and a storage solution with an API, where you never see the servers at all. Not seeing the servers means there is no provisioning, hardening, or maintenance involved; hence they are server-"less." A serverless platform works with "events." Examples of events are the action behind a button on a website, the backend processing of a mobile app, or the moment a picture is uploaded to the cloud and the storage service triggers a function.
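
To make the event model concrete, here is a minimal sketch of that picture-upload example as an AWS Lambda handler in Python. The processing step is a hypothetical placeholder; only the S3 event shape is standard.

```python
# Minimal AWS Lambda handler (Python) for the picture-upload example above.
# S3 delivers an event describing the uploaded object; no server is provisioned.

def handler(event, context):
    # An S3 event can contain one or more records, one per uploaded object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical processing step, e.g. creating a thumbnail.
        print(f"Picture uploaded: s3://{bucket}/{key}")
    return {"status": "processed", "records": len(event["Records"])}
```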

Performance
All services involved in a serverless architecture can scale virtually infinitely. This means that when something triggers a function, say, 1,000 times in one second, all executions can run in parallel and finish roughly one second later. In the old container world, you would have to provision and tune enough container instances to handle that many simultaneous requests. Sounds like serverless is going to win this performance challenge, right? Not always: sometimes the container holding your function is not yet running and needs to start first. This "cold start" adds overhead to the total execution time, which is undesirable if you want to ensure that your users (or "things") always get a fast response. To get fully predictable response times, you still have to provision a container platform, leaving you to wonder whether it's worth the cost, not just for running the containers, but also for the related investments in time, complexity, and risk.
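
One common workaround for cold starts is to keep a container warm with a scheduled ping. The sketch below assumes a CloudWatch scheduled rule invoking the function every few minutes; it is an illustrative pattern, not a guarantee of warm capacity.

```python
import time

# Module scope runs once per "cold" container start; warm invocations skip it.
CONTAINER_STARTED_AT = time.time()

def handler(event, context):
    # A CloudWatch scheduled rule (assumed here) sends events with source "aws.events".
    # Returning early keeps these warm-up pings fast and cheap.
    if event.get("source") == "aws.events":
        return {"warmup": True, "container_age_s": time.time() - CONTAINER_STARTED_AT}
    # ... real work for genuine requests goes here ...
    return {"warmup": False}
```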

Cost Predictability
With container platforms or servers, you're billed per running hour or, in exceptional cases, per minute or second. Even with a very predictable and steady workload, you might reach around 70% utilization, which still leaves a lot of waste, and you always need to over-provision to absorb sudden spikes in traffic. You could push utilization higher to lower costs, but that raises the risk of falling over during a spike. With serverless, in contrast, you pay per code execution, rounded up to the nearest 100 milliseconds, which is much more granular and comes close to 100% utilization. This makes serverless a great choice for unpredictable, spiky traffic, because you pay only for what you use.
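
To make the billing difference concrete, here is a back-of-the-envelope comparison in Python. All prices and traffic numbers are illustrative assumptions, not actual vendor pricing.

```python
# Rough cost sketch: always-on container vs. pay-per-use functions.
# Every number below is an illustrative assumption, not real pricing.

HOURS_PER_MONTH = 730

# Always-on container: billed per running hour regardless of utilization.
container_price_per_hour = 0.10           # assumed
container_cost = container_price_per_hour * HOURS_PER_MONTH

# Serverless: billed per invocation, rounded up to the nearest 100 ms.
invocations = 3_000_000                   # assumed monthly traffic
avg_duration_ms = 120                     # billed as 200 ms per invocation
price_per_100ms = 0.0000002               # assumed, for a small memory size
billed_units = -(-avg_duration_ms // 100) # ceiling division into 100 ms slices
function_cost = invocations * billed_units * price_per_100ms

print(f"Container (always on):  ${container_cost:.2f}/month")
print(f"Functions (pay per use): ${function_cost:.2f}/month")
```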

Security
You would expect cloud services to be fully secure. Unfortunately, this isn't yet the case for functions. With most cloud services, the "attack surface" is limited and therefore possible to protect fully. With serverless, however, the surface is thin but broad: many small functions run on shared servers with less isolation than, for instance, EC2 or DynamoDB. For that reason, information such as credit card details is not permitted in functions. That does not mean serverless is insecure, but it does mean it can't yet pass a strict, mandatory audit. Given the high expectations for serverless, security will likely improve, so it's good to get some experience with it now so you're ready for the future.

Start with backend systems that hold less sensitive data, like gaming progress, shopping lists, analytics, and so on. Or process grocery orders, but outsource the payment to a provider. On its own, a grocery order is not a sensitive piece of data: if data in memory leaks to other users of the same underlying server, an exposed credit card number can be exploited, but an identifier like "id: 3h7L8r bought tomatoes" cannot.

Reliability
Closely related to security is the availability of services. A relatively "slow" service that can't go down is generally better than a fast service that is sometimes unavailable. In a typical disaster recovery setup, all on-premises servers are replicated to the cloud, which adds a lot of complexity. In most cases, it's better to switch off on-premises entirely and go all-in on cloud. If you're not ready for that step, you can also use serverless as a failover platform to keep particular functionalities highly available: not all functionalities, of course, but those that are mission critical, or those that can store data temporarily and process it in a batch after recovery. It's inexpensive and very reliable.
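
The "temporary storage, batch after recovery" idea might look like the following sketch: a function that accepts writes during an outage and parks them on a queue for later batch processing. The queue URL and payload shape are assumptions for illustration.

```python
import json
import boto3

# During a failover, this function simply buffers incoming orders on SQS;
# a batch job drains the queue once the primary system has recovered.
sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/failover-buffer"  # assumed

def handler(event, context):
    # Park the payload instead of processing it; durable and cheap.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"buffered": True}
```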

Cloud and Clear
Until recently, it was quite tricky to launch and update a live function. More and more frameworks, such as the Serverless Framework (serverless.com) and AWS SAM, are solving the main issues. Combined with automated CI/CD, it's easy to deploy and test your serverless platform in a secured environment, ensuring that deployment to production succeeds every time and without downtime. With CloudFormation or Terraform you "develop" the cloud-native services and configure the functions; with programming languages like Node.js, Python, Java, or C#, you develop the functions themselves. Even logging and monitoring have become really mature over the last few months. The whole source gives you a "cloud and clear" overview of what's under the hood of your serverless application: how it's provisioned, built, deployed, tested, and monitored, and how it runs.
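
As a sketch of that split between "developing" the cloud-native services and writing the functions themselves, here is a minimal AWS SAM template (a CloudFormation dialect) wiring the earlier upload example to a bucket. The resource names, runtime, and paths are illustrative.

```yaml
# Minimal AWS SAM template: the infrastructure is code, while the function
# body lives in src/handler.py. All names here are illustrative.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  UploadProcessor:
    Type: AWS::Serverless::Function
    Properties:
      Handler: handler.handler
      Runtime: python3.9
      CodeUri: src/
      Events:
        PictureUploaded:
          Type: S3
          Properties:
            Bucket: !Ref PictureBucket
            Events: s3:ObjectCreated:*
  PictureBucket:
    Type: AWS::S3::Bucket
```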

Conclusion
AWS started in 2014 with the launch of Lambda, and although this post is mainly about AWS, Google and Microsoft are investing heavily in their function offerings and in the serverless approach as well. Over the last couple of months, they've shown very promising offerings and demos. The world is not ready to go all-in on serverless, but we're already seeing increasing interest from developers and startups, who are building secure, reliable, high-performing, and cost-effective solutions while easily mitigating the issues mentioned earlier. You may well wake up one day to find that serverless is fully secured, provides reliable (pre-warmed) performance, and has been adopted by many of your competitors. So be prepared and start investing in this technology today.

This blog was originally published by Xebia at Understanding Serverless Cloud and Clear.

