Understanding Serverless Cloud and Clear
By Martijn van Dongen

Serverless is considered the successor to containers. And while it’s heavily promoted as the next great thing, it’s not the best fit for every use case. Understanding the pitfalls and disadvantages of serverless will make it much easier to identify use cases that are a good fit. This post offers some technology perspectives on the maturity of serverless today.

First, note how we use the word serverless here. Serverless is a combination of “Function as a Service” (FaaS) and “Platform as a Service” (PaaS): the categories of cloud services where you never see what the servers look like. For example, RDS and Beanstalk are “servers managed by AWS,” where you still work in the context of a server. DynamoDB and S3, by contrast, are simply a NoSQL solution and a storage solution behind an API, where you do not see the servers at all. Not seeing the servers means there is no provisioning, hardening or maintenance involved; hence they are server-“less.” A serverless platform works with “events.” Examples of events are a button click on a website, backend processing for a mobile app, or the moment a picture is uploaded to the cloud and the storage service triggers a function.
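
As a minimal sketch of that event model (the bucket handling follows the standard S3 event shape; the thumbnail step is a hypothetical placeholder, not something from the original post), a Python function reacting to the “picture uploaded” event could look like this:

```python
import json
import urllib.parse

def handler(event, context):
    """Triggered by the storage service when a picture is uploaded."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # Hypothetical processing step, e.g., generating a thumbnail.
        print(f"Picture uploaded: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("processed")}
```

The point is that the code only describes what happens when the event fires; there is no server to provision, harden or patch around it.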

Performance
All services involved in a serverless architecture can scale virtually infinitely. This means that when something triggers a function, say, 1,000 times in one second, the platform can run all of those executions in parallel, so they all finish roughly as quickly as a single one would. In the old container world, you would have to provision and tune enough container instances to handle that burst of requests. Sounds like serverless is going to win this performance challenge, right? Not always: sometimes the container holding your function is not running and has to start first. This “cold start” adds overhead to the total execution time, which is undesirable if you want to guarantee your users (or “things”) a consistently fast response. To get fully predictable response times, you would still have to provision a container platform, which leaves you wondering whether it is worth the cost, not just of running the containers, but also of the related investment in time, complexity and risk.
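
One common workaround for cold starts is to keep a warm container around by pinging the function on a schedule and short-circuiting those ping invocations. The sketch below assumes a scheduled rule that sends a "warmup" marker; that marker is an invented convention for illustration, not an official feature:

```python
import time

def handler(event, context):
    # Assumed convention: the scheduled warm-up rule sends {"source": "warmup"}.
    if event.get("source") == "warmup":
        return {"warm": True}

    start = time.time()
    result = do_real_work(event)  # hypothetical business logic
    print(f"Handled request in {time.time() - start:.3f}s")
    return result

def do_real_work(event):
    return {"statusCode": 200}
```

Logging the handler duration also makes the cold-start penalty visible, so you can judge whether it actually matters for your users.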

Cost Predictability
With container platforms or servers, you are billed per running hour or, in exceptional cases, per minute or second. Even with a very predictable and steady workload you might reach around 70% utilization, which still leaves a lot of waste, and you always need to over-provision to absorb sudden spikes in traffic. You could push utilization higher, which lowers cost but also raises risk. With serverless, in contrast, you pay per code execution, rounded to the nearest 100 milliseconds, which is far more granular and gets you close to 100% utilization. This makes serverless a great choice for unpredictable, spiky traffic, because you pay only for what you use.
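
A back-of-the-envelope comparison makes the difference concrete. The figures below are illustrative assumptions (rounded per-GB-second and per-request prices, and an assumed always-on instance price), not a quote from any provider:

```python
# Illustrative, assumed prices; check your provider's current pricing.
GB_SECOND_PRICE = 0.00001667        # per GB-second of execution time
REQUEST_PRICE = 0.20 / 1_000_000    # per invocation
INSTANCE_HOURLY = 0.10              # assumed always-on container/VM, per hour

invocations_per_month = 3_000_000
billed_duration_s = 0.2             # 200 ms, already a multiple of 100 ms
memory_gb = 0.5

gb_seconds = invocations_per_month * billed_duration_s * memory_gb
serverless_cost = gb_seconds * GB_SECOND_PRICE + invocations_per_month * REQUEST_PRICE
always_on_cost = INSTANCE_HOURLY * 24 * 30

print(f"serverless: ${serverless_cost:.2f}/month")   # roughly $5.60
print(f"always-on:  ${always_on_cost:.2f}/month")    # $72.00
```

With steady, near-100% utilization the picture can flip, which is exactly why the pay-per-use model favors spiky, unpredictable traffic.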

Security
You would expect cloud services to be fully secure. Unfortunately, this isn’t entirely the case for functions. With most cloud services, the “attack surface” is limited and therefore possible to protect fully. With serverless, however, that surface is thin but broad: functions run on shared servers with less protection than, for instance, EC2 or DynamoDB. For that reason, information such as credit card details is typically not permitted in functions. That does not mean serverless is insecure, but it does mean it can’t yet pass a strict, mandatory audit. Given the high expectations for serverless, security will likely improve, so it’s good to get some experience with it now so you’re ready for the future.

Start with backend systems that hold less sensitive data, like gaming progress, shopping lists, analytics, and so on. Or process grocery orders, but outsource the payment to a payment provider. Things like credit card numbers are sensitive pieces of data in their own right: if data in memory leaks to other users of the same underlying server, an exposed credit card number can be exploited, but an identifier like “id: 3h7L8r bought tomatoes” cannot.
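
Here is a sketch of how that grocery-order example could keep sensitive data out of the function; the field names and the idea of a provider-issued token are assumptions for illustration:

```python
def handler(event, context):
    order_id = event["order_id"]            # e.g. "3h7L8r"
    items = event["items"]                  # e.g. ["tomatoes"]
    payment_token = event["payment_token"]  # opaque reference from the payment provider

    # Even if this data leaked from shared memory or logs, the token cannot be
    # charged outside the payment provider, unlike a raw credit card number.
    print(f"Order {order_id}: {items} (payment token {payment_token[:4]}...)")
    return {"status": "accepted", "order_id": order_id}
```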

Reliability
Closely related to security is the availability of services. A relatively “slow” service that never goes down is generally better than a fast service that is sometimes unavailable. In a typical disaster recovery setup, all on-premises servers are replicated to the cloud, which adds a lot of complexity. In most cases it’s better to switch off your on-premises environment and go all-in on cloud. If you’re not ready for that step, you can also use serverless as a failover platform to keep particular functionality highly available: not everything, of course, but the parts that are mission critical, or that can buffer data temporarily and process it in a batch after recovery. It’s inexpensive and very reliable.
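
A minimal sketch of that “buffer now, process in a batch after recovery” idea, assuming an SQS queue whose URL is provided via an environment variable (the variable name and queue are assumptions):

```python
import json
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["BUFFER_QUEUE_URL"]  # assumed configuration

def handler(event, context):
    # During failover, just persist the incoming request; a separate batch job
    # drains the queue once the primary system is available again.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(event))
    return {"status": "queued"}
```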

Cloud and Clear
Until recently, it was quite tricky to launch and update a live function. More and more frameworks, like the Serverless Framework (Serverless.com) and AWS SAM, are solving the main issues. Combined with automated CI/CD, it’s easy to deploy and test your serverless platform in a secured environment, which ensures that deployments to production succeed every time and without downtime. With CloudFormation or Terraform you “develop” the cloud-native services and configure the functions; with programming languages like Node.js, Python, Java or C#, you develop the functions themselves. Even logging and monitoring have become really mature over the last few months. Taken together, the source gives you a “cloud and clear” overview of what’s under the hood of your serverless application: how it’s provisioned, built, deployed, tested and monitored, and how it runs.
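
As an illustration of the “update a live function” step such a pipeline might run, here is a sketch using boto3; the function name, alias and zip path are assumptions, and frameworks like SAM or the Serverless Framework wrap these calls up for you:

```python
import boto3

client = boto3.client("lambda")

# Upload the new build and publish it as an immutable version.
with open("build/function.zip", "rb") as f:
    response = client.update_function_code(
        FunctionName="my-serverless-function",  # assumed name
        ZipFile=f.read(),
        Publish=True,
    )

# Point a stable alias at the new version so traffic switches without downtime.
client.update_alias(
    FunctionName="my-serverless-function",
    Name="live",
    FunctionVersion=response["Version"],
)
```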

Conclusion
AWS started in 2014 with the launch of Lambda, and although this post is mainly about AWS, Google and Microsoft are investing heavily in their function offerings and in the serverless approach as well. Over the last couple of months they have shown very promising offerings and demos. The world is not ready to go all-in on serverless, but we’re already seeing increasing interest from developers and startups, who are building secure, reliable, high-performing and cost-effective solutions while easily mitigating the issues mentioned earlier. You may well wake up one day to find that serverless is fully secured, delivers reliable (pre-warmed) performance, and has been adopted by many of your competitors. So be prepared and start investing in this technology today.

This blog was originally published by Xebia at Understanding Serverless Cloud and Clear.

