SeaMicro: Atom and the Ants

How the meek shall inherit the data center, change the way we build & deploy applications, & kill public cloud virtualization

The tiny ant. Capable of lifting up to 50 times its body weight, the ant is an amazing workhorse with one of the highest “power to weight” ratios of any living creature. Ants are also among the most populous creatures on the planet, and collectively they do an enormous amount of work – a bit at a time, ants can move mountains.

Atom chips (and ARM chips too) are the new ants of the data center. They power our smartphones, tablets and an ever-growing range of consumer electronics. They are now quite fast, yet surprisingly thrifty with energy – giving them the highest ratio of computing power to energy consumed of any class of microprocessor.

I predict that significantly more than half of new data center compute capacity deployed in 2016 and beyond will be based on Atoms, ARMs and other ultra-low-power processors. These mighty mites will also change how application architectures evolve. Lastly, I seriously believe that the small, low-power server model will eliminate the use of virtualization in a majority of public cloud capacity by 2018. The impact in the enterprise will be less significant at first and will take longer to play out, but the end result will be the same.

So, let’s take a look at this in more detail to see if you agree.

This week I had the great pleasure of spending an hour with Andrew Feldman, CEO and founder of SeaMicro, Inc., one of the emerging leaders in the nascent low-power server market. SeaMicro has had quite a run of publicity lately, appearing twice in the Wall Street Journal in connection with the launch of its second-generation product – the SM10000-64, based on a new dual-core 1.66 GHz 64-bit Atom chip that Intel created specifically for SeaMicro.

SeaMicro: 512 Cores, 1TB RAM, 10 RU

Note – the rest of this article is based on SeaMicro and their Atom-based servers.  Calxeda is another company in this space, but uses ARM chips instead.

These little beasties, taking up a mere 10 rack units of space (out of 42 in a typical rack), pack an astonishing 256 individual servers (512 cores), 64 SATA or SSD drives, up to 160Gbps of external network connectivity (16 x 10GigE), and 1.024 TB of DRAM. Further, SeaMicro uses ¼ of the power and ¼ of the space of a similar amount of capacity in a traditional 1U configuration, at a fraction of the cost. Internally, the 256 servers are connected by a 1.28 Tbps “3D torus” fabric modeled on the IBM Blue Gene/L supercomputer.

The approach of using low-power processors in a data center environment is detailed in a paper by a group of researchers at Carnegie Mellon University. In this paper they show that clusters built on the FAWN (“Fast Array of Wimpy Nodes”) approach are, overall, “substantially more energy efficient than conventional high-performance CPUs” at the same level of performance.

The Meek Shall Inherit The Earth
A single 42U rack holding four of these units would boast 1,024 individual servers (one CPU per server), 2,048 cores (roughly 3,400 GHz of aggregate compute), 4.1TB of DRAM, and 256TB of storage using 1TB SATA drives, all communicating over a 1.28Tbps fabric – at a cost of around half a million dollars (< $500 per server).
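
To see where those rack-level numbers come from, here is a quick back-of-the-envelope calculation in Python. It simply scales the per-chassis figures quoted above by four; the roughly half-million-dollar rack price is this article's estimate, not a vendor quote.

```python
# Back-of-the-envelope rack math using the per-chassis figures quoted above.
# Four 10 RU SM10000-64 chassis fit in a standard 42 RU rack.
chassis_per_rack = 4
servers_per_chassis = 256          # one dual-core Atom CPU per server
cores_per_chassis = 512
ghz_per_core = 1.66
dram_tb_per_chassis = 1.024
drives_per_chassis = 64            # 1 TB SATA each

servers = chassis_per_rack * servers_per_chassis      # 1,024 servers
cores = chassis_per_rack * cores_per_chassis          # 2,048 cores
total_ghz = cores * ghz_per_core                      # ~3,400 GHz of compute
dram_tb = chassis_per_rack * dram_tb_per_chassis      # ~4.1 TB of DRAM
storage_tb = chassis_per_rack * drives_per_chassis    # 256 TB of raw storage

rack_price = 500_000                                  # rough estimate, not a quote
print(f"{servers} servers, {cores} cores ({total_ghz:.0f} GHz), "
      f"{dram_tb:.1f} TB DRAM, {storage_tb} TB storage, "
      f"~${rack_price / servers:.0f}/server")
```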

$500/server – really? Yup.

Now, let’s briefly consider the power issue. SeaMicro saves power through a couple of key innovations. First, they use these low-power chips. But the CPU typically accounts for only about a third of the power load in a traditional server. To get real savings, they had to build custom ASICs and FPGAs to get 90% of the components off of a typical motherboard (which is now the size of a credit card, with four of them on each “blade”). Aside from capacitors, each motherboard has only three types of components – the Atom CPU, DRAM, and the SeaMicro ASIC. The result is 75% less power per server. Google has stated that, even at its scale, the cost of the electricity to run servers exceeds the cost to buy them. Power and space consume more than 75% of data center operating expense. If you save 75% of the cost of electricity and space, these servers pay for themselves – quickly.
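
As a rough illustration of how quickly that payback can come, here is a sketch of the math. The 75% power reduction is SeaMicro's claim; the per-server wattage, electricity rate and node price below are placeholder assumptions of mine, not figures from SeaMicro or Google.

```python
# Illustrative payback estimate. All inputs are hypothetical placeholders;
# only the "75% less power per server" claim comes from the article.
traditional_server_watts = 300        # assumed average draw of a traditional 1U box
power_saving = 0.75                   # SeaMicro's claimed reduction
electricity_cost_per_kwh = 0.10       # assumed $/kWh, including cooling overhead
hours_per_year = 24 * 365

watts_saved = traditional_server_watts * power_saving
kwh_saved_per_year = watts_saved * hours_per_year / 1000
dollars_saved_per_year = kwh_saved_per_year * electricity_cost_per_kwh

server_price = 500                    # approximate per-server cost from above
payback_years = server_price / dollars_saved_per_year
print(f"~${dollars_saved_per_year:.0f}/yr saved per server, "
      f"payback in ~{payback_years:.1f} years on power alone")
```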

If someone just gave you 256 1U traditional servers to run – for free – it would be far more expensive than purchasing and operating the SeaMicro servers.

Think about it.

Why would anybody buy traditional Xeon-based servers for web farms ever again? As the saying goes, you’d have to pay me to take a standard server now.

This is why I predict that, subject to supply chain capacity, more than 50% of new data center servers will be based on this model in the next 4-5 years.

Atoms and Applications
So let’s dig a bit deeper into the specifics of these 256 servers and how they might impact application architectures. Each has a dual-core 1.66GHz 64-bit Intel Atom N570 processor with 4GB of DRAM. These are just about ideal Web servers and, according to Intel, deliver the highest performance per watt of any Internet-workload processor they’ve ever built.

They’re really ideal “everyday” servers that can run a huge range of computing tasks. You wouldn’t run HPC workloads on these devices – such as CAD/CAM, simulations, etc. – or a scale-up database like Oracle RAC. My experience is that 4GB is actually a fairly typical VM size in an enterprise environment, so it seems like a pretty good all-purpose machine that can run the vast majority of traditional workloads.

They’d even be ideal as VDI (virtual desktop) servers, where literally every running Windows desktop would get its own dedicated server. Cool!

Forrester’s James Staten, in a keynote address at CloudConnect 2011, recommended that people write applications that use many small instances when needed vs. fewer larger instances, and aggressively scale down (e.g. turn off) their instances when demand drops. That’s the best way to optimize economics in metered on-demand cloud business models.

So, with a little thought, there’s really no reason for most applications to require instances larger than 4GB of RAM and 1.66GHz of compute. You just need to build for that.
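
Here is a minimal sketch of what “build for that” means in practice: size each worker to fit a 4GB, dual-core node, then let a simple controller add whole instances under load and turn them off the moment demand drops. The thresholds, capacity figures and function names are illustrative assumptions, not any particular cloud provider's API.

```python
# Sketch of "many small instances, scaled down aggressively" for fixed-size
# 4 GB / dual-core nodes. Thresholds and capacity figures are illustrative.
import math

def desired_instance_count(requests_per_sec: float,
                           capacity_per_instance: float = 200.0,
                           min_instances: int = 2,
                           headroom: float = 1.2) -> int:
    """How many small fixed-size instances the current load calls for."""
    needed = math.ceil(requests_per_sec * headroom / capacity_per_instance)
    return max(min_instances, needed)

def reconcile(current: int, desired: int) -> str:
    """Add instances when short; turn idle ones off as soon as load drops."""
    if desired > current:
        return f"launch {desired - current} small instance(s)"
    if desired < current:
        return f"terminate {current - desired} idle instance(s)"
    return "no change"

# Traffic falls from 2,000 req/s to 300 req/s: scale in aggressively.
print(reconcile(current=desired_instance_count(2000),
                desired=desired_instance_count(300)))
```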

And databases are going this way too. New and future “scale out” database technologies such as ScaleBase, Akiban, Xeround, dbShards, TransLattice, and (at some future point) NimbusDB can actually run quite well in a SeaMicro configuration, just creating more instances as needed to meet workload demand. The SeaMicro model will accelerate demand for scale-out database technologies in all settings – including the enterprise.
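
The core idea behind these scale-out databases can be illustrated with a simple hash-sharding router that spreads one logical table across many small nodes and meets new demand by adding nodes. This is a generic sketch only; it is not how ScaleBase, dbShards or any of the other products named above are actually implemented.

```python
# Generic hash-sharding sketch: one logical table spread across many small
# 4 GB database nodes. Illustrative only, not any specific product's design.
import hashlib

shards = [f"db-node-{i:03d}" for i in range(16)]

def shard_for(customer_id: str) -> str:
    """Route a row to a node by hashing its key."""
    digest = hashlib.sha1(customer_id.encode("utf-8")).hexdigest()
    return shards[int(digest, 16) % len(shards)]

def add_capacity(count: int) -> None:
    """Meeting more demand means adding more small nodes (and rebalancing)."""
    start = len(shards)
    shards.extend(f"db-node-{i:03d}" for i in range(start, start + count))

print(shard_for("customer-42"))   # e.g. 'db-node-007'
```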

In fact, some enterprises are already buying SeaMicro units for use with Hadoop MapReduce environments. Your own massively scalable distributed analytics farm can be a very compelling first use case.
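
To make the Hadoop fit concrete, here is a toy MapReduce-style word count in plain Python: each small node runs the map step over its own slice of the data, and a reduce step folds the partial counts together. This is a sketch of the programming model only, not Hadoop's API.

```python
# Toy MapReduce-style word count: the pattern Hadoop spreads across many
# small nodes. A sketch of the programming model, not the Hadoop API.
from collections import Counter

def map_phase(document_slice: str) -> Counter:
    """Each node counts words in its own slice of the input data."""
    return Counter(document_slice.lower().split())

def reduce_phase(partial_counts: list) -> Counter:
    """Partial counts from every node are merged into the final result."""
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

slices = ["the ants go marching", "ants move mountains", "the meek inherit"]
print(reduce_phase([map_phase(s) for s in slices]).most_common(3))
```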

This model heavily favors Linux due to its far smaller OS memory footprint compared with Windows Server. Microsoft will have to put Windows Server on a diet to support this model of data center or risk a really bad TCO equation. SeaMicro is adding Windows certification soon, but I’m not sure how popular that will be.

If I’m right, then it would seem that application architectures will indeed be impacted by this – though in the scheme of things it’s probably pretty minor and in line with current trends in cloud.

Virtualization? No Thank You… I’ll Take My Public Cloud Single Tenant, Please!
SeaMicro claims that they can support running virtualization hosts on their servers, but for the life of me I don’t know why you’d want to in most cases.

What do you normally use virtualization for? Typically it’s to take big honking servers and chunk them up into smaller “virtual” servers that match application workload requirements. For that you pay a performance and license penalty. Sure, there are some other capabilities that you get with virtualization solutions, but these can be accomplished in other ways.

With small servers being the standard model going forward, most workloads won’t need to be virtualized.

And consider the tenancy issue. Your 4GB 1.66GHz instance can now run on its own physical server. Nobody else will be on your server impacting your workload or doing nefarious things. All of the security and performance concerns over multi-tenancy go away. With a 1.28 Tbps connectivity fabric, it’s unlikely that you’ll feel other tenants’ impact at the network layer either. SeaMicro claims 12x the available bandwidth per unit of compute of traditional servers. Faster and more secure – what’s not to love?

And then there’s the cost of virtualization licenses. According to a now-missing blog post on the Virtualization for Services Providers blog (thank you, Google) written by a current employee of the VCE Company, the service provider (VSPP) cost for VMware Standard is $5/GB per month. On a 4GB VM, that’s $240 per year – or, over three years, roughly 150% of the cost of the SeaMicro node itself! (VMware Premier is $15/GB, but in fairness you do get a lot of incremental functionality in that version.) And on top of all that, you take a performance hit from having the hypervisor between you and the bare-metal server.
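
For the record, the arithmetic works out like this, using the $5/GB per month VSPP figure cited above and the approximate $500-per-node estimate from earlier:

```python
# License cost vs. hardware cost, using the figures cited above.
vspp_dollars_per_gb_month = 5          # VMware Standard, per the cited post
vm_memory_gb = 4
months = 36                            # three years

license_cost = vspp_dollars_per_gb_month * vm_memory_gb * months   # $720
node_cost = 500                        # approximate SeaMicro per-server cost

print(f"${license_cost} in licenses over three years vs. ~${node_cost} "
      f"for the node itself ({license_cost / node_cost:.0%})")
```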

Undoubtedly, Citrix (XenServer), Red Hat (KVM), Microsoft (Hyper-V) and VMware will find ways to add value to the SeaMicro equation, but I suspect that many new approaches may emerge that make hypervisor-free public clouds a reality. As Feldman put it, SeaMicro represents a potential shift away from virtualization and back toward the old model of “physicalization” of infrastructure.

The SeaMicro approach represents the first truly new approach to data center architectures since the introduction of blades over a decade ago. You could argue – and I believe you’d be right – that low-power super-dense server clusters are a far more significant and disruptive innovation than blades ever were.

Because of the enormous decrease in TCO represented by this model, as much as 80% or more overall, it’s fairly safe to say that any prior predictions of future aggregate data center compute capacity are probably too low by a very wide margin. Perhaps even by an order of magnitude or more, depending on the price-elasticity of demand in this market.

Whew! This is some seriously good sh%t.

It’s the dawn of a new era in the data center, where the ants will reign supreme and will carry on their backs an unimaginably larger cloud than we had ever anticipated. Combined with hyper-efficient cloud operating models, information technology is about to experience a capacity and value-enablement explosion of Cambrian proportions.

What should you do? Embrace the ants as soon as possible, or face the inevitable Darwinian outcome.

The ants go marching one by one, hurrah, hurrah…

——————

(c) 2011 CloudBzz / TechBzz Media, LLC.  All rights reserved.  This post originally appeared at http://www.cloudbzz.com/seamicro-atom-and-the-ants/. You can follow CloudBzz on Twitter @CloudBzz.

More Stories By John Treadway

John Treadway is a Vice President at Cloud Technology Partners and has over 20 years of experience delivering technology and business solutions to domestic and global enterprises across multiple industries and sectors. As a senior enterprise technology and services executive, he has a successful track record of leading strategic cloud computing and data center initiatives. John is responsible for technology IP at Cloud Technology Partners, and is actively involved with client projects and strategic alliances. John is also an active blogger in the cloud computing space and authors the CloudBzz blog.
