Intel Fields Atom for Microservers

It has forecast that microservers could get to be 10% of the server market by 2015

Intel is going after the data center with a brand-new Atom System-on-a-Chip (SoC) that can be built into relatively cheap, high-density microservers for cloud providers.

It would really rather not - it really wants to sell its high-end chips - but it has no choice. It has forecast that microservers could be 10% of the server market by 2015, and it will have to fight for a piece of that after losing a head start earlier this year, when AMD plopped down $334 million in cash and stock for SeaMicro, a microserver start-up that already had Intel parts designed in.

But, given the tone in its voice this week, Intel is apparently serious about the sector, which it's blown off before for defensive purposes.

Intel says the new 22nm dingus, code-named Centerton and seemingly in development since 2007, is the first low-power 64-bit dual-core SoC for these data center systems that's in production and shipping to customers.

Intel makes the production-and-shipping point because it's looking over its shoulder at ARM, which is promising to deliver a four-core 64-bit version of its widget for microservers by 2014. As Intel says, there's currently no enterprise-class ARM-based server chip - but just wait: the ARM contingent is already in major test sites.

ARM vendors have trouble buying the Centerton as a real server chip since it lacks on-chip management, I/O, networking and fabric.

Intel's part sips an un-Intel-like 6W of power - which sounds low to Intel camp followers but it's still hot and therefore expensive by ARM standards. It delivers four threads with Intel Hyper-threading.

It's also got familiar server features like Error-correcting Code (ECC) memory support for higher reliability and Intel Virtualization technology for enhanced workload management. (It's suspected that Atom always had ECC and virtualization but Intel turned the features off in earlier generations.)

Microservers, which could be sold in droves, are supposed to be good at un-intensive compute chores like serving up web pages, content delivery, large distributed memory caching, simple Big Data search systems and MapReduce apps. Within reason, the Centerton is supposed to run the x86 server-class software data centers are used to, which ARM can't do at all.

It's unclear how many nodes Centerton can support; it pretty much depends on how the OEMs finagle the networking. Rival Calxeda, which has got ARM-based microservers out for test at major accounts, says its chip can theoretically support 4,000 nodes and practically support 500-1,000.

See, it takes a lot of systems to process huge numbers of smaller workloads while keeping the power consumption down, and such workloads can run as many small but highly parallel chunks of code.

Officially designated the S1200, the Intel widget is also expected to be used in storage and networking systems and Intel says - without indicating who's doing what - that the part's got more than 20 low-power server and storage and networking systems design-wins at Dell, HP, Huawei, Inspur, Quanta, Wiwynn, CETC, Supermicro, Accusys, Microsan, Qsan and Qnap.

In fact an unnamed storage vendor reportedly swapped out an ARM design for the Intel SoC and ARMs are supposed to be pretty darn good in storage applications.

HP, which is already in bed with Calxeda and its ARM-based boxes as part of its processor-agnostic Project Moonshot, means to try the Intel part in a hush-hush server dubbed Gemini.

This summer HP said the first Moonshot servers would be based on Centerton, with initial systems shipping by the end of this year. It's now more likely to be in the first quarter.

Dell's been partnering with Marvell to create so-called Copper servers using Marvell's ARM-based Armada XP chip but - since Marvell has gone dark about its development - Dell may be closer to selling Calxeda boxes.

SeaMicro, the microserver pioneer that AMD had the temerity to buy - considering all of SeaMicro's gear is based on Intel parts, even Intel parts made especially for it, and will be until it switches over to ARM - has a so-called supercompute fabric that connects thousands of processor cores, memory, storage and input/output traffic, and supports multiple processor instruction sets.

Calxeda - hobbled by the fact that its chip is neither x86 nor 64-bit, useful propaganda points for Intel, though in the final analysis that may not matter - has fabric, I/O and management built into its chip.

Apparently OEMs will have to wait until later this year or early next when Intel's supposed to deliver a next-generation Avoton Atom that could make the ARM boys sweat.

It'll be built using Intel's fancy new 22nm 3D Tri-gate transistors and should have 16GB-32GB of memory and four or eight cores.

By then Intel might have a fabric too.

Karl Freund, Calxeda's VP of marketing, sent around a message about the Centerton saying, "Intel didn't specify the additional chips required to deliver a real 'server-class' solution like Calxeda's, but our analysis indicates this could add at least 10 additional watts plus the cost. That would imply the real comparison between ECX and S1200 is 3.8 vs 16 watts, so roughly 3-4 times more power for Intel's new S1200. And again comparing two cores to four, internal Calxeda benchmarks indicate that Calxeda's four cores and larger cache deliver 50% more performance compared to the two hyper-threaded Atom cores. This translates to a Calxeda advantage of 4.5 to six times better performance per watt, depending on the nature of the application."
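Taking the quote's figures at face value - and remembering that the 16-watt system number is Calxeda's own estimate, not Intel's - the arithmetic works out roughly as claimed. A quick back-of-the-envelope check:

```python
# Sanity check of Calxeda's claimed comparison.
# All inputs are Calxeda's figures from the quote, not independently verified.

ecx_watts = 3.8             # Calxeda ECX-1000 power draw
s1200_chip_watts = 6.1      # Intel S1200 SoC alone
s1200_system_watts = 16.0   # Calxeda's estimate once "additional chips" (~10 W) are added

# "roughly 3-4 times more power for Intel's new S1200"
power_ratio = s1200_system_watts / ecx_watts
print(round(power_ratio, 1))  # 4.2

# Calxeda claims its four cores deliver 50% more performance
# than the two hyper-threaded Atom cores.
ecx_perf = 1.5
s1200_perf = 1.0

perf_per_watt_ratio = (ecx_perf / ecx_watts) / (s1200_perf / s1200_system_watts)
print(round(perf_per_watt_ratio, 1))  # 6.3
```

The 6.3x figure lands at the top of (slightly above) the quoted 4.5-6x range, which is presumably where the "depending on the nature of the application" hedge comes in.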

He provided this chart to make the comparison plain:

 

                 Calxeda ECX-1000    Intel S1200
Watts            3.8                 6.1
Cores            4                   2
Cache (MB)       4                   1
PCI-E            8 lanes             8 lanes
ECC              Yes                 Yes
SATA             Yes                 No
Ethernet         Yes                 No
Management       Yes                 No
Fabric switch    80 Gb               NA
Fabric ports     5                   NA

The new Intel S1200 product family will consist of three processors with frequencies ranging from 1.6GHz to 2GHz. They start at $54 in quantities of 1,000.

Despite the design-win parade, Intel didn't show off any boxes, so competitors figure it won't really have the chip for a while. Microsoft and Facebook are supposed to fancy the widget but it's unclear if they're using it.

A highly dense rack of Atom SoCs will reportedly net Intel more revenue than a rack of way fewer, more powerful Xeon processors.

In 2014 Intel will move to a 14nm process first for low-power Xeons and then Atoms.

More Stories By Maureen O'Gara

Maureen O'Gara, the most read technology reporter for the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of the famous "Billygrams" and was the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at)sys-con.com or paperboy(at)g2news.com, and by phone at 516 759-7025. Twitter: @MaureenOGara
