Facebook Moves to Crush Servers in a Group Hug

Facebook released a Common Slot architecture specification for data center motherboards

Um, something happened this week you ought to know about.

Facebook blew up the traditional monolithic server - and lit charges under the entire $55 billion-a-year server industry.

GigaOm was first to put it that way, and since it may well turn out to be true, it bears repeating.

Facebook, along with its user-leaning Open Compute contingent, is bent on redesigning servers to suit themselves using interchangeable, disaggregated, independently upgradeable parts.

Ultimately it's supposed to free the customer from the tyranny of the vendor roadmap.

To advance this crusade, Facebook released a Common Slot architecture specification for data center motherboards at the Open Compute Summit Wednesday.

The thing is nicknamed "Group Hug" and it's supposed to produce boards that are completely vendor-neutral and last through multiple generations of processors from multiple vendors.

Having been born too late to exert any influence over server blades, Facebook is determined to see that the new microserver architectures conform to some sort of compatibility code.

Intel, AMD, Applied Micro and Calxeda have already committed to producing products designed to the Common Slot spec, and Calxeda, the little Texas start-up with the ARM microserver designs, is so pleased to be in such rarefied company that it's beside itself.

The way things are unfolding, it looks like Facebook and the Open Compute Project (OCP) are endorsing microservers built out of the mobile chips used in smartphones and tablets, giving ARM a chance to break into the citadel held fast by the x86.

Frank Frankovsky, Facebook's VP of hardware design and supply chain as well as executive director of the Open Compute Foundation, showed off a Group Hug board carrying five unreleased Intel Atom S Series "Avoton" x86 chips as well as five Applied Micro X-Gene 64-bit ARM SoCs.

They all share the same power, electrical and mechanical interconnects and slide into the same microserver chassis.

The chips are on cards that are inserted into the so-called common slot. The motherboard can currently accommodate 10 cards.

"We're establishing for the first time a common slot for any SoC maker to design to a common standard," Frankovsky said. "All the surrounding bits are the same, with DDR memory and network controllers, and now for the first time we will have the ability to have a common slot architecture."

It uses a simple PCIe x8 connector to link the SoCs to the board.

"If we had left this to the industry they probably would have gone out and found the most expensive and esoteric connector on the planet," he said. "What we decided to do was use a PCI-e x8 connector and simply change the pin-out." The Facebook backplane design has one PCI pin-out per server.

The board's layout isn't etched in stone either. It's just supposed to encourage people to use the common slot for their CPUs. "We will now not be bound by placement of components on a single monolithic motherboard," Frankovsky said. "We will be able to do smarter tech refreshes."

The object of the game is to make hardware that's cheaper, greener, less power-hungry, more upgradeable, and software-defined so it fits the workload.

Group Hug envisions abandoning the modern vendor-integrated server, which has to be swapped out generation to generation, in favor of components that can be upgraded as they become available without scrapping what surrounds them, letting customers design modular, custom, scalable servers with just the right compute, storage and networking for the job.

The growing consensus is you shouldn't have to change the whole system just to refresh processors, memory or I/O.
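To make the idea concrete, here is a purely illustrative sketch, in Python, of the contract the common slot implies; the class and field names are hypothetical and nothing here comes from an actual Open Compute specification. The point is simply that the board, its slots and the surrounding memory and network controllers persist while individual SoC cards from any vendor are swapped in and out.

```python
# Purely illustrative model of a "common slot" board; hypothetical names,
# not an Open Compute specification or API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SoCCard:
    vendor: str      # e.g. "Intel", "Applied Micro", "AMD", "Calxeda"
    soc: str         # e.g. "Atom S Series", "X-Gene 64-bit ARM"
    generation: int

class CommonSlotBoard:
    """A board whose slots, power and interconnects outlive any one CPU."""
    SLOT_COUNT = 10  # the demo board accommodated 10 cards

    def __init__(self) -> None:
        self.slots: list[Optional[SoCCard]] = [None] * self.SLOT_COUNT

    def insert(self, slot: int, card: SoCCard) -> None:
        # Any vendor's card fits the same slot and pin-out.
        self.slots[slot] = card

    def refresh(self, slot: int, new_card: SoCCard) -> Optional[SoCCard]:
        # A "smarter tech refresh": swap one card; the board, memory layout
        # and network controllers stay put.
        old, self.slots[slot] = self.slots[slot], new_card
        return old

board = CommonSlotBoard()
board.insert(0, SoCCard("Intel", "Atom S Series (Avoton)", 1))
board.insert(1, SoCCard("Applied Micro", "X-Gene 64-bit ARM", 1))
# Two processor generations later, only the card changes:
board.refresh(0, SoCCard("Intel", "Atom S Series", 3))
```

Cards change; the slot interface doesn't, which is the whole argument for disaggregation.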

The concept and the movement building behind it obviously threaten IBM, HP, Dell and probably VMware too, since Facebook doesn't much fancy virtualization as a way to drive hardware utilization.

Intel, on the other hand, is on the movement's board and is contributing designs for its forthcoming silicon photonics technology, which will enable 100 Gbps interconnects, enough bandwidth to serve multiple processor generations.

Frankovsky said, "This technology also has such low latency that we can take components that previously needed to be bound to the same motherboard and begin to spread them out within a rack."

"We'll said be able to do things in the data center that we've never been able to do before," Intel CTO Justin Rattner said.

It's supposed to connect servers together using laser-based technology built in silicon rather than with pricier techniques.

A prototype Atom-based rack-mount server from Quanta uses Intel's 100 Gbps silicon photonics to connect parts at full speed anywhere on the rack.

More Stories By Maureen O'Gara

Maureen O'Gara, the most read technology reporter for the past 20 years, is the Cloud Computing and Virtualization News Desk editor of SYS-CON Media. She is the publisher of the famous "Billygrams" and was the editor-in-chief of "Client/Server News" for more than a decade. One of the most respected technology reporters in the business, Maureen can be reached by email at maureen(at)sys-con.com or paperboy(at)g2news.com, and by phone at 516 759-7025. Twitter: @MaureenOGara
