Five Things You Need to Know About a Carrier Grade Server

The hardware and software that powers the network must be able to perform under some very extreme conditions

The telecom industry provides mission-critical services, which means the hardware and software that powers the network must perform under some very extreme conditions. Servers, in particular, must remain reliable during and after everything from surges of heavy traffic to major earthquakes. Carrier grade servers must pass rigorous testing to earn certification, and five characteristics in particular set them apart.

They Are Part of the National Infrastructure
Telecommunications, as a whole, is considered an important part of the national infrastructure. This means that any disasters that derail the service are a huge problem for everyone involved - from emergency response personnel to government officials to everyone who just wants to call and check on the people they know. This infrastructure has to be operational before, during, and after an event, which is why these servers are tested so ruthlessly. Carrier grade servers should never be the weak link that drops the ball in an emergency.

NEBS Is Tough but Important
The Network Equipment-Building System (NEBS) guidelines are extremely stringent and allow no leniency. Completing the tests on any new carrier grade server takes time, and the equipment will probably be destroyed in the process. (One of the tests measures how well it withstands fire.) These guidelines originated at Bell Labs and are now maintained by Telcordia, and when a product passes them, providers can deploy it confidently with a reasonable expectation of near-constant uptime.

They Should Be Able to Shake Off an Earthquake
It may sound strange, but these products can't be considered carrier grade until they can ride out a magnitude 8.2 earthquake. These seismic tests are usually done fairly early in the process, and most products pass on the first attempt without much trouble. Nevertheless, designers must account for many different elements, from taut cables to power failures, as they iterate on the next generation of servers.

Compatibility with Legacy Infrastructure Is Still Possible
Carrier grade systems are often assumed to be proprietary technology designed for specific purposes and therefore incompatible with servers already in service. While that has often been true, newer systems are compatible with legacy servers and operating systems, which makes them a more cost-effective option for providers. These new OEM servers deliver the necessary levels of reliability, performance, and customization while keeping costs low enough to be a viable option in emerging and rural markets.

Extreme Uptime Requirements
Industries that provide mission-critical services can suffer a lot when their servers go down. Some estimates put the cost of a server outage at a million dollars for every hour of downtime. Given that a normal, average server sees 30 to 300 minutes of unplanned downtime a year, that could translate into millions of dollars every year.

Carrier grade servers are designed to cut out downtime as much as possible. The requirements published by Telcordia, in fact, allow only three minutes of downtime per year from planned and unplanned causes combined. That leaves little room for accidents, and the goal is to maintain roughly 99.999% ("five nines") uptime.
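The arithmetic behind these figures is easy to sketch. The snippet below is an illustration, not a formal model: the $1 million/hour cost and the 300 minutes/year figure are simply the article's example numbers plugged into the standard availability conversion.

```python
# Availability-to-downtime arithmetic, using the article's illustrative figures.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a (non-leap) year


def annual_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year at a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR


def downtime_cost(minutes_down: float, cost_per_hour: float) -> float:
    """Cost of an outage, given downtime in minutes and a cost per hour."""
    return (minutes_down / 60.0) * cost_per_hour


# "Five nines" (99.999%) permits about 5.26 minutes of downtime per year,
# so the three-minute carrier grade target is stricter still.
print(annual_downtime_minutes(0.99999))

# An average server at 300 minutes/year of unplanned downtime, priced at
# the estimated $1 million per hour, loses about $5 million annually.
print(downtime_cost(300, 1_000_000))
```

Note how small the margin is: moving from "five nines" to the three-minute target cuts the allowed downtime nearly in half again.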

About the Author: Jared Jacobs

Jared Jacobs has professional and personal interests in technology. As an employee of Dell, he has to stay up to date on the latest innovations in large enterprise solutions and consumer electronics buying trends. Personally, he loves making additions to his media rooms and experimenting with surround sound equipment. He’s also a big Rockets and Texans fan.
