Bare Metal Blog: Quality Is Systemic, or It Is Not

In all critical systems the failure of even one piece can have catastrophic results for the user

February 5, 2013

The Bare Metal Blog, talking about quality testing of hardware in all its forms. F5 does a great job in this space.

For those of you new to the Bare Metal Blog series, find them all right here.

In all critical systems – from home heating units to military firearms – the failure of even one piece can have catastrophic results for the user. While it is unlikely that the failure of an ADC is going to be quite so catastrophic, it can certainly make IT staff’s day(s) terrible and cost the organization a fortune in lost revenue. That’s not to mention the longer-term damage that downtime can do to an organization’s brand. It is actually pretty scary to ponder the loss of any core system, but one that acts as a gateway and scaling factor for remote employee workloads and/or customer access is even higher on the list of Things To Be Avoided™.

In general, if you think about it, the number of hardware failures out there is relatively small. There is a ton of network gear doing its thing every day, and yes, there is the occasional outage, but if you consider the number of devices NOT going down on a given day, the failure rate is tiny.

Still, no one wants to be in that tiny percentage any more than they absolutely must. Hardware breaks, and always will; it is the nature of electronic and mechanical things. But we should ask more questions of our vendors to make certain they’re doing all that they can to keep the chances of a device breaking during its otherwise useful lifetime to a minimum.

For an example of doing it right, we’ll talk a bit about the lengths F5 goes to in an attempt to make devices as reliable as possible from an electro-mechanical perspective. While I am an F5 employee, there is no doubt that F5 gear is highly reliable: it was known for quality before I came to F5, and I have not heard anything since joining that would change that impression. So I use F5 because (a) I am aware of the steps we take as an organization and (b) our hardware testing is an example of doing it right.

And of course, there are things I can’t tell you, and things that we just will not have room to delve into very deeply in this overview blog. I am considering extending the Bare Metal Blog series to include (among other things) more detail about those parts that I would want to know more about if I were a reader, but for this blog, we’re going to skim so there is space to cover everything without making the blog so long you don’t read to the end.

I admit it, I’ve talked to a lot of companies about testing over the years, and can’t recall a vendor that did a more thorough job – though I can think of a few whose record in the field says they probably have a similar program. So let’s look at some of the quality testing done on hardware.

Parts are not just parts.
An ADC, like any computerized system, is a complex beast. There is a lot going on, and the weakest link sets the life expectancy and out-of-the-box quality of the overall product. As such, there are detailed parts and subassembly tests that gear must go through.

For F5, these tests include:

  • Signal Integrity Tests – test for signal degradation between parts and subsystems.
  • BIOS Test Suites – validate that the BIOS performs as expected and handles exception cases reliably.
  • Software Design Verification Testing – detect and eliminate software quality issues early in the development process.
  • Sub-Assembly Tests – verify correct subsystem performance and quality.
  • FPGA System Validation Tests – determine that the FPGA design and hardware perform as expected.
  • Automated Optical Inspection – used on the PCB production line to prevent and detect defects.
  • Automated X-Ray Inspection – takes 3D slices of an assembled circuit board to prevent and detect defects.
  • In-Circuit Test – uses a series of probes to test the populated circuit board with power applied to detect defects.
  • Flying Probe – uses a “golden board” (perfect sample) to compare against a newly produced board and verify there are no defects (a rough sketch of that comparison idea follows this list).
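
To make the “golden board” comparison idea a bit more concrete, here is a minimal, purely hypothetical sketch in Python of how measured values from a board under test might be checked against golden-sample values within a tolerance. The net names, values, and tolerance are invented for illustration; real flying-probe testers are driven by their vendor’s own software.

    # Hypothetical sketch: compare measurements from a board under test
    # against a "golden board" reference, flagging any net that drifts
    # outside an allowed tolerance. All names and values are illustrative.
    GOLDEN = {              # reference measurements (ohms) from a known-good board
        "net_vcc_3v3": 120.0,
        "net_ddr_clk": 49.9,
        "net_mgmt_eth": 75.2,
    }
    TOLERANCE = 0.05        # allow +/- 5% drift from the golden value

    def check_board(measured: dict[str, float]) -> list[str]:
        """Return a list of human-readable defects found on the board."""
        defects = []
        for net, golden_value in GOLDEN.items():
            value = measured.get(net)
            if value is None:
                defects.append(f"{net}: no measurement (open probe?)")
                continue
            drift = abs(value - golden_value) / golden_value
            if drift > TOLERANCE:
                defects.append(f"{net}: {value} ohms vs golden {golden_value} ohms")
        return defects

    if __name__ == "__main__":
        board_under_test = {"net_vcc_3v3": 119.4, "net_ddr_clk": 61.0, "net_mgmt_eth": 75.0}
        for defect in check_board(board_under_test):
            print("DEFECT:", defect)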

Now that’s a lot of testing, though I have to admit I’m still learning about the testing process, so there may well be more. But you’ll note that some things aren’t immediately called out here – like parts picked from suppliers, which could be caught by some of these tests but might not be. That is because supplier quality standards are separate from the actual testing, and they require that suppliers whose parts make it into F5 gear are up to standard.

Supply demands
So what do we, as an organization, require from a quality perspective of those who wish to be our suppliers? Here’s a list. I KNOW this list isn’t complete, because I pared it down for the purposes of this blog, but I think you’ll get the idea from what’s here.

  • All assembly suppliers are ISO 9000 and ISO 14001 certified.
  • Suppliers assemble and test their products to F5 specifications.
  • Suppliers are monitored with closed-loop performance metrics covering delivery and quality.
  • Formal Supplier Corrective Action Response program – a formal system to quickly address an issue when a fault in supplier quality is found.
  • Quarterly reviews with senior management, using a formal supplier scorecard to evaluate supplier quality, stability, and more (a rough sketch of what such a scorecard roll-up might look like closes out this section).

The biggest one in the list, IMO, is that suppliers assemble and test product to F5 specifications. Their part is going in our box, but our name is going on it. F5 has a vested interest in protecting that name, so setting the standards by which the suppliers put together and test the product they are supplying is huge. After all, many suppliers are building tiny little subsystems for inside an F5 device, so holding them to F5 standards makes the whole stronger.

By way of example, we require the more reliable but more expensive version of capacitors from our suppliers. For a bit of background on the problem, there is an excellent article on hardwaresecrets.com (and a pretty good overview on wikipedia.com) about capacitors. By demanding that our suppliers use better-quality components, the overall life expectancy of our hardware is higher, meaning you get fewer calls in the middle of the night.
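
To give a feel for what closed-loop performance metrics and a quarterly scorecard might boil down to, here is a small hypothetical sketch: weighted quarterly metrics rolled up into a single number a review could track. The metric names, weights, and threshold are invented for illustration and are not F5’s actual scorecard.

    # Hypothetical supplier scorecard: weighted roll-up of quarterly metrics.
    # Metric names, weights, and thresholds are invented for illustration only.
    WEIGHTS = {
        "on_time_delivery_pct": 0.4,            # shipments delivered on time
        "defect_free_pct": 0.4,                 # delivered units with no defects
        "corrective_action_closure_pct": 0.2,   # corrective actions closed on schedule
    }

    def scorecard(metrics: dict[str, float]) -> float:
        """Return a 0-100 weighted score for one supplier's quarter."""
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    if __name__ == "__main__":
        supplier_q1 = {
            "on_time_delivery_pct": 98.0,
            "defect_free_pct": 99.5,
            "corrective_action_closure_pct": 90.0,
        }
        score = scorecard(supplier_q1)
        print(f"Quarterly score: {score:.1f}")
        if score < 95.0:
            print("Below threshold - trigger corrective action review")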

The whole is different than the sum of the parts
While an organization can test parts until the sun rises in the west, that will not guarantee the quality of the overall product. And in the end, it is the overall product that a vendor sells. As such, manufacturers generally (and F5 specifically) keep an entire suite of whole-product tests on hand for product quality assessment. Here are some of those used at F5.

  • Mechanical Testing – Test the construction of the system by applying shock, drop, vibration, repetitive insertion/extraction, and more.
  • Highly Accelerated Life Testing (HALT) – Heat and vibration are used to determine the quality and operational limits of the device. The goal is to simulate years of use in a manageable timeframe.
  • Environmental Stress Screening – Expose the device to environmental extremes, from temperature to voltage.
  • Manufacturing (MFG) Test Suite System Stress Testing – Turn everything on, reboot, power cycle, et cetera. By way of example, we cycle power up to 10,000 times during this testing (a rough sketch of what such a harness could look like follows this list).
  • On-Going Reliability Testing – Products currently on the manufacturing line are randomly selected and placed in a burn-in chamber, which tests the device at elevated temperature.
  • Post Pack-Out Audit – Pull random samples from our finished-goods inventory to verify quality.
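
As a rough illustration of what an automated power-cycle stress test could look like, here is a hypothetical sketch that toggles power, waits for the device to answer on its management port, and logs any cycle where it fails to come back. The power-control call is a stub and the address is an example value; a real manufacturing test suite drives actual power distribution and console hardware.

    # Hypothetical power-cycle stress harness. The power-control call is a stub;
    # a real harness would drive a PDU, BMC, or bench supply.
    import socket
    import time

    MGMT_ADDR = ("192.0.2.10", 22)   # example management address (TEST-NET-1)
    CYCLES = 10_000
    BOOT_TIMEOUT_S = 300

    def set_power(on: bool) -> None:
        """Stub: toggle the device's power feed (PDU/BMC call would go here)."""
        pass

    def wait_for_ssh(deadline_s: float) -> bool:
        """Poll the management port until it accepts a TCP connection or we time out."""
        deadline = time.monotonic() + deadline_s
        while time.monotonic() < deadline:
            try:
                with socket.create_connection(MGMT_ADDR, timeout=5):
                    return True
            except OSError:
                time.sleep(5)
        return False

    failures = []
    for cycle in range(1, CYCLES + 1):
        set_power(False)
        time.sleep(10)               # let the supply drain before re-applying power
        set_power(True)
        if not wait_for_ssh(BOOT_TIMEOUT_S):
            failures.append(cycle)
            print(f"cycle {cycle}: device did not come back within {BOOT_TIMEOUT_S}s")

    print(f"completed {CYCLES} cycles, {len(failures)} failures")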

That’s a lot of testing, and it is not anywhere near all that F5 does to validate a box. For example, while software testing got a hat-tip at the component level, our Traffic Management Operating System (TMOS) has a completely separate set of testing, validation, and QA processes that are not listed here because this is the Bare Metal Blog. Maybe at some point in the future I’ll do a series like Bare Metal Blog on our software. That would be interesting for me, hopefully for you also.

It’s not over when it’s over
The entire time that Lori and I were application developers, there was a party to celebrate every time we finished a major piece of software. From an evening out with the team when our tax prep software shipped to a bottle of champagne on the roof of an AutoDesk office building when AutoCAD Map shipped, we always got to relax and enjoy it a bit.

While our hardware dev teams get something similar, our hardware test teams don’t pack up the gear and call it a product. For the entire lifecycle of an F5 box – from first prototype to End of Life – our test team does continuous testing to monitor and improve the quality of the product. Unlike most of what you will find in this blog, that is pretty unique to F5. Other companies do it, but unlike ISO certification or HALT testing, continuous testing is not accepted as a mandatory part of product engineering in the computing space. F5 does this because it makes the most sense. From variations in quality of chips to suppliers changing their suppliers, things change over the production of a product, and F5 feels it is important to overall quality to stay on top of that fact. This system also allows for continuous improvement of the product over its lifecycle.

This is one of the many reasons I think F5 is a great company. I have twice run into scenarios that involved a vendor who did not do this type of testing, and it cost me. Once was as a reviewer, which means it was worse for the vendor than for me, and once as an IT manager, which means it was worse for me than the vendor. I would suggest you start asking your vendors about lifetime testing, because a manufacturing or supplier change can impact the reliability of the gear. And if it does, either they catch it, or you could be walking into a nightmare. The perfect example (because so many of us had to deal with it) was a huge multinational selling systems with “DeskStar” disks that we all now lovingly call “Death Star” disks.

You can rely on it
This process is a proactive investment by F5 in your satisfaction. While you might think “doesn’t all that testing – particularly when continuous testing occurs over the breadth of devices you sell – cost a lot of money?”, the answer is “nowhere near as much as having to visit every device of model X and repair it, and nowhere near as much as the loss of business that persistent quality issues generate.” And it is true. We truly care about your satisfaction and the reliability of your network, but when it comes down to it, that caring is based upon enlightened self-interest. The net result, though, is devices you can trust to just keep going.

I know; we have one in our basement from before we came to F5. It’s old and looks funny next to our shiny newer one. But it still works. It’s EOL’d, so it isn’t getting any better, and when it breaks it’s done, but the device is nearly a decade old and still operates as originally advertised.

If only our laptops could do that.

More Stories By Don MacVittie

Don MacVittie is Founder of Ingrained Technology, LLC, specializing in Development, DevOps, and Cloud Strategy. Previously, he was a Technical Marketing Manager at F5 Networks. As an industry veteran, MacVittie has extensive programming experience along with project management, IT management, and systems/network administration expertise.

Prior to joining F5, MacVittie was a Senior Technology Editor at Network Computing, where he conducted product research and evaluated storage and server systems, as well as development and outsourcing solutions. He has authored numerous articles on a variety of topics aimed at IT professionals. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
