Bare Metal Blog: Testing for Numbers or Performance?

What you test can say a lot about you

Along the lines of the first blog in the testing portion of the Bare Metal Blog series, I’d like to talk a bit more about how the testing environment, the device configuration, and the payloads translate into test results.

One of the problems most advanced mass education systems run into is standardized testing. While it is true that you cannot fix what you have not determined is broken, like most things involving people, testing students on specific areas of knowledge pretty much guarantees that those doing the teaching will err on the side of preparing students to take the test rather than to succeed in life. The mere fact that there IS a test changes what is taught. It is of course possible to make this a massively positive proposition by targeting the standardized tests at the most important things students need to learn, but for our purposes the result is the same – students will be taught whatever is on that test first, and everything else second.

This is far too often true of vendor product testing as well. Because there will be a test of the equipment, and because most high-tech markets are highly competitive, things lean toward tweaking the device (or the test) to maximize test performance, regardless of what real-world performance will be.

The most flagrant problem with testing today is a variant on an old theme. Back when testing the throughput of network switches made sense, there was a lot of “packets per second” testing with no payload. That tests the ability of the switch to send packets to the right place, but it does not test the device in a manner consistent with how switches are actually used. Today we have a whole slew of similar tests for ADCs. The purpose of an ADC is to load balance, optimize, and if needed secure the passage of packets – primarily application traffic, because these are Application Delivery Controllers. Application traffic is layer seven traffic, which means a test needs to force some layer seven decision-making if the device is to be tested the way it will be used in the real world. If the packet is a layer seven packet but layer four switching is all that is performed on it, the test is useless for determining the actual capabilities of the device. And yet there is a lot of that type of testing going on right now.

It’s time – way past time – to drive ADC testing into the real world. Layer seven decision-making is much more complex and requires a deep look at the packets in question, meaning the results will not be nearly as pretty as simple layer four packet switching. You cannot do a direct comparison of all of the optional features of two different ADCs, simply because the range of optional functionality is so broad once a solid ADC platform is deployed, but you can test the basic capabilities and responsiveness of the core products.
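
To make the layer four versus layer seven distinction concrete, here is a minimal Python sketch of the two kinds of load a test harness can put on a device. The hostname, port, URI, and headers are placeholders I made up, not any vendor's actual test configuration: the layer-4 probe only asks the device to forward bytes, while the layer-7 probe forces it to parse an HTTP request and make a content-based decision, which is the work an ADC actually does in production.

```python
import socket
import http.client

ADC_HOST = "adc.example.test"   # hypothetical device under test
ADC_PORT = 80

def layer4_style_probe():
    """Open a TCP connection and push raw bytes: the device only has to
    make a layer-4 forwarding decision and never inspects the payload."""
    with socket.create_connection((ADC_HOST, ADC_PORT), timeout=5) as sock:
        sock.sendall(b"x" * 1400)   # opaque payload, no HTTP semantics

def layer7_style_probe():
    """Send a full HTTP request with a URI, Host header, and cookie, so the
    device must parse layer 7 and make a content-based routing decision."""
    conn = http.client.HTTPConnection(ADC_HOST, ADC_PORT, timeout=5)
    conn.request("GET", "/api/orders?id=42", headers={
        "Host": "www.example.test",
        "Cookie": "session=abc123",     # forces a persistence lookup
        "Accept-Encoding": "gzip",      # exercises compression handling
    })
    conn.getresponse().read()
    conn.close()
```

A test built only out of the first kind of traffic will produce big, flattering numbers; a test built out of the second kind tells you what the box can do with real applications behind it.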

And that is what we, as an industry, must begin to insist on. I used one single oddity in ADC testing here, but every branch of high-tech testing I’ve been involved in over the years – security, network gear, storage, application – has similar “this is not good enough” testing that we need to demand be dropped in favor of solid testing that reflects a real-world device. Not your real-world device, unless you are running the test lab, but a device that is seeing – and more importantly acting upon – the kind of data it will encounter in an actual network, doing the job it was designed for.

As I mentioned in the last testing installment, you can make an ADC look astounding if your tests don’t actually force it to do anything. For our public testing, we have standards, and we offer up our configuration and testing goals on DevCentral. Whether you use them to validate the test results F5 publishes or to set up the tests in your own environment, talking publicly about how testing is performed is a big deal. Ask your vendor for the configuration files and testing plan when numbers are tossed at you, and make certain you know what they’re testing when they try to impress you with over-the-top performance numbers. In my career, I have seen cases where “double the performance of our nearest competitor” was used publicly and was as close to an outright lie as possible, because the test and configuration were different between the two products the claim compared.
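
One practical way to act on that advice: if a vendor hands you the configuration files behind a comparison, diff them before you believe the numbers. The sketch below is a hypothetical illustration, not a real tool; it assumes the two configurations have been parsed into flat dictionaries and simply flags every setting that differs between the two test runs.

```python
def config_mismatches(config_a: dict, config_b: dict) -> list:
    """Return a note for every setting where two published test
    configurations disagree, or where only one side defines a value."""
    notes = []
    for key in sorted(set(config_a) | set(config_b)):
        a = config_a.get(key, "<unset>")
        b = config_b.get(key, "<unset>")
        if a != b:
            notes.append(f"{key}: product A = {a!r}, product B = {b!r}")
    return notes

# The kind of quiet differences that turn "double the performance" into fiction.
vendor_a = {"requests_per_connection": 100, "payload_bytes": 128, "tls": False}
vendor_b = {"requests_per_connection": 1,   "payload_bytes": 128, "tls": True}
for note in config_mismatches(vendor_a, vendor_b):
    print("MISMATCH:", note)
```

Any mismatch it prints is a question to put back to the vendor before the comparison means anything.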

When you buy any form of datacenter equipment, you’re going to be stuck with it for a good long while. Make certain you know how the testing that is informing your decision was performed, no matter who did the testing. Independent third-party testing sometimes isn’t so independent, and knowing that can make you more cautious before saddling your company with gear you’ll have to live with.


More Stories By Don MacVittie

Don MacVittie is currently a Senior Solutions Architect at StackIQ, Inc. He is also working with Mesamundi on D20PRO, and is a member of the Stacki Open Source project. He has experience in application development, architecture, infrastructure, technical writing, and IT management. MacVittie holds a B.S. in Computer Science from Northern Michigan University, and an M.S. in Computer Science from Nova Southeastern University.
