By Michael Bushong
January 28, 2014 11:00 AM EST
We are a few short days away from the biggest spectacle in sports – the Super Bowl. It is impossible to avoid talk this week of Peyton Manning, the Denver Broncos, the Seattle Seahawks, and the NFL in general. But does the NFL have anything to teach the tech industry?
The NFL is a massively successful enterprise by almost any measure. Despite a rash of recent scandals, including a pay-for-injury bounty program and a major lawsuit and settlement tied to concussions, the league continues to grow its fan base – both in the US and abroad – while raking in record viewership and revenue. At the heart of the NFL’s resilience in the face of scandal, and of its seemingly bottomless pit of revenue, is an uncanny ability to reinvent itself.
In fact, it is the NFL’s overall position on its own evolution that has secured its place at the top of the entertainment pantheon.
Instant Replay in the NFL
The NFL adopted instant replay in 1986 after a fairly healthy debate. Detractors pointed out that human officiating, complete with its flaws, was part of the game’s history. Games had always been decided by a team of officials who had to get it right in the moment, and changing that would somehow alter the NFL’s traditions. But it took only a few high-profile officiating mishaps replayed on national television to sway sentiment, and in 1986, by a vote of 23 to 4 (with one abstention), the NFL ushered instant replay into the league.
But instant replay’s first stint in the NFL lasted only until 1992. In its first incarnation, instant replay ranged from effective to wildly unpopular. The rules governing which plays could be reviewed were not always clear. The review process was slow and at times awkward, dragging games out. And the original system allowed officials to review their own calls, which led to some maddening outcomes.
Instant replay went dark until making its triumphant return in 1999. With a few process tweaks (coaches can now challenge specific calls) and advances in technology (HD broadcasts and more camera angles), the system is clearly here to stay.
But what is so important about how the NFL rolled out instant replay? And how does this apply to networking?
Instant Replay and Networking
First, it is worth noting that instant replay was not a unanimous choice. There were detractors – members of the Old Guard who thought the new way of doing business was too big a departure from the past. In networking, we face much the same. Countless people fight change at every step because it is not consistent with the old way of doing things. They cling to their technological religion while the rest of the world moves forward. It’s not that their experiences are irrelevant or unimportant, but their unwillingness to work alongside the disruptors means those experiences stay private, forcing the New Guard to stumble over many of the same obstacles. This is not good for anyone.
Second, we should all recognize that instant replay was tried and it failed. But despite that failure, the NFL was able to bring it back, to the great benefit of the game. As the SDN revolution rages on, there are people who point to the past. They say clever things like “All that is old is new again,” or they refer derisively to past attempts the industry has made to solve some of the same problems SDN is addressing today.
But if ideas were permanently shelved after every setback or failure, where would we be? Using the past as a compass for the future is helpful; clinging to the past and using it to justify a refusal to move forward is destructive.
And finally, the NFL has shown a remarkable ability to iterate on its ideas. Instant replay was successful in its second run because of the changes the NFL made. New technology will not be invented with perfect foresight. The initial ideas might not even be as important as the iterative adjustments. We need to embrace failure and use it to adapt and overcome. By not being religious about its history, the NFL has successfully evolved. The question for networking specialists everywhere is to what extent our own industry is capable of setting aside its sacred cows.
Rushing, West Coast Offense, Hurry-Up Offense
Football is remarkable for how much it changes over time. Decades ago, offense was all about having a good running back. The passing game was an afterthought, used to lure defenders away from the line of scrimmage. Those days yielded to a more pass-happy era featuring the San Diego Chargers’ Air Coryell offense and the Houston Oilers’ Run and Shoot. Those teams handed the offensive mantle to Bill Walsh’s West Coast Offense. Then came New Orleans’ more vertical passing attack. And now hurry-up offenses are everywhere.
It almost doesn’t matter how these systems differ. What is amazing is that so many of them have been able to thrive. The NFL, despite its traditions, is above all committed to reinventing itself. And for every one of these offensive systems, there are a dozen others that failed to catch on.
Evolution and Networking
The NFL has figured out that it is a league that thrives on new ideas. Whether it’s the league as a whole or individual teams and players, everyone is committed to trying new things. That commitment has created a hyper-fertile breeding ground for new ideas. It is no surprise that the league has managed to reinvent itself every few years, much to the delight of its legions of fans.
Networking is going through an interesting time. This period of three to four years might very well be looked back on as a Golden Era for networking. The number of new ideas being tested in the marketplace right now is amazing: SDN, NFV, DevOps, photonic switching, sensor networking, network virtualization… and the list goes on. But these new ideas came on the heels of what really were the Dark Ages. After the dot-com bust, the networking world went dark. Sure, there were new knobs and doodads that were useful to folks, but as an industry, the innovation was mostly incremental.
So when this Golden Era of Networking is over, what kind of industry will we have? Will we return to the Dark Ages, or will we end up in another Period of Enlightenment? If the NFL is any indication of what continuous innovation looks like, the better answer is to keep embracing new ideas. But are we culturally prepared to keep embracing disruption? Are we collectively unafraid enough of failure for that kind of future to suit us? If you ask me, we have to be.
Defense Wins Championships
There is an old saw that goes “Defense wins championships.” At this time of year, it gets trotted out as one of those universal truths. But here’s the reality: evolution wins championships. In the NFL, offenses and defenses win at about the same rate (a slight nod to defenses, but only by a hair). It’s a team’s ability to evolve over the years – and even during a game – that dictates success.
Our industry is no different. We have our own Old Guard that talks about past technologies with the kind of reverence that you see when historians put on their smoking jackets and grab their pipes. But our industry is defined by its future more than its past. There is a lot to learn from our history, but if we let those teachings get in the way of our future, we will be no better off than we are now.
So when you are grabbing a beer or diving into the 7-layer dip at whatever Super Bowl party you end up attending, talk about the role of innovation and how it reigns supreme over those dusty old defenses.
[Today's fun fact: Clans of long ago that wanted to get rid of their unwanted people without killing them used to burn their houses down, hence the expression "To get fired." I wonder where the term "lay off" came from then?]