Managing Risk in IT

There’s an old adage that says, if it ain’t broke, don’t fix it

The IT debacle at RBS has highlighted the dependency large financial organisations (and other companies) have on their IT infrastructure. From what has leaked out into the press, the RBS issue relates to a piece of software called CA-7, used for mainframe batch job scheduling. When I first started in IT in 1987, CA-7 (and its sister product CA-1, used for tape management) was already legacy technology. From memory, I believe CA acquired the products from another company; both had archaic configuration processes and poor documentation. However, they did work and were reasonably reliable.

If it Ain’t Broke…
There’s an old adage that says, if it ain’t broke, don’t fix it; meaning if the software works, why change it? Any change inherently introduces risk; make no changes and you don’t introduce unnecessary risk. However, IT infrastructure doesn’t run forever. Change is necessary to accommodate new features & functionality and to cope with growth. Eventually vendors stop supporting certain versions of software and hardware as they entice and force you to upgrade and purchase new products.

The hardware risk profile is pretty well understood by most organisations. As servers and storage, for instance, get older, the cost of support increases as parts become more difficult to obtain (and more expensive). There’s a tipping point where maintenance costs outweigh the cost of upgrading or buying new, and at that point the justification can be made to replace old hardware. There are also a number of other factors involved for hardware, including space, power & cooling costs, all of which help create a reasonably mature TCO model which can be used as part of a technology refresh.
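To put some numbers on that tipping point, here’s a minimal sketch of the calculation; the purchase, support and power figures, and the assumption that support costs for ageing kit grow at a fixed rate each year, are purely hypothetical and would be replaced by an organisation’s own contract and facilities costs.

```python
# Illustrative sketch of a hardware refresh tipping point.
# All figures are hypothetical; a real TCO model would use the
# organisation's own support contracts, power and space costs.

def refresh_tipping_year(keep_support_y1, keep_growth, keep_power,
                         new_purchase, new_support, new_power, horizon=10):
    """Return the first year in which the cumulative cost of keeping the old
    kit exceeds the cumulative cost of replacing it, or None within the horizon."""
    keep_total = 0.0
    new_total = new_purchase  # one-off capital cost of the replacement
    for year in range(1, horizon + 1):
        # Support for ageing hardware grows as parts become scarcer.
        keep_total += keep_support_y1 * (1 + keep_growth) ** (year - 1) + keep_power
        new_total += new_support + new_power
        if keep_total > new_total:
            return year
    return None

# Hypothetical numbers: old array costs 20k/yr support rising 25%/yr plus
# 8k/yr power; a replacement costs 60k up front, 5k/yr support, 4k/yr power.
print(refresh_tipping_year(20_000, 0.25, 8_000, 60_000, 5_000, 4_000))  # -> 3
```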

The Software Risk Profile
However, I’m not sure we can say the same for software upgrades.  Working out the risk profile for software is more complex.  Firstly, software has no equivalent of hardware parts replacement; software components don’t wear out.  Bugs do get discovered in code, however these usually get fixed with service packs and patches.

Going back to CA-7, this software originally ran in mainframe environments supporting perhaps hundreds or a few thousand batch jobs in an overnight schedule.  In an organisation like RBS, the software may be supporting tens if not hundreds of thousands of complex batch interactions.  These may have dependencies on platforms other than the mainframe, which make things even more complex.
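To make that concrete, here’s a minimal sketch of how a single upstream failure fans out across a batch schedule; the job names and the toy dependency graph are invented for illustration and stand in for what, in a bank the size of RBS, would be tens of thousands of CA-7 schedule entries spanning several platforms.

```python
# A minimal sketch of why cross-platform batch dependencies multiply risk.
# Job names and the dependency graph are invented for illustration only.

from collections import defaultdict

# job -> jobs that must complete before it can run
depends_on = {
    "mainframe_eod_extract": [],
    "midrange_fx_rates":     [],
    "account_update":        ["mainframe_eod_extract", "midrange_fx_rates"],
    "statement_print":       ["account_update"],
    "payments_release":      ["account_update"],
    "regulatory_report":     ["payments_release"],
}

# Invert the graph: job -> jobs that directly depend on it.
blocks = defaultdict(set)
for job, deps in depends_on.items():
    for dep in deps:
        blocks[dep].add(job)

def impacted_if_fails(job):
    """All downstream jobs that cannot run if `job` fails (transitive closure)."""
    seen, stack = set(), [job]
    while stack:
        for nxt in blocks[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# A single failed extract blocks the account update and everything behind it.
print(sorted(impacted_if_fails("mainframe_eod_extract")))
```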

It’s easy to see that too much risk had been concentrated into a single piece of infrastructure software, if a failed upgrade could result in such disastrous consequences.  When software becomes so complex, it’s likely that upgrades get deferred and deferred until the upgrade becomes critical.  Then a failed upgrade has massive consequences.

The risk of failure in this instance was clearly not understood. The upgrade took place midweek, on a system that appears to handle account updates for every customer across three banks. With such a high risk profile, this change should have been scheduled for a quiet period such as a bank holiday. The change and any subsequent backout should have been handled by senior staff – The Register article implies junior staff were involved.

Finally, questions have to be asked as to how a junior member of staff could delete the entire input queue updating millions of customer records, then requiring “manual” input. That statement either makes no sense or demonstrates huge flaws in RBS’ batch structure.

The Architect’s View
Software and application upgrades are complex, and in large organisations that complexity can be one risk too many. Centralising to reduce costs shouldn’t come at the expense of introducing excessive risk. RBS (and probably many other financial organisations) need to reflect on their system designs and look to mitigate these kinds of scenarios. From my own experience, I know we could see another one of these incidents happen at any time.

