The Automation Paradox

This script is not intended for use in the operation of nuclear facilities

A recent article in IEEE Spectrum on "the automation paradox" posed a question from Steven Cherry: "Would we be safer overall if we just accept a few deaths due to software?" The idea strikes me as a little funny, since for many years much of the software I have worked with has carried a EULA stating something like "...IS NOT INTENDED FOR USE IN THE OPERATION OF NUCLEAR FACILITIES, AIRCRAFT NAVIGATION OR COMMUNICATION SYSTEMS, AIR TRAFFIC CONTROL SYSTEMS, LIFE SUPPORT MACHINES OR OTHER EQUIPMENT IN WHICH THE FAILURE OF THE SOFTWARE COULD LEAD TO DEATH, PERSONAL INJURY, OR SEVERE PHYSICAL OR ENVIRONMENTAL DAMAGE."

This got me thinking about how far we have come with software and how many devices these days rely on generic software to run. The question comes down to the level of rigor applied in the testing and quality assurance processes, and the relevance of the technology to the task at hand. It is becoming increasingly difficult to find technology that doesn't rely to some extent on a homogeneous platform, and in fact the use of a platform brings many benefits, such as scale and a lower total cost of ownership. The idea of moving away from discrete things built to perform discrete actions is very appealing.

Consider the smartphone, for example. Whichever one you use, the underlying technology is fairly standard for a given family of products. Of course, this 'standard' has proven to be somewhat of an undoing for some of these platforms: you might have a tablet device running the Froyo release of the Android operating system on which you cannot make phone calls, despite the fact that it has a dial pad and a 'make a call' button. So what a given platform appears able to do is not necessarily what you can actually do - an obvious point that you quickly realize. Again, as previously mentioned, it is cost effective for manufacturers to use a generic image of the software to make the hardware usable, and then leave it to consumers to determine what they want to use it for.

Another parallel comes from the world of music. Over the holidays, I met with family and we discussed how deejaying has been transformed over the last couple of years by the near-complete elimination of the need for the tunes-minder to drive a minivan brimming with boxes of CDs or vinyl records. The MP3 standard and the digitization of music into data files have made that industry far more efficient - so much so that you can carry thousands of hours of broadcast-grade audio media in your pocket, in the same device you use to make phone calls. All pretty amazing, except that now there is so much of it that it is difficult to manage unless you are an incredibly disciplined or organized person. I have terabytes of recorded media at home - songs, videos, movies, and photos - that I have tried to store in a sorted fashion, but whose volume has rapidly outstripped my discipline. I almost despair when I try to find things sometimes. So do deejays really carry their entire libraries of music with them? I assume not. They probably have a standard repertoire, plus a few extra tracks in the wings for the occasionally requested but not mainstream songs.

Back to the concept of the automation paradox, then. Automation is the operation of some activity automatically, without continuous input from an operator. The paradox is that the more reliable we want an automation to be, the more complexity we introduce into it, and ultimately the less the operator can contribute to the resulting success. I have my audio tracks, so I no longer have to carry hundreds of discs from venue to venue; but because there is so much of it, I now only carry some of it. I have changed my usage model. I've switched from a phone that just makes calls to a phone that makes calls, surfs the web, plays music and takes photographs - but what do I really use it for? I have over 60 applications installed, but I only use two or three regularly. I am not representative of anyone except myself, but do you see some parallels here with yourself? In theory at least, I don't need my phone, MP3 player, camera and laptop - but which ones have I really shed?

I thought this topic relevant to the realm of creating automations around SAP transactions, because we assume that we can save time and energy by building scripts around all manner of actions in the world of SAP. Sometimes, however, we need to step back from the problem and ask whether we really should be automating something simply because we find it annoying, or simply because we can.

The message has to be that even though you 'can' build a script around a given transaction or process in SAP, is it the right thing to do? The example that comes to mind is transaction recording mode, which builds scripts using GUI scripting. While this method often works, and works well, it is an area that SAP administrators and SAP auditors have frequently called out as one to be cautious with. The challenge with this method is that it relies on classic screen scraping and doesn't rely to the same extent on the field and screen definitions defined programmatically in the SAP transaction. In a phrase, you can sometimes end up with unexpected outcomes, and you can only assess success by reviewing the results. Do you always do that?
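That review of the results need not be manual; a recorded script can be followed by an automated comparison of what was actually posted against what was intended. Here is a minimal sketch of the idea in Python - the field names and the `posted` dictionary are hypothetical, and in practice the posted values would be read back from SAP (for example via a display transaction or a query) rather than hard-coded:

```python
def verify_posting(expected, posted):
    """Compare intended field values against those actually recorded in SAP.

    Returns a list of human-readable discrepancies; an empty list means
    the recorded transaction produced exactly what was intended.
    """
    problems = []
    for field, want in expected.items():
        got = posted.get(field)
        if got != want:
            problems.append(f"{field}: expected {want!r}, got {got!r}")
    return problems

# Hypothetical example: a material master update captured by a recording,
# where the unit of measure was silently defaulted to the wrong value.
expected = {"MATNR": "100-100", "MAKTX": "Flange, steel", "MEINS": "EA"}
posted = {"MATNR": "100-100", "MAKTX": "Flange, steel", "MEINS": "PC"}

for issue in verify_posting(expected, posted):
    print(issue)
```

Even a crude check like this turns "did it work?" from an occasional spot check into something the script answers every time it runs.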

Again, you can build a degree of checking and control into your recording, but the level of effort required may ultimately mean you are better off not building such an automation at all. In such circumstances, I always encourage people to look at alternative ways of achieving the same objective - perhaps using multiple scripts, or considering a BAPI or an SAP API such as a remote-enabled function module. Whichever approach you choose, testing will be key, and playing through a number of use cases and scenarios will be pivotal to determining whether your automation is robust and reliable.
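One reason the BAPI route is easier to make robust is that BAPIs report their outcome programmatically: by convention they return BAPIRET2 messages whose TYPE field marks success ('S'), warnings ('W'), errors ('E') or aborts ('A'), so success can be asserted rather than eyeballed. A small sketch of that check, assuming a result dictionary shaped the way RFC client libraries such as SAP's open-source PyRFC return it (the simulated results below are illustrative, not real SAP output):

```python
def check_bapi_return(result):
    """Inspect a BAPI result's RETURN messages; raise on error or abort.

    By SAP convention, TYPE 'E' (error) and 'A' (abort) indicate failure,
    while 'S', 'W' and 'I' messages do not.
    """
    ret = result.get("RETURN", [])
    if isinstance(ret, dict):  # some BAPIs return a single structure, not a table
        ret = [ret]
    errors = [m for m in ret if m.get("TYPE") in ("E", "A")]
    if errors:
        raise RuntimeError("; ".join(m.get("MESSAGE", "") for m in errors))
    return result

# Simulated results in the shape an RFC call would produce.
ok = {"RETURN": [{"TYPE": "S", "MESSAGE": "Material saved"}]}
bad = {"RETURN": [{"TYPE": "E", "MESSAGE": "Material 100-100 does not exist"}]}

check_bapi_return(ok)  # passes silently
try:
    check_bapi_return(bad)
except RuntimeError as exc:
    print(exc)
```

With a live connection you would wrap each call, e.g. `check_bapi_return(conn.call("BAPI_MATERIAL_SAVEDATA", ...))`, so a failed posting stops the script instead of sailing past unnoticed.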

Fortunately, none of this is likely to be life-threatening. You're not likely to be using that transaction recording to run a defibrillator, fly an airplane or operate a heart-lung machine. Don't underestimate the script's importance, though, especially if you are using it to do interesting things like maintain bills of materials for the manufacture and assembly of items. Do consider whether you aren't ultimately building something that will make more work for you in the long run if something goes wrong. Remember that in the world of SAP, without a system restore there is no undo - only the ability to fix forward.

Suggested further reading:

The Benefits of Risk - IEEE Spectrum

What is Froyo - Gizmodo

Radio Deejays versus Radio Automation - Hubpages

More Stories By Clinton Jones

Clinton Jones is a Product Manager at Winshuttle. He is experienced in international technology and business processes, with a focus on integrated business technologies. Clinton also serves as a technical consultant on technology and quality management as they relate to data and process management and governance. Before coming to Winshuttle, Clinton served as a Technical Quality Manager at SAP. Twitter @winshuttle
