
[Case Study] API Testing and Service Virtualization Reduce Testing Time 20x

Accelerating Testing in Parallel and Agile Development Environments

Ignis Asset Management is a global asset management company, headquartered in London, with over $100 billion (USD) in assets under management. Ignis recently embarked on a large project aimed at outsourcing the back office as well as implementing the architecture and applications required to support the outsourcing model.

"To meet the business's needs, a number of projects have to be developed and delivered in parallel," explained Aaron Martin, Programme Test Manager at Ignis. "However, we didn't have the resources, budget, and management capacity required to create and maintain multiple test environments internally. This limited test environment access impeded our ability to validate the integration of each application under test (AUT) with third-party architectures. Moreover, our third-party providers also had limited test environment access, which restricted the time and scope of their joint integration testing."

At the same time, the company was transitioning to an agile development methodology. To support this initiative, they needed to adopt an automated testing solution to provide faster feedback after each build.

It soon became apparent that the existing testing process had to be optimized to meet these new demands. Executing the core test plan required 10 man-days. The process involved manually entering transactions in the originating application, which wasn't the primary AUT. The team was also manually building simple stubs to simulate interactions with third-party components that were not yet integrated. To enable complete testing in an agile, parallel development model, without building and maintaining additional test environments, they needed ways to:

  • Enable applications (or parts of the target architecture) to be tested against the Ignis architecture before integration into the complete Ignis system.
  • More efficiently simulate the AUT's interactions with third-party systems not yet integrated into the Ignis system.

Parasoft API Testing and Service Virtualization Enable Ignis to Begin Extensive Automated Testing Before Integration

Ignis implemented Parasoft's API Testing and Service Virtualization solutions to establish a test automation framework that not only addressed the challenges outlined above, but also helped extend test automation across the SDLC.

Ignis's initial implementation of the API Testing solution focused on automating the generation of order management traffic at the API level. The AUT was the message architecture, which interfaces with third-party components: both existing services provided by business partners and services being implemented in parallel by outsourcing providers. Live trade scenarios captured from the application initiating the order were used to form the basic test transactions. Using SOAtest (Parasoft's API Testing tool), they were able to run the full transaction test plan, generating new instances of each message from a data source. This data-driven message building took advantage of features such as SOAtest's ability to update attributes to create unique IDs, set dates, and perform calculations.
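
As a rough illustration of the idea (not Ignis's actual implementation), the sketch below shows data-driven generation of order messages in plain Python: each row of a data source becomes a new message with a unique ID, a freshly set date, and a calculated field. The endpoint URL, field names, and CSV layout are hypothetical assumptions; Ignis drove this through SOAtest's built-in data-source features rather than hand-written code.

    # Minimal sketch of data-driven API test traffic generation.
    # Hypothetical: the endpoint URL, field names, and CSV layout are illustrative only;
    # Ignis used SOAtest's data-source features, not custom Python.
    import csv
    import uuid
    from datetime import date

    import requests  # common third-party HTTP client

    ENDPOINT = "https://test.example.com/orders"  # hypothetical message-architecture endpoint

    def build_message(row):
        """Turn one captured trade scenario (a data-source row) into a unique test message."""
        quantity = int(row["quantity"])
        price = float(row["price"])
        return {
            "orderId": str(uuid.uuid4()),           # update attribute to a unique ID
            "tradeDate": date.today().isoformat(),  # set the date at generation time
            "instrument": row["instrument"],
            "quantity": quantity,
            "grossAmount": round(quantity * price, 2),  # simple calculated field
        }

    with open("trade_scenarios.csv", newline="") as src:
        for row in csv.DictReader(src):
            response = requests.post(ENDPOINT, json=build_message(row), timeout=10)
            assert response.ok, f"Unexpected status: {response.status_code}"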

In parallel with the functional test automation, Parasoft Virtualize (Parasoft's Service Virtualization tool) was implemented to simulate the expected transaction response messages from third-party components. "First, we rapidly implemented a simple virtual asset that provided a positive response to all generated transactions, enabling us to simulate third-party responses without manually developing and managing stubs," Martin explained. "The virtual assets were then extended to handle more complex response scenarios."
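
To convey what that first "always positive" virtual asset does, the sketch below implements the same behavior as a plain-Python HTTP stub: every incoming transaction receives a canned acceptance, with an obvious extension point for the more complex response scenarios mentioned above. Parasoft Virtualize provides this without code; the port number and message fields here are hypothetical.

    # Minimal sketch of a "virtual asset": a stub that acknowledges every transaction.
    # Hypothetical: the port and message fields are illustrative; Parasoft Virtualize
    # supplies this behavior (and conditional, scenario-based responses) without code.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class VirtualAssetHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            request = json.loads(self.rfile.read(length) or "{}")
            # Stage 1: respond positively to every generated transaction.
            reply = {"orderId": request.get("orderId"), "status": "ACCEPTED"}
            # Stage 2 (extension point): branch on request fields to simulate
            # rejections, partial fills, or delayed responses for complex scenarios.
            payload = json.dumps(reply).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        HTTPServer(("localhost", 9080), VirtualAssetHandler).serve_forever()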

Ignis also implemented automated tests and virtual assets to test outsourced components fully decoupled from the Ignis environment. They used this to establish a "quality gate" that had to be passed before progressing to the integration phase. Martin remarked, "This was quite useful, since their code quality was poor and repeated testing in our integrated environment would have impacted other deliverables."
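
Conceptually, the quality gate boils down to running an outsourced component's automated suite against the virtualized environment and blocking promotion to integration on any failure. The minimal sketch below assumes a hypothetical command-line test runner; in practice the suite was the Parasoft tests and virtual assets described above.

    # Minimal sketch of a pre-integration quality gate.
    # Hypothetical: "run-regression-suite" stands in for whatever runner reports
    # failures via a non-zero exit code (for example, a SOAtest CLI invocation).
    import subprocess
    import sys

    # Run the component's automated regression suite against the virtualized environment.
    result = subprocess.run(["run-regression-suite", "--env", "virtualized"])

    if result.returncode != 0:
        print("Quality gate FAILED: component may not proceed to the integration phase.")
        sys.exit(1)

    print("Quality gate passed: component may proceed to integration testing.")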

Leveraging Supero to Transform a Manual Testing Process into an Automated One

Since Ignis test resources were not experienced in test automation or service virtualization, they enlisted the help of an automation developer to build out their test requirements in the Parasoft ecosystem. Ignis engaged Supero Solutions to manage the implementation and ongoing test requirements, since Supero had extensive experience implementing and using Parasoft. Ignis has now replaced all the manual test resources in one location with Supero resources.

Supero's expertise has been critical for building automated tests within the scrum teams, which is a key factor in the success of the Ignis agile initiative. "Using Supero allows us to flex our resources to meet project requirements while still maintaining a consistent approach," Martin said.

As the implementation proceeded, the value of having a Parasoft expert lay the proper foundation became clear. From this starting point, any resource can now run test plans via Parasoft and enable virtual assets in the test environment with a minimal learning curve.

Results: A 20x Reduction in Testing Time

"With Parasoft's integrated functional test automation and service virtualization, we were able to reduce the execution and verification time for our transaction regression test plan from 10 days to a half day," shared Martin. This testing is not only automated, but also quite extensive. For example, to test the Ignis system's integration with one business partner's trading system, Ignis's fully automated regression testing now covers 300 test scenarios in a near UAT-level approach-with 12,600 validation checkpoints per test run.

"Previous automation implementations focused on automating testing at the UI level-with varying levels of success," Martin continued. "We determined that we really needed to generate transaction scenarios and traffic at the API level instead. With Parasoft, we can focus on the core test requirements and get more value from our investment in automation."

Beyond addressing the original challenges posed by the project, the solution has also enabled automated testing to occur all the way from the component/unit level to system integration. To achieve this impressive level of automation, testers fostered close relationships with the development team. Now, testers' role within the organization is elevated, and collaboration between development and testing has reached an all-time high.

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Technical Writer at Parasoft, authors technical articles, documentation, white papers, case studies, and other marketing communications—currently specializing in service virtualization, API testing, DevOps, and continuous testing. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
