Agile SOA Across the Lifecycle - Part Five: IT and SOA Governance

This is the fifth in a six-part series of posts on the Agile SOA lifecycle

This is the fifth in a six-part series of posts on the Agile SOA lifecycle. Here we will look at IT and SOA Governance. With the introduction of agile, spiral, and scrum development methodologies, the traditional waterfall approach of testing a near-finished application at the end of many development cycles won't be agile at all, because the elements of the application are constantly changing. Traditional models of IT governance will not work either. Complicating testing further, the service-oriented architecture (SOA) design pattern is used to make IT more responsive to changes requested by the business.

New process tooling has been introduced specifically to assist in cataloging service assets and organizing the policies that govern a SOA. This set of tooling revolves around governance platforms like HP Systinet/S2, Software AG CentraSite, SOA Software, TIBCO ActiveMatrix, and Oracle Fusion. These SOA Governance tools manage the collection and cataloging of metadata about services, organize the interdependencies the services have with each other, and document the SOA policies that define how combined services should meet business requirements. Within the governance toolset, SOA registry/repositories provide a new platform for LISA to automate testing and validation.
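
To make the cataloging idea concrete, here is a rough, vendor-neutral sketch of the kind of information a governance registry entry might capture for one service. The field names and values are illustrative assumptions, not an actual Systinet or CentraSite schema.

    # Illustrative only: field names and values are assumptions, not a vendor schema.
    service_entry = {
        "name": "NewEmployeeProvisioning",
        "wsdl": "http://hr.example.internal/NewEmployeeService?wsdl",
        "depends_on": ["HRSystem.EmployeeLookup", "Directory.AccountCreation"],
        "policies": {
            "structural": "WS-I compliant, doc-literal binding, company serviceId tag present",
            "behavioral": "returns e-mail, domain login and default password for a valid employee",
            "security": "requests must be signed and encrypted; signature validated against the master certificate provider",
            "performance": {"max_response_seconds": 2.0, "min_tps": 50},
        },
        "tests": ["new_employee_policy_test.py"],  # executable proof that the policies hold
    }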

The traditional system requirement and functional testing will still occur, but in a SOA there are more opportunities to automate the validation and enforcement of policies. The current thinking around SOA Governance is largely siloed and specialized around design-time WSDL validation, runtime performance, and security policy enforcement. As SOA Governance matures to support robust and widely diverse SOA initiatives, this thinking must expand dramatically. One of the greatest values a SOA Governance platform must provide is continuous proof that policies written in human or business terms are in fact implemented in the system. With our LISA solution, validation takes the form of positive and negative tests of these policies, executed continuously at change time and run time, because services and their underlying technologies and data are constantly changing and evolving. The LISA test result becomes a demonstrable artifact for exposing and enforcing SOA policy within the changing workflow of the SOA Governance framework.

Example Scenario of Validating SOA Policies

Let's look at a sample deployment. A series of web service definitions, as WSDL files, is loaded into the UDDI registry. The relationships of the services to each other, and the documentation of how they can be tied together to create a business process, are recorded. For a new employee introduction, there may be services from the human resources package to identify the employee's location and managers, services that provision IT resources such as a computer and e-mail, and services from an external partner for insurance and benefits enrollment. These services would be documented via their WSDLs and stored in the UDDI registry. An HR-to-payroll business process will then be defined using these assets.

This all seems simple as long as each participant in the business process provides a well-documented, functioning, reliable, and secure service. The policies around the structure, behavior, and performance of these individual services are documented in the governance tool. For example, the business analyst will write policies in a tool like CentraSite ActiveSOA describing the behavior of the services: the IT service will accept a new employee's name and ID number and return an e-mail address, domain login, and default password; the security service will accept the manager and role and provision the appropriate access control to systems for the new employee. The security manager will specify that the services must only run if encrypted and signed data is sent to them, and that the signature must be validated against the company's master certificate provider to prove that the request comes only from authorized HR personnel. Lastly, the IT operations team will provide policies around how many transactions per second (TPS) the service should handle and the maximum response time the service may take, so the component will be a good participant in a decomposed transaction.

That's far too many steps to carry out if validation is a strictly manual process. The execution of the tests needs to live within the SOA Governance process.
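
LISA builds and runs these policy tests in its own tooling; as a language-neutral illustration, here is a rough sketch in Python of what the positive and negative policy tests for the IT provisioning service above might check. The endpoint, operation name, response fields, and the two-second response-time threshold are hypothetical, and the SOAP plumbing is delegated to the zeep library for brevity.

    # Illustrative sketch only: endpoint, operation and fields are hypothetical.
    import time
    from zeep import Client
    from zeep.exceptions import Fault

    WSDL = "http://hr.example.internal/NewEmployeeService?wsdl"  # assumed endpoint

    def test_behavioral_and_performance_policy():
        client = Client(WSDL)
        started = time.perf_counter()
        resp = client.service.ProvisionEmployee(name="Jane Doe", employeeId="E12345")
        elapsed = time.perf_counter() - started
        # Behavioral policy: a valid new employee gets an e-mail address,
        # a domain login and a default password.
        assert resp.emailAddress
        assert resp.domainLogin
        assert resp.defaultPassword
        # Performance policy from IT operations: stay under the agreed response time.
        assert elapsed < 2.0

    def test_negative_policy():
        # Negative test: an unknown employee must be rejected, not silently provisioned.
        client = Client(WSDL)
        try:
            client.service.ProvisionEmployee(name="Nobody", employeeId="UNKNOWN")
            assert False, "expected a SOAP fault for an unknown employee"
        except Fault:
            pass

The security policy (signed, encrypted requests validated against the master certificate provider) would get the same treatment: a positive test with a valid signature and a negative test proving an unsigned request is refused.
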
Testing, Validation and Policy Enforcement Solutions

Given all of the processes enterprises must employ to manage the design, construction, integration, and management of the IT environment, it makes sense to have a common, reusable way to apply validation and enforcement across those processes. SOA validation becomes "the long arm of the law," with reusable, rich testing that merges into the workflow of each of these processes and the tools that support them. The validation arm (with our LISA as the "long arm") provides the capability to run test cases that verify the rules and policies defined in leading IT process tools. The policing action is accomplished by tying the automatic running of tests to the expected behaviors and policies in the workflow of the process tool. Invoking tests from a governance platform to ensure SOA policy is similar to the policing action done for test management solutions; the biggest difference is the context and the stage in the lifecycle of the component being verified. In a traditional waterfall development there is a specific point in time at which the system is tested and deemed ready for production. In today's services-based applications, there is no longer a single point in time at which one test run can certify the quality of the application.

In a SOA lifecycle, we think of design time, run time, and change time as the three stages of a service or business process constructed within these loosely coupled systems. In the example above, some of the services are interactions with packaged applications (the HR system), some come from home-grown systems (IT user and security provisioning), and others come from external partners (benefits). The development and release cycles of these three areas will differ and will not be coordinated into a single big-bang release. One advantage of SOA is the ability to decouple systems and use services created by different parties instead of building everything yourself.

At design time, a test is run to make sure that WSDLs are WS-I compliant, that they follow an RPC or doc-literal call structure, and that a customer-specific XML tag is used for service identification (a sketch of such a check appears below). Based on the passing of this compliance test, the WSDL is then made available in the repository for developers to code the functionality behind it. Once the developers have created the necessary code to implement the service and its underlying technology components, the behavior and security of the service need to be validated. Since there are no hard-coded responses in the real world, the only way to verify that the code behind a service is working properly, and that security is respected, is to invoke the service and then verify that the underlying systems of record are updated correctly. Again, a test case is used to invoke the service and verify that the business requirements and the behavioral and security policies have been implemented. If the candidate service passes these policy tests, it can be published in the registry for consumers to use. If these tests fail, the service should not be promoted for availability to consumers in the registry.

As a best practice, the individual services should also undergo load testing as soon as they are available. Far less risk is introduced if we test for load while the components are being built, when something can still be done about performance issues, rather than waiting until the system is deployed, when issues are much more costly to repair. Progressive development shops start their load testing once the service functionality has been verified, so that developers can make code changes and optimize logic and data access before consumers start using the service.
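
As a minimal sketch of the design-time compliance check described above, the fragment below parses a WSDL and spot-checks the binding style, literal encoding, and the presence of a company-specific identification tag. The tag name and its namespace are assumptions, and a full WS-I Basic Profile check would use a dedicated conformance tool rather than these few assertions.

    # Illustrative design-time check; tag name, namespace and policy details are assumed.
    import xml.etree.ElementTree as ET

    SOAP_NS = "http://schemas.xmlsoap.org/wsdl/soap/"
    COMPANY_TAG = "{http://example.internal/governance}serviceId"  # assumed policy tag

    def check_wsdl(path):
        root = ET.parse(path).getroot()

        # Structural policy: bindings must declare an rpc or document style,
        # and soap:body must use literal encoding.
        for binding in root.iter("{%s}binding" % SOAP_NS):
            assert binding.get("style", "document") in ("document", "rpc")
        for body in root.iter("{%s}body" % SOAP_NS):
            assert body.get("use") == "literal", "soap:body must be literal, not encoded"

        # Identification policy: the customer-specific tag must appear in the WSDL
        # (for example inside wsdl:documentation) before the service is published.
        assert any(True for _ in root.iter(COMPANY_TAG)), "missing service identification tag"
        return True

Only when such a check passes would the WSDL be promoted into the repository for developers to implement.
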
At run time, the service is promoted to the registry, along with a test that demonstrates its functionality, and made available for consumers to leverage as part of their own application workflows. These consumers also bear responsibility for validating that their intended use of the service is actually supportable. The best way to do this is to create an executable test asset that validates that the structural, behavioral, and performance aspects of the overall consumer's workflow are supported when the service is leveraged. This pays big dividends when the test is run continuously as part of the governance process.

The final stage is change time. This is when the true advantages of SOA and agile development come into play. If a new service needs to be added, or an existing service needs to be modified to respond to changing business requirements, the policies for the service must be re-verified. Not only do we want to update any behavioral policy with the new expected behavior, but regression of the existing policies must also occur. Put yourself in the place of the consumer of a service: how can you trust that changes will not create unintended consequences (e.g., failures) in a service you depend upon and break your existing business processes? Each consumer needs to place its expected behavior as a policy in the registry, with an accompanying test, so that it can be automatically validated before the change is made. The automated structural and behavioral policies in the test represent the consumer's rights, and their responsibility for using the services as defined. When these tests run automatically as part of SOA Governance, trust between service consumers and producers is achieved.
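
To make the consumer's side concrete, here is a hedged sketch of what one consumer's contract test against the security provisioning service might assert; the endpoint, operation, and response fields are invented for illustration. Registered in the registry alongside the service, a test like this runs automatically whenever the provider proposes a change.

    # Illustrative consumer contract test; endpoint, operation and fields are assumptions.
    from zeep import Client

    def test_consumer_contract_for_security_service():
        client = Client("http://security.example.internal/ProvisioningService?wsdl")
        resp = client.service.GrantAccess(manager="M-100", role="payroll-clerk")
        # These assertions are the consumer's "rights": if a provider change breaks
        # them, the change should not be promoted past the registry.
        assert resp.status == "GRANTED"
        assert "payroll" in resp.systems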

For more on SOA governance, see Joe McKendrick's posting of the full transcript of a recent ebizQ SOA governance panel that John Michelsen participated in. In our next post, we will provide our conclusions. You can download the complete series, "Agile SOA Across the Lifecycle," at our ITKO LISA resources page.

Read the original blog entry...
