A Pragmatic Approach to Enterprise Architecture

Examples from the Financial Services industry

Managing complexity is difficult in any growing business. As companies innovate, add new business lines, expand their global reach, cater to increased volume, or adopt new regulatory rules, processes proliferate and the discipline surrounding them goes out of the window. Moreover, the IT that supports these processes becomes more entangled as aging legacy applications jostle with new applications to support the needs of the business. Over time the technology that supports the business unravels: the environment suffers from instability and poor performance and becomes difficult to change and maintain. In short, business efficiency and effectiveness decline.

A sound Enterprise Architecture (EA) approach is required to ensure that both the business and technology are well aligned and will help restore order to this landscape. An Enterprise Architecture is a description of the goals of a company, how these goals are realized by business processes, and how these business processes can be better served through technology. EA is about finding opportunities to use technology to add business value.

The primary purpose of an EA is to inform, guide, and constrain the decisions for the enterprise, especially those related to IT investments. The true challenge of EA is to maintain the architecture as a primary authoritative resource for enterprise IT planning. This goal is not met via enforced policy, but by the value and utility of the information provided by the EA.

Why Have an EA Approach:

  • Provides a basic framework for major change initiatives
  • Divides and conquers technical and organizational complexities
  • Supports business and IT budget prioritization
  • Improves the ability to share and efficiently process information
  • Provides the ability to respond faster to changes in technology and business needs
  • Reduces costs due to economies of scale and resource sharing
  • Enhances productivity, flexibility and maintainability
  • Serves as a construction blueprint and ensures consistency across systems
  • Supports decision making

More specific benefits include:

  • Simplified application development
  • Quality
  • Integration
  • Extensibility
  • Location transparency
  • Horizontal scaling
  • Isolation
  • Portability
  • Reuse

An EA is a blueprint that is developed, implemented, maintained, and used to explain and guide how the elements of an organization's application landscape work together to efficiently accomplish the organization's mission. An EA addresses the following views:

  • Business activities and processes
  • Data sets and information flows
  • Applications and software
  • Technology

The Four Pillars of Enterprise Architecture:
1. Business Architecture
Business Architecture realizes the business strategy. It describes how we organize our business processing to meet the strategic needs of the business. It should allow us to maximize the flexibility of the business to respond to changing business environments, reduce the complexity of our environment by simplifying business processing and reduce the effort required for application implementation and maintenance. It provides a view of the business and describes where we can improve business functions.

Examples:

  • Create processing utilities for common functions within the back office area that support cross products such as confirmations, cash settlement, security settlement, and collateral management
  • Create a single common analytics library instead of individual analytics libraries for pricing products. The common library can serve all front office trading desks and different functions such as trading and risk
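
As a minimal sketch of the second example, a single common analytics library might expose one pricing interface that every desk and the risk function call; the class and method names below are illustrative, not taken from any particular library.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class Bond:
    """Minimal bond description shared by all consumers of the library."""
    face_value: float
    coupon_rate: float       # annual coupon, e.g. 0.05 for 5%
    years_to_maturity: int


class PricingModel(ABC):
    """Single interface every desk and risk system prices against."""

    @abstractmethod
    def present_value(self, bond: Bond, discount_rate: float) -> float:
        ...


class DiscountedCashFlowModel(PricingModel):
    """One shared implementation instead of per-desk copies."""

    def present_value(self, bond: Bond, discount_rate: float) -> float:
        coupon = bond.face_value * bond.coupon_rate
        pv = sum(coupon / (1 + discount_rate) ** t
                 for t in range(1, bond.years_to_maturity + 1))
        pv += bond.face_value / (1 + discount_rate) ** bond.years_to_maturity
        return pv


# Front office and risk both reuse the same library call.
model = DiscountedCashFlowModel()
print(model.present_value(Bond(100.0, 0.05, 10), discount_rate=0.04))
```

Because every consumer codes against the PricingModel interface, a model change is made once in the shared library rather than in each desk's private copy.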

2. Data Architecture
The data architecture describes the way data will be processed, stored, and used by the organization. It lays out the criteria for processing operations, including how data flows through the system end to end. It should increase the accuracy and timeliness of business data used by applications.

An example of the patterns used is Master Data Management (MDM). Its objective is to provide processes for collecting, aggregating, quality-assuring, persisting, and distributing data throughout an organization, ensuring consistency and control in the ongoing maintenance and use of this information. Moving to a single golden data source eliminates duplication and inefficiency, e.g., bond static data sourced once from multiple data vendors and published to all consuming systems (e-trading, trading, risk and P&L, and settlement).
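
A minimal sketch of the golden-source idea, assuming a simple vendor-precedence rule: vendor feeds are reconciled into one golden record and downstream systems subscribe to the master data service instead of to each vendor. The class names, fields, and precedence rule are illustrative assumptions, not a specific MDM product's API.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class BondStatic:
    isin: str
    issuer: str
    coupon: float
    source: str              # which vendor the golden value came from


class MasterDataService:
    """Collects vendor records, applies a precedence rule, publishes one golden copy."""

    def __init__(self, vendor_precedence: List[str]):
        self._precedence = vendor_precedence
        self._records: Dict[str, Dict[str, BondStatic]] = {}   # isin -> vendor -> record
        self._subscribers: List[Callable[[BondStatic], None]] = []

    def subscribe(self, callback: Callable[[BondStatic], None]) -> None:
        """e-trading, risk, settlement, etc. register here instead of at each vendor."""
        self._subscribers.append(callback)

    def ingest(self, vendor: str, record: BondStatic) -> None:
        self._records.setdefault(record.isin, {})[vendor] = record
        self._publish(record.isin)

    def _publish(self, isin: str) -> None:
        by_vendor = self._records[isin]
        # Golden record = highest-precedence vendor that has supplied data.
        for vendor in self._precedence:
            if vendor in by_vendor:
                golden = by_vendor[vendor]
                for notify in self._subscribers:
                    notify(golden)
                return


mds = MasterDataService(vendor_precedence=["vendor_a", "vendor_b"])
mds.subscribe(lambda rec: print("risk system received", rec))
mds.ingest("vendor_b", BondStatic("XS0000000001", "ACME", 0.05, "vendor_b"))
mds.ingest("vendor_a", BondStatic("XS0000000001", "ACME Corp", 0.05, "vendor_a"))
```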

Implement a firm-wide description of common data objects, e.g., FpML (Financial products Markup Language, an open XML standard for electronic dealing and processing of OTC derivatives). This reduces the risk of data being misunderstood and provides higher-quality data flowing through the organization, thereby increasing efficiency and effectiveness.
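
To illustrate what a firm-wide description of common data objects can look like in code, the sketch below parses a tiny, simplified FpML-style fragment into one shared trade object. The XML is a toy fragment written for this example, not a complete or schema-valid FpML document.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

# Toy, simplified FpML-style fragment for illustration only;
# real FpML documents are richer and namespace-qualified.
FRAGMENT = """
<trade>
  <swap>
    <notional currency="USD">10000000</notional>
    <fixedRate>0.0325</fixedRate>
    <maturityDate>2030-06-30</maturityDate>
  </swap>
</trade>
"""


@dataclass
class SwapTrade:
    """One shared representation used by every system that handles the trade."""
    notional: float
    currency: str
    fixed_rate: float
    maturity_date: str


def parse_swap(xml_text: str) -> SwapTrade:
    root = ET.fromstring(xml_text)
    notional = root.find("./swap/notional")
    return SwapTrade(
        notional=float(notional.text),
        currency=notional.attrib["currency"],
        fixed_rate=float(root.findtext("./swap/fixedRate")),
        maturity_date=root.findtext("./swap/maturityDate"),
    )


print(parse_swap(FRAGMENT))
```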

3. Applications Architecture
Applications architecture describes the structure and behavior of applications used in a business, focusing on how they interact with each other and with users. It's focused on the data consumed and produced by applications rather than their internal structure.

Examples:

  • Enforcing the use of a golden source of data, e.g., instrument static, counterparty static, market data, etc.
  • Standardizing on an application platform and interfacing approach
  • Implementing a standard application monitoring framework for all applications to report the business status
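
As a hedged sketch of the last bullet, a standard monitoring framework might in its simplest form be a small shared module that every application imports to report business status in one common shape; the BusinessStatus fields and MonitoringClient name are assumptions made for this illustration (a real framework would post to a central endpoint rather than print).

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class BusinessStatus:
    """The one status shape every application reports, regardless of technology."""
    application: str
    healthy: bool
    trades_processed: int
    last_heartbeat: float


class MonitoringClient:
    """Shared client each application uses instead of inventing its own reporting."""

    def __init__(self, application: str, emit=print):
        self._application = application
        self._emit = emit           # in production this might post to a central endpoint

    def report(self, healthy: bool, trades_processed: int) -> None:
        status = BusinessStatus(
            application=self._application,
            healthy=healthy,
            trades_processed=trades_processed,
            last_heartbeat=time.time(),
        )
        self._emit(json.dumps(asdict(status)))


# A settlement system and an e-trading system report through the same framework.
MonitoringClient("settlement").report(healthy=True, trades_processed=1842)
MonitoringClient("e-trading").report(healthy=False, trades_processed=97)
```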

4. Technical Architecture
This describes the common technology components that will be used to build our applications. This includes standards for vendor packages, third-party products and application components, e.g., servers, networks, desktops, middleware, security, storage, and virtualization. This will describe the current and target state.

Examples:

  • SSO should be the standard mechanism for user authentication for all enterprise applications (see the sketch after this list)
  • Implement a server virtualization strategy to help reduce costs and increase flexibility
  • All critical applications should have a recovery time of less than an hour
  • Have a technology menu of strategic products that development teams can use for projects
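
A minimal sketch of the SSO principle from the first bullet above, assuming a token-based check: the token store, the validate_sso_token helper, and the requires_sso decorator are hypothetical names invented for this illustration, not a specific SSO product's API.

```python
from functools import wraps
from typing import Optional

# Hypothetical token store standing in for a real SSO provider (e.g. SAML/OIDC).
VALID_TOKENS = {"token-abc": "alice", "token-def": "bob"}


def validate_sso_token(token: str) -> Optional[str]:
    """Return the user id for a valid token, or None."""
    return VALID_TOKENS.get(token)


def requires_sso(handler):
    """Decorator every enterprise application applies instead of rolling its own login."""

    @wraps(handler)
    def wrapper(token: str, *args, **kwargs):
        user = validate_sso_token(token)
        if user is None:
            raise PermissionError("SSO authentication failed")
        return handler(user, *args, **kwargs)

    return wrapper


@requires_sso
def view_positions(user: str) -> str:
    return f"positions for {user}"


print(view_positions("token-abc"))   # authenticated via the shared SSO mechanism
```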

Building an EA for Your Organization
It is important to understand the business strategy of the organization, because this drives everything. This vision and strategy determine where the organization's IT environment and capabilities should be in the next three years.

The next step in this process is to characterize the current status and snapshot the existing IT capabilities. The word "characterize" is used because it isn't usually necessary to identify and analyze everything IT or information related in the organization. You just need enough data to understand the basic situation you are in and the problems that exist, and to develop an idea of where you want to go. You need to understand where the inefficiencies and duplications exist. The question is whether IT is being used in the most effective way to accomplish the organization's program goals.

What Work Is Performed?
You must have a clear understanding of what work the organization performs and where it is performed (anywhere from one location to multiple locations throughout the world).

What Information Is Needed and by Whom?
You need to understand the basic flow of information, not just within your organization but also to and from your organization, and what the information consists of and how that information is organized.

What Applications Are Used to Process that Information?
What software is used to process, analyze, etc., the needed information? What types of data structures and protocols are used?

What Technology Is Used to Perform the Work?
What IT hardware infrastructure is currently used?

Having formed an understanding of where you currently stand, you now need to try to figure out where you need to be in the future. There are two main drivers for this:

  1. Business drivers tell you that you need to do business differently. Customers may be demanding better or different services.
  2. Technology drivers tell you that technology is providing you with options for doing things better.

The target architecture is the heart of the process. The four components (business architecture, data architecture [e.g., data sets and information flows], application architecture and technical architecture) of the EA need to be modeled separately. Security considerations should be addressed throughout. The process consists of defining each set of architectural components and its key attributes. The result is an organized set of definitions and models to reflect the different views of the architecture. Again, the relative complexity of the situation will determine how detailed and extensive this effort and documentation needs to be. The four components are then synthesized into a comprehensive target architecture.

Due to the rapid pace of technology advancement, the goal should be to produce an "evolvable" architecture that can accommodate change easily. Some rules that help: keep things modular and loosely coupled, define clear boundaries between systems and components, divide reusable logic into services, use industry-standard interfaces, use open-systems standards, and use common mechanisms wherever possible. Planning for loosely coupled, modular systems with clear boundaries allows you to change portions of the IT architecture without having to revise other aspects of the architecture, and also helps you see how changes in one part of the architecture may affect other elements.
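
To make the loose-coupling rule concrete, here is a minimal sketch: consumers depend on a well-defined boundary (a SettlementGateway protocol) rather than on a concrete system, so the implementation behind the boundary can be replaced without touching the consumer. All names are illustrative assumptions, not part of any real settlement platform.

```python
from typing import Protocol


class SettlementGateway(Protocol):
    """Well-defined boundary: consumers depend on this, not on any concrete system."""

    def settle(self, trade_id: str) -> bool: ...


class LegacySettlementSystem:
    def settle(self, trade_id: str) -> bool:
        print(f"settling {trade_id} on the legacy platform")
        return True


class NewSettlementService:
    def settle(self, trade_id: str) -> bool:
        print(f"settling {trade_id} on the new service")
        return True


def confirm_and_settle(gateway: SettlementGateway, trade_id: str) -> None:
    """Consumer code is unchanged when the implementation behind the boundary is swapped."""
    if gateway.settle(trade_id):
        print(f"trade {trade_id} confirmed")


confirm_and_settle(LegacySettlementSystem(), "T-1001")
confirm_and_settle(NewSettlementService(), "T-1001")   # same consumer, new implementation
```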

At this point, you are in a position to determine the gaps between your current and target architectures. What are the differences between your baseline and the architecture you want to achieve?

Architecture Governance
For architecture to succeed within an organization, it is essential to have the support and commitment of senior management. This major initiative needs sponsorship by the CIO, and senior management need to be supportive and fully involved in ensuring it is a success. The governance process needs to ensure that:

  • People planning and developing IT systems do so in a way consistent with the target architecture
  • Procurements are consistent with the target architecture
  • Exceptions or changes to the Enterprise Architecture are identified where a specific system or procurement needs them
  • The implementation of the architecture migration plan, and the benefits and flaws of the Enterprise Architecture, are tracked
  • The EA is kept up to date, reflecting changes in the business, new technology, etc.

There needs to be integration with the program planning and the budget processes.

Technology is changing rapidly and business needs and processes change over time. Therefore, the target architecture, whether fully implemented or not, that addresses how IT and information will serve business needs must be periodically reviewed and updated to reflect these changes.

It is important that EA is not used as a mechanism that slows down delivery unnecessarily; it needs to add value to the business by producing superior solutions, not unnecessary bureaucracy. A pragmatic approach to architecture is needed, one that balances the need for agility and innovation with the efficiency and effectiveness that EA brings to technology solutions.

Architecture Principles
Architecture principles define the fundamental assumptions and rules of conduct for the IT organization to create and maintain IT capability. They provide a compass to guide the organization to its target architecture. A well-defined architecture principle consists of a name, definition, motivation, and implications. Table 1 shows an example: the Reuse, Buy, Build principle.

Table 1: Reuse, Buy, Build Principle

Definition (What):

  • We prefer to reuse existing assets over buying, and buying over building

Motivation (Why):

  • We are not a software company
  • Our company has many IT assets that are underleveraged because we have previously favored building rather than reusing or buying
  • We have many redundant applications, and reducing this through reuse will reduce maintenance costs and improve system stability

Implications (How):

  • Architecture Governance will ensure projects adhere to this principle
  • Our company will develop an understanding of the functional and technical assets available for reuse, and keep it up to date
  • We will strive for fewer and deeper software vendor relationships and will need to influence their roadmaps to mesh with our needs
  • To add a new tool to our portfolio, we will also plan and fund replacement of the installed base of the former tool

Architecture principles become a core shared assumption for all initiatives in the enterprise. This radically simplifies decision making and ensures that all projects align with, and are moving toward, the target state.
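
As a small illustration of the name, definition, motivation, and implications structure, the sketch below captures the Reuse, Buy, Build principle from Table 1 as a plain record; the ArchitecturePrinciple class and its fields are illustrative, not part of any standard.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ArchitecturePrinciple:
    """Name, definition, motivation, and implications, as described in the text."""
    name: str
    definition: str                                      # what
    motivation: List[str] = field(default_factory=list)  # why
    implications: List[str] = field(default_factory=list)  # how


reuse_buy_build = ArchitecturePrinciple(
    name="Reuse, Buy, Build",
    definition="We prefer to reuse existing assets over buying, and buying over building",
    motivation=["We are not a software company",
                "Existing IT assets are underleveraged"],
    implications=["Architecture governance checks every project against this principle",
                  "Maintain an up-to-date catalogue of reusable functional and technical assets"],
)
print(reuse_buy_build.name, "-", reuse_buy_build.definition)
```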

Other enterprise architecture principles an organization might consider include:

  • Don't Automate Bad Business Complexity Principle
  • Avoid Package Customization Principle
  • Prefer Service Orientation over Application Orientation Principle
  • Don't Over / Under Engineer Principle

Examples of banking-specific architecture principles:

  • Only a master source of data can create business events
  • All processing should be STP (straight-through processing), with manual intervention only for exceptions
  • Combine multiple analytics libraries into a single common library (whether this applies depends on trading desk size and product complexity)

Conclusion
EA is important; without it, organizations will be unable to deliver technology in an efficient and effective manner. If project teams work any way they want to, and use any technology they want to, chaos will result. Functionality and information will be duplicated, and reuse will occur sporadically, if at all. There will be conflicts between systems that cause each other to fail. Individual projects may be deemed successful, but as a portfolio there may be serious challenges. Systems don't exist in a vacuum; they co-exist with several, and sometimes hundreds of, other systems. For example, a Bloomberg interface to store bond static data and prices built by the rates front office IT group may be viewed as a success in isolation, but such functionality is required by many systems within the organization, e.g., e-trading, pricing analytics, risk, and settlement systems, as well as other front office trading applications such as credit derivatives and repo. If each area builds such functionality, costs skyrocket (e.g., multiple Bloomberg licenses, duplication of interfaces, data, and hardware), and complexity and operational risk within the organization increase. EA plays a fundamental role in preventing such scenarios.

More Stories By Sanjeev Khurana

Sanjeev Khurana is Head of Development at a large European Investment Bank and has over 20 years of IT experience. He has also been a part-time lecturer at universities such as Brunel, Middlesex, and Greenwich, teaching software engineering to undergraduates and postgraduates.
