
A Pragmatic Approach to Enterprise Architecture

Examples from the Financial Services industry

Managing complexity is difficult in any growing business. As companies innovate, add new business lines, expand their global reach, cater to increased volume, or adopt new regulatory rules, processes proliferate and the discipline surrounding them goes out of the window. Moreover, the IT that supports these processes becomes more entangled as aging legacy applications jostle with new applications built to meet the needs of the business. Over time the technology that supports the business unravels: the environment suffers from instability and poor performance and becomes difficult to change and maintain. In short, business efficiency and effectiveness decline.

A sound Enterprise Architecture (EA) approach is required to ensure that both the business and technology are well aligned and will help restore order to this landscape. An Enterprise Architecture is a description of the goals of a company, how these goals are realized by business processes, and how these business processes can be better served through technology. EA is about finding opportunities to use technology to add business value.

The primary purpose of an EA is to inform, guide, and constrain the decisions for the enterprise, especially those related to IT investments. The true challenge of EA is to maintain the architecture as a primary authoritative resource for enterprise IT planning. This goal is not met via enforced policy, but by the value and utility of the information provided by the EA.

Why Have an EA Approach:

  • Provides a basic framework for major change initiatives
  • Divides and conquers technical and organizational complexities
  • Supports business and IT budget prioritization
  • Improves the ability to share and efficiently process information
  • Provides the ability to respond faster to changes in technology and business needs
  • Reduces costs due to economies of scale and resource sharing
  • Enhances productivity, flexibility and maintainability
  • Serves as a construction blueprint and ensures consistency across systems
  • Supports decision making

More specific benefits include:

  • Simplified application development
  • Quality
  • Integration
  • Extensibility
  • Location transparency
  • Horizontal scaling
  • Isolation
  • Portability
  • Reuse

An EA is a blueprint that is developed, implemented, maintained, and used to explain and guide how an organization's applications landscape works together to efficiently accomplish the mission of the organization. An EA addresses the following views:

  • Business activities and processes
  • Data sets and information flows
  • Applications and software
  • Technology

The Four Pillars of Enterprise Architecture:
1. Business Architecture
Business Architecture realizes the business strategy. It describes how we organize our business processing to meet the strategic needs of the business. It should allow us to maximize the flexibility of the business to respond to changing business environments, reduce the complexity of our environment by simplifying business processing, and reduce the effort required for application implementation and maintenance. It provides a view of the business and describes where we can improve business functions.

Examples from financial services include:

  • Create processing utilities for common back office functions that cut across products, such as confirmations, cash settlement, security settlement, and collateral management
  • Create a single common analytics library, rather than an analytics library per desk, for pricing products. Common analytics can then be used by all front office trading desks and by different functions such as front office pricing and risk (a minimal sketch follows this list)
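
To make the common analytics idea concrete, here is a minimal sketch, assuming hypothetical names (PricingAnalytics, BondAnalytics): a single firm-wide pricing interface with one centrally owned implementation, called by both the front office and risk so every function prices off the same logic. It illustrates the principle, not any bank's actual library.

```java
// Hypothetical sketch: one shared analytics library behind a single interface,
// so every desk and the risk function price products through the same code.
interface PricingAnalytics {
    double presentValue(double faceValue, double annualCouponRate,
                        double yield, int yearsToMaturity);
}

// One firm-wide implementation, owned and versioned centrally.
class BondAnalytics implements PricingAnalytics {
    @Override
    public double presentValue(double faceValue, double annualCouponRate,
                               double yield, int yearsToMaturity) {
        double coupon = faceValue * annualCouponRate;
        double pv = 0.0;
        for (int t = 1; t <= yearsToMaturity; t++) {
            pv += coupon / Math.pow(1.0 + yield, t);               // discounted coupons
        }
        pv += faceValue / Math.pow(1.0 + yield, yearsToMaturity);  // discounted principal
        return pv;
    }
}

public class SharedAnalyticsExample {
    public static void main(String[] args) {
        PricingAnalytics analytics = new BondAnalytics();
        // The rates desk and the risk function call the same library,
        // so front office and risk prices are consistent by construction.
        double frontOfficePrice = analytics.presentValue(100.0, 0.05, 0.04, 10);
        double riskPrice        = analytics.presentValue(100.0, 0.05, 0.04, 10);
        System.out.printf("Front office PV: %.4f, Risk PV: %.4f%n",
                frontOfficePrice, riskPrice);
    }
}
```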

2. Data Architecture
The data architecture describes how data will be processed, stored, and used by the organization. It lays out the criteria for processing operations, including the overall flow of data through the system. It should increase the accuracy and timeliness of business data used by applications.

An example pattern is Master Data Management (MDM); its objective is to provide processes for collecting, aggregating, quality-assuring, persisting, and distributing such data throughout an organization, ensuring consistency and control in the ongoing maintenance and use of this information. Moving to a model of a single golden data source eliminates duplication and inefficiency, e.g., consolidating bond static data from multiple data vendors into one master and publishing it to the many systems that need it (e-trading, trading, risk and P&L, and settlement systems).
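
As an illustration of the golden-source pattern, the following sketch (with hypothetical names such as BondStaticMaster and BondStaticConsumer; no specific MDM product is implied) keeps one master copy of bond static data and fans out every update to the downstream systems that subscribe to it:

```java
// Hypothetical sketch of a single golden source for bond static data:
// downstream systems never load their own vendor feeds; they subscribe
// to updates published by the master store.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

record BondStatic(String isin, String issuer, double couponRate, String maturity) {}

interface BondStaticConsumer {
    void onUpdate(BondStatic bond);   // downstream systems react to golden-source updates
}

class BondStaticMaster {
    private final Map<String, BondStatic> goldenCopy = new HashMap<>();
    private final List<BondStaticConsumer> consumers = new ArrayList<>();

    void register(BondStaticConsumer consumer) { consumers.add(consumer); }

    // The only entry point allowed to change bond static data.
    void upsert(BondStatic bond) {
        goldenCopy.put(bond.isin(), bond);
        consumers.forEach(c -> c.onUpdate(bond));   // publish to all subscribers
    }
}

public class GoldenSourceExample {
    public static void main(String[] args) {
        BondStaticMaster master = new BondStaticMaster();
        master.register(b -> System.out.println("e-trading received " + b.isin()));
        master.register(b -> System.out.println("risk received " + b.isin()));
        master.register(b -> System.out.println("settlement received " + b.isin()));

        // One validated update fans out to every system; no duplicated vendor feeds.
        master.upsert(new BondStatic("XS1234567890", "ACME Corp", 0.05, "2030-06-15"));
    }
}
```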

Implement a firm-wide description of common data objects, e.g., FpML (Financial products Markup Language, an open XML standard for electronic dealing and processing of OTC derivatives). This reduces the risk of data being misunderstood and raises the quality of data flowing through the organization, thereby increasing efficiency and effectiveness.

3. Applications Architecture
Applications architecture describes the structure and behavior of the applications used in a business, focusing on how they interact with each other and with users. It is concerned with the data consumed and produced by applications rather than with their internal structure.

Examples include:

  • Enforcing the use of a golden source of data, e.g., instrument static, counterparty static, market data, etc.
  • Standardizing on an application platform and interfacing approach
  • Implementing a standard application monitoring framework so that all applications report their business status consistently (see the sketch after this list)
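
The standard monitoring framework can be illustrated with a small sketch. Assuming hypothetical names (MonitoredApplication, AppStatus, MonitoringDashboard), every application implements one reporting interface and a single dashboard polls them all, giving a consistent view of business status across the portfolio:

```java
// Hypothetical sketch of a standard monitoring contract shared by every
// application in the estate; names are illustrative, not a real framework.
import java.time.Instant;
import java.util.List;

enum Health { UP, DEGRADED, DOWN }

record AppStatus(String applicationName, Health health, String detail, Instant asOf) {}

interface MonitoredApplication {
    AppStatus reportStatus();   // the one method every application must expose
}

class MonitoringDashboard {
    void poll(List<MonitoredApplication> applications) {
        for (MonitoredApplication app : applications) {
            AppStatus status = app.reportStatus();
            System.out.printf("%-20s %-8s %s%n",
                    status.applicationName(), status.health(), status.detail());
        }
    }
}

public class MonitoringExample {
    public static void main(String[] args) {
        MonitoredApplication settlement = () ->
                new AppStatus("settlement", Health.UP, "all batches complete", Instant.now());
        MonitoredApplication eTrading = () ->
                new AppStatus("e-trading", Health.DEGRADED, "vendor feed latency high", Instant.now());

        new MonitoringDashboard().poll(List.of(settlement, eTrading));
    }
}
```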

4. Technical Architecture
This describes the common technology components that will be used to build our applications. It includes standards for vendor packages, third-party products, and application components, e.g., servers, networks, desktops, middleware, security, storage, and virtualization, and it describes both the current and the target state.

Examples include:

  • Single sign-on (SSO) should be the standard mechanism for user authentication across all enterprise applications
  • Implement a server virtualization strategy to help reduce costs and increase flexibility
  • All critical applications should have a recovery time of less than an hour
  • Maintain a technology menu of strategic products that development teams can use for projects (a minimal sketch of such a check follows this list)
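
The technology menu item can be made concrete with a small, hypothetical sketch: the approved strategic products live in one list, and a proposed project stack is checked against it, with anything outside the menu needing an explicit architecture exception. Product names here are examples only, not any firm's actual menu.

```java
// Hypothetical sketch of a "technology menu" check used by architecture governance.
import java.util.List;
import java.util.Set;

public class TechnologyMenuExample {
    // Strategic products approved for new development (illustrative list).
    private static final Set<String> STRATEGIC_PRODUCTS =
            Set.of("PostgreSQL", "Kafka", "Kubernetes", "OpenShift");

    static List<String> nonStrategicChoices(List<String> proposedStack) {
        return proposedStack.stream()
                .filter(product -> !STRATEGIC_PRODUCTS.contains(product))
                .toList();
    }

    public static void main(String[] args) {
        List<String> proposed = List.of("PostgreSQL", "Kafka", "HomeGrownQueue");
        // Anything outside the menu needs an explicit architecture exception.
        System.out.println("Requires exception: " + nonStrategicChoices(proposed));
    }
}
```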

Building an EA for Your Organization
It is important to understand the business strategy of the organization, as this drives everything. This vision and strategy will determine where the organization's IT environment and capabilities should be in the next three years.

The next step in this process is to characterize the current status and snapshot the existing IT capabilities. The word "characterize" is used because it isn't usually necessary to identify and analyze everything IT- or information-related in the organization. You just need enough data to understand the basic situation you are in and the problems that exist, and to develop an idea of where you want to go. You need to understand where the inefficiencies and duplications exist. The question is whether IT is being used in the most effective way to accomplish the organization's program goals.

What Work Is Performed?
You must have a clear understanding of what work the organization performs and where it is performed (anywhere from one location to multiple locations throughout the world).

What Information Is Needed and by Whom?
You need to understand the basic flow of information, not just within your organization but also to and from your organization, and what the information consists of and how that information is organized.

What Applications Are Used to Process that Information?
What software is used to process, analyze, etc., the needed information? What types of data structures and protocols are used?

What Technology Is Used to Perform the Work?
What IT hardware infrastructure is currently used?

Having formed an understanding of where you currently stand, you now need to try to figure out where you need to be in the future. There are two main drivers for this:

  1. Business drivers tell you that you need to do business differently. Customers may be demanding better or different services.
  2. Technology drivers tell you that technology is providing you with options for doing things better.

The target architecture is the heart of the process. The four components (business architecture, data architecture [e.g., data sets and information flows], application architecture and technical architecture) of the EA need to be modeled separately. Security considerations should be addressed throughout. The process consists of defining each set of architectural components and its key attributes. The result is an organized set of definitions and models to reflect the different views of the architecture. Again, the relative complexity of the situation will determine how detailed and extensive this effort and documentation needs to be. The four components are then synthesized into a comprehensive target architecture.

Due to the rapid pace of technology advancement, the goal should be to produce an "evolvable" architecture that can accommodate change easily. Some rules that help produce this are: keep things modular and loosely coupled; maintain well-defined boundaries between systems and components; divide reusable logic into services; use industry-standard interfaces; use open-systems standards; and use common mechanisms wherever possible. Planning for loosely coupled, modular systems with clear boundaries allows you to change portions of the IT architecture without having to revise other aspects of the architecture, and also helps you see how changes in one part of the architecture may affect other elements.
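
A minimal sketch of what "modular and loosely coupled with well-defined boundaries" looks like in code, using hypothetical names (MarketDataService, SettlementEngine): the consumer depends only on a narrow interface, so the provider behind it can be replaced without revising the consumer, which is the property an evolvable architecture relies on.

```java
// Hypothetical sketch of loose coupling across a well-defined boundary.
interface MarketDataService {
    double priceFor(String isin);   // the boundary between the two components
}

// One implementation today; it can be replaced without changing any consumer.
class VendorFeedMarketData implements MarketDataService {
    @Override
    public double priceFor(String isin) {
        return 101.25;   // stubbed quote for the sketch
    }
}

class SettlementEngine {
    private final MarketDataService marketData;

    // The dependency is injected, not constructed internally, so the consumer
    // never knows which concrete provider it is talking to.
    SettlementEngine(MarketDataService marketData) {
        this.marketData = marketData;
    }

    double settlementAmount(String isin, int quantity) {
        return marketData.priceFor(isin) * quantity;
    }
}

public class LooseCouplingExample {
    public static void main(String[] args) {
        SettlementEngine engine = new SettlementEngine(new VendorFeedMarketData());
        System.out.println("Settlement amount: " + engine.settlementAmount("XS1234567890", 1000));
    }
}
```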

At this point, you are in a position to determine the gaps between your current and target architectures. What are the differences between your baseline and the architecture you want to achieve?

Architecture Governance
For architecture to succeed within an organization, it is essential to have the support and commitment of senior management. This major initiative needs sponsorship by the CIO, and senior management need to be supportive and fully involved in ensuring it is a success. The governance process needs to ensure that:

  • People planning and developing IT systems do so in a way consistent with the target architecture
  • Procurements are consistent with the target architecture
  • Exceptions or changes to the Enterprise Architecture are assessed and approved where a specific system or procurement needs them
  • The implementation of the architecture migration plan, and the benefits and flaws of the Enterprise Architecture, are tracked
  • The EA is kept up to date, reflecting changes in the business, new technology, etc.

There also needs to be integration with the program planning and budgeting processes.

Technology is changing rapidly and business needs and processes change over time. Therefore, the target architecture, whether fully implemented or not, that addresses how IT and information will serve business needs must be periodically reviewed and updated to reflect these changes.

It is important that EA is not used as a mechanism that attempts to slow down delivery unnecessarily; it needs to add value to the business by producing superior solutions and not add unnecessary bureaucracy. A pragmatic approach to architecture is needed that balances the needs for agility and innovation yet delivers the efficiency and effectiveness in the technology solution provided by EA.

Architecture Principles
Architecture principles define the fundamental assumptions and rules of conduct for the IT organization as it creates and maintains IT capability. They provide a compass that guides the organization toward its target architecture. A well-defined architecture principle consists of a name, a definition, a motivation, and implications. Table 1 shows an example: the Reuse, Buy, Build principle.

Table 1: Reuse, Buy, Build Principle

Name: Reuse, Buy, Build

Definition: We prefer to reuse existing assets over buying, and buying over building.

Motivation:
  • We are not a software company
  • Our company has many IT assets that are underleveraged because we have previously favored building rather than reusing or buying
  • We have many redundant applications, and reducing this through reuse will reduce maintenance costs and improve system stability

Implications:
  • Architecture Governance will ensure projects adhere to this principle
  • Our company will develop an understanding of the functional and technical assets available for reuse, and will keep this up to date
  • We will strive for fewer and deeper software vendor relationships and will need to influence their roadmaps to mesh with our needs
  • To add a new tool to our portfolio, we will also plan and fund replacement of the installed base of the former tool

Architecture principles become core shared assumptions for all initiatives in the enterprise. This radically simplifies decision making and ensures that all projects align with, and move toward, the target state.

Other enterprise architecture principles an organization might consider include:

  • Don't Automate Bad Business Complexity Principle
  • Avoid Package Customization Principle
  • Prefer Service Orientation over Application Orientation Principle
  • Don't Over / Under Engineer Principle

Examples of banking-specific architecture principles include:

  • Only a master source of data can create business events
  • All processing should be straight-through (STP), with manual intervention only for exceptions (a minimal sketch follows this list)
  • Combine multiple analytics libraries into a single common library (this depends on trading desk size and product complexity)
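
The STP principle can be sketched as follows, with hypothetical names (Trade, an exception queue): trades that pass validation settle automatically, and only failures are routed to a manual exception queue. It illustrates the principle rather than any real settlement system.

```java
// Hypothetical sketch of straight-through processing with manual handling
// reserved for exceptions only.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

record Trade(String id, String counterparty, double notional) {}

public class StpExample {
    private static final Deque<Trade> exceptionQueue = new ArrayDeque<>();

    static void process(Trade trade) {
        // Straight-through path: validate, then settle automatically.
        if (trade.counterparty() == null || trade.notional() <= 0) {
            exceptionQueue.add(trade);   // only exceptions require manual intervention
            System.out.println("Routed to manual exception queue: " + trade.id());
        } else {
            System.out.println("Settled automatically: " + trade.id());
        }
    }

    public static void main(String[] args) {
        List.of(new Trade("T1", "ACME Corp", 1_000_000),
                new Trade("T2", null, 500_000))
            .forEach(StpExample::process);
    }
}
```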

EA is important; without it, organizations will be unable to deliver technology in an efficient and effective manner. If project teams work any way they want to, and use any technology they want to, chaos will result. Functionality and information will be duplicated, and reuse will occur sporadically, if at all. There will be conflicts between systems that cause each other to fail. Individual projects may be deemed successful, but as a portfolio there may be serious challenges. Systems don't exist in a vacuum; they coexist with several, and sometimes hundreds of, other systems. For example, a Bloomberg interface built by the rates front office IT group to store bond static data and prices may be viewed as a success in isolation, but such functionality is required by many systems within the organization, e.g., e-trading, pricing analytics, risk, and settlement systems, as well as other front office trading applications such as credit derivatives and repo. If each area builds such functionality, costs skyrocket (multiple Bloomberg licenses, duplicated interfaces, data, and hardware), and complexity and operational risk increase across the organization. EA plays a fundamental role in preventing such scenarios from occurring.

More Stories By Sanjeev Khurana

Sanjeev Khurana is Head of Development at a large European investment bank and has over 20 years of IT experience. He has also been a part-time lecturer at universities such as Brunel, Middlesex, and Greenwich, teaching Software Engineering to undergraduates and postgraduates.
