Replication: The Single Point of Entry to the UBR Cloud

Replication is the process of synchronizing data among the participants (or entities) in the operator cloud. The cloud acts as a single logical entity, or single point of entry, to the outside world. The goal of replication is to ensure uniformity and consistency of the data present in the UBR, achieved through the set of replication messages defined in the UDDI Version 2 Replication Specification. Nodes represent operators, and the two terms are used synonymously in the replication specification. The identified set of entities forms the operator cloud.

This article looks at the importance of replication and its coexistence with the UDDI service. I'll also cover the replication APIs that are implemented by the operators and will discuss the business advantages. I assume you are familiar with XML, SOAP, and UDDI.

UDDI Business Registry Cloud
Figure 1 shows the UBR cloud with the operators replicating data among one another. Currently, IBM, Microsoft, NTT Communications, and SAP are the public operators of the UDDI registry and form the cloud. The process of adding a node to the cloud is governed by the UDDI Operators Council, a governing body within the UDDI.org project. All public operators must implement the UDDI Specifications as mandated by UDDI.org, and replication is a major piece of functionality operational across all of them. As a result, Web services clients can query any registry for businesses, services, tModels, and so on, irrespective of which single registry holds their publisher accounts.


Replication Business Model
Consider the ACME Company, which specializes in providing wealth-management consulting to its customers. Assume that ACME registers itself with one of the UBRs (IBM, Microsoft, NTT Communications, or SAP); suppose it holds a publisher account with IBM. ACME publishes its business ("ACME Consulting Business") with the IBM Business Registry. The business published in the UBR then becomes visible across the nodes in the UBR through replication: the content of the ACME business will also reside in the Microsoft, NTT Communications, and SAP business registries, given that these entities participate in replication. These registries will each have an entry in their data store corresponding to "ACME Consulting Business," with IBM as the primary custodian of the business. Likewise, all the services and tModels registered to the ACME business are replicated across all the UBRs within the cloud. Publisher accounts are not replicated across the registries; only the content, or data, in the registry is replicated.

The custodian is the only party authorized to modify or update the content of the registered business.

Business Search Using UDDI4J
The find_business call of the UDDI4J API helps find the replicated business, with IBM as the operator, using Apache Axis as the transport. A snippet of FindReplicatedBusiness.java is shown in Listing 1 (the listings and sample code for this article can be found online at www.sys-con.com/webservices/sourcec.cfm).

To run this UDDI4J sample, you need to set a classpath that includes the following JARs:

C:\axis-1_1RC1\lib\axis.jar;
C:\axis-1_1RC1\lib\commons-discovery.jar;
C:\axis-1_1RC1\lib\commons-logging.jar;
C:\axis-1_1RC1\lib\saaj.jar;
C:\axis-1_1RC1\lib\jaxrpc.jar;

If you are behind a firewall, run the sample with your proxy details:

java -Dhttp.proxyHost=yourProxyHost
-Dhttp.proxyPort=yourProxyPort FindReplicatedBusiness

The output in Listing 2 is the result of a UDDI4J find_business call made to the IBM registry. The other UBRs return similar output, with the operator attribute pointing to the respective UBR.

Replication Data Structures
The UDDI Replication Specification defines a set of data structures that are used by the replication APIs.

Update Sequence Number (USN)
Each UDDI node participating in the replication process shall assign an increasing number to each change record created at that node; this is the originating USN for that change record. Originating USN values must be increasing, though there can be gaps in a node's originating USN sequence, for example as the result of abnormal system failures.

As a result of performing replication, a node has to process all the replicated data and must assign an additional, unique local USN to each change record. To avoid exhausting the USN space, the replication specification mandates that nodes implement USNs exactly 63 bits in size. An originating USN of 0 represents that no change records have been seen or applied from a node, so nodes skip this value during replication processing.
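As a rough sketch of the USN rules above, a node's counter might look like the following. This is a hypothetical illustration; UsnCounter and its methods are not part of the specification.

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch of originating-USN assignment: strictly increasing,
// 63-bit, with 0 reserved to mean "no change records seen or applied".
class UsnCounter {
    // Long.MAX_VALUE == 2^63 - 1, the largest value a 63-bit USN can hold.
    private final AtomicLong counter = new AtomicLong(0);

    // Returns the next originating USN; never returns the reserved value 0.
    long next() {
        long usn = counter.incrementAndGet();
        if (usn <= 0) {
            throw new IllegalStateException("63-bit USN space exhausted");
        }
        return usn;
    }
}
```

Since the specification permits gaps after abnormal failures, a durable implementation could skip ahead to a persisted checkpoint after a crash rather than try to replay lost values.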

Change Records
When a publish call modifies a specific datum at a node, the node creates a change record that describes the details of the change. For example, when a service is added to an existing business, a change record is generated for that operation. The change record holds the following information:

  • nodeID: Where the change record was initially created
  • Originating USN: Assigned to the change record at its creation by its originating node
  • Data: Conveys the semantics of the change in question.
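The three fields above can be modeled as a small value class. The field names here are illustrative, not taken from the replication schema.

```java
// Minimal sketch of a change record; illustrative only.
final class ChangeRecord {
    final String nodeId;        // node where the record was initially created
    final long originatingUsn;  // USN assigned by the originating node
    final String data;          // XML text conveying the semantics of the change

    ChangeRecord(String nodeId, long originatingUsn, String data) {
        this.nodeId = nodeId;
        this.originatingUsn = originatingUsn;
        this.data = data;
    }
}
```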

Change Record Journal
Whenever a node receives change records from other nodes, it should create an entry in the change record journal. The journal stores the XML text of the change records, which helps verify that the transmitted data has not been altered by intermediary nodes during the course of replication. The change record journal is maintained in the data store of the UBR.

High Water Mark Vector
Each UDDI node maintains, as a high water mark vector, state information such as the originating USN of the most recent change from each node of the registry that it has successfully processed. The high water mark vector has one entry per node, with each entry holding the following information:

  • operatorNodeID: The UUID of the node
  • originatingUSN: The originating USN of the most recent change associated with the node that has been successfully consumed
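A minimal sketch of such a vector, assuming a simple map from operatorNodeID to the highest consumed originating USN (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative high water mark vector: one entry per node, mapping
// operatorNodeID to the highest originating USN successfully consumed.
class HighWaterMarkVector {
    private final Map<String, Long> marks = new HashMap<>();

    // Record that changes from nodeId up to usn have been consumed;
    // a stale (lower) update never moves the mark backwards.
    void advance(String nodeId, long usn) {
        marks.merge(nodeId, usn, Math::max);
    }

    // 0 means no change records from this node have been seen or applied.
    long highestSeen(String nodeId) {
        return marks.getOrDefault(nodeId, 0L);
    }
}
```

Returning 0 for an unknown node matches the reserved originating USN value described earlier.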

Replication APIs
Replication involves notification of changes and retrieval of those changes from nodes in the registry. A node broadcasts information about new changes to its peers, and a peer interested in those changes subsequently makes a call to retrieve them. To achieve this functionality, UDDI Replication defines the following APIs:

    • get_changeRecords
    • notify_changeRecordsAvailable
    • do_ping
    • get_highWaterMarks
get_changeRecords
This UDDI API call is used to initiate the replication of change records from one node to another. The requestingNode is the node that initiates get_changeRecords and provides information such as changesAlreadySeen as part of the high water mark vector. The callee uses this information to determine the change records needed by the caller.

The get_changeRecords schema is shown in Listing 3. An example message is shown in Listing 4.

notify_changeRecordsAvailable
Nodes use this message to inform others that they have new change records available for consumption by replication. The notify_changeRecordsAvailable message precedes the get_changeRecords message. Its schema is shown in Listing 5, with an example message in Listing 6.

do_ping
This UDDI API call provides the means to verify the connectivity of a node that wishes to start replication. Its schema follows:

<element name="do_ping">
  <complexType final="restriction"/>
</element>

Example Message

<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <do_ping xmlns="urn:uddi-org:repl_v2"/>
  </Body>
</Envelope>

get_highWaterMarks
This UDDI API message provides a means to obtain a list of highWaterMark elements containing the highest known USN for all nodes in the replication communication graph. Its schema follows:

<element name="get_highWaterMarks">
  <complexType final="restriction"/>
</element>

Example Message

<?xml version="1.0" encoding="UTF-8"?>
<Envelope xmlns="http://schemas.xmlsoap.org/soap/envelope/">
  <Body>
    <get_highWaterMarks xmlns="urn:uddi-org:repl_v2"/>
  </Body>
</Envelope>

Replication Processing
Replication processing involves the API calls made to and from the replicating nodes, and it follows a simple life cycle. Consider nodes A and B (see Figure 2) participating in a replication scenario, and assume both are configured for replication processing. Node A initiates the process and makes a do_ping call to Node B to check its availability; Node B makes a similar call to check Node A's availability. If Node A's do_ping call succeeds, Node A makes a notify_changeRecordsAvailable call to Node B, telling Node B that Node A has changes that Node B has not yet seen. In response, Node B makes a get_changeRecords call to Node A, and Node A sends back all the unseen change records. Node B processes all the change records from Node A and updates its local repository. Assuming everything goes well, this completes a single replication cycle. The replication specification devotes a detailed section to failure scenarios and how to handle them during replication processing. Figure 2 shows the interaction between the replication APIs in a two-node scenario.
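The cycle above can be sketched in miniature as follows. This is a toy in-memory model, not the wire protocol; the method names mirror the API calls, but everything else is illustrative.

```java
import java.util.ArrayList;
import java.util.List;

// Toy in-memory model of one replication cycle between two nodes.
// Real nodes exchange SOAP messages; this mirrors only the call sequence.
class Node {
    final String id;
    private final List<Long> originatedUsns = new ArrayList<>(); // records created here
    private long nextUsn = 1;      // 0 is reserved for "nothing seen"
    long highWaterMarkForPeer = 0; // highest peer USN successfully consumed
    int journalSize = 0;           // entries in the change record journal

    Node(String id) { this.id = id; }

    // do_ping: connectivity check (trivially up in this sketch)
    boolean doPing() { return true; }

    // a publish call creates a change record with the next originating USN
    void publish() { originatedUsns.add(nextUsn++); }

    // get_changeRecords: return USNs newer than the caller's changesAlreadySeen
    List<Long> getChangeRecords(long changesAlreadySeen) {
        List<Long> unseen = new ArrayList<>();
        for (long usn : originatedUsns) {
            if (usn > changesAlreadySeen) unseen.add(usn);
        }
        return unseen;
    }

    // One cycle: do_ping, then (after notify_changeRecordsAvailable from the
    // peer) get_changeRecords, journal each record, and advance the mark.
    void pullFrom(Node peer) {
        if (!peer.doPing()) return;
        for (long usn : peer.getChangeRecords(highWaterMarkForPeer)) {
            journalSize++; // store the record's XML text in the journal
            highWaterMarkForPeer = Math.max(highWaterMarkForPeer, usn);
        }
    }
}
```

A second pullFrom against an unchanged peer retrieves nothing, since the high water mark already covers every record the peer has originated.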


Replication Configuration
Replication Configuration File

The replication functionality implemented by an operator should be configurable, as mandated by the UDDI Replication Specification. This is done through the Replication Configuration File (RCF), which may be located centrally and accessed by the operators; it typically resides at https://www.uddi.org/operator/ReplicationConfiguration.xml. The RCF can also be stored within each operator's Web server, in which case each operator has to maintain an identical copy of the RCF to keep the nodes consistent. UDDI data replication is governed by the set of parameters that form the RCF, which maintains the necessary information about the operators in the replication process.

The following parameters are defined in the RCF:

  • serialNumber: Value of this element changes whenever the RCF is updated or changed.
  • timeOfConfigurationUpdate: Gives you the timestamp of the RCF.
  • councilContact: Provides information about the person who maintains or updates the RCF.
  • maximumTimeToSyncUBR: Allows you to specify the maximum amount of time (in hours) that a node in the UBR can sync with all nodes in the UBR. The change made at any single node in the UBR is expected to be visible at all nodes in the UBR within this time limit.
  • maximumTimeToGetChanges: Allows you to specify the maximum amount of time (in hours) that an individual node may wait to request changes. Nodes must perform get_changeRecords within this time limit.
  • operator: Provides the list of nodes that are part of replication topology.
  • communicationGraph: Provides the communication paths of the nodes and their replication topologies.

Sample Replication Configuration File
The RCF shown in Listing 7 represents a four-node scenario in the communication graph. This RCF holds information about the nodes that take part in the replication process.

Replication Business Advantage
Currently, replication is used to synchronize the registry data among the public operators. In the near future, replication will also be used to synchronize data across geographical locations, with nodes selectively replicating data based on their requirements. For example, an operator hosting a registry in Japan could replicate only the services located in its region, based on their language. This filtering brings services closer to the consumers in that region.

Filtering can also be done based on categories, which can help promote shared businesses. For example, Company A and Company B can host their individual registries but share a specific business segment between them. This business segment has to be replicated between the two registries to keep them in sync. Thus, businesses collaborate with each other in a secure manner that benefits their customers.

Private registries deployed within the enterprise can be promoted to public registries with the help of replication. Part of the data can live in a private registry and part of it in a public registry; for example, a bindingTemplate in a private registry can point to a tModel in a public registry. The UDDI v3 Specification details the concept of entity promotion, whereby test registries can be promoted to production mode while retaining their keys.

Now that the UBR cloud is online and operational, the replication functionality of the UDDI registry will make it more attractive to the Web services community as a standard service discovery protocol. This business-rich feature marks a milestone in the history of UDDI. More businesses should register meaningful services in the UBR to add value to the registry data, which in turn benefits service consumers.


References

  • UDDI Version 2.03 Replication Specification: http://uddi.org/pubs/Replication-V2.03-Published-20020719.pdf
  • UDDI Version 2.0 XML Replication Schema: http://uddi.org/schema/uddi_v2replication.xsd
  • UDDI Version 2.01 Operator's Specification: http://uddi.org/pubs/Operators-V2.01-Published-20020719.pdf
  • UDDI4J SDK: http://uddi4j.org

About the Author
Arulazi Dhesiaseelan holds a Master of Computer Applications degree from PSG College of Technology, India. He has been involved in designing and building Java-based applications and SDKs for more than three years, and was involved in the API development of the UDDI4J project hosted at http://uddi4j.org. He works with Hewlett-Packard Company (India Software Operations), Bangalore, where he is currently involved in the development of an open service framework for mobile infrastructures. He can be reached at [email protected]

