SOA & Web Services - What Is SDO?

Part One: The value of many of the facets of SDO

Service Data Objects (SDOs) simplify and unify Service Oriented Architecture (SOA) data access and code.

SDO complements the strength that SCA (Service Component Architecture) offers for simplifying development of SOA-based solutions. SCA handles the composition of service networks and SDO focuses on simplifying data handling. These technologies are getting significant support in the industry. The development of the SDO and SCA specifications is in the hands of the Open Service Oriented Architecture collaboration (www.osoa.org) and open source implementations of these specifications are being developed in the Apache Tuscany incubator project (http://incubator.apache.org/tuscany).

In this two-part article we use a scenario to demonstrate the value of many of the facets of SDO.

Some History
The first SDO specification was published in November 2004 as a result of a collaboration between IBM and BEA. The Eclipse Foundation developed an open source implementation of this SDO 1.0 specification. SDO primarily addressed the lack of general applicability of existing technologies such as JAXB and JDO. Around that time Microsoft entered this space with ADO.NET, offering a slightly different technical perspective. The SDO 2.0.1 specification appeared late in 2005 and is continuing to evolve, with wider industry involvement; at the time of writing revision 2.1 is imminent and revision 3.0 is in the pipeline.

The Advantages of SDO
SDO provides flexible data structures that allow data to be organized as graphs of objects (called data objects) that are composed of properties. Properties can be single or many valued and can have other data objects as their values. A data object can maintain a change summary of the alterations made to it, providing efficient communication of changes and a convenient way to update an original data source. SDO naturally permits disconnected data access patterns with an optimistic concurrency control model.

SDO offers a convenient way to work with XML documents. SDO implementations provide helpers to populate a data graph from both XML documents and relational databases and to read SDO metadata from an XML Schema Definition (XSD). Data objects can be serialized to XML and the metadata can be serialized to an XSD file (see Figure 1).
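As a flavor of what this looks like in code, here's a minimal sketch of our own (not one of the article's listings) that serializes a data object to XML and its metadata to an XSD string; it assumes "person" is a DataObject we already have in hand, and the URI and element name are purely illustrative.

String instanceXml = XMLHelper.INSTANCE.save(
     person, "www.example.org/people", "person");        // instance data as XML

List types = Collections.singletonList(person.getType());
String schemaXsd = XSDHelper.INSTANCE.generate(types);   // metadata as an XSD

Both helpers live in the standard commonj.sdo.helper package, so no vendor-specific code is needed for this round trip.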

Data Objects can be introspected using the SDO metadata API to get information about types, relationships, and constraints.
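Continuing with the same hypothetical person object, a short sketch (again ours) walks its type and prints each property's name, type, and cardinality:

Type type = person.getType();
System.out.println("Type: " + type.getURI() + "#" + type.getName());
for (Object o : type.getProperties()) {
     Property property = (Property) o;
     // Print the declared properties of the type, noting many-valued ones.
     System.out.println("  " + property.getName()
          + " : " + property.getType().getName()
          + (property.isMany() ? " (many-valued)" : ""));
}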

SDO delivers unified and consistent access to data from heterogeneous sources. This both provides a simple programming model for the application programmer and lets tools and frameworks work consistently across those heterogeneous data sources.

SDO offers a single model for data across the enterprise.

The diagram below shows a Web UI client accessing data from a variety of sources, mediated by SDO (see Figure 2). Web applications typically operate in a semi-connected fashion and rely on optimistic concurrency. SDO is well suited to this environment, where data can be manipulated remotely and a summary of the changes then delivered back to the data sources.

The following sections will introduce SDO in more detail.

A Scenario
This example is based on an imaginary project inspired by some real-world scenarios. A hypothetical group of universities, hospitals, and companies have embarked on a long-term collaboration to study a family of diseases that has both genetic and environmental components. They will need to exchange the medical histories of the people they're treating and studying, and also those of relatives. The data will likely come from disparate sources; basic patient data will probably be in a relational database; data from medical investigations conducted as part of this research project will be in XML documents; other medical data may come from less well known formats or custom sources. The amount of data about any given person will vary greatly. A long-standing patient may come with an extensive medical history. A relative might have little beyond a name and a relationship. This data has to be assembled into a coherent, manageable whole, and SDO is an attractive option for representing a complicated mix of data about each person and potentially maintaining a graph of such entities. For this example, we can't even assert that the graph is a (family) tree because, with adoption, re-marriage, fertility treatment, and so on, one person's associations with others can be quite intricate.

The various institutions involved may not want to give unrestricted access to their data sources, although they've agreed to supply pieces of it as needed. A hospital may be willing to provide the medical data associated with one patient as part of an investigation, but they won't permit open access to their entire patient record database. Similarly, a company will want to limit access to commercially sensitive material. SDO provides a convenient way for the owner of the data to deliver to outsiders a subset of that data of their own choosing.

We'll now show some of the key values of SDO through this scenario.

To illustrate where an SDO feature helps, consider a scenario where a hospital refers a patient to a university for further investigation. Relevant data will have to flow from the hospital to the university, and it may well come from a variety of different sources. Assume that name, age, records of visits, and so forth come from an SQL database, while specific medical data (the results of tests) is held in XML documents. Using standard SDO features it's straightforward for the hospital to combine these various sources into a data object and send that, letting users of the data access it via SDO's unified API.

The university does whatever it does with the patient, and then updates the SDO and sends it back. The change history in the SDO lets the hospital apply the updates to its various data repositories without the university ever needing to know the detail of those repositories.

It's unlikely that these updates will clash with other updates made independently by the hospital, but if they do, the use of an SDO change summary ensures that the conflict is detected and sorted out (probably manually in this case). The software component responsible for moving data between a data source (in this case, a relational database) and SDO is called a Data Access Service (DAS). A DAS can typically also handle conflicting updates.
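In code, the change-summary round trip looks roughly like the sketch below. It's our own illustration and assumes the patient DataObject (person) carries a ChangeSummary, for instance because its type declares a change-summary property or because it's the root of a DataGraph; the property values are made up.

ChangeSummary changes = person.getChangeSummary();
changes.beginLogging();                          // start recording changes

person.setString("name", "Joe Johnson Jr.");     // the university edits the data remotely

changes.endLogging();
for (Object o : changes.getChangedDataObjects()) {
     DataObject changed = (DataObject) o;
     // Old values let the hospital's DAS apply or reconcile each update
     // against the original data source.
     System.out.println(changed.getType().getName()
          + " modified: " + changes.isModified(changed)
          + ", old values: " + changes.getOldValues(changed));
}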

Using SDO as the data exchange format makes the system tolerant of the considerable variation to be expected in such a loosely coupled system. It's inevitable that, sooner or later, the versions of the applications that are sending and receiving data will get out of step. In fact, this may be the usual state of affairs. However, because an SDO can arrive with its own metadata, an older application can always retrieve what it wants from a newer (and presumably richer) input SDO, ignoring anything that it doesn't recognise. In the reverse case, a newer application can similarly recognise that the information it has received is from an older version and compensate accordingly.

In the previous scenarios, we've concentrated mostly on XML and SQL data sources. Now, let's suppose that in one hospital the results from the biochemistry lab are delivered in HL7 message format. This message format is widely used in the healthcare industry but is virtually unknown outside it, and so there's no off-the-shelf way to read such messages into an SDO. At this point there are several choices. We could use some broker-style product to reformat the HL7 into XML and then read it into an SDO or we could pay someone to write a new DAS that would populate an SDO directly from HL7. Since our collaborators are using an open source implementation of SDO, however, they opt to write their own DAS and donate it to the Apache Software Foundation's Tuscany project.

Other approaches exist for linking these various organisations, such as putting a software intermediary in the middle and using it to convert the data as needed. Doing so, though, requires a central authority with knowledge of all the possible input and output formats and how to convert between them. In such a loose collaboration there simply is no such central authority.

We now turn our attention to presenting the details of SDO using some code fragments.

Creating Types
In SDO, data objects have a type, so the first step in presenting our example is to construct the types we're going to use. We have several choices here. The first is whether to generate static Java classes that represent the types or to build the types dynamically.

In a situation where the type system is stable and well understood, generating static types leads to simpler, more natural coding. For example, with generated types we'd be able to code something like...

person.getPatientName()

as opposed to

person.getString("patientName")

Statically generated types also offer the possibility for the programmer to code to the generated interface without knowing the SDO API. The corollary is that when the type system isn't well known or might change, the dynamic SDO API may be more suitable. Choosing statically generated types still has the advantage that the whole SDO dynamic API remains available to the programmer for handling less common operations.

We're going to focus on dynamically building types, since that naturally leads to exploring more of the SDO API, but bear in mind that if we were to generate static classes we would have the power of SDO operating behind the scenes while using Java method calls that are no more than JavaBean getters and setters.

One option when defining types dynamically is to use the facilities of an existing DAS, which could, for example, convert from a database schema; we could also use SDO's XSDHelper to read an XSD and build SDO types from it. The SDO specification provides a way to create types dynamically; however, it depends on knowing the SDO API, which we haven't seen yet! To simplify this example, we'll use an extension from Apache Tuscany, which lets type definitions be built without knowing the SDO API.

Type personType = SDOUtil.createType(
     TypeHelper.INSTANCE, "www.example.org", "Person", false);

The net result of this line of code is the creation of an empty SDO type called "Person" scoped by the URI "www.example.org"; when completed, this type can be used to instantiate data objects. We can now add properties to the type and set its characteristics. Every property has a type, and we can make use of SDO's built-in types (in this case a string) to build our model.

Type stringType = TypeHelper.INSTANCE.getType("commonj.sdo", "string");

We use this type to add a "name" property to our "person" type.

SDOUtil.createProperty(personType, "name", stringType);

In our example scenario we can't know in advance all the information we might want to associate with a person. By making the type open we permit data objects of this type to carry additional properties that aren't defined as part of the type.

SDOUtil.setOpen(personType, true);

We could continue building this type in this way, but it would rapidly become tedious. An alternative approach is to use SDO's XSDHelper to build a type system by reading XML schema definitions (see Figure 1).

The XSD for our type definitions is shown in Listing 1. Just as in our previous code example, we've made the Person type in the XSD open by using "xsd:any". There are other interesting aspects of this schema that we'll develop as the example unfolds.

We can read this schema using SDO's XSDHelper; we then have three new types, Person, Relative, and PersonSet, available to us that can be used to create data objects.

XSDHelper xsdHelper = XSDHelper.INSTANCE;   // standard SDO helper for reading XSDs
File inputFile =
     new File("Person.xsd").getAbsoluteFile();
InputStream inputStream =
     new FileInputStream(inputFile);
List schemaTypes = xsdHelper.define(
     inputStream,
     inputFile.toURI().toString());

Building a Graph
These new types are all scoped within the URI www.example.org/people. Now that we have some types defined, we're in good shape to create some data objects, so the first thing we do is use the SDO DataFactory and our Person type to construct an SDO DataObject representing an instance of Person.

DataObject person1 = DataFactory.INSTANCE.create(
     "www.example.org/people", "Person");

We know from the XSD that the Person type has "id," "name," and "gender" properties so we can set values for these properties as follows.

    person1.setString("id", "1");
    person1.setString("name", "Joe Johnson Snr.");
    person1.setString("gender", "male");

We can begin building a graph by adding this person to a set of "referrals" in the study, i.e., the set of people who have been referred by a medical practitioner because they've exhibited symptoms or are related to an existing patient. We do this by creating a new DataObject of type PersonSet.

DataObject referrals = DataFactory.INSTANCE.create(
     "www.example.org/people", "PersonSet");

The "people" property of the referrals DataObject is defined in the XSD as many valued and is therefore accessed via the getList method.

referrals.getList("people").add(person1);
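Finally, a short sketch of our own showing how this small graph could be handed to a collaborator: we serialize the referrals set to XML with XMLHelper and read it back, with the root element URI and name assumed from the schema described above.

String xml = XMLHelper.INSTANCE.save(
     referrals, "www.example.org/people", "personSet");   // graph out as XML

DataObject received = XMLHelper.INSTANCE.load(xml).getRootObject();
System.out.println(received.getList("people").size() + " person(s) received");

The receiving side only needs to have registered the same types from Person.xsd to rebuild an equivalent graph.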


More Stories By Kelvin Goodson

Kelvin Goodson is based at IBM Hursley in the UK as part of the Open Source SOA team. He is a committer to the Apache Tuscany incubator project, and works primarily on development of the Tuscany Java implementation of SDO. He gained a Ph.D. in image analysis and artificial intelligence in 1988, and has previously worked in the areas of medical imaging, weather forecasting and messaging middleware.

More Stories By Geoffrey Winn

Geoff Winn is based at IBM Hursley in the UK, as part of the Open Source SOA team. He is a member of the SDO specification group and currently works on development of the Apache Tuscany C++ implementation of SDO. He has degrees in Mathematics and Computation, and has previously worked in the areas of messaging and brokering middleware.
