Microservices Expo: Article

SQL Peer-to-Peer Dynamic Structured Data Processing Collaboration

Using automatic metadata maintenance

Unstructured and semi-structured XML data are now used more than structured data, largely because fuzzy processing can be applied to such ubiquitous data. But fixed structured data still keeps businesses running day in and day out, and it demands consistent, predictable, highly principled processing to produce correct results. Structured data therefore cannot be replaced by unstructured or semi-structured data. For this reason, a general-purpose peer-to-peer collaboration capability that applies highly principled hierarchical data processing and its flexible, advanced structured processing to dynamically structured data would be very useful. Such flexible dynamic structured processing can change the structure of the data as the required processing demands, while preserving the relational and hierarchical principles and semantics of the data, so that correct structured results are derived even after structure transformations.

This processing operates freely across remote, unrelated peer locations at any time, transparently and automatically accommodating unpredictable changes to data structures and data types for immediate processing. Such automatic peer-to-peer dynamic structured data collaboration is depicted below: the working hierarchical data structure is modified at each peer site, labeled P1 to P4 in the diagram, and its operation is described beneath the diagram.


Diagram Description

In the diagram above, peer locations P1, P2, P3, and P4, located anywhere, need to collaborate and share their structured data to produce a needed result. This process requires unpredictably changing the data structure and data types as necessary to achieve the desired result. Peer 1 starts the collaboration by inputting three relational tables, A, B, and C, modeling them into a hierarchical structure, and sending it to Peer 2 for further processing. Concurrently, overlapping with Peer 2's processing, Peer 1 also inputs an XML linear hierarchical structure, XYZ, transforms it into a nonlinear multipath hierarchical structure, and sends it to Peer 3 for further processing.
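Peer 1's first step, modeling flat relational tables into one hierarchy, can be sketched as follows. This is a minimal illustration only: the key columns linking A to B and B to C are assumptions for the example, and the nested-dict layout is not SQLfX's actual representation.

```python
# Illustrative flat tables; the linking columns "a" and "b" are assumed.
A = [{"a": "a1"}]
B = [{"a": "a1", "b": "k1"}, {"a": "a1", "b": "k2"}]
C = [{"b": "k1", "c": "c1"}]

def model_hierarchy(a_rows, b_rows, c_rows):
    """Nest child rows under their parents by matching key values,
    producing an A-over-B-over-C hierarchical structure."""
    return [
        {
            **a,
            "B": [
                {**b, "C": [c for c in c_rows if c["b"] == b["b"]]}
                for b in b_rows if b["a"] == a["a"]
            ],
        }
        for a in a_rows
    ]

tree = model_hierarchy(A, B, C)
print(tree[0]["B"][0]["C"])  # C rows nested beneath their B parent
```

The same nesting-by-key idea extends to however many levels the working structure needs.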

Peer 2 and Peer 3 now work independently and concurrently. Each retrieves its structure input from Peer 1, inputs additional relational table data from its own home location, and joins this data to its working data structure. On completion, Peer 2 and Peer 3 both send their modified data structures to the common Peer 4 for further processing.

Peer 4 accepts the modified data structures from Peer 2 and Peer 3, which operated concurrently. It hierarchically joins them using a matching data item value between nodes B and X (B.b=X.x). Peer 4 then eliminates unneeded data items from the joined result using SQL's dynamic SELECT operation to select data items for output from nodes A, B, E, Y, and W. The SQL query looks like: SELECT A.a, B.b, E.e, Y.y, W.w FROM P2View LEFT JOIN P3View ON B.b=X.x. This slices out all nodes (C, D, Z, X, V) not referenced by the SELECT statement, automatically aggregating the necessary data as shown in the diagram above. This process is known as projection in relational processing and node promotion in hierarchical processing. The LEFT JOIN operation hierarchically places P2's structure over P3's structure, connected by the ON clause specification B.b=X.x. This newly combined hierarchical structure in Peer 4 is sent back to Peer 1 for immediate review and processing, where the hierarchical data can be selectively output in different formats, each with different data selections, as shown in the diagram above.
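Peer 4's join-and-project step can be sketched with standard SQL via SQLite. This is a deliberately flattened sketch: the view names P2View and P3View and the ON condition come from the article's query, but the tables here are flat, the sample values are invented, and node E is omitted for brevity, so SQLfX's hierarchical node-promotion semantics are not reproduced.

```python
import sqlite3

# Emulate peer 4's two incoming structures as flat views (a simplification).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE P2View (a TEXT, b TEXT);
CREATE TABLE P3View (x TEXT, y TEXT, w TEXT);
INSERT INTO P2View VALUES ('a1', 'k1'), ('a2', 'k2');
INSERT INTO P3View VALUES ('k1', 'y1', 'w1');
""")

# The article's query shape: project the wanted items and LEFT JOIN on B.b = X.x.
rows = conn.execute("""
SELECT P2View.a, P2View.b, P3View.y, P3View.w
FROM P2View LEFT JOIN P3View ON P2View.b = P3View.x
ORDER BY P2View.a
""").fetchall()
for row in rows:
    print(row)
```

The LEFT JOIN keeps every P2View row even when no P3View row matches, which mirrors the article's placement of P2's structure over P3's.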

During this entire peer-to-peer collaboration, the changing data structures and data types are maintained automatically and used transparently on the user's behalf. The user at each receiving peer can also view the current active structure and its data types, but knowledge of the structure is not needed to specify a query, because the maintained structure is known and used inherently by the query processor. Different versions of the working data structure can also be saved and restored at each peer by the user.

Integrating SQL With Peer-to-Peer Structured Data Collaboration

The problem with performing this type of dynamic processing is that structured data processing has been limited to fixed, static structures, because dynamically generated structured data cannot be handled today. Structured data is currently shared via shared metadata; since the metadata remains the same, the structure must remain static. With dynamic structured processing, however, the data structure can be modified as needed to support the required structured operation, as shown and described above. This requires automatic metadata maintenance, which the industry has not previously supported for structured data processing.
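One way to picture automatic metadata maintenance is a payload in which the current structure description always travels with the data, so a receiving peer never needs prior schema knowledge. The payload field names below are hypothetical, not SQLfX's actual wire format.

```python
import json

def make_payload(structure_name, schema, rows):
    """Bundle rows with the metadata that describes them (hypothetical shape)."""
    return {"structure": structure_name, "schema": schema, "rows": rows}

def receive(payload):
    """A receiving peer reads the schema from the payload itself and
    checks each row against the transmitted metadata."""
    schema = payload["schema"]
    for row in payload["rows"]:
        assert set(row) == set(schema), "row does not match transmitted schema"
    return schema

payload = make_payload(
    "P2View",
    {"a": "TEXT", "b": "TEXT"},
    [{"a": "a1", "b": "k1"}, {"a": "a2", "b": "k2"}],
)
wire = json.dumps(payload)           # serialized at the sending peer...
schema = receive(json.loads(wire))   # ...and understood at the next peer
print(schema)
```

Because the metadata is regenerated whenever the structure changes, each hop can process a structure it has never seen before.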

An advanced ANSI SQL transparent hierarchical processor prototype, SQLfX (www.adatinc.com/demo.html), has been developed that supports the dynamic, flexible structured data processing necessary for collaboration. It uses SQL's inherent hierarchical data processing capabilities, which naturally support full multipath dynamic hierarchical data structures, allowing the most complex hierarchical operations to be performed whenever and wherever needed. Because its processing is dynamic, SQLfX already operates structure-aware, which is also what allows the required metadata maintenance to occur automatically at each peer as the data structure and data items change.

The ANSI SQL SQLfX flexible dynamic hierarchical processing technology can be enhanced to integrate with peer-to-peer structured data processing collaboration, eliminating the user control otherwise needed for the dynamically changing metadata. Automatic metadata maintenance supplies up-to-date metadata that accompanies the data when it is transmitted between peers. This allows very fast, on-the-fly, advanced hierarchical structured data collaboration, enabling previously unknown structure results delivered to any peer to be processed immediately and automatically by a SQLfX SQL processor at that peer location.

SQLfX SQL controls the sending and receiving of data structures between peers using new SQL InFile and OutFile keywords added by SQLfX for this purpose. Password protection and data encryption can also be supported for data security. Further dynamic processing and data structure modification can be performed at each peer visited, in any order, including in parallel as shown in the diagram above. This opens up the new capability of dynamic structured data processing with automatic, transparent metadata handling.
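A file-based handoff in the spirit of the OutFile and InFile keywords might look like the sketch below. The function names, file format, and payload shape are all illustrative assumptions; they are not SQLfX's actual interface, and encryption is omitted.

```python
import json
import os
import tempfile

def out_file(path, structure):
    """Write a structure (data plus its metadata) for the next peer."""
    with open(path, "w") as f:
        json.dump(structure, f)

def in_file(path):
    """Read a transmitted structure at the receiving peer."""
    with open(path) as f:
        return json.load(f)

# Simulate peer 2 handing its working structure to peer 4.
path = os.path.join(tempfile.mkdtemp(), "p2_to_p4.json")
out_file(path, {"schema": {"a": "TEXT"}, "rows": [{"a": "a1"}]})
received = in_file(path)
print(received["rows"])
```

In a real deployment the file would be replaced by a network transfer, with the same structure-plus-metadata payload on the wire.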

SQL Hierarchical Processing Capabilities for Structured Data Collaboration
SQLfX is a powerful new ANSI SQL transparent multipath hierarchical processor that dynamically processes heterogeneous data: logically flat data such as relational tables and physically hierarchical data such as XML. This full dynamic hierarchical processing enables logical and physical structures to be hierarchically joined and modeled dynamically, significantly increasing the power of the data structure and the queries applied to it, and it is performed automatically, without user hierarchical navigation. The operation naturally uses the hierarchical semantic information between pathways to process multipath queries, which can, for example, select data from one path based on data in another path. This unrestricted dynamic processing requires special automatic processing known as Lowest Common Ancestor (LCA) processing, which enables any conceivable valid multipath query to be processed automatically. SQLfX supports this capability naturally; it is missing in other data processors such as XQuery.
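The LCA idea can be shown with a minimal sketch: correlating data across two paths requires finding the node where the paths meet. The node names follow the article's joined structure (B over both the E path and the X path), but the exact parent map is an assumption for illustration, and real LCA query processing involves far more than this lookup.

```python
# Hypothetical parent map for the joined structure: B over E, and B over
# the X path that contains Y and W.
PARENT = {"B": None, "E": "B", "X": "B", "Y": "X", "W": "X"}

def ancestors(node):
    """Chain from a node up to the root, inclusive."""
    chain = []
    while node is not None:
        chain.append(node)
        node = PARENT[node]
    return chain

def lca(n1, n2):
    """Walk up from n1; the first of n2's ancestors also above n1 is the LCA."""
    seen = set(ancestors(n1))
    for node in ancestors(n2):
        if node in seen:
            return node
    return None

# A query selecting from the E path based on data in the Y path must be
# correlated at their lowest common ancestor:
print(lca("E", "Y"))
```

The query processor uses this meeting point to decide which data occurrences in one path are related to which occurrences in the other.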

An additional valuable benefit of hierarchical structures is that they naturally organize and reuse data. Their ability to freely create and grow logical hierarchical multipath structures dynamically has another overlooked benefit: it continually increases the value of the data nonlinearly, through automatic reuse and sharing of data at higher levels with the multiple lower levels, in pyramid fashion. The dynamic joining of these hierarchical structures can further multiply their data value and querying power. Another advantage is logical hierarchical data structures that are assembled on the fly, such as when structures are joined, and that exist only while they are being used. These logical structures add flexibility to hierarchical structures and efficiency to their use.

Hierarchical structures can also be data-filtered in their entirety, following hierarchical semantics, using SQL's WHERE clause to reduce the data by value to precisely the desired result. This is a complex and powerful operation because data filtering applied to any node data item affects all other nodes of the structure: every node in a hierarchical structure is related to every other node. This is demonstrated in Diagram 2 below, where filtering node E affects all other nodes, sometimes indirectly through a cousin relationship such as node B. In this example, all data occurrences related to an item E value of 25 are filtered out; the filtering flow is represented by the arrows. This gives multipath processing and WHERE clause hierarchical data processing significant power. Such global structure filtering is particularly useful in peer-to-peer processing, when a transferred multipath structure must be filtered in its entirety for some data value condition, which can be a complex condition involving multiple paths.
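The whole-structure effect of a condition on one node can be sketched as follows: a WHERE condition on E removes every related data occurrence, including sibling B data reached only through the shared ancestor. The nested-record layout and sample values are illustrative assumptions, not SQLfX's storage format.

```python
# Two A occurrences, each with related B and E occurrences beneath it.
structure = [
    {"A": "a1", "children": {"B": [{"b": "k1"}], "E": [{"e": 25}]}},
    {"A": "a2", "children": {"B": [{"b": "k2"}], "E": [{"e": 30}]}},
]

def keep(occurrence):
    """WHERE E.e <> 25, applied hierarchically: if any related E
    occurrence equals 25, the whole occurrence (B data included) goes."""
    return all(e["e"] != 25 for e in occurrence["children"]["E"])

filtered = [occ for occ in structure if keep(occ)]
print([occ["A"] for occ in filtered])
```

Only the a2 occurrence survives: the condition was stated on E, yet the related A and B data of the first occurrence are filtered out with it.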

SQLfX SQL also supports very advanced dynamic any-to-any data structure transformation and data structure virtualization. These allow all hierarchical data transformations to be performed, semantically correctly, at a high SQL processing level. With multipath hierarchical processing and any-to-any structure transformations, a variety of hypothetical, experimental, research, exploratory, and problem-solving queries can be carried out immediately and without restriction, further enhanced by powerful real-time hierarchical processing collaboration.
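One simple any-to-any transformation, inverting an A-over-B hierarchy into B-over-A, can be sketched as below. This only shows the shape change on invented data; SQLfX performs such transformations at the SQL level while preserving hierarchical semantics, which this sketch does not attempt.

```python
# An A-over-B structure with one A parent and two B children.
a_over_b = [
    {"A": "a1", "B": [{"b": "k1"}, {"b": "k2"}]},
]

def invert(structure):
    """Promote each B occurrence to the root, nesting its former A parent."""
    return [
        {"B": b["b"], "A": [{"A": occ["A"]}]}
        for occ in structure
        for b in occ["B"]
    ]

print(invert(a_over_b))
```

Each B becomes a root with its former parent duplicated beneath it, the characteristic effect of promoting a child node over its parent.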

Conclusion

All of the powerful and flexible capabilities described in this article make multipath hierarchical structures and their hierarchical processing a natural fit for dynamic structured data processing collaboration. The universally known SQL interface makes a perfect API, backed by this new relational hierarchical processing technology. Single one-way data transmissions will also always be available to send to anyone at any time, because a receive-only version of SQLfX peer-to-peer will be freely available to download and use to automatically view and utilize a one-way transmitted data structure. Additional information on SQLfX's advanced hierarchical processing capabilities and operation can be found at www.adatinc.com. Persons and companies wanting more information or help with SQL peer-to-peer dynamic structured data processing collaboration can contact [email protected].

More Stories By Michael M David

Michael M. David is founder and CTO of Advanced Data Access Technologies, Inc. He has been a staff scientist and lead XML architect for NCR/Teradata and their representative to the SQLX Group. He has researched, designed, and developed commercial query languages for heterogeneous hierarchical and relational databases for over twenty years. He authored the book "Advanced ANSI SQL Data Modeling and Structure Processing," published by Artech House, as well as many papers and articles on database topics. His research and findings have shown that hierarchical data processing is a subset of relational processing, and how to utilize this inherent capability in ANSI SQL. Additionally, his research has shown that advanced multipath (LCA) processing is also naturally supported and performed automatically in ANSI SQL, and that further advanced hierarchical processing operations are possible. These capabilities can be demonstrated and explained in the ANSI SQL Transparent XML Hierarchical Processor at his site: www.adatinc.com/demo.html.
