
Dion Hinchcliffe's SOA Blog: Notes on Making Good Social Software

I've been studying the mechanics of social software quite a bit recently.  Now that I've begun writing a book about Web 2.0 for publication in summer 2006 (details on that in a future article), I'm trying to get a handle on why it took so long for many of the "planks" of Web 2.0 to go mainstream.  Particularly the powerful two-way social software that now surrounds us, best exemplified by blogs and wikis but also by hundreds of other applications.  Clay Shirky, in his absolutely wonderful essay, A Group Is Its Own Worst Enemy, observes that eight long years passed between the first forms-capable browser and blogs finally getting off the ground.

So, what did we have to learn in that time for social software to really get off the ground?

As most of my readers know, social software enables groups of people to collaborate through computer mediation.  It's a surprisingly sophisticated field that has been around for almost 40 years now.  Two famous examples of social software are the bulletin board systems of the 1980s and Ray Ozzie's now-famous groupware system, Lotus Notes.

The Web is now packed with numerous examples of useful, potent, and widely used social software, including well-known examples like Wikipedia, del.icio.us, digg, and WordPress.  There is also a growing body of next-generation social software exemplars such as AllPeers, RubHub, Squidoo, and Wink.  For a fairly new and more objective top 10 social software list, see the one compiled by Ross Mayfield.

This is all interesting backstory of course but I'm still trying to pin down the lessons we've actually learned so far.  Sure, at least at first there was a general Internet skill gap that impeded the mass adoption of social software by the general public.  Millions of people had to learn how to use the Web first, establish a level of trust with it, and then begin to learn the habits of being social online.  It was a steep curve for many, but more and more of us are here now.

Unfortunately, one thing I learned in my research is that both the usage and creation of much of our social software still seem to be mostly experience-based.  And as Shirky points out, that's the worst possible way to learn.  He notes the ideal way to acquire knowledge is when someone else figures it out and tells you: "Don't go in the swamp.  There are alligators in there."  Dryly, Shirky notes that "Learning from experience about the alligators is lousy, compared to learning from reading, say."

Where I'm going with this is that there have been wildly successful social places created on the Web (Usenet, MySpace) and there have been failures (GeoCities).  I'm trying to pin down the exact mechanisms that make social software good, rather than indifferent or even outright terrible.  Like most Web 2.0 ideas, it's about best practices.  Or, how do we break away from sink-or-swim software?

From what I can see, it boils down to a few things, which I'll summarize here.  I was surprised at the extensive bodies of knowledge on social software, which often seem untapped if you look at some of the recent attempts at it (Flock, the social browser, for example).  So, in a nutshell, here are the fundamentals of social software.  Again, refer to the Shirky essay cited above for some great history and background on these:

Pillars of Social Software

1. Establishment of Handles: Anonymity doesn't work well with social software, but users still want their privacy.  Giving each user a handle lets people track who said what, find each other, and form groups.  In general, switching handles should be penalized to encourage constructive behavior.
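One minimal sketch of this idea, with hypothetical names not drawn from any specific platform: a community that allows handle switching but resets reputation to zero when it happens, so constructive behavior under a stable handle is what gets rewarded.

```python
from dataclasses import dataclass


@dataclass
class Member:
    """A participant identified by a persistent handle."""
    handle: str
    reputation: int = 0


class Community:
    def __init__(self):
        self.members = {}

    def register(self, handle):
        member = Member(handle)
        self.members[handle] = member
        return member

    def switch_handle(self, old, new):
        """Switching is allowed, but reputation does not follow:
        the new identity starts from zero."""
        self.members.pop(old)
        fresh = Member(new)  # reputation defaults to 0
        self.members[new] = fresh
        return fresh
```

The key design choice is that identity is cheap to create but reputation is not, which is what makes abandoning a handle costly.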

2. Allow for Members in Good Standing: Permit users who contribute well or do good works to get recognized.  This can be as simple as associating their handle with their social activities, or it can be much more sophisticated.  There just needs to be a visible connection between the handle and the social behavior for others to observe.
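As a sketch of the "simple" end of that spectrum (the ledger and its field names are hypothetical): a record that ties each handle to its visible activity, so others can inspect what a member actually did rather than just a score.

```python
from collections import defaultdict


class ReputationLedger:
    """Associates each handle with its visible social activity."""

    def __init__(self):
        self.activity = defaultdict(list)

    def record(self, handle, action, points):
        self.activity[handle].append((action, points))

    def standing(self, handle):
        """An aggregate measure of good works under this handle."""
        return sum(points for _, points in self.activity[handle])

    def history(self, handle):
        """The observable behavior behind the number."""
        return [action for action, _ in self.activity[handle]]
```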

3. Barriers to Participation: This seems counterintuitive for social software, but it isn't.  The history of social software has pointed time and again to the need for certain controls in a social system to be harder to access.  Anonymous users get lower credibility and fewer abilities than identified users, and fewer users still have the power to moderate or exercise central control.  Without this, the core group won't have the tools necessary to maintain order and defend the overall social group, and chaos will eventually reign.
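These graduated barriers can be sketched as a simple permission ladder (the tier names and actions here are illustrative, not from any real system): each action has a minimum tier, and a user can perform it only at or above that tier.

```python
from enum import IntEnum


class Tier(IntEnum):
    """Ordered tiers: higher values unlock more powerful controls."""
    ANONYMOUS = 0
    IDENTIFIED = 1
    GOOD_STANDING = 2
    MODERATOR = 3


# Minimum tier required for each social action.
PERMISSIONS = {
    "read": Tier.ANONYMOUS,
    "post": Tier.IDENTIFIED,
    "flag": Tier.GOOD_STANDING,
    "delete_post": Tier.MODERATOR,
}


def can(tier, action):
    """True if a user at this tier may perform the action."""
    return tier >= PERMISSIONS[action]
```

The ordering itself encodes the pillar: everyone can read, but the tools that defend the group belong only to the core.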

4. Protect Conversations From Scale: On the Web, the number of users in a social setting has no practical upper bound, but most social activity consists of two-way conversations.  In a setting of thousands of people, no one can track the conversations and get involved, let alone on sites with tens or hundreds of thousands of users.  Finding ways for people to self-organize, split up and re-form dynamically, and form affinities with groups is one approach.  There are many others.
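The crudest possible sketch of that self-organizing principle, assuming nothing more than a cap on group size (real systems would split along affinities and social ties, not position in a list):

```python
def split_into_conversations(handles, max_size=8):
    """Partition a large membership into conversation-sized groups.

    This sketch only caps group size; an actual platform would
    cluster by shared interests or relationships instead.
    """
    return [handles[i:i + max_size]
            for i in range(0, len(handles), max_size)]
```

The point is simply that conversation happens inside the small groups, so the overall community can grow without any one conversation having to scale with it.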

I'll talk more about social software and Web 2.0 in the future.  As always, the exciting part of the Web is that it's made of people.  Now how are we going to use our software to make these conversations exciting, dynamic, and useful?

What do you think the essential ingredients of social software are?

posted Thursday, 5 January 2006


