User Experience and Workflow

How these things relate to one another in the context of networking

We’ve spent some cycles talking about user experience and workflow in previous articles, so in this post we’re going to explore how the two relate to one another in the context of networking. We’ll talk a little about each separately, then bring them together at the end.

User Experience
User Experience (UX) in networking is a tricky thing. It’s not just about the direct user interaction with a particular feature or a particular product. Over at Packet Pushers, we see many blog entries reviewing network products. Time and time again, they show us that UX encompasses something much broader: it’s the experience of how well the vendor delivers the product, not just the product itself. Vendors must consider the user’s experience from the first interactions with the company, to the unboxing of the product, to the ease of finding and consuming relevant documentation, through to the actual support process.

Network engineers expect that there will be problems. Problems, however, frequently mean one-offs and exceptions until an appropriate fix surfaces. It is frustrating when we can never find an answer in the documentation, especially if support then tells us it is a well-known issue. If it’s well known, then produce some documentation with workarounds! If there is an associated bug, put the bug ID in the docs. If possible, put a target release for the upcoming fix there, too. Easy-to-find, relevant documentation can vastly improve the experience of the product, even when dealing with bugs.

Let’s consider the UX of support. Your users frequently know more about networking than you do. Customers have poured a significant amount of their time into memorizing piles and piles of facts about how every tiny thing in networking should work. Standards are a tricky thing, too. If you are going to tell the customer that your product is “behaving according to standards,” you had better be prepared to back that up. You should also be prepared to back down from that argument. There are many standards, and portions of standards, that are not followed by any vendor or community in practice. Sometimes standards are unclear or contradictory. In those cases, implementations have evolved over time to deal with the inconsistencies in order to ensure interoperability, and it is incumbent upon the vendor to adjust as required. Remember the Robustness Principle in RFC 1122 (section 1.2.2): “Be liberal in what you accept, and conservative in what you send.”
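As an illustration of that principle, here is a minimal sketch in Python. The key/value “protocol” and its separators are invented purely for this example and don’t correspond to any real standard; the point is the shape of the behavior: the parser tolerates the deviations a real peer might emit, while the serializer only ever produces one canonical form.

```python
# Minimal sketch of the Robustness Principle (RFC 1122, section 1.2.2):
# "Be liberal in what you accept, and conservative in what you send."
# The "protocol" here is invented for illustration: key/value pairs
# separated by semicolons, e.g. "mtu=1500;speed=10g".

def parse_options(raw: str) -> dict:
    """Liberal parser: tolerate common deviations seen from real peers."""
    options = {}
    # Accept either ';' or ',' as a separator, and ignore stray whitespace.
    for chunk in raw.replace(",", ";").split(";"):
        chunk = chunk.strip()
        if not chunk:
            continue  # skip empty fragments such as "a=1;;b=2"
        key, _, value = chunk.partition("=")
        options[key.strip().lower()] = value.strip()  # case-insensitive keys
    return options

def serialize_options(options: dict) -> str:
    """Conservative serializer: emit only the canonical form."""
    return ";".join(f"{key}={value}" for key, value in sorted(options.items()))

if __name__ == "__main__":
    # A sloppy-but-common input is accepted...
    received = parse_options("MTU = 1500 ,, speed=10g; ")
    # ...but what we send back is always the strict canonical encoding.
    print(serialize_options(received))  # -> "mtu=1500;speed=10g"
```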

The most important point here is that the user’s experience of the company matters more than the product alone. Vendors should understand that every point of interaction with the customer, from sales to documentation to the product’s user interface to support, should be crafted relative to the customer’s workflow. Focus less on features and more on solving problems.

Workflow
We’ve spent some time talking about workflow in particular here on the Plexxi blog. The first, and key, takeaway is that network engineering is workflow-dominated. Everything we do ends up being a series of steps, and those steps don’t always involve direct interaction with the product. The second takeaway is that workflows are dynamic. As previously discussed, referential space in networking is enormously complicated. Exactly where a network engineer starts their workflow in referential space, and what path they end up traversing, depends heavily on what they are doing and what they stumble across while they are working.

When customers are evaluating a product, what they need to understand is how the product solves some difficult problem for them, and that means the vendor must have a solid understanding of where the product fits in referential space. What difficult, error-prone, risk-heavy, or tedious path in this space are you solving? It’s important to know that even if you are solving a problem well, you are at the same time altering the landscape of this space. That matters a great deal to the customer; it is what drives them to want the classic “packet walkthrough,” for instance. If you are telling them “You won’t need to configure VLANs anymore,” then you should be able to show them, tangibly, what they will be doing instead.
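To make that concrete, here is a hypothetical before/after sketch in Python. The per-port CLI template and the submit_policy call are both invented for illustration and are not any vendor’s actual interface; the point is that the customer can see exactly which steps disappear and which new ones take their place.

```python
# Hypothetical illustration only: neither the CLI template nor the
# "policy API" below belongs to a real product. It simply contrasts the
# workflow being removed with the workflow that replaces it.

ACCESS_PORTS = {"leaf1": ["Ethernet1", "Ethernet2"], "leaf2": ["Ethernet1"]}

# Before: the engineer touches every switch and every access port.
def legacy_vlan_workflow(vlan_id: int) -> dict:
    """Generate the per-device CLI the engineer would have pushed by hand."""
    configs = {}
    for switch, ports in ACCESS_PORTS.items():
        lines = [f"vlan {vlan_id}"]
        for port in ports:
            lines += [f"interface {port}",
                      " switchport mode access",
                      f" switchport access vlan {vlan_id}"]
        configs[switch] = "\n".join(lines)
    return configs

# After: the engineer declares intent once; a controller (the hypothetical
# submit_policy callable) works out the per-device details.
def declarative_workflow(submit_policy) -> None:
    submit_policy({
        "segment": "web-tier",
        "members": [f"{sw}:{p}" for sw, ps in ACCESS_PORTS.items() for p in ps],
        "isolation": "l2",
    })

if __name__ == "__main__":
    for switch, cli in legacy_vlan_workflow(100).items():
        print(f"--- {switch} ---\n{cli}")
    declarative_workflow(submit_policy=print)  # stand-in for a real controller client
```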

Ambiguity is bad. To the customer, ambiguity means an investment in time and effort to understand and operate the product.

Conclusion
Coming out of the SDN hype cycle, we have learned many things (I hope). The most important is that networking is really complicated; we can’t just make that complexity go away. The most successful products have simply moved the complexity around, and they have done so with little understanding of what network engineers actually do.

When vendors truly understand how user experience intersects with the complex space in which network workflows happen, they can change how they build and deliver network products in innovative ways. The way to do this is by partnering with network engineers and their teams, and really listening to how they work and to the problems they actually have. Every part of the user experience should be tailored to make network engineers more effective.

In short, network engineers make better partners than they do targets. Understanding our customers means understanding networks in context. This is how we move networking forward.

More Stories By Derick Winkworth

Derick Winkworth has been a developer, network engineer, and IT architect in various verticals throughout his career. He is currently a Product Manager at Plexxi, Inc., where he focuses on workflow automation and product UX.
