Theresa Lanowitz on Solving Age-Old Problems in the Enterprise

Extreme Automation, Service Virtualization, and More

By Noel Wurst, Managing Editor at Skytap

This article was originally published on the Skytap Blog

Noel: Hello, this is Noel Wurst with Skytap and I am speaking with Theresa Lanowitz today, who is the founder of voke. Theresa is going to be giving a keynote at this year's STAREAST conference on May 8, in Orlando, Florida. The keynote is titled "Extreme Automation: Software Quality for the Next Generation Enterprise." I wanted to speak with her about what exactly extreme automation involves, trying to define the "next generation enterprise," and to find out more about what she does and what voke does. Theresa, how are you today?

Theresa: I'm great, and thanks for inviting me to do this interview.

Noel: You're welcome! So, let's learn a little bit more about what you do with voke and what voke does. I was reading about some of your company's services on your website, particularly those that relate to application development at the enterprise level. I saw that voke helps companies evaluate a variety of application lifecycle solutions. Actually, I'll go ahead and let you talk about that first before I move on.

Theresa: Okay, so I'll just give you a little bit of background about who we are. We are an independent industry analyst firm, and I'm the founder; I founded it in 2006. What we do at voke is focus on the entire application lifecycle and the transformation of that lifecycle, including technologies such as virtualization, cloud computing, embedded systems, mobile and device software, and so on. We provide strategic, independent, impartial advice and market observations through both quantitative and qualitative research. That's just a little bit about who we are and what we do.

Noel: So, when you're working with clients and helping them make these decisions that involve the entire lifecycle, I'm sure there are numerous questions, obviously, but I was curious: are there any questions you tend to ask, or answers you're trying to get, that clients tend to forget or overlook when they're dealing with the entire lifecycle?

Theresa: Yeah, when you're evaluating application lifecycle solutions, one of the things we always want to understand from the people we're working with is: how mature is the organization? Do you have one part of the organization that might be a little more mature than another? Maybe your QA organization is really, really mature in its practices, processes, and tooling, while other parts of the organization may not be as mature.

We really want to understand the maturity of the organization. Then we also want to understand whether there is parity between the development, quality assurance, and operations organizations, those three classic pillars of IT. Do you have parity across them? Are all three, dev, QA, and operations, really working to support the line of business to deliver high-quality, valuable business outcomes?

Another really important thing we look at right now is whether there is a change agent at the executive level in the organization. One of the things we know now is that there is really great technology in the market to help us overcome some of those traditional, age-old computing problems: things such as virtualization, virtual lab management, and service virtualization capabilities that free up a lot of time for people in dev, QA, and operations to do far more strategic things. If there is a change agent in the organization who is really able to effect change, it becomes much easier to get buy-in from senior-level management to make these changes happen. Finding out whether there is a change agent at the executive level is really important.

Then, if there is a change agent, how committed is that executive team to implementing the change? Are they just saying, "You know, we think this is a good thing to do because it seems to be one of the things people are talking about"? What type of commitment is there? Another thing that's really, really important is: how valued are requirements within the organization? Are you really willing to take more time to get requirements right to prevent defects later on? Do you really understand what your cost of quality is? Do you really understand what the cost of building that software is actually going to be? How committed are you to those requirements and to getting them right?

Then I think another important thing we look at is: what is most important to the organization? Are they more concerned about cost, quality, or schedule? Ideally, you want to be equally concerned about all three. But as we see from so many big catastrophic failures in the news these days, people are often more concerned about schedule. Faster is treated as better than correct, or better than high quality.

If you're willing to take the risk of those catastrophic events, what do you do about your cost? What do you do about your quality? How willing are you to have your brand impacted? Because if you think about it, every company and every government agency is a software company: you're building software that is going to deliver these business outcomes, and software is the differentiator for your business. What we see are these big catastrophic failures making headlines, and we have to ask ourselves why these failures are making headlines.

One thing is that during the global financial crisis we saw a real lack of investment in IT. IT budgets remained flat or declined. Then we had this idea that faster was better than being right, or better than having higher quality. Faster is not really equal to better. In many organizations we see a lot of old technology. Organizations are not up to date on the software platforms they're using, and a lot of organizations are really not leveraging the power of the wonderful modern solutions that are out there.

Noel: That really is a complete transformation, as you listed some of those things: virtualization, dev/test environments, the cloud, hybrid applications, continuous integration, and so on. All of these things are being adopted by companies that are doing it right, but they're also things that some companies are having to embrace all at the same time. It really is a complete transformation, from collecting requirements to delivering better software faster; it's not just, "oh, we only needed to do one of those things to get it right."

Theresa: Yeah, and it's the reality of understanding cost, quality, and schedule, where you're willing to make the sacrifices, and then also looking at the people, process, and technology. Do you have the right skillsets in place? Do you have a relationship with professional service providers? Do you have a relationship with the software vendor you're using? Do you have the right process in place for each project, because process is not one-size-fits-all? Do you have the right tooling?

In many, many cases, like I said, we see organizations using software that is several versions old and really not embracing some of the new technologies you just mentioned. If you don't have the skillsets internally, look to a good professional services organization to help you bring these new technologies in. A lot of the things you were doing in the past, some of these very manual activities, can now be done with modern tools that deliver a really wonderful return on investment: lowering the number of defects going into production, testing on more platforms, and having environments available anytime people want them for testing. These are all really great things that these newer technologies offer organizations.

Noel: Let's talk about your keynote for a little bit. Again, the title is "Extreme Automation: Software Quality for the Next Generation Enterprise." You're employing all of these different technologies and skills and processes to build this next generation enterprise, so I was curious to get your definition of what makes a piece of enterprise software "next generation." What makes it different from a piece of software in the past?

Theresa: Okay, one of the things we hold core to our beliefs is that virtualization is really the hub of the modern application lifecycle. That means using things like virtual lab management (VLM), dev/test clouds, service virtualization, defect virtualization, and device virtualization, bringing virtualization technology to the pre-production environment. We know how well virtualization worked in the production environment for the data center and the operations team: saving capital investment on hardware, reducing the footprint in the data center, reducing energy consumption, and just making things far more efficient. We do believe virtualization is really the hub of the modern application lifecycle, and bringing it to the pre-production side is something we've been really bullish on since we founded the company in 2006.

If you look at the next generation enterprise, it is really about business connectivity. It's about a global marketplace. Your customers are everywhere and you're powered by software, but that software has to be ready, available, and working anytime, anywhere, any place. If you think about software, it only has to do three things: software has to work, software has to perform, and software has to be secure. When you think of it in terms of "does my software work, is it fast enough, does it perform well, and is it secure enough," those are three very basic, fundamental questions, but they have to be right.

It has to have the quality aspect associated with it. That's what you're going to see in the next generation enterprise: technology really optimized for business outcomes, to make sure people are having a software experience that works for them, performs well enough, and has a high degree of security.

Noel: To look at the other half of your keynote's title, "extreme automation." I'm always a fan of writing about automation and reading people's opinions on it. It tends to stir up a debate: you have some people saying automation is the key to this, automation is the key to that, sometimes it seems like they think it's the key to everything, but then you have others holding their hands up and saying, "automation isn't going to solve everything." Is it a tough decision sometimes to figure out when automation is absolutely necessary and when it's not?

Theresa: Well, I think if you look at what's going on in the enterprise, we know that the enterprise does not embrace automation as much as it could, given the capability of a lot of the new tooling out there. The definition of extreme automation is the concept of solving classic computing problems across the lifecycle with the use of modern tooling technology. You're removing barriers, and you're facilitating communication, collaboration, and connectivity among the development, QA, and operations teams to support the line of business and that insatiable demand for quality software. It's this idea of using modern tooling, removing those barriers, and using people, processes, and technology to really deliver on that demand for high-quality software, and that's how we define extreme automation.

Noel: I was just writing it down as you were saying it: "solving classic problems with new technology and new tooling." That almost seems like a gentler way of saying "extreme automation." I wonder if maybe it wouldn't scare as many people when they hear "automation!" I love that, because it's not solving problems people don't know they have, or haven't ever heard of; it's problems they know they have, and have always had, and new technologies are available to solve them. That's great.

Theresa: Yeah, and you're right, they are problems people have known they've always had. Take, for example, a test environment. What do people typically do? They typically have to wait. We have survey data that says 96% of people wait to get access to a test environment.

Noel: Wow!

Theresa: To get access to the test environment in a typical organization, without using virtualization, they have to wait for the operations team to provision it, so that becomes a bottleneck. Quite honestly, the skills of the operations team should be used far more strategically, and the skills of the QA team should not be wasted waiting for an environment to be provisioned.

If you're using something like virtual lab management technology or dev/test cloud technology, you can spin up virtual environments that give people an environment as close to production as possible, for as long as they want it, to test whatever they need to test.
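The on-demand model Theresa describes can be sketched in a few lines of Python. This is purely illustrative, not any vendor's actual API: a toy `LabManager` that clones isolated test environments from a production-like template, so no tester waits in an operations queue.

```python
# Toy sketch of on-demand test environments (hypothetical names throughout;
# not the API of any real virtual lab management product).

class LabManager:
    """Simulates a dev/test cloud that clones environments from templates."""

    def __init__(self):
        self.templates = {}  # templates mirror production configurations
        self.active = []     # environments currently provisioned

    def register_template(self, name, config):
        self.templates[name] = config

    def spin_up(self, template_name):
        # On-demand: each caller immediately gets an isolated copy of a
        # production-like environment, instead of waiting for provisioning.
        env = dict(self.templates[template_name], id=len(self.active))
        self.active.append(env)
        return env

lab = LabManager()
lab.register_template("prod-like", {"os": "linux", "app": "v2.3", "db": "seeded"})

# Two testers get independent environments at the same time, no queue.
env_a = lab.spin_up("prod-like")
env_b = lab.spin_up("prod-like")
print(env_a["id"], env_b["id"])  # prints "0 1"
```

The point of the sketch is the shape of the workflow, not the mechanics: provisioning becomes a cheap clone operation, so the bottleneck Theresa describes simply disappears.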

That's really beneficial, because everybody today has to work with a third-party supply chain for their software. Across that software supply chain, you may be using outsourcers for a portion of your development or testing, taking code drops from a partner, or working on some type of collaborative project with another business partner. We have this software supply chain we have to work with, and not waiting for those tactical things to happen gives you a big, big benefit.

Noel: Absolutely. Well, for my last question, I feel like it's all led up to this: you've got developers and testers able to work alongside each other without having to wait, and IT not having to spend so much time provisioning and managing. It all leads to collaboration. I wanted to look specifically at the collaboration between developers and testers.

I feel like we're not hearing, and I'm not reading, as much about the incredible differences between those two groups. Obviously there are still differences, but there's much more talk about them working together and realizing that collaborating is what ends up building better software faster. We're not hearing anywhere near as much about the headbutting of these two groups in particular.

Theresa: Yeah, there absolutely has to be collaboration, communication, and connectivity. I think one of the things you have to look at is, "is there really parity across development, QA, and operations to support that line of business?" The development team delivers architectural readiness, the QA team delivers customer readiness, and the operations team delivers production readiness. The line of business is the requirements communicator and the keeper of profit and loss, and everybody in those three pillars of IT is working to deliver valuable business outcomes for that line of business.

Now, having said that, the line of business also has to be involved. You can't just run off and code something and say, "Okay, here you go, line of business, this is what we think you wanted."

If you think about the idea of parity, you want to have parity between development, quality assurance, and operations to support that line of business. If there is parity, are the groups really collaborative, or are they functionally isolated? You want to have that collaboration; there is still a need for specialization of resources, but you don't want the groups to be isolated.

If there is no parity, is one group more dominant than the others? Is the operations team driving everything at the expense of what the dev and QA teams are doing? Collaboration across the groups is really essential, and one of the things we've been talking about during this discussion is that we have really good technology available to eliminate those age-old issues among the groups. Virtualization, as I said, we believe is the hub of the modern application lifecycle, so with tooling such as virtualization you don't have to wait for operations to spin up environments for testing. You have as many test environments as you need, for as long as you need them.

You eliminate that friction between development and QA, where QA identifies a defect and says, "Okay, dev team, here is the defect," and the dev team comes back and says, "Well, I can't really reproduce this because it works on my machine." So we eliminate that phrase, "it works on my machine."
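One hedged way to see why environment capture dissolves "it works on my machine": if the defect report carries a snapshot of the exact environment QA tested in, the developer reproduces against that snapshot rather than a local setup. The sketch below is a toy illustration under that assumption; the function and field names are hypothetical, not any real defect-tracking tool's API.

```python
# Toy sketch: a defect report that carries its environment snapshot,
# so dev and QA work against identical conditions. Hypothetical names only.

def file_defect(summary, environment):
    """QA files a defect together with the environment it appeared in."""
    return {"summary": summary, "env_snapshot": dict(environment)}

def reproduce(defect):
    """Dev re-creates the captured environment rather than using a local one."""
    return dict(defect["env_snapshot"])

qa_env = {"os": "linux", "app": "v2.3", "locale": "de_DE"}
defect = file_defect("Checkout fails with German locale", qa_env)

dev_env = reproduce(defect)
assert dev_env == qa_env  # identical environments, so the excuse can't arise
```

The design point is that the environment travels with the defect as data, which is exactly what virtual lab snapshots make cheap to do.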

It's really, really wonderful just to take that out of the IT vocabulary. It's a big, big win. In one of the many market research surveys we've done on virtualization, one participant said that virtual lab management brings about peace between developers and testers ...

Peace in the IT world. I looked at that, and at just the idea of virtual lab management, without bringing any other piece of virtualization into the pre-production environment, and said, "if we can really eliminate this friction among the three pillars of IT so they can support what the line of business needs, we're in a really good position." It's great that there are collaborative tools out there that allow more collaboration, more communication, more connectivity. Organizations should not be struggling with this anymore, because the tools do exist.

It's been really great, and I think in the past two or three years we've seen these tools come to a new level, where there is no longer this reluctance to say, "Well, I'm not really sure if this tool is going to work; this tool might be too difficult." The tools are getting easier and easier to use. The tools are really robust.

Again, if you don't have the skillsets in your organization, reach out to professional services organizations and make sure you have a relationship with the professional services team. Leverage those relationships with professional service providers. Leverage the relationships with the vendors.

One of the things I always like to say to people about working with a software vendor is: talk to that vendor, have a relationship with the vendor, and tell the vendor what you want to see in terms of features and functionality of the tooling. Vendors are very open and very receptive to hearing from their customers and potential customers, so leverage that relationship.

And then, if you're thinking about bringing in some new technology, select a pilot project. Don't say, "We're going to bring this in, put it right into the organization, and have everybody use it." Select a pilot project, figure out where you can get a really quick, good return on investment, and then do some internal public relations about how it's working and how it's making a difference.

The best thing I can say about technology is to get current and stay current on your existing tools. Also, go out and evaluate new technologies that you may not have, as a way to complement and supplement what you're already doing. Leverage the whole people, process, and technology portion to deliver high-quality software on time and on budget.

Noel: That's great. That really sums everything up. I love the bit about these tools actually bringing peace to these organizations. Sometimes if your boss comes to you and says you've got a new tool that's going to help you work faster or work harder, it's kind of like, "I didn't know I needed to work faster or harder." But when you find out it's also going to bring peace to the environment around you, that's another attractive selling point of this technology.

Theresa: Yeah, it lets you work smarter, and it allows you to focus your attention and your activities on far more strategic things, rather than sitting around waiting for a lab, manually scheduling a lab, trying to get into that lab, and, as a test team, hoping you don't run into any unforeseen problems, that the team ahead of you didn't run into any unforeseen problems, that they got out of the lab when they were supposed to, and that you're able to get into the lab when you're supposed to.

But then something comes up that says, "all right, now we're limiting our testing. What if my line of business says we really need to support a new tablet device, so now I have to test on multiple platforms?" If you're in a physical lab, you may not have time to do everything. So having an environment as close to production as possible, when you need it, for as long as you need it ... you're right, it brings a much more peaceful environment than running around doing a lot of tactical things.

Noel: That's great. Thank you so much for speaking with me today.

Theresa: Oh you're quite welcome.

Noel: Thank you. Everybody, again, this is Theresa Lanowitz, founder of voke, and you can hear Theresa's keynote in person at STAREAST on Thursday, May 8. The title again is "Extreme Automation: Software Quality for the Next Generation Enterprise." Thanks so much again.

Theresa: Thank you.

More from Theresa at the SDLC Acceleration Summit: A Deep Dive into Delivering Better Software Faster

Under pressure to deliver more software, more frequently, and with zero defects? Want to explore SDLC acceleration best practices, trends, and insights with your peers and industry experts (including Theresa Lanowitz)? Join us on May 13 in San Francisco for the SDLC Acceleration Summit.

The SDLC Acceleration Summit is your forum for asking questions and sharing ideas about accelerating development and test cycles to ensure that top-quality applications are delivered on time and on budget. Join us as we delve into topics such as:

  • The Future of the SDLC
  • Integrity within the Software Supply Chain
  • Reassessing the True Cost of Software Quality
  • Gaining a Competitive Advantage via an Advanced Software Delivery Process

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Technical Writer at Parasoft, authors technical articles, documentation, white papers, case studies, and other marketing communications—currently specializing in service virtualization, API testing, DevOps, and continuous testing. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
