Theresa Lanowitz on Solving Age-Old Problems in the Enterprise

Extreme Automation, Service Virtualization, and More

By Noel Wurst, Managing Editor at Skytap

This article was originally published on the Skytap Blog

Noel: Hello, this is Noel Wurst with Skytap, and I am speaking with Theresa Lanowitz today, who is the founder of voke. Theresa is going to be giving a keynote at this year's STAREAST conference on May 8, in Orlando, Florida. The keynote is titled "Extreme Automation: Software Quality for the Next Generation Enterprise." I wanted to speak with her about what exactly extreme automation involves, to try to define the "next generation enterprise," and to find out more about what she and voke do. Theresa, how are you today?

Theresa: I'm great, and thanks for inviting me to do this interview.

Noel: You're welcome! So, let's learn a little bit more about what you do with voke and what voke does. I was reading about some of your company's services on your website, particularly those that relate to application development at the enterprise level. I saw that voke helps companies evaluate a variety of application lifecycle solutions. Actually, I'll go ahead and let you talk about that first before I move on.

Theresa: Okay, so I'll just give you a little bit of background about who we are. We are an independent industry analyst firm, and I'm the founder; I founded it in 2006. What we do at voke is really focus on the application lifecycle, the entire application lifecycle, and the transformation of that application lifecycle, including technology such as virtualization, cloud computing, embedded systems, mobile and device software, and so on. We provide strategic, independent, impartial advice and market observations through both quantitative and qualitative research. That's just a little bit about who we are and what we do.

Noel: So, when you're working with clients and you're trying to help them make these decisions that involve the entire lifecycle, I'm sure there are numerous questions, obviously, but I was curious: are there any questions that you tend to ask, or answers you try to get, that clients tend to forget or overlook, or maybe not think about, when they're dealing with the entire lifecycle?

Theresa: Yeah, when you're really evaluating application lifecycle solutions, one of the things we always want to understand from the people we're working with is: how mature is the organization? Do you have one part of the organization that might be a little bit more mature than another? Maybe your QA organization is really, really mature in its practices and processes and tooling, while other parts of the organization may not be as mature.

We really want to understand the maturity of the organization. Then we also want to understand whether or not there is parity between the development, quality assurance, and operations organizations, those three classic pillars of IT. Do you have parity across those? Are all three, dev, QA, and operations, really working to support the line of business to deliver high-quality, valuable business outcomes?

Another really important thing we look at right now is whether there is a change agent at the executive level in the organization. Because one of the things we know now is that there is really great technology in the market to help us overcome some of those traditional, age-old computing problems we've had. Things such as virtualization, virtual lab management capabilities, and service virtualization capabilities free up a lot of time for people in dev, QA, and operations to do far more strategic things. If there is a change agent in the organization who is really able to effect change, they will get buy-in from senior-level management to make these changes happen. Finding out whether or not there is a change agent at the executive level is really important.

Then, if there is a change agent, how committed is that executive team, really, to implementing the change? Are they just saying, "You know, we think this is a good thing to do because it seems to be one of the things people are talking about"? What type of commitment is there? Another thing that's really, really important is: how valued are requirements within the organization? Are you really willing to take more time to get requirements right to prevent defects later on? Do you really understand what your cost of quality is? Do you really understand what the cost of building that software is actually going to be? How committed are you to those requirements and to getting them right?

Then I think another important thing we really look at is: what is most important to the organization? Are they more concerned about cost, quality, or schedule? Ideally, you want to be equally concerned about cost, quality, and schedule. But as we see from so many big catastrophic failures in the news these days, often people are more concerned about schedule. Faster is treated as better than correct, or better than high quality.

If you're willing to take that risk of having those catastrophic events, what do you do about your cost, and what do you do about your quality? If you are willing to take that risk and have those catastrophic events out there, how willing are you to have your brand impacted? Because if you think about it, every company and every government agency is a software company: you're building software that is going to deliver these business outcomes, and software is the differentiator for your business. What we see are these big, big catastrophic failures making headlines, and we have to ask ourselves why these failures are making headlines.

One thing is, during the global financial crisis we really saw a lack of investment in IT. IT budgets remained flat or declined. Then we had this idea that faster was greater than being right or having higher quality. Faster is not really equal to better. In many organizations, we see a lot of old technology. Organizations are not up to date on the software platforms they're using, and a lot of organizations are really not leveraging the power of the really wonderful modern solutions that are out there.

Noel: That really is a complete transformation, as you listed some of those things: virtualization, dev/test environments, the cloud, hybrid applications, continuous integration, and so on. All of these things are being adopted by companies that are doing it right, but they're also things that some companies are having to embrace all at the same time. It really is a complete transformation, from collecting those requirements to delivering better software faster; it's not just, "Oh, we only needed to do one of those things to get it right."

Theresa: Yeah, and it's the reality of understanding cost, quality, and schedule, where you're willing to make the sacrifices, and then also looking at the people, process, and technology. Do you have the right skillsets in place? Do you have a relationship with professional service providers? Do you have a relationship with the software vendor that you're using? Do you have the right process in place for each project, because process is not one-size-fits-all? Do you have the right tooling?

In many, many cases, like I said, we see organizations using versions of software that are several versions old and really not embracing some of these new technologies that you just mentioned. If you don't have the skillsets internally, look to a good professional services organization to help you bring these new technologies in. A lot of the things you were doing in the past, some of these very manual activities, can now be done with these modern tools, and the return on investment is really wonderful: lowering the number of defects going into production, testing on more platforms, having environments available anytime people want them for testing. These are all really great things that these newer technologies offer to organizations.

Noel: Let's talk about your keynote for a little bit. Again, the title is "Extreme Automation: Software Quality for the Next Generation Enterprise." You're employing all of these different technologies and skills and processes to build this next generation enterprise, so I was curious to get your definition of what makes a piece of enterprise software "next generation." What makes it different from a piece of software in the past?

Theresa: Okay, one of the things we hold core to our beliefs is that virtualization, the technology of virtualization, is really the hub of the modern application lifecycle. That means using things like virtual lab management or VLM, dev/test clouds, service virtualization, defect virtualization, and device virtualization, and bringing that virtualization technology to the pre-production environment. We know how well virtualization worked in the production environment for the data center and the operations team, in terms of saving capital investment on hardware, reducing the footprint in the data center, reducing energy consumption, just making things far more efficient. We do believe that virtualization is really the hub of the modern application lifecycle, and bringing it to the pre-production side is something we've been really bullish on since we founded the company in 2006.

If you look at the next generation enterprise, that next generation enterprise is really about business connectivity. It's about a global marketplace. Your customers are everywhere and you're powered by software, but that software has to be ready, available, and working anytime, anywhere, any place. If you think about software, software only has to do three things: software has to work, software has to perform, and software has to be secure. When you think of it in terms of "does my software work, is it fast enough, does it perform well, and is it secure enough," those are three very, very basic, fundamental questions, but they have to be answered correctly.

It has to have the quality aspect associated with it. That's what you're going to see in the next generation enterprise: the technology is really optimized for the business outcomes, to make sure that people are having a software experience that works for them, performs well enough, and has a high degree of security.

Noel: To look at the other half of your keynote's title, "extreme automation." I'm always a fan of writing about automation and reading people's opinions on it. It tends to stir up a debate sometimes, where you have some people talking about how automation is the key to this and automation is the key to that, or sometimes I feel like they think it's the key to everything, but then you have others who are kind of holding their hands up and saying, "Automation isn't going to solve everything." Is it kind of a tough decision sometimes to figure out when automation is absolutely necessary and when it's not?

Theresa: Well, I think if you look at what's going on in the enterprise we know that the enterprise does not embrace automation as much as it could, given the capability of a lot of the new tooling that's out there. If we look at extreme automation, the definition of extreme automation is the concept of solving classic computing problems across the lifecycle with the use of modern tooling technology. You're removing barriers and you're facilitating communication, collaboration and connectivity of the development team, the QA team and the operations team to support the line of business and that insatiable demand for quality software. It's this idea of using modern tooling, removing those barriers, using people, processes, and technology to really deliver on that demand for high quality software and that's how we define extreme automation.
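To make one of those "classic problems" concrete: a downstream dependency that isn't available yet, or that another team owns, traditionally blocks testing. The sketch below is a minimal, illustrative take on the service virtualization idea Lanowitz mentions, a stubbed HTTP endpoint returning canned responses so tests can run without the real system; the endpoint path and data are hypothetical and this is not any particular vendor's tool.

```python
# A minimal service-virtualization sketch (illustrative only): a stubbed
# "inventory" endpoint that returns canned responses, so a test suite can run
# without the real downstream system being available.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/inventory/1234": {"sku": "1234", "in_stock": 7},  # hypothetical data
    "/inventory/9999": {"sku": "9999", "in_stock": 0},
}

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        payload = body if body else {"error": "unknown sku"}
        self.wfile.write(json.dumps(payload).encode("utf-8"))

    def log_message(self, fmt, *args):
        pass  # keep test output quiet

if __name__ == "__main__":
    # Tests point at http://localhost:8080 instead of the real service.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```

Pointing a test suite at a stub like this is the simplest form of the idea; dedicated service virtualization tools layer recording, stateful behavior, and performance characteristics on top of it.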

Noel: I was just writing it down as you were saying it: "solving classic problems with new technology and new tooling." That almost seems like a gentler way of saying "extreme automation." I wonder if maybe it wouldn't scare as many people when they hear "automation!" I love that, because it's not solving problems people don't know they have, or haven't ever heard of; it's problems that they know they have, and have always had, and new technologies are available to solve those. That's great.

Theresa: Yeah, and you're right, they are problems that people have known they've always had. Take, for example, a test environment. What do people typically do? People typically have to wait. We have survey data that says 96% of people wait to get access to a test environment.

Noel: Wow!

Theresa: To get access to the test environment in a typical organization, without using virtualization, they have to wait for the operations team to provision it, so that becomes a bottleneck. Quite honestly, the skills of the operations team should be used far more strategically, and the skills of the QA team should not be wasted waiting for an environment to be provisioned.

If you're using something like virtual lab management technology, or dev/test cloud technology, you can spin up virtual environments that give people an environment as close to production as possible, for as long as they want it, to test whatever they need to test.
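As an illustration of what "spin up virtual environments" can look like in practice, here is a rough sketch using containers as the stand-in technology and the Docker SDK for Python; the image names and wiring are assumptions for the example, not a description of any specific virtual lab management or dev/test cloud product.

```python
# Rough "environment on demand" sketch (assumes: pip install docker, a local
# Docker daemon, and a hypothetical application image supplied by the caller).
import docker

def spin_up_test_env(app_image: str, db_image: str = "postgres:15"):
    """Provision a throwaway, production-like test environment."""
    client = docker.from_env()

    # Private network so the app can reach the database by name.
    net = client.networks.create("qa-net", driver="bridge")

    # Database the application under test depends on (test-only credentials).
    db = client.containers.run(
        db_image,
        detach=True,
        name="qa-db",
        network="qa-net",
        environment={"POSTGRES_PASSWORD": "test"},
    )

    # Application container, wired to the database and exposed to the tester.
    app = client.containers.run(
        app_image,  # hypothetical image name
        detach=True,
        name="qa-app",
        network="qa-net",
        environment={"DATABASE_HOST": "qa-db"},
        ports={"8080/tcp": 8080},
    )
    return net, db, app

def tear_down(net, *containers):
    """Throw the environment away when testing is done."""
    for c in containers:
        c.remove(force=True)
    net.remove()
```

The point of the sketch is the workflow, a tester gets a fresh, production-like environment in seconds and discards it afterward, rather than the particular tooling used to do it.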

That's really beneficial, because everybody today has to work with a third-party supply chain for their software. You have your entire software supply chain, whether you're using outsourcers to do a portion of your development or testing, whether you're taking code drops from a partner you might be working with, or whether you're working on some type of collaborative project with another business partner. We have this software supply chain that we have to work with, and not waiting for those tactical things to happen really gives you a big, big benefit.

Noel: Absolutely. Well, for my last question, I feel like it's all led up to this: you've got developers and testers able to work alongside each other and not have to wait, and IT not having to spend so much time provisioning and managing. It all leads to collaboration. I wanted to look specifically at the collaboration between developers and testers.

I feel like we're not hearing, or reading, as much about the incredible differences between those two groups. Obviously there are still differences, but there's just much more talk about them working together and realizing that collaborating and working together is what ends up building better software faster. We're not hearing anywhere near as much about the headbutting of these two groups in particular.

Theresa: Yeah, there absolutely has to be collaboration, communication, and connectivity. I think one of the things you have to look at is, "Is there really parity across development, QA, and operations to support that line of business?" The development team is really delivering architectural readiness, the QA team is delivering customer readiness, and the operations team is delivering production readiness. The line of business is really the requirements communicator, the keeper of profit and loss, and everybody in the IT organization, those three pillars of IT, is working to deliver those valuable business outcomes for that line of business.

Now, having said that, the line of business has to be involved as well. You can't just run around and code something and say, "Okay, here you go, line of business, this is what we think you wanted."

If you think about the idea of parity, you want to have parity between development, quality assurance, and operations to support that line of business. If there is parity, are the groups really collaborative, or are they functionally isolated? You want to have that collaboration; there is still a need for specialization of resources, but you don't want those groups to be isolated.

If there is no parity, is one group more dominant than the others? Is the operations team driving everything at the expense of what the dev and QA teams are doing? Collaboration across the groups is really essential, and one of the things we've been talking about during the course of this discussion is that we have really good technology available to eliminate those age-old issues among the groups. Virtualization, as I said, we believe is the hub of the modern application lifecycle, so what you get from this collaboration, using tooling such as virtualization, is that you don't have to wait for operations to spin up the environments for testing. You have as many test environments as you need, for as long as you need them.

You eliminate that friction between development and QA, where QA identifies a defect and says, "Okay, dev team, here is the defect," and the dev team comes back and says, "Well, I can't really reproduce this because it works on my machine." So we eliminate that phrase, "it works on my machine."
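One low-tech way to attack "it works on my machine," shown below as a hedged sketch, is to attach an environment fingerprint to every defect report so dev and QA can diff their setups instead of arguing about them; the field names and the JSON attachment format are assumptions for illustration, not any particular defect tracker's schema.

```python
# Sketch: capture an environment fingerprint to attach to a defect report,
# so two environments can be diffed. (Field names are illustrative.)
import json
import platform
import sys
from importlib import metadata

def environment_fingerprint() -> dict:
    return {
        "os": platform.platform(),
        "python": sys.version.split()[0],
        "packages": sorted(
            f"{dist.metadata['Name']}=={dist.version}"
            for dist in metadata.distributions()
            if dist.metadata["Name"]
        ),
    }

if __name__ == "__main__":
    # Attach this JSON to the defect ticket; a mismatch in the diff usually
    # explains why a bug reproduces in one environment and not the other.
    print(json.dumps(environment_fingerprint(), indent=2))
```

Shared, disposable test environments make the fingerprints match by construction, which is the stronger fix being described here.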

It's really, really wonderful just to take that out of the IT vocabulary. It's a big, big win. In one of the many market research surveys we've done on virtualization, one participant said that virtual lab management brings about peace between developers and testers ...

Peace in the IT world. I looked at that, and at just the idea of virtual lab management, without bringing any other piece of virtualization into the preproduction environment, and said, "If we can really eliminate this friction between the three pillars of IT and get them working to support what that line of business needs, we're really in a good position." It's great that there are these collaborative tools out there that allow more collaboration, more communication, more connectivity. Organizations should not be struggling with that anymore, because the tools do exist.

It's been really great, and I think we've really seen this happen in the past two or three years: these tools have come to a new level, where there is not this reluctance to say, "Well, I'm not really sure if this tool is going to work; this tool might be too difficult." The tools are getting easier and easier to use. The tools are really robust.

Again, if you don't have the skillsets in your organization, go out and reach out to professional services organizations and make sure that you have that relationship with the professional services team. Leverage those relationships with the professional service providers. Leverage the relationships with the vendors.

One of the things that I always like to say to people working with a software vendor is: talk to that vendor, have a relationship with the vendor, and tell the vendor what you want to see in terms of features and functionality of the tooling. Vendors are very, very open and very receptive to hearing from their customers and from their potential customers, so leverage that relationship.

And then, if you're thinking about bringing in some new technology, select a pilot project. Don't say, "We're going to bring this in, put it right into the organization, and have everybody use it." Select a pilot project, figure out where you can get a really quick, good return on investment, and then go around and do some internal public relations about how it's working and how it's making a difference.

The best thing I can say about technology is to get current and stay current on your existing tools. Also, go out and evaluate new technologies that you may not have, and see them as a way to complement and supplement what you're already doing. Leverage the whole people, process, and technology portion to deliver high-quality software on time and on budget.

Noel: That's great. That really kind of sums everything up. I love the bit about these tools actually bringing peace to these organizations. I feel like sometimes, if your boss comes to you and says you've got a new tool that's going to help you work faster or work harder, it's kind of like, "I didn't know I needed to work faster or harder." But when you find out that it's actually going to bring peace to the environment around you, that's another attractive selling point of this technology.

Theresa: Yeah, it lets you work smarter and allows you to focus your attention and your activities on far more strategic things, rather than sitting around waiting for a lab, manually scheduling a lab, trying to get into that lab, and hoping, as a test team, that you don't run into any unforeseen problems, that the team ahead of you didn't run into any unforeseen problems, that they got out of the lab when they were supposed to, and that you get into the lab when you were supposed to.

But that says "all right, now we're limiting our testing, what if I now have to test on maybe my line of business, you know what we really need to support a new tablet device so now I have to test on multiple platforms." And if you're in a physical lab you may not have time to do everything." So having that environment it's close to production as possible when you need for as long as you need it is ... you know, you're right, it brings ... it's a very peaceful environment running around doing a lot of tactical things.

Noel: That's great. Thank you so much for speaking with me today.

Theresa: Oh you're quite welcome.

Noel: Thank you. Everybody, again, this is Theresa Lanowitz, the founder of voke, and you can hear Theresa's keynote in person at STAREAST on Thursday, May 8. The title again is "Extreme Automation: Software Quality for the Next Generation Enterprise." Thanks so much again.

Theresa: Thank you.

More from Theresa at the SDLC Acceleration Summit: A Deep Dive into Delivering Better Software Faster

Under pressure to deliver more software, more frequently, and with zero defects? Want to explore SDLC acceleration best practices, trends, and insights with your peers and industry experts (including Theresa Lanowitz)? Join us on May 13 in San Francisco for the SDLC Acceleration Summit.

The SDLC Acceleration Summit is your forum for asking questions and sharing ideas about accelerating development and test cycles to ensure that top-quality applications are delivered on time and on budget. Join us as we delve into topics such as:

  • The Future of the SDLC
  • Integrity within the Software Supply Chain
  • Reassessing the True Cost of Software Quality
  • Gaining a Competitive Advantage via an Advanced Software Delivery Process

More Stories By Cynthia Dunlop

Cynthia Dunlop, Lead Technical Writer at Parasoft, authors technical articles, documentation, white papers, case studies, and other marketing communications—currently specializing in service virtualization, API testing, DevOps, and continuous testing. She has also co-authored and ghostwritten several books on software development and testing for Wiley and Wiley-IEEE Press. Dunlop holds a BA from UCLA and an MA from Washington State University.
