
Theresa Lanowitz on Solving Age-Old Problems in the Enterprise

Extreme Automation, Service Virtualization, and More

By Noel Wurst, Managing Editor at Skytap

This article was originally published on the Skytap Blog

Noel: Hello, this is Noel Wurst with Skytap, and I am speaking with Theresa Lanowitz today, who is the founder of voke. Theresa is going to be giving a keynote at this year's STAREAST conference on May 8, in Orlando, Florida. The keynote is titled "Extreme Automation: Software Quality for the Next Generation Enterprise." I wanted to speak with her about what exactly extreme automation involves, how to define the "next generation enterprise," and what she and voke do. Theresa, how are you today?

Theresa: I'm great, and thanks for inviting me to do this interview.

Noel: You're welcome! So, let's learn a little bit more about what you do with voke and what voke does. I was reading about some of your company's services on your website, particularly those that relate to application development at the enterprise level. I saw that voke helps companies evaluate a variety of application lifecycle solutions. Actually, I'll go ahead and let you talk about that first before I move on.

Theresa: Okay, so I'll just give you a little bit of background about who we are. We are an independent industry analyst firm, and I founded it in 2006. What we do at voke is focus on the entire application lifecycle and the transformation of that lifecycle, including technologies such as virtualization, cloud computing, embedded systems, mobile and device software, and so on. We provide strategic, independent, impartial advice and market observations through both quantitative and qualitative research. That's just a little bit about who we are and what we do.

Noel: So, when you're working with clients and trying to help them make these decisions that involve the entire lifecycle, I'm sure there are numerous questions, obviously. But I was curious: are there any questions you tend to ask whose answers clients tend to forget, overlook, or simply not think about when they're dealing with the entire lifecycle?

Theresa: Yeah, when you're evaluating application lifecycle solutions, one of the things we always want to understand from the people we're working with is: how mature is the organization? Is one part of the organization a little more mature than the others? Is your QA organization really mature in its practices, processes, and tooling, while other parts of the organization may not be?

We really want to understand the maturity of the organization. Then we also want to understand whether there is parity between the development, quality assurance, and operations organizations, those three classic pillars of IT. Do you have parity across them? Are all three, dev, QA, and operations, really working to support the line of business and deliver high-quality, valuable business outcomes?

Another really important thing we look for right now is whether there is a change agent at the executive level in the organization. One of the things we know is that there is really great technology in the market to help us overcome some of those traditional, age-old computing problems: things such as virtualization, virtual lab management capabilities, and service virtualization capabilities that free up a lot of time for people in dev, QA, and operations to do far more strategic things. A change agent who is really able to effect change will get buy-in from senior-level management to make these changes happen. So finding out whether there is a change agent at the executive level is really important.

Then, if there is a change agent, how committed is that executive team to implementing the change? Are they just saying, "You know, we think this is a good thing to do because it seems to be one of the things people are talking about"? What type of commitment is there? Another thing that's really important is: how valued are requirements within the organization? Are you willing to take more time to get requirements right to prevent defects later on? Do you really understand what your cost of quality is? Do you really understand what the cost of building that software is actually going to be? How committed are you to those requirements and to getting them right?

Then I think another important thing we look at is: what matters most to the organization? Are they more concerned about cost, quality, or schedule? Ideally, you want to be equally concerned about all three. But as we see from so many big catastrophic failures in the news these days, people are often most concerned about schedule: faster is treated as better than correct, or better than high quality.

If you're willing to take the risk of those catastrophic events, what do you do about your cost, and what do you do about your quality? How willing are you to have your brand impacted? Because if you think about it, every company and every government agency is a software company: you're building software that is going to deliver business outcomes, and software is the differentiator for your business. What we see are these big catastrophic failures making headlines, and we have to ask ourselves why.

One reason is that during the global financial crisis we saw a real lack of investment in IT; IT budgets remained flat or declined. Then we had this idea that faster was better than being right or having higher quality. Faster is not really equal to better. In many organizations we see a lot of old technology. Organizations are not up to date on the software platforms they're using, and a lot of organizations are not leveraging the power of the really wonderful modern solutions that are out there.

Noel: That really is a complete transformation, as you listed some of those things: virtualization, dev/test environments, the cloud, hybrid applications, continuous integration, and so on. All of these things are being adopted by companies that are doing it right, but some companies are having to embrace them all at the same time. It really is a complete transformation, from collecting those requirements to delivering better software faster; it's not just, "oh, we only needed to do one of those things to get it right."

Theresa: Yeah, and it's the reality of understanding cost, quality, and schedule, where you're willing to make the sacrifices, and then also looking at the people, process, and technology. Do you have the right skillsets in place? Do you have a relationship with professional service providers? Do you have a relationship with the software vendor you're using? Do you have the right process in place for each project, because process is not one size fits all? Do you have the right tooling?

In many, many cases, like I said, we see organizations using software that is several versions old and not embracing some of these new technologies you just mentioned. If you don't have the skillsets internally, look to a good professional services organization to help you bring these new technologies in. A lot of the very manual activities you were doing in the past can now be done with these modern tools, with a really wonderful return on investment: lowering the number of defects going into production, testing on more platforms, and having environments available anytime people want them for testing. These are all really great things these newer technologies offer organizations.

Noel: Let's talk about your keynote for a little bit. Again, the title is "Extreme Automation: Software Quality for the Next Generation Enterprise." You're employing all of these different technologies and skills and processes to build this next generation enterprise, so I was curious to get your definition of what makes a piece of enterprise software "next generation." What makes it different from software of the past?

Theresa: Okay, one of the things we hold core to our beliefs is that virtualization is really the hub of the modern application lifecycle. That means using things like virtual lab management or VLM, dev/test clouds, service virtualization, defect virtualization, and device virtualization, and bringing that virtualization technology to the pre-production environment. We know how well virtualization worked in the production environment, for the data center, for the operations team: saving capital investment in hardware, reducing the footprint in the data center, reducing energy consumption, just making things far more efficient. We do believe virtualization is really the hub of the modern application lifecycle, and bringing it to the pre-production side is something we've been really bullish on since we founded the company in 2006.

If you look at the next generation enterprise, it is really about business connectivity. It's about a global marketplace. Your customers are everywhere, and you're powered by software, but that software has to be ready, available, and working anytime, anywhere, any place. If you think about it, software only has to do three things: software has to work, software has to perform, and software has to be secure. "Does my software work, is it fast enough, is it secure enough?" Those are three very basic, fundamental questions, but they have to be right.

It has to have the quality aspect associated with it. That's what you're going to see in the next generation enterprise: the technology is really optimized for the business outcomes, to make sure people are having a software experience that works for them, performs well enough, and has a high degree of security.

Noel: To look at the other half of your keynote's title, "extreme automation": I'm always a fan of writing about automation and reading people's opinions on it. It tends to stir up a debate. Some people talk about automation as the key to this or the key to that, and sometimes it seems they think it's the key to everything, while others hold their hands up and say, "automation isn't going to solve everything." Is it a tough decision sometimes to figure out when automation is absolutely necessary and when it's not?

Theresa: Well, if you look at what's going on in the enterprise, we know the enterprise does not embrace automation as much as it could, given the capability of a lot of the new tooling that's out there. The definition of extreme automation is the concept of solving classic computing problems across the lifecycle with the use of modern tooling and technology. You're removing barriers and facilitating communication, collaboration, and connectivity among the development team, the QA team, and the operations team to support the line of business and that insatiable demand for quality software. It's this idea of using modern tooling, removing those barriers, and using people, processes, and technology to deliver on that demand for high-quality software. That's how we define extreme automation.

Noel: I was just writing that down as you were saying it: "solving classic problems with new technology and new tooling." That almost seems like a gentler way of saying "extreme automation." I wonder if it wouldn't scare as many people as hearing "automation!" I love that, because it's not solving problems people don't know they have, or haven't ever heard of; it's problems they know they have, and have always had, and new technologies are available to solve them. That's great.

Theresa: Yeah, and you're right, they are problems people have known they've always had. Take, for example, a test environment. What do people typically do? They wait. We have survey data that says 96% of people wait to get access to a test environment.

Noel: Wow!

Theresa: To get access to the test environment in a typical organization, without using virtualization, they have to wait for the operations team to provision it, so that becomes a bottleneck. Quite honestly, the skills of the operations team should be used far more strategically, and the skills of the QA team should not be wasted waiting for an environment to be provisioned.

If you're using something like virtual lab management technology, or dev/test cloud technology, you can spin up virtual environments that give people an environment as close to production as possible, for as long as they want it, to test whatever they need to test.
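To make that workflow concrete, here is a minimal sketch, in Python, of what "spin up an environment, test, tear it down" might look like against a hypothetical dev/test cloud REST API. The base URL, endpoints, payload fields, and template name below are illustrative assumptions, not any specific vendor's product.

```python
# Minimal sketch: provision an ephemeral test environment on demand,
# rather than waiting for operations to build one by hand.
# The API base URL, endpoints, and payload fields are hypothetical.
import time

import requests

API = "https://devtest-cloud.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # auth details elided

def provision_environment(template: str) -> dict:
    """Clone a production-like template into a fresh, isolated environment."""
    resp = requests.post(f"{API}/environments",
                         json={"template": template}, headers=HEADERS)
    resp.raise_for_status()
    env = resp.json()
    # Poll until the environment is running, instead of queuing behind
    # a manual provisioning request to the operations team.
    while env["status"] != "running":
        time.sleep(10)
        env = requests.get(f"{API}/environments/{env['id']}",
                           headers=HEADERS).json()
    return env

def teardown(env_id: str) -> None:
    """Release the environment when testing is done; no lab scheduling."""
    requests.delete(f"{API}/environments/{env_id}", headers=HEADERS)

if __name__ == "__main__":
    env = provision_environment("prod-like-web-stack")  # hypothetical template
    print(f"Test environment ready at {env['url']}")
    # ... run the test suite against env["url"], then release it ...
    teardown(env["id"])
```

The point is the shape of the workflow: request an environment, poll until it's ready, test, and release it, with no provisioning ticket to operations anywhere in the loop.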

That's really beneficial, because everybody today has to work with a third-party supply chain for their software. You have your entire software supply chain, whether you're using outsourcers to do a portion of your development or testing, taking code drops from a partner you might be working with, or working on some type of collaborative project with another business partner. We have this software supply chain we have to work with, and not waiting for those tactical things to happen gives you a big, big benefit.

Noel: Absolutely. Well, for my last question, I feel like it's all led up to this: you've got developers and testers able to work alongside each other and not have to wait, and IT not having to spend so much time provisioning and managing. It all leads to collaboration. I wanted to look specifically at the collaboration between developers and testers.

I feel like we're not hearing, or reading, as much about the incredible differences between those two groups. Obviously there are still differences, but there's much more talk about them working together and realizing that collaborating is what ends up building better software faster. We're not hearing anywhere near as much about the headbutting of these two groups in particular.

Theresa: Yeah, there absolutely has to be collaboration, communication, and connectivity. I think one of the things you have to look at is: "is there really parity across development, QA, and operations to support that line of business?" The development team delivers architectural readiness, the QA team delivers customer readiness, and the operations team delivers production readiness. The line of business is really the requirements communicator, the keeper of profit and loss, and everybody in those three pillars of IT is working to deliver valuable business outcomes for that line of business.

Now, having said that, the line of business has to be involved as well. You can't just run around and code something and say, "Okay, here you go, line of business, this is what we think you wanted."

If you think about the idea of parity, you want to have parity between development, quality assurance, and operations to support that line of business. If there is parity, are the groups really collaborative, or are they functionally isolated? You want to have that collaboration; there is still a need for specialization of resources, but you don't want them to be isolated.

If there is no parity, is one group more dominant than the others? Is the operations team driving everything at the expense of what the dev and QA teams are doing? Collaboration across the groups is really essential, and one of the things we've been talking about during this discussion is that we have really good technology available to eliminate those age-old issues among the groups. Virtualization, as I said, we believe is the hub of the modern application lifecycle. With tooling such as virtualization, you don't have to wait for operations to spin up the environments for testing. You have as many test environments as you need, for as long as you need them.

You eliminate that friction between development and QA, where QA identifies a defect and says, "Okay, dev team, here is the defect," and the dev team comes back and says, "Well, I can't really reproduce this, because it works on my machine." We eliminate that phrase, "it works on my machine."
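One way such tooling removes that friction is by letting QA freeze the failing environment and attach it to the bug report, so the developer opens the exact state QA saw. Here is a sketch of that idea against the same hypothetical API as above; the snapshot endpoint and defect fields are likewise assumptions.

```python
# Sketch: capture the exact failing environment state and hand it to the
# developer along with the defect, instead of a prose-only bug report.
# All endpoints and field names are hypothetical.
import requests

API = "https://devtest-cloud.example.com/api/v1"   # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}      # auth details elided

def file_defect_with_snapshot(env_id: str, summary: str) -> str:
    """Snapshot a failing environment and link it from a new defect."""
    # Freeze the environment in its failing state.
    snap = requests.post(f"{API}/environments/{env_id}/snapshots",
                         json={"note": summary}, headers=HEADERS)
    snap.raise_for_status()
    share_url = snap.json()["share_url"]
    # File the defect with a shareable link to the suspended environment,
    # so "works on my machine" becomes "open the snapshot and look."
    defect = requests.post(f"{API}/defects",
                           json={"summary": summary,
                                 "environment_snapshot": share_url},
                           headers=HEADERS)
    defect.raise_for_status()
    return defect.json()["id"]
```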

It's really, really wonderful just to take that out of the IT vocabulary. It's a big, big win. In one of the many market research surveys we've done on virtualization, one participant said that virtual lab management brings about peace between developers and testers ...

Peace in the IT world. I look at just the idea of virtual lab management, without bringing any other piece of virtualization into the pre-production environment, and say, "if we can really eliminate this friction among the three pillars of IT so they can support what the line of business needs, we're in a really good position." It's great that there are tools out there that allow more collaboration, more communication, more connectivity. Organizations should not be struggling with that anymore, because the tools do exist.

It's been really great, and I think in the past two or three years we've seen these tools come to a new level, where there is no longer the reluctance to say, "Well, I'm not really sure if this tool is going to work; this tool might be too difficult." The tools are getting easier and easier to use. The tools are really robust.

Again, if you don't have the skillsets in your organization, reach out to professional services organizations and make sure you have a relationship with the professional services team. Leverage those relationships with the professional service providers. Leverage the relationships with the vendors.

One of the things I always like to say to people about working with a software vendor is: talk to that vendor, have a relationship with the vendor, and tell the vendor what you want to see in the features and functionality of the tooling. Vendors are very open and very receptive to hearing from their customers and potential customers, so leverage that relationship.

And then, if you're thinking about bringing in new technology, select a pilot project. Don't say, "We're going to bring this in, put it right into the organization, and have everybody use it." Select a pilot project, figure out where you can get a really quick, good return on investment, and then do some internal public relations about how it's working and how it's making a difference.

The best thing I can say about technology is to get current and stay current on your existing tools. Also, go out and evaluate new technologies you may not have, and adopt them to complement and supplement what you're already doing. Leverage the whole people, process, and technology equation to deliver high-quality software on time and on budget.

Noel: That's great. That really sums everything up. I love the bit about these tools actually bringing peace to these organizations. Sometimes if your boss comes to you and says you've got a new tool that's going to help you work faster or work harder, it's kind of like, "I didn't know I needed to work faster or harder." But when you find out it's actually going to bring peace to the environment around you, that's another attractive selling point of this technology.

Theresa: Yeah, it lets you work smarter, and it allows you to focus your attention and your activities on far more strategic things, rather than sitting around waiting for a lab, manually scheduling a lab, trying to get into that lab, and, as a test team, hoping you don't run into any unforeseen problems, that the team in front of you didn't run into any unforeseen problems, that they got out of the lab when they were supposed to, and that you got into the lab when you were supposed to.

But that says "all right, now we're limiting our testing, what if I now have to test on maybe my line of business, you know what we really need to support a new tablet device so now I have to test on multiple platforms." And if you're in a physical lab you may not have time to do everything." So having that environment it's close to production as possible when you need for as long as you need it is ... you know, you're right, it brings ... it's a very peaceful environment running around doing a lot of tactical things.

Noel: That's great. Thank you so much for speaking with me today.

Theresa: Oh you're quite welcome.

Noel: Thank you. Everybody, again, this is Theresa Lanowitz, founder of voke, and you can hear Theresa's keynote in person at STAREAST on Thursday, May 8. The title again is "Extreme Automation: Software Quality for the Next Generation Enterprise." Thanks so much again.

Theresa: Thank you.

More from Theresa at the SDLC Acceleration Summit: A Deep Dive into Delivering Better Software Faster

Under pressure to deliver more software, more frequently, and with zero defects? Want to explore SDLC acceleration best practices, trends, and insights with your peers and industry experts, including Theresa Lanowitz? Join us on May 13 in San Francisco for the SDLC Acceleration Summit.

The SDLC Acceleration Summit is your forum for asking questions and sharing ideas about accelerating development and test cycles to ensure that top-quality applications are delivered on time and on budget. Join us as we delve into topics such as:

  • The Future of the SDLC
  • Integrity within the Software Supply Chain
  • Reassessing the True Cost of Software Quality
  • Gaining a Competitive Advantage via an Advanced Software Delivery Process

