Virtualization Takes Cloud to the Next Level

Banking services provider BancVue leverages VMware server virtualization to generate cloud benefits and increased agility

Server virtualization success quickly set the stage for private-cloud benefits for banking services provider BancVue. That cloud enablement then gave BancVue's community bank customers the business agility to compete better against mega banks in such critical areas as customer service and end-user portals.

Learn here how BancVue creates the services that empower its customers to beat the giants in their field by better leveraging agile IT. Sunny Nair, Vice President of IT and Systems Operations at BancVue in Austin, Texas, discusses the journey with BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Many companies these days need to tackle the dual task of cutting costs, while also increasing agility and providing better services and response times to their constituents. How did you accomplish both?

Nair: The first thing we wanted to do was to abstract the applications and the operating system from the hardware so that a hardware failure wouldn’t bring down our systems. For that, of course, we went to virtualization. We experimented with various virtualization products. Out of those trials, vSphere was the best software for a heterogeneous environment like ours, where we had Windows and different flavors of Linux.

So we stuck with VMware, and that helped us abstract the hardware layer from our software layer, so we could move our operating systems and our virtual servers to different pieces of hardware when there was a hardware issue on one server. That enabled us to be more agile.

Instead of running just one server on one piece of hardware, we were able to run anywhere between 12 and 20 different servers. No single server was utilized at 100 percent all the time, so we were able to drive the CPU to its full capacity and run many more servers. That gave us, at a minimum, a 12x increase in server capacity on each piece of hardware, which definitely did help our costs.
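The consolidation arithmetic behind that 12x figure can be sketched in a few lines. The utilization numbers below are illustrative assumptions, not BancVue's actual measurements:

```python
# Rough consolidation-ratio estimate: how many lightly loaded VMs fit on
# one host. All utilization figures here are hypothetical.

def consolidation_ratio(host_cores, avg_vm_core_demand, headroom=0.2):
    """VMs per host, reserving a fraction of CPU capacity as headroom."""
    usable = host_cores * (1 - headroom)
    return int(usable // avg_vm_core_demand)

# A 16-core host whose VMs each average one core of demand:
print(consolidation_ratio(16, 1.0))   # 12 VMs, leaving 20% headroom
```

With denser hosts or lighter average VM demand, the same arithmetic yields the 20-to-1 ratios Nair mentions.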


Gardner: Tell us a little bit about BancVue.

Marketing expertise

Nair: BancVue is a financial services software and marketing company. We help community financial institutions compete with mega banks by providing them marketing expertise, software expertise, and data consultation expertise, and all those things require technology and software.

For many of our partners, we provide the website that people land on when they search for the institution on the Internet, and we also provide the gateway to their online banking. So it's extremely important for the website to stay up and online.

In addition to that, we provide rewards-checking calculations, interest-rate calculations, determining which customers qualify for certain products, and so on. We are definitely a part of the ecosystem for the financial institution.

Gardner: Once you settled on your strategy for virtualizing your workloads and supporting your heterogeneity issues, how did that unfold?

Nair: It was a step-by-step approach of wading deeper into the virtualization world. Our first step was just getting that abstraction layer that I was talking about by virtualizing our servers. Then we looked at it and said, "Well, from vSphere we can use vMotion and move our virtual servers around. And we can consolidate our storage on a storage area network (SAN)." That helped us disengage further from each piece of hardware.

Then we could look at vCenter Operations Manager and predict when a server was going to run out of capacity. That was one of the areas where we started experimenting, and it proved very fruitful. That experiment was just earlier this year.
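vCenter Operations Manager's forecasting model is proprietary, but the basic idea, extrapolating a utilization trend to the date it crosses capacity, can be sketched like this. This is a simple linear fit over hypothetical numbers, not VMware's actual algorithm:

```python
# Hypothetical sketch of trend-based capacity forecasting, the kind of
# "when will this server run out?" prediction described above.

def days_until_full(samples, capacity):
    """samples: daily utilization readings, oldest first.
    Returns days until the linear trend reaches capacity, or None
    if utilization is flat or falling."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None            # no growth trend to extrapolate
    return (capacity - samples[-1]) / slope

# Storage growing about 2 GB/day toward a 100 GB ceiling:
print(days_until_full([60, 62, 64, 66, 68], 100))   # 16.0 (days)
```

A real capacity planner would also weight recent samples and model seasonality, but the alert it raises is of this form.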

Once we did that, we downloaded some trial software with the help of VMware, which is one of the benefits that we found. We didn’t have to pay up immediately. We could see if it suited our needs first.

We trialed vCloud Director, together with vShield and vCenter Orchestrator. Once we put all those pieces together, we were able to get the true benefit of virtualization, which is being in a cloud where not only are you abstracted from the hardware, but you can also predict when that hardware is going to run out of capacity.

You can move to a different data center if the need arises, and run your server farm the way a power utility runs its power station: building out the computing resources necessary for a user or a customer, and then shutting them off when they're no longer necessary, all within the same hardware grid.


Fit for purpose

Gardner: I suppose it also gets to that point of cutting your total costs, when you can manage that as a fit-for-purpose exercise. It's the Goldilocks approach -- not too much, not too little. That’s especially important, when you have an ecosystem play, where you can’t always predict what your customers are going to be doing or demanding.

Nair: Yes, and that's true internally as well as externally. Our development group might suddenly ask for a bunch of servers to do some QA, and we've scripted the build-out using the JavaScript system within vCloud Director and vCenter Orchestrator, building machines automatically. We could reduce our cost and our effort in putting those servers online, because we've automated them. Then vCloud Director could tear them down automatically later.

One admin can do the work of at least three admins once we've fully implemented the cloud, because the buildup and takedown are some of the most expensive portions of creating a server. You can automate that fully and not have to worry about the takedown, because you can say, "Three days from now, please remove the server from the grid." Then the admin can go do some other tasks.

We run Dell hardware, Dell servers, and Dell blades, and that's where we run production. In development, we also use Dell hardware, where we just use the R610s, 710s, and 810s, basically small machines, but with a fairly good amount of power. We can load up to 20 servers on a host in development, and as many as 12 in production. We run about 275 VMs today.
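That provision-with-an-expiration-date pattern can be sketched as follows. The class and method names are hypothetical stand-ins for illustration, not the vCloud Director or vCenter Orchestrator API:

```python
# Hypothetical sketch of scheduled build-up and takedown, the automation
# pattern attributed above to vCloud Director / vCenter Orchestrator.
from datetime import datetime, timedelta

class VMPool:
    def __init__(self):
        self.vms = {}                       # name -> teardown deadline

    def provision(self, name, lifetime_days):
        """Build a VM and record its removal deadline up front."""
        self.vms[name] = datetime.now() + timedelta(days=lifetime_days)

    def reap(self, now=None):
        """Tear down every VM whose deadline has passed."""
        now = now or datetime.now()
        expired = [n for n, due in self.vms.items() if due <= now]
        for n in expired:
            del self.vms[n]
        return expired

pool = VMPool()
pool.provision("qa-web-01", lifetime_days=3)
print(pool.reap())                                     # [] (too early)
print(pool.reap(datetime.now() + timedelta(days=4)))   # ['qa-web-01']
```

The admin's only job is the `provision` call; a scheduler runs `reap` periodically, which is what makes the takedown free.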


Cutting-edge technologies

Our production software is software as a service (SaaS), so the majority of it runs on IIS web servers with a SQL back end. We also use some newer, cutting-edge database technologies, such as MongoDB, which also runs on a virtual system.

In addition, we have our infrastructure, like our customer relationship management (CRM), for which we use SugarCRM, and our ticketing system, which is JIRA, and our collaboration tool called Confluence, as well as our build system, which is TeamCity.

All of them run on VMs. Our infrastructure is powered by VMs, so it's pretty important that it stays up. That's one of the reasons we think running it on a SAN, with the ability to use vMotion, helps our uptime.

A few different things attracted us to VMware. One of them was the fact that VMware fully supported different operating systems. As I said earlier, we run Red Hat, as well as Debian and Windows. When we ran those on various free and other proprietary virtualization products, we found different issues in each one.

For example, one of them had a time-drift problem: it didn't keep time on Linux as well as it did on Windows, and the clock always seemed to drift a little bit. Apparently they hadn't mastered that. Some free products did not have the ability to run Windows at all; they could run various versions of Linux, but they couldn't run Windows properly at the time we were testing. VMware, out of the box, could run all those operating systems.

The second thing was the support level. We didn’t want to be running our production system, put a bug out there in the community, and wait for someone to answer while we were down. We wanted to be able to pick up the phone, ask someone immediately, and get knowledgeable support. So support was a key ingredient in our selection.

We do have that option today when we have an issue; we can call up VMware and get that support. So it was support, compatibility, and the overall ecosystem. We knew that as we grew, we wouldn't have to switch to another vendor to get cloud. We could go to VMware for the cloud solution as well as the virtualization solution, because virtualization was just the first step for us toward becoming fully virtualized in a private-cloud environment, with software such as vShield for security and vCenter Operations Manager.

Virtualization lab

We actually had a little virtualization lab, where we practiced these things, because as the old adage says, practice does make perfect. The next thing was that we rolled it out in incremental steps to one product, and then eventually to a larger development group.

Gardner: Looking to the future, is there anything about mobile support or increasing the types of services that you're going to provide to your community banks, more along the lines of extended services that you provide and they brand? Do you think that this cloud environment is going to enable you to pursue that?

Nair: Yes, we’ve already started down that path. We have mobile support for the websites that we’ve created, and we’ve just implemented that earlier this year. Eventually, we plan to go into the online banking space and provide online banking for mobile devices. All that will be done in our cloud infrastructure. So yes, it’s here to stay.

We want to look further at the automation that the cloud products would give us, especially with security in vShield. It’s pretty interesting how we can have a virtual firewall with our VMs and look at the other mobile software that's available.

