
Cloud Services Deliver at Scale

AT&T cloud services built on VMware vCloud Datacenter meet evolving business demands for advanced IaaS

The next BriefingsDirect IT leadership discussion focuses on how global telecommunications giant AT&T has created advanced cloud services for its business customers. We'll see how AT&T has developed the ability to provide virtual private clouds and other computing capabilities as integrated services at scale.

To learn more about implementing cloud technology to deliver and commercialize an adaptive and reliable cloud services ecosystem, we sat down with Chris Costello, Assistant Vice President of AT&T Cloud Services. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Why are business cloud services such an important initiative for you?

Costello: AT&T has been in the hosting business for over 15 years, and so it was only a natural extension for us to get into the cloud services business to evolve with customers' changing business demands and technology needs.

Chris Costello

We have cloud services in several areas. The first is our AT&T Synaptic Compute as a Service. This is a hybrid cloud that allows VMware clients to extend their private clouds into AT&T's network-based cloud using a virtual private network (VPN). And it melds the security and performance of VPN with the economics and flexibility of a public cloud. So the service is optimized for VMware's more than 350,000 clients.

If you look at customers who have internal clouds today or private data centers, they like the control, the security, and the leverage that they have, but they really want the best of both worlds. There are certain workloads where they want to burst into a service provider’s cloud.

We give them that flexibility, agility, and control, where they can simply point and click, using free downloadable tools from VMware, to instantly turn up workloads into AT&T's cloud.

Another capability that we have in this space is AT&T Platform as a Service. This is targeted primarily to independent software vendors (ISVs), IT leaders, and line-of-business managers. It allows customers to choose from 50 pre-built applications, instantly mobilize those applications, and run them in AT&T's cloud, all without having to write a single line of code.

So we're really starting to get into more of the informal buyers, those line-of-business managers, and IT managers who don't have the budget to build it all themselves, or don't have the budget to buy expensive software licenses for certain application environments.

Examples of some of the applications that we support with our platform as a service (PaaS) are things like salesforce automation, quote and proposal tools, and budget management tools.

Storage space

The third key category of AT&T's Cloud Services is in the storage space. We have our AT&T Synaptic Storage as a Service, and this gives customers control over storage, distribution, and retrieval of their data, on the go, using any web-enabled device. In a little bit, I can get into some detail on use cases of how customers are using our cloud services.

This is a very important initiative for AT&T. We're seeing demand from customers of all shapes and sizes. We have a sizable business supporting our small- to medium-sized business (SMB) customers, and we have capabilities that we have tailor-developed just to reach those markets.

As an example, in SMB, it's all about the bundle. It's all about simplicity. It's all about on demand. And it's all about pay per use and having a service provider they can trust.

In the enterprise space, you really start getting into detailed discussions around security. You also start getting into discussions with many customers who already have private networking solutions from AT&T that they trust. When you start talking with clients around the fact that they can run a workload, turn up a server in the cloud, behind their firewall, it really resonates with CIOs that we're speaking with in the enterprise space.

Also in enterprises, it's about having a globally consistent experience. So as these customers are reaching new markets, it's all about not having to stand up an additional data center, compute instance, or what have you, and having a very consistent experience, no matter where they do business, anywhere in the world.

New era for women in tech

Gardner: The fact is that a significant majority of CIOs and IT executives are men, and that’s been the case for quite some time. But I'm curious, does cloud computing and the accompanying shift towards IT becoming more of a services brokering role change that? Do you think that with the consensus building among businesses and partner groups being more important in that brokering role, this might bring in a new era for women in tech?

Costello: I think it is a new era for women in tech. Specifically to my experience in working at AT&T in technology, this company has really provided me with an opportunity to grow both personally and professionally.

I currently lead our Cloud Office at AT&T and, prior to that, ran AT&T’s global managed hosting business across our 38 data centers. I was also lucky enough to be chosen as one of the top women in wireline services.

What drives me as a woman in technology is that I enjoy the challenge of creating offers that meet customer needs, whether that's in the cloud space, driving eCommerce, high-performance computing environments, or disaster recovery (DR) solutions.

I love spending time with customers. That's my favorite thing to do. I also like to interact with the many partners and vendors I work with to stay current on trends and technologies. The key to success as a woman working in technology is being able to build offers that solve customers' business problems, number one.

Number two is being able to then articulate the value of a lot of the complexity around some of these solutions, and package the value in a way that’s very simple for customers to understand.

Some of the challenge and also opportunity of the future is that, as technology continues to evolve, it’s about reducing complexity for customers and making the service experience seamless. The trend is to deliver more and more finished services, versus complex infrastructure solutions.

I've had the opportunity to interact with many women in leadership, whether they're my peers, managers who work as part of my team, or mentors I have within AT&T who are senior leaders in the business.

I also mentor three women at AT&T, across technology, sales, and operations roles. So I'm seeing this trend continue to grow.

Gardner: You have a lot of customers who are already using your business network services. I imagine there are probably some good cost-efficiencies in moving them to cloud services as well.

Costello: Absolutely. We've embedded cloud capabilities into the AT&T managed network. It enables us to deliver a mobile cloud as well. That helps customers to transform their businesses. We're delivering cloud services in the same manner as voice and data services, intelligently routed across our highly secure, reliable network.

AT&T's cloud is not sitting on top of or attached to our network, but it's fully integrated to provide customers a seamless, highly secure, low-latency, and high-performing experience.

Gardner: Why did you choose VMware and vCloud Datacenter Services as the core of AT&T Synaptic Compute as a Service?

Multiple uses

Costello: AT&T uses VMware in several of our hosting, application, and cloud solutions today. In the case of AT&T Synaptic Compute as a Service, we use it in several ways, serving customers with public cloud, hybrid, and private cloud solutions.

We've also been using VMware technology for a number of years in AT&T's Synaptic Hosting offer, our enterprise-grade utility computing service. And we've been serving customers with server virtualization solutions available in AT&T data centers around the world, which can also be extended into customer or third-party locations.

Just to drill down on some of the key differentiators of AT&T Synaptic Compute as a Service, it’s two-fold.

One is that we integrate with AT&T private networking solutions. One benefit customers enjoy as a result is orchestration of resources: we provide the exact amount of compute, storage, and networking resources at exactly the right time, on demand.

Our solutions offer enterprise-grade security. The fact that we've integrated AT&T Synaptic Compute as a Service with our private networking solutions allows customers to extend their cloud into our network using VPN.

An engineering firm can now perform complex mathematical computations and extend from their private cloud into AT&T’s hybrid solution instantaneously, using their native VMware toolset.

Let me touch upon VMware vCloud Datacenter Services for a minute. We think that’s another key differentiator for us, in that we can allow clients to seamlessly move workloads to our cloud using native VMware toolsets. Essentially, we're taking technical complexity and interoperability challenges off the table.

With the vCloud Datacenter program that we're part of with VMware, customers can copy and paste workloads and see all of their virtual machines, whether in their own private cloud environment or in a hybrid solution provided by AT&T. Providing that seamless access to view all of their virtual machines and manage them through a single interface is key to reducing technical complexity and speeding time to market.

We've been doing business with VMware for a number of years. We also have a utility-computing platform called AT&T Synaptic Hosting. We learned early on, in working with customers’ managed utility computing environments, that VMware was the virtualization tool of choice for many of our enterprise customers.

As technologies evolved over time and cloud technologies have become more prevalent, it was absolutely paramount for us to pick a virtualization partner that was going to provide the global scale that we needed to serve our enterprise customers, and to be able to handle the large amount of volume that we receive, given the fact that we have been in the hosting business for over 15 years.

As a natural extension of our Synaptic Hosting relationship with VMware for many years, it only made sense that we joined the VMware vCloud Datacenter program. VMware is baked into our Synaptic Compute as a Service capability. And it really lets customers have a simplified hybrid cloud experience. In five simple steps, customers can move workloads from their private environment into AT&T's cloud environment.

Think of yourself as the IT manager coming in to start your workday. All of a sudden, you hit 85 percent utilization in your environment, but you want to very easily access additional resources from AT&T. You can use the same console you already use every day to manage the data center you run in-house.

In five clicks, you're viewing your in-house private-cloud resources that are VMware based and your AT&T virtual machines (VMs) running in AT&T's cloud, our Synaptic Compute as a Service capability. That all happens in minutes' time.
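The burst decision described above (hit a utilization threshold, then spill the next workload into the provider's cloud) can be sketched in a few lines. This is a minimal illustration only; the function names and capacity figures are hypothetical, not part of any AT&T or VMware API.

```python
# Minimal sketch of a threshold-based "cloud burst" decision.
# All names here are hypothetical; this is not an AT&T or VMware API.

def should_burst(used_capacity: float, total_capacity: float,
                 threshold: float = 0.85) -> bool:
    """Return True when in-house utilization reaches the burst threshold."""
    if total_capacity <= 0:
        raise ValueError("total_capacity must be positive")
    return used_capacity / total_capacity >= threshold

# Example: a private pool of 200 vCPUs with 172 in use is at 86 percent
# utilization, so the next workload would be placed in the provider cloud.
print(should_burst(172, 200))   # True
print(should_burst(150, 200))   # False
```

In practice the same check would drive whichever placement action the hybrid console exposes; the point is only that the scenario reduces to a simple utilization ratio.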

Gardner: I should also think that the concepts around the software-defined datacenter and software-defined networking play a part in this. Is that something you are focused on?

Costello: Software-defined datacenters and software-defined networks are essentially what we're talking about here, with some uniqueness that AT&T Labs has built into our networking solutions. We take our edge routers and the benefits associated with AT&T networking solutions, such as redundancy and quality of service, and extend those into cloud solutions, so customers can extend their cloud into our network using VPN solutions.

Added efficiency

Previously, many customers would have to buy a router and try to pull together a solution on their own, which can be costly and time consuming. There's a whole lot of efficiency in having a service provider manage your compute, storage, and networking capabilities end to end.

Global scale was also very critical to the customers who we've been talking to. The fact that AT&T has localized and distributed resources through a combination of our 38 data centers around the world, as well as central offices, makes it very attractive to do business with AT&T as a service provider.

Gardner: We've certainly seen a lot of interest in hybrid cloud. Is that one of the more popular use cases?

Costello: I speak with a lot of customers who are looking to virtually expand. They have data-center, systems, and application investments and global headquarters locations, but they don't want to have to stand up another data center or ship staff out to other locations. So one use case that's very popular with customers is: "I can expand my virtual data-center environment and use AT&T as a service provider to help me do that."

Another use case that's very popular with our customers is disaster recovery. We see a lot of customers looking for a more efficient way to maintain business continuity, fail over in the event of a disaster, and test their plans more frequently than they do today.

For many of the solutions in place today, clients say they're expensive and/or simply not meeting the service-level agreements (SLAs) they owe their business units. In one solution we recently put in place for a client, we placed them in two of AT&T's geographically diverse data centers, wrapped it with AT&T's private-networking capability, and then layered in our AT&T Synaptic Compute as a Service and Storage as a Service.

The customer ended up with a better SLA and a very powerful return on investment (ROI) as well, because they're only paying for the cloud resources when the meter is running. They now have a stable environment, so they can test their plans as often as they'd like, and they pay only a very small storage fee until they actually need to invoke the plan in a disaster. So DR plans are very popular.
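The pay-per-use economics behind that ROI can be shown with back-of-the-envelope arithmetic. Every dollar figure and rate below is an invented assumption for illustration, not AT&T pricing.

```python
# Back-of-the-envelope DR cost comparison. All rates are invented
# assumptions for illustration; none of this reflects actual pricing.

def standby_annual_cost(monthly_site_cost: float) -> float:
    """A dedicated secondary site is billed every month, used or not."""
    return 12 * monthly_site_cost

def pay_per_use_annual_cost(monthly_storage_fee: float,
                            test_hours_per_year: float,
                            compute_rate_per_hour: float) -> float:
    """A small storage retainer, plus metered compute during DR tests."""
    return (12 * monthly_storage_fee
            + test_hours_per_year * compute_rate_per_hour)

always_on = standby_annual_cost(monthly_site_cost=10_000)
on_demand = pay_per_use_annual_cost(monthly_storage_fee=500,
                                    test_hours_per_year=96,
                                    compute_rate_per_hour=4.0)
print(always_on)   # 120000
print(on_demand)   # 6384.0
```

The gap widens as test frequency drops; the only standing cost in the metered model is the storage fee that keeps replicas ready to invoke.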

Another use case that’s very popular among our clients is short-term compute. We work with a lot of customers who have massive mathematical calculations and they do a lot of number crunching.

Finally, in the compute space, we're seeing a lot of customers start to hang virtual desktop solutions off of their compute environment. In the past, when I would ask clients about virtual desktop infrastructure (VDI), they'd say, "We're looking at it, but we're not sure. It hasn’t made the budget list." All of a sudden, it’s becoming one of the most highly requested use cases from customers, and AT&T has solutions to cover all those needs.

Gardner: Do you think that this will extend to some of the big data and analytics crunching that we've heard about?

Costello: I don’t think anyone is in a better position than AT&T to be able to help customers to manage their massive amounts of data, given the fact that a lot of this data has to reside on very strong networking solutions. The fact that we have 38 data centers around the world, a global reach from a networking perspective, and all the foundational cloud capabilities makes a whole lot of sense.

Speaking of this type of "bursty" use case, we host some of the largest brand-name retailers in the world. A lot of these retailers are preparing for the holidays, and their servers go underutilized much of the year. So how attractive is it to look to AT&T, as a service provider, for robust SLAs and a platform that they pay for only when they need it, versus servers sitting very underutilized much of the year?

We also host many online gaming customers. When you think about the gamers that are out there, there is a big land rush when the buzz occurs right before the launch of a new game. We work very proactively with those gaming customers to help them size their networking needs well in advance of a launch. Also we'll monitor it in real time to ensure that those gamers have a very positive experience when that launch does occur.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

