DevOps and SDDC Among Top 10 Strategic Technology Trends for 2014

Gartner defines a strategic technology as one with the potential for significant impact on the enterprise in the next three years

Gartner, Inc. on Tuesday highlighted the top ten technologies and trends that will be strategic for most organizations in 2014. Analysts presented their findings during Gartner Symposium/ITxpo in Orlando, Florida, being held through October 10.

Gartner defines a strategic technology as one with the potential for significant impact on the enterprise in the next three years. Factors that denote significant impact include a high potential for disruption to IT or the business, the need for a major dollar investment, or the risk of being late to adopt.

A strategic technology may be an existing technology that has matured and/or become suitable for a wider range of uses. It may also be an emerging technology that offers an opportunity for strategic business advantage to early adopters, or one with the potential for significant market disruption in the next five years. These technologies impact the organization's long-term plans, programs and initiatives.

“We have identified the top 10 technologies that companies should factor into their strategic planning processes,” said David Cearley. “This does not necessarily mean adoption and investment in all of the listed technologies, but companies should look to make deliberate decisions about them during the next two years.”

Mr. Cearley said that the Nexus of Forces (the convergence of four powerful forces: social, mobile, cloud and information) continues to drive change and create new opportunities, generating demand for advanced programmable infrastructure that can execute at web-scale.

The top ten strategic technology trends for 2014 include:

Mobile Device Diversity and Management
Through 2018, the growing variety of devices, computing styles, user contexts and interaction paradigms will make "everything everywhere" strategies unachievable. The unexpected consequence of bring your own device (BYOD) programs is a doubling or even tripling of the size of the mobile workforce, placing tremendous strain on IT and finance organizations. Enterprise policies on employee-owned hardware usage need to be thoroughly reviewed and, where necessary, updated and extended. Most companies only have policies for employees accessing their networks through devices that the enterprise owns and manages. Set policies that define clear expectations around what employees can and can't do, and balance flexibility with confidentiality and privacy requirements.
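To make that last recommendation concrete, one way to set clear expectations is to encode the access policy as data that can be tested. The sketch below is hypothetical; the device fields and rules are illustrative, not the API of any real mobile device management product.

```typescript
// Hypothetical sketch: encoding a BYOD access policy as data, so
// expectations for employee-owned devices are explicit and testable.
interface Device {
  ownedByEnterprise: boolean;
  osMajorVersion: number; // assumption: higher means newer
  diskEncrypted: boolean;
}

interface PolicyDecision {
  allowed: boolean;
  reason: string;
}

function evaluateAccess(d: Device): PolicyDecision {
  if (d.ownedByEnterprise) return { allowed: true, reason: "managed device" };
  if (!d.diskEncrypted)
    return { allowed: false, reason: "BYOD requires disk encryption" };
  if (d.osMajorVersion < 7)
    return { allowed: false, reason: "OS version below policy minimum" };
  return { allowed: true, reason: "BYOD device meets minimum policy" };
}

console.log(
  evaluateAccess({ ownedByEnterprise: false, osMajorVersion: 8, diskEncrypted: true })
); // { allowed: true, reason: 'BYOD device meets minimum policy' }
```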

Mobile Apps and Applications
Gartner predicts that through 2014, improved JavaScript performance will begin to push HTML5 and the browser as a mainstream enterprise application development environment. Gartner recommends that developers focus on creating expanded user interface models including richer voice and video that can connect people in new and different ways. Apps will continue to grow while applications will begin to shrink. Apps are smaller and more targeted, while a larger application is more comprehensive. Developers should look for ways to snap together apps to create larger applications. Building application user interfaces that span a variety of devices requires an understanding of fragmented building blocks and an adaptable programming structure that assembles them into optimized content for each device. The market for tools to create consumer and enterprise facing apps is complex, with well over 100 potential tools vendors. For the next few years no single tool will be optimal for all types of mobile applications, so expect to employ several. The next evolution in user experience will be to leverage intent, inferred from emotion and actions, to motivate changes in end-user behavior.
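As a loose illustration of snapping small, targeted apps together into a larger application, the sketch below (all names hypothetical) composes independent UI-producing units and adapts their output per device class.

```typescript
// Hypothetical sketch: small, targeted "apps" exposed as composable units
// that a larger application snaps together, adapting output per device.
type DeviceClass = "phone" | "tablet" | "desktop";

interface MiniApp {
  name: string;
  render(device: DeviceClass): string; // returns a UI fragment
}

const calendar: MiniApp = {
  name: "calendar",
  render: (d) => (d === "phone" ? "[agenda list]" : "[month grid]"),
};

const expenses: MiniApp = {
  name: "expenses",
  render: (d) => (d === "desktop" ? "[full report]" : "[quick entry]"),
};

// The composite application assembles optimized content for each device.
function composeApplication(apps: MiniApp[], device: DeviceClass): string {
  return apps.map((a) => `${a.name}: ${a.render(device)}`).join("\n");
}

console.log(composeApplication([calendar, expenses], "phone"));
```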

The Internet of Everything
The Internet is expanding beyond PCs and mobile devices into enterprise assets such as field equipment, and consumer items such as cars and televisions. The problem is that most enterprises and technology vendors have yet to explore the possibilities of an expanded Internet and are not operationally or organizationally ready. Imagine digitizing the most important products, services and assets. The combination of data streams and services created by digitizing everything creates four basic usage models: Manage, Monetize, Operate and Extend. These four basic models can be applied to any of the four "internets" (people, things, information and places). Enterprises should not limit themselves to thinking that only the Internet of Things (i.e., assets and machines) has the potential to leverage these four models. Enterprises from all industries (heavy, mixed, and weightless) can leverage these four models.
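As a rough illustration, crossing the four usage models with the four internets yields a checklist of sixteen combinations to screen for business value; the sketch below is hypothetical.

```typescript
// Hypothetical sketch: crossing the four usage models with the four
// "internets" enumerates candidate opportunities to evaluate for an asset.
const usageModels = ["Manage", "Monetize", "Operate", "Extend"];
const internets = ["people", "things", "information", "places"];

function opportunities(asset: string): string[] {
  const out: string[] = [];
  for (const model of usageModels) {
    for (const net of internets) {
      out.push(`${model} the internet of ${net} around ${asset}`);
    }
  }
  return out;
}

// 4 models x 4 internets = 16 combinations to screen.
console.log(opportunities("field equipment").length); // 16
```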

Hybrid Cloud and IT as Service Broker
Bringing together personal clouds and external private cloud services is an imperative. Enterprises should design private cloud services with a hybrid future in mind and make sure future integration/interoperability is possible. Hybrid cloud services can be composed in many ways, varying from relatively static to very dynamic. Managing this composition will often be the responsibility of something filling the role of cloud service broker (CSB), which handles aggregation, integration and customization of services. Enterprises that are expanding into hybrid cloud computing from private cloud services are taking on the CSB role. Terms like "overdrafting" and "cloudbursting" are often used to describe what hybrid cloud computing will make possible. However, the vast majority of hybrid cloud services will initially be much less dynamic than that. Early hybrid cloud services will likely be more static, engineered compositions (such as integration between an internal private cloud and a public cloud service for certain functionality or data). More deployment compositions will emerge as CSBs evolve (for example, private infrastructure as a service [IaaS] offerings that can leverage external service providers based on policy and utilization).
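A minimal sketch of the CSB placement decision follows, assuming utilization figures are polled from monitoring; the endpoint names and threshold are illustrative, not a real broker's API.

```typescript
// Hypothetical sketch: a cloud service broker choosing placement by policy,
// "bursting" to an external provider when private utilization is high.
interface CloudEndpoint {
  name: string;
  utilization: number; // 0..1, assumed to come from a monitoring feed
}

function placeWorkload(
  privateCloud: CloudEndpoint,
  publicCloud: CloudEndpoint,
  burstThreshold = 0.8
): CloudEndpoint {
  // Static composition by default; burst only when policy says so.
  return privateCloud.utilization < burstThreshold ? privateCloud : publicCloud;
}

const target = placeWorkload(
  { name: "internal-iaas", utilization: 0.92 },
  { name: "public-provider", utilization: 0.4 }
);
console.log(`deploy to ${target.name}`); // deploy to public-provider
```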

Cloud/Client Architecture
Cloud/client computing models are shifting. In the cloud/client architecture, the client is a rich application running on an Internet-connected device, and the server is a set of application services hosted in an increasingly elastically scalable cloud computing platform. The cloud is the control point and system of record, and applications can span multiple client devices. The client environment may be a native application or browser-based; the increasing power of the browser is available to many client devices, mobile and desktop alike. Robust capabilities in many mobile devices, the increased demand on networks, the cost of networks and the need to manage bandwidth use create incentives, in some cases, to minimize the cloud application computing and storage footprint, and to exploit the intelligence and storage of the client device. However, the increasingly complex demands of mobile users will drive apps to demand increasing amounts of server-side computing and storage capacity.
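A minimal sketch of that trade-off, with the cloud as the system of record and the client exploiting local storage: the fetch function below is a stand-in for a call to a cloud service, not a real API.

```typescript
// Hypothetical sketch: the cloud stays the system of record while the
// client caches locally, calling the server only on a cache miss.
const localCache = new Map<string, string>();

// Stand-in for a call to an elastic cloud service (assumed, not real).
async function fetchFromCloud(key: string): Promise<string> {
  return `value-of-${key}`; // in practice: an HTTP request to the service
}

async function read(key: string): Promise<string> {
  const cached = localCache.get(key);
  if (cached !== undefined) return cached; // served from the device
  const value = await fetchFromCloud(key); // cloud remains authoritative
  localCache.set(key, value);
  return value;
}

read("user-profile").then(console.log); // value-of-user-profile
```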

The Era of Personal Cloud
The personal cloud era will mark a power shift away from devices toward services. In this new world, the specifics of devices will become less important for the organization to worry about, although the devices will still be necessary. Users will use a collection of devices, with the PC remaining one of many options, but no one device will be the primary hub. Rather, the personal cloud will take on that role. Access to the cloud and the content stored or shared from the cloud will be managed and secured, rather than solely focusing on the device itself.

Software Defined Anything
Software-defined anything (SDx) is a collective term that encapsulates the growing market momentum for improved standards for infrastructure programmability and data center interoperability, driven by the automation inherent to cloud computing, DevOps and fast infrastructure provisioning. As a collective, SDx also incorporates various initiatives like OpenStack, OpenFlow, the Open Compute Project and Open Rack, which share similar visions. As individual SDx technology silos evolve and consortiums arise, look for emerging standards and bridging capabilities to benefit portfolios, but challenge individual technology suppliers to demonstrate their commitment to true interoperability standards within their specific domains. While openness will always be a claimed vendor objective, different interpretations of SDx definitions may be anything but open. Vendors of SDN (network), SDDC (data center), SDS (storage) and SDI (infrastructure) technologies are all trying to maintain leadership in their respective domains while deploying SDx initiatives to aid market adjacency plays. So vendors that dominate a sector of the infrastructure may be reluctant to abide by standards that could lower margins and open broader competitive opportunities, even when the consumer would benefit from simplicity, cost reduction and consolidation efficiency.
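In spirit, software-defined approaches replace device-by-device configuration with a declared desired state that a controller continuously reconciles against; a hypothetical sketch (resource names and the controller interface are illustrative):

```typescript
// Hypothetical sketch of the SDx idea: declare desired state, then let a
// controller compute the actions needed to reconcile the infrastructure.
interface State {
  [resource: string]: number; // e.g. { "web-vlan-ports": 24 }
}

function reconcile(desired: State, actual: State): string[] {
  const actions: string[] = [];
  for (const [resource, want] of Object.entries(desired)) {
    const have = actual[resource] ?? 0;
    if (have !== want) actions.push(`set ${resource}: ${have} -> ${want}`);
  }
  return actions;
}

console.log(reconcile({ "web-vlan-ports": 24 }, { "web-vlan-ports": 16 }));
// [ 'set web-vlan-ports: 16 -> 24' ]
```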

Web-Scale IT
Web-scale IT is a pattern of global-class computing that delivers the capabilities of large cloud service providers within an enterprise IT setting by rethinking positions across several dimensions. Large cloud services providers such as Amazon, Google and Facebook are reinventing the way in which IT services are delivered. Their capabilities go beyond scale in terms of sheer size to also include scale as it pertains to speed and agility. If enterprises want to keep pace, they need to emulate the architectures, processes and practices of these exemplary cloud providers. Gartner calls the combination of all of these elements Web-scale IT. Web-scale IT looks to change the IT value chain in a systemic fashion. Data centers are designed with an industrial engineering perspective that looks for every opportunity to reduce cost and waste. This goes beyond redesigning facilities to be more energy efficient to also include in-house design of key hardware components such as servers, storage and networks. Web-oriented architectures allow developers to build very flexible and resilient systems that recover from failure more quickly.
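One common resilience pattern behind that quick recovery is retrying transient failures with exponential backoff; a minimal sketch, with illustrative attempt counts and delays:

```typescript
// Hypothetical sketch: retrying a flaky service call with exponential
// backoff, one building block of resilient web-scale systems.
async function withRetry<T>(
  call: () => Promise<T>,
  maxAttempts = 4,
  baseDelayMs = 100
): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    try {
      return await call();
    } catch (err) {
      if (attempt >= maxAttempts) throw err; // give up after the budget
      const delay = baseDelayMs * 2 ** (attempt - 1);
      await new Promise((r) => setTimeout(r, delay)); // back off, then retry
    }
  }
}

let calls = 0;
withRetry(async () => {
  calls++;
  if (calls < 3) throw new Error("transient failure");
  return "ok";
}).then((v) => console.log(`${v} after ${calls} attempts`)); // ok after 3 attempts
```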

Smart Machines
Through 2020, the smart machine era will blossom with a proliferation of contextually aware, intelligent personal assistants, smart advisors (such as IBM Watson), advanced global industrial systems and the public availability of early examples of autonomous vehicles. The smart machine era will be the most disruptive in the history of IT. New systems that begin to fulfill some of the earliest visions for what information technologies might accomplish (doing what we thought only people could do and machines could not) are now finally emerging. Gartner expects individuals will invest in, control and use their own smart machines to become more successful. Enterprises will similarly invest in smart machines. Consumerization versus central control tensions will not abate in the era of smart-machine-driven disruption. If anything, smart machines will strengthen the forces of consumerization after the first surge of enterprise buying commences.

3-D Printing
Worldwide shipments of 3D printers are expected to grow 75 percent in 2014, followed by a near doubling of unit shipments in 2015. While very expensive "additive manufacturing" devices have been around for 20 years, the market for devices ranging from $50,000 down to $500, with commensurate material and build capabilities, is nascent yet growing rapidly. The consumer market hype has made organizations aware of the fact that 3D printing is a real, viable and cost-effective means to reduce costs through improved designs, streamlined prototyping and short-run manufacturing.
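Taken at face value, the forecast compounds to roughly 3.5 times the 2013 unit volume by the end of 2015; a back-of-envelope check, using index numbers rather than real shipment figures:

```typescript
// Worked example of the forecast's compounding: 75% growth in 2014,
// then a near doubling in 2015, is about 3.5x the 2013 base.
const base2013 = 100; // arbitrary index, not a real shipment figure
const y2014 = base2013 * 1.75; // +75 percent
const y2015 = y2014 * 2; // near doubling
console.log(y2014, y2015); // 175 350
```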
