How Infrastructure as Code Automates IT Operations in the Application Delivery Lifecycle

Infrastructure as Code (IaC) enables infrastructure management through a software-defined layer

As more technologies become software-defined, their adoption demands a significant shift in thinking about how a business organizes its value stream. This shift may be difficult, but it enables you to anticipate changes and position your business to react when software-defined technologies emerge.

Recently a new set of tools and practices has emerged to create and manage environments. Known as Infrastructure as Code (IaC), it enables infrastructure management through a software-defined layer.

Application Delivery: Getting It Right
All applications run inside what we call an "environment" - a stack of hardware and software components built to support the application. This stack includes networking, storage, virtual machines, operating systems, databases, libraries, dependencies and the application itself. Building an environment means bringing up that stack, provisioning and configuring each component according to the requirements of the application.

All of this is done to serve the application, which demands that things be "just so." The processes used to get an environment "just right" (and keep it that way) have been the subject of much analysis and design over the years, becoming part of the body of work known as IT Service Management (ITSM).

Infrastructure as Code
IaC replaces many of the ITSM processes involved in the deployment and ongoing management of the complete hardware-software environment in which an application will run.

While IT professionals have always used some automation such as scripting to help deploy environments, IaC is a recent development characterized by use of the following:

  1. Code - At the core of IaC is the code: definition files that declare the specification for each component of the environment and how it is configured. These files might be written in YAML or JSON and are checked into a version control system like Git (see the sketch after this list).
  2. Automation tooling - Specialized tools read the definition files and use them to construct the environment and configure components according to specification.
  3. Application Programming Interfaces (APIs) - Automation tools perform the actions described in the definition files against APIs. Not only will the automation tools use APIs to provision and configure the components of the environment being managed, but the tool itself will be programmable through its own API.
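To make this concrete, here is a minimal sketch of what such a definition file can look like, written as CloudFormation-style YAML. It is only an illustration: the logical names, CIDR ranges and AMI ID are hypothetical placeholders, not values from any real environment. An automation tool reads a file like this and calls the cloud provider's APIs to build exactly what it declares.

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal sketch of an IaC definition file (all values hypothetical).

Resources:
  # A private network for the application
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16

  # A subnet inside that network
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24

  # A single virtual machine to host the application
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # hypothetical AMI ID
      InstanceType: t3.micro
      SubnetId: !Ref AppSubnet
```

Checked into Git, a file like this becomes the single, reviewable source of truth for that slice of the environment.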

The development of powerful automation tools, along with the widespread proliferation of APIs, has allowed IaC to emerge as a very effective means of managing IT operations processes.

Rather than working with GUIs, scripts and command-line interfaces to perform actions, we are able to work with documents (code) that exhaustively describe the environment. These are easily shared, reviewed and versioned.

With IaC, the actions at each step are executed by the automation tooling rather than performed by hand, and are therefore much less prone to human error. The following steps demonstrate how IaC is used in practice.

Putting It to Work
While setting up an environment involves many different components and services, the work can be grouped into three distinct steps:

  1. Provisioning - The first step is to provision the foundational infrastructure systems - servers, networks, databases and storage. Provisioning tools perform this task and are usually supplied by the infrastructure vendor. For example, Amazon provides CloudFormation to create VPCs (networks) and spin up EC2 instances (servers); likewise, Azure gives us Resource Manager to create Network Security Groups and bring up virtual machines. There are also provisioning tools like Terraform that are vendor-agnostic, making it easier to switch between infrastructure vendors.
  2. Configuration - The second step is to configure the provisioned components; configuration management tools accomplish this task. This is a broader set of tools used to perform operations like transferring files, installing services, configuring settings and so on. There are many tools in this space, but the "Big Three" are Puppet, Chef and Ansible. Each has its own advantages and disadvantages, but they all accomplish the same goal: configure the components with the required dependencies and settings (a minimal playbook sketch follows this list).
  3. Deployment - The third step is to deploy the application. More and more this involves the use of container technologies like Docker. Container technologies are a recent advancement in IT that deserve their own article and explanation; suffice it to say that a container allows an application and its dependencies to be wrapped up into a package that is easy to deploy into its own isolated space on a machine. Containers provide an additional layer of abstraction on top of provisioning and configuration.
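As referenced in step 2, here is a minimal sketch of a configuration-management definition, written as an Ansible-style playbook. The host group, package names and template file are hypothetical; a real playbook would reflect the application's actual dependencies.

```yaml
# Hypothetical playbook: configure the servers provisioned in step 1.
- name: Configure application servers
  hosts: webservers              # hypothetical inventory group
  become: true
  tasks:
    - name: Install runtime dependencies
      ansible.builtin.apt:
        name:
          - nginx
          - python3
        state: present
        update_cache: true

    - name: Render the application configuration from a template
      ansible.builtin.template:
        src: app.conf.j2         # hypothetical template kept in the same repo
        dest: /etc/nginx/conf.d/app.conf

    - name: Ensure the web service is running and enabled on boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true
```

Puppet and Chef express the same intent in their own syntax; in every case the desired state lives in versioned files rather than in someone's memory of what was typed on a server.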

The tooling landscape used to perform these steps is highly fragmented, and strategies often use an opinionated, best-of-breed approach with a different tool at each of the three steps. For example, a team might use CloudFormation to set up the virtual machines and connect them to the network, Chef to configure and secure the virtual machines, and Docker to load the application into an isolated container.
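Continuing that example, the deployment step might be described in a Compose-style file that Docker reads to run the application in its own isolated container. The image names, port and connection details below are hypothetical placeholders.

```yaml
# Hypothetical compose file: run the application and its database in containers.
services:
  app:
    image: registry.example.com/acme/app:1.4.2   # hypothetical application image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://db:5432/appdb     # hypothetical connection string
    depends_on:
      - db

  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example-password        # hypothetical secret for the sketch
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```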

There is no "correct" way to set up your automation stack - this will depend on the limitations of the tools and the needs of the organization - but it should be understood that adopting this technology will invariably change the way the organization manages IT work.

Anticipating Change
IaC presents us with a large and increasingly complex software-defined layer that is used to perform infrastructure management functions. It is important to note, however, that what becomes software-defined here is not the infrastructure itself (although software-defined infrastructure is a prerequisite for IaC); rather, it is the systems and processes used to manage that infrastructure, such as asset management, change management and configuration management. These functions can now be expressed in code.

This can create major challenges for traditional service management strategies - the skills, roles, responsibilities, methods and practices used to manage infrastructure change considerably. On the other hand, it creates great opportunities by providing a catalyst to launch DevOps initiatives and increase the scope of Agile practices across the value stream.

By understanding IaC as a software-defined technology, we can gain insight into the impact on the enterprise.

Redefining IT
Because IaC is a software-defined technology, we should expect it to have a significant impact on the business and how it is organized. Looking at it through this lens gives us an understanding of what to expect as we start to adopt the technology in the enterprise. Analysis using three key attributes of software definition sheds some light on those impacts:

1. Abstraction
IaC requires that all operations on infrastructure be declared in definition files and executed using an automation tool. This automation layer provides an abstraction from operations like deploying, configuring and managing components. This means that these operations shift left in the software supply chain - they are performed earlier and all together, rather than sequentially at the final stages of activity.

With this abstraction, the skill specialization for managing infrastructure shifts from traditional vendor- and application-specific sysadmin skills to the ability to write code and think through the abstraction. The roles and responsibilities for managing infrastructure can move to anyone proficient in writing code.

2. Control
Since infrastructure can be fully documented in code, we can "read" the environment - see everything that was deployed and how it was configured by simply reading the definition files.

The focus of service management and control systems therefore shifts to managing the automation tooling and definition files. For example, use of a version control system to manage the definition files brings about the idea of "versioned infrastructure," where the change record is reflected in the version history.

Similarly, change management can be accomplished through code reviews performed individually when the changes are checked in, rather than putting batched changes before a change review board.
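One possible sketch of that workflow, assuming purely for illustration that the definition files are Terraform code hosted on a platform such as GitHub: a small pipeline generates a preview of every proposed infrastructure change on the pull request itself, so reviewers approve a concrete diff rather than a ticket in front of a change board.

```yaml
# Hypothetical CI workflow: each infrastructure change arrives as a pull
# request, and the automation tool previews its effect before merge.
name: infrastructure-change-review
on:
  pull_request:
    paths:
      - "infrastructure/**"

jobs:
  plan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - name: Preview the proposed change
        working-directory: infrastructure
        run: |
          terraform init -input=false
          terraform plan -input=false
```

The version history of the repository then doubles as the change record for the infrastructure itself.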

3. Mutability
The environment is exhaustively described in definition files, which are not dependent on infrastructure attributes. Ideally, the infrastructure itself is "immutable" - no changes are made directly to the infrastructure once it is deployed. Infrastructure components are locked down and not directly accessible to humans - changes are deployed only with the automation tooling.

This has two impacts: first, it commoditizes the deployment process. The same definition file can be used to bring up one server or a hundred. Additional assets can be deployed as needed, just-in-time, and torn down when they are no longer required. Elastic assets mean infrastructure is always "right-sized."
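A hedged sketch of that idea, again in CloudFormation-style YAML with hypothetical identifiers: the number of servers is just a parameter, so the very same file describes one instance or a hundred.

```yaml
Parameters:
  ServerCount:
    Type: Number
    Default: 1                           # set to 100 for a hundred servers

Resources:
  AppLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: ami-0123456789abcdef0   # hypothetical AMI ID
        InstanceType: t3.micro

  AppServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "100"
      DesiredCapacity: !Ref ServerCount
      VPCZoneIdentifier:
        - subnet-0abc1234de56789f0       # hypothetical subnet ID
      LaunchTemplate:
        LaunchTemplateId: !Ref AppLaunchTemplate
        Version: !GetAtt AppLaunchTemplate.LatestVersionNumber
```

Deleting the stack tears those assets down again just as declaratively.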

Second, the underlying infrastructure becomes modular. We can use our definition file to bring up our components on-premise, or in the cloud - on AWS or on Azure. While there are some dependencies involved in the automation tooling, overall the infrastructure layer has few enough hardware or vendor dependencies that operators have more freedom to choose where to host their infrastructure.

Adoption Through Transformation
For a business that relies on a software supply chain to deliver value to customers, IaC represents a significant opportunity to increase efficiency, lower costs and reduce risk.

Further, IaC has emerged as a vital driver in transforming and modernizing the software supply chain. Digital-first businesses like Amazon, Netflix and Facebook live and breathe software-defined infrastructure. This is because they have been able to build their businesses, cultures and value streams in a greenfield without the encumbrance of legacy systems and practices.

As with all software-defined technologies, incumbent enterprises that have a well-established value stream will have difficulty with wide-scale adoption. Some of the challenges they face include:

  • Culture: All software-defined technologies present the significant challenge of redefining roles, shifting responsibilities and altering work structures. Part of the transformation to adopt these technologies, therefore, involves a cultural change across the technical side of the organization. DevOps, as a culture, has emerged from this, and embraces these new roles and responsibilities. This needs to be nurtured and allowed to grow through a process of sharing and collaborating.
  • Practices: IT professionals are typically accustomed to working within project-based structures like PMP and PRINCE2. IaC, however, calls for software development practices like Agile to manage work. Implementing a software-defined infrastructure will have the effect of proliferating management techniques like Scrum and Kanban. This is a great opportunity, but equally a challenge to get everyone on board and trained on the new methodologies and the tools they use.
  • Value: A fundamental characteristic of software-defined technologies is that they recast management strategies. With IaC, the IT supply chain is altered beyond recognition, requiring new thinking about service management strategies. The changes in where, when and how infrastructure will be managed and deployed mean organizations need to undergo a paradigm shift in thinking about how value delivery is organized.

In order to fully adopt software-defined infrastructure, incumbent enterprises will need to be ready to undergo a transformation. There needs to be a commitment and willingness to invest in new tools, new skills and new relationships.

Automation of any kind is a threat to the status quo, and change often brings on a crisis of identity for those who are content to stay the same. In this case, IT automation through IaC challenges the processes and systems used by ITSM. However, IT leaders will do well to be open-minded. Through a process of experimentation and discovery we can ease the adoption of this software-defined technology and use it as a catalyst to grow the value of IT across the business.

About the Author

John Rauser is the IT Manager at Tasktop Technologies, a global enterprise software company. He also serves as VP Operations on the board of the Project Management Institute - Canadian West Coast Chapter, providing leadership and expertise on technology issues. He has a passion for discussing the business impacts of technology and analyzing strategies for managing IT.
