By Emily Burns
November 30, 2012 02:00 PM EST
Efficiency may be the most commonly used term in enterprise software marketing - that, or "ensure." And not without reason: efficiency is one of the key value propositions of most enterprise software, from collaboration tools to productivity tools to integration tools and beyond. At a certain point, though, the gains still to be had from efficiency grow smaller and smaller, and matter less and less to the business.
This is driving a shift in focus from efficiency to effectiveness. At times these goals coincide, but in many cases they do not - the most effective allocation of resources may not be the most efficient, at least in the short term. Managing an organization with an eye toward effectiveness can be a challenge, because business metrics are often tied to processes and other "discrete" pieces of work, and to how quickly and efficiently they are completed. As a result, when an organization shifts to managing for effectiveness rather than efficiency, the metrics used to evaluate success typically have to be "leveled up" - that is, raised to the level that really matters to the business. An example of this leveling up occurred several years back, when customer service organizations changed their focus from shortening call times to increasing the rate of first-call resolution. Resolving a customer issue on the first call may lengthen that call, but over the long term it is the more effective approach: it may reduce the overall expenditure of the Customer Service Representatives' aggregated time, and it will certainly result in more satisfied customers.
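To make the contrast between the two metrics concrete, here is a minimal sketch using a hypothetical call log: one customer needs three short calls to get an answer, while another gets a single, longer first-call resolution. The record format and numbers are invented for illustration.

```python
# Hypothetical call log: (customer_id, call_minutes, resolved_on_this_call)
calls = [
    ("c1", 3, False), ("c1", 3, False), ("c1", 4, True),  # three calls to resolve
    ("c2", 7, True),                                      # resolved on the first call
]

# Efficiency metric: average call length (rewards short, repeated calls)
avg_call_minutes = sum(m for _, m, _ in calls) / len(calls)

# Leveled-up metric: first-call resolution rate per customer
first_call = {}
for cust, _, resolved in calls:
    first_call.setdefault(cust, resolved)  # keep only the first call's outcome
fcr_rate = sum(first_call.values()) / len(first_call)

# Total CSR time per customer shows why the longer single call still wins
total_minutes = {}
for cust, minutes, _ in calls:
    total_minutes[cust] = total_minutes.get(cust, 0) + minutes
```

Here the single seven-minute call looks worse on average call time, but the repeat caller consumes ten minutes of aggregated CSR time - the pattern that only the leveled-up metric exposes.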
Operationalizing this "leveling up" is not an easy task, and most of the greatest challenges relate to data. First, an organization must recognize that its current efficiency-based metrics are not serving it well, and the only way to know that is to capture the data that makes the point. In the CSR example above, that means being able to tell that a single customer has called multiple times. But because a new case is typically created for each call, the data doesn't tell the story of one customer calling repeatedly and taking up the time of many different CSRs; instead, it tells of ten individual calls, each of which lasted three minutes. The problem is actually worse than this, because more often than not a customer will try to resolve the issue through multiple channels - phone, Web, email, chat. Because the data is so fragmented, organizations typically discover such broken practices through a series of irate letters and phone calls or, in the worst case, through a drop-off in customers. Whatever the means of notification, at some point it becomes clear to the organization that it has not only a problem of misaligned incentives but also a data problem. It then turns to the data to understand what has been going on and how to manage more effectively.
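One way to surface the fragmented picture is to pull the per-channel case records into a single view keyed by customer. The channel data below is hypothetical, and in practice the hard part is extracting the records and matching customer identifiers across systems first; this sketch only shows the final unification step.

```python
from collections import Counter

# Hypothetical case records exported from separate channel systems
phone_cases = [{"customer": "alice@example.com", "minutes": 3},
               {"customer": "alice@example.com", "minutes": 3}]
web_cases   = [{"customer": "alice@example.com", "minutes": 5}]
email_cases = [{"customer": "bob@example.com",   "minutes": 2}]

# Unified view: how many times has each customer contacted us, on any channel?
contacts = Counter()
for channel in (phone_cases, web_cases, email_cases):
    for case in channel:
        contacts[case["customer"]] += 1

# Repeat contacts are invisible to any one channel's system on its own
repeat_customers = {c: n for c, n in contacts.items() if n > 1}
```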
The story can likely be pieced together from the data, but the organization must still make sure it is asking the right questions - if "average call time" is not the right metric, what is? Once the right questions have been identified, it's time to turn to the data. Because in most organizations the data being captured was not set up with these higher-level goals in mind, getting the right answer requires some work. The data across the various systems must be integrated and federated - all of the necessary data must be extracted from systems inside and outside the organization and loosely coupled so that it tells the whole story. It also requires cleansing and rationalizing the data, so that data about the same thing captured in different systems stays in sync.
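As a sketch of what "rationalizing" can mean in practice, here is a crude canonicalization that groups records referring to the same company despite different spellings across systems. The suffix list and sample names are assumptions for illustration; production matching would rely on proper MDM tooling and fuzzier matching rules.

```python
import re

def normalize(name):
    # Lowercase, strip punctuation, then drop common corporate suffixes
    n = re.sub(r"[^\w\s]", "", name.lower())
    for suffix in ("incorporated", "inc", "corporation", "corp", "llc"):
        n = re.sub(rf"\b{suffix}\b", "", n)
    return " ".join(n.split())

# The same company as recorded in three different systems, plus one other
records = ["Acme Corp.", "ACME Corporation", "Acme, Inc.", "Globex LLC"]
groups = {}
for r in records:
    groups.setdefault(normalize(r), []).append(r)
```

After grouping, the three Acme variants collapse to one canonical key, so downstream metrics count one customer rather than three.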
It may be that even after all of the data has been rationalized and made accessible, the crucial data needed to manage the business more effectively is not currently being captured. This is a relatively small problem: with practically everything digitized and virtualized, there is very likely a way to capture the data an organization seeks. A common scenario is that the data is being captured, but in an off-premise, cloud-based application or in a partner's application; or the data may be embedded in activities carried out on social networks. In all of these cases, new technology makes the data accessible and manageable - and, as a result, so are the answers to the real business questions of how to manage more effectively.
Data integration tools make it possible to integrate and federate data from cloud-based applications with on-premise systems, and to incorporate data from third parties. The ability to use Hadoop MapReduce to take in and manage unprecedented volumes of data from social networks and other non-traditional sources makes it possible to truly have, manage and analyze all of your data. New social MDM technology means that you can tap into the data embedded in interactions on social networks and use it to create an even more fully fleshed-out golden record for your customers.
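As a toy illustration of the golden-record idea - the field names, sources and precedence rule here are invented, not the behavior of any particular MDM product - per-source profiles for one customer can be merged so that more-trusted sources override less-trusted ones on conflicting fields while each source still contributes its unique attributes:

```python
# Hypothetical per-source views of the same customer, least trusted first
social  = {"name": "Jane P.",        "handle": "@janeq", "interests": ["cycling"]}
billing = {"name": "Jane Q. Public", "phone": "555-0100"}
crm     = {"name": "Jane Q. Public", "email": "jane@example.com"}

# Merge in ascending order of trust, so later (more trusted) sources win
golden = {}
for source in (social, billing, crm):
    golden.update(source)
```

The social profile contributes the handle and interests that no internal system has, while the CRM's version of the name wins the conflict - a more fleshed-out record than any single source holds.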
In truth, it is the gains we have made in efficiency, in finding ever-more efficient ways to access, store and analyze data that make this turn towards effectiveness possible. Without being able to do all of the above in a time- and cost-efficient manner, it is not possible to use the data to manage more effectively.
In many ways, this is what the hype about Big Data is all about. The unarticulated and implicit excitement about Big Data is really about being able to take advantage of the data in which we are all awash and use it to manage our organizations more effectively than ever before. Managing for effectiveness looks different in every industry. In retail, managing for effectiveness is understanding customers - catering to them when, where, how and with what they want. In pharma, managing for effectiveness is limiting physician wash out, getting more clinical trial data more quickly, and being able to complete or pull the plug on trials faster based on the results of that data. In every industry, managing for effectiveness means using the power of data to make the best business decisions possible, getting a true return on data.