Blog
June 9, 2017

Machine Learning & Enterprise Software - Training Data Unlocks Optimization

The Missing Ingredient

Systems designed to optimize the world of work and the enterprise have struggled in the past to deliver the productivity gains they promised, because the cost of accuracy in such a complex system quickly becomes prohibitive and the data rapidly goes out of date.

Machine learning promises to deliver optimization under these conditions, but the missing piece is a great training data set. The problem is that such a data set is hard to capture without significant manual intervention from the user to record progress.

The Future of AI

Frank Chen from A16Z has set out a good summary of AI progress and its potential future implications for software, predicting that it will have an impact as wide as the relational database revolution. He identifies a number of areas that AI will make cheaper or more possible, including understanding the world via generative adversarial networks and the creation of content.

One of the most impactful uses he mentions is the ability to optimize complex systems.

This is an area of personal fascination for me, especially in the complex system of the enterprise and work. As a species, our ability to coordinate in large numbers sets us apart, but the ability to optimize within that system for specific decisions, such as what to work on next, remains a challenge.

To date, we have had to tackle this in a “bubble sort” manner: the effort involved in trying to optimize work in the enterprise has grown faster than the size of the enterprise problems themselves, leading to significant inefficiency. We only have to look at the overflowing email inbox being used as a poorly performing work instruction queue to see this in action.
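
To make the analogy concrete, here is a minimal, hypothetical sketch (not from the original post) that counts the pairwise comparisons bubble sort performs as the input grows. The effort grows roughly with the square of the problem size, which is the pattern described above: coordination overhead outpacing the size of the problem it is meant to solve.

```python
# Hypothetical illustration: bubble sort's effort grows roughly
# quadratically with input size, mirroring how coordination overhead
# can outpace the size of the enterprise problem it tries to solve.
import random


def bubble_sort_comparisons(items):
    """Sort a copy of `items` and return the number of pairwise comparisons."""
    data = list(items)
    comparisons = 0
    for i in range(len(data)):
        for j in range(len(data) - i - 1):
            comparisons += 1
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return comparisons


for n in (10, 100, 1000):
    work = bubble_sort_comparisons(random.sample(range(10 * n), n))
    print(f"problem size {n:>5}: {work:>8} comparisons")
# Ten times as many work items costs roughly a hundred times the effort.
```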

The long “compute” time of these systems means the optimization answer is already invalid on delivery, because the environment or work context has moved on (strategic planning in the 1980s, for example). This has become accepted as a background condition, something to manage around, as individuals and teams struggle to get context and self-steer what to tackle next.

To work around such inefficiency, work is packaged into smaller, agile chunks to minimize the impact of the accepted trend toward suboptimal choices. We also accept that messages get lost in the noise and that parties will not read their emails.

Using an optimization algorithm to help with these choices has significant potential. The idea of a central capability guiding these calculations was abandoned in the past because of failures, but those failures were largely down to poor human judgment rather than machine optimization.

The Devil’s in the Data

Three key developments have helped to unlock progress here: first, compute power and increasingly parallel GPU architectures; second, the steady improvement of algorithms; and third, the data.

The last component is easily overlooked. We would not let a toddler cross a main road, but we may be happy that a teenager has the capability. The difference between the two is a decade of training data on what is important in that system.

The trouble in “enterprise work” is collecting the specific data over and above general sentiment data sets. Systems that require the user to manually enter what they did and when are poorly adopted and largely inaccurate. If Google and Tesla had not had sensor-packed cars driven around roads for millions of hours, there would be no autonomous vehicles. How do we get the equivalent in the enterprise work problem space?

The principles we consider to help deliver the right data sets for learning are as follows:

  1. User Benefits for Sharing Data. Users on Facebook traded privacy for social value; Tesla drivers trade being tracked for the early benefit of hailing their cars. How can some early value be shown to the enterprise user in return for their data? This can be as simple as the benefit of autocomplete, which Google offered based on your and other users’ most common inputs.
  2. Capture the Data as Part of the Work. Avoiding users having to do anything extra to provide the data leads to far better adoption. Additional manual entry is also subject to bias and error. How can the data be captured as part of the usual flows of work, and how can existing golden sources be utilized?
  3. Start with the Data Model. When accruing data and benefit, it is critical to ensure, as far as possible, that from the outset you are capturing a data set that will allow you to make the inferences needed to deliver real user value. A brief sketch of this and the previous principle follows the list.
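
As a concrete illustration of principles 2 and 3, the sketch below models work-progress events captured as a side effect of actions users already take, against a data model defined up front. This is purely a hypothetical example; the class and field names are assumptions for illustration, not Cutover’s actual schema.

```python
# Hypothetical sketch of principles 2 and 3: capture training data as a
# side effect of normal work, against a data model designed up front.
# All names here are illustrative assumptions, not a real product schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Optional


@dataclass
class WorkEvent:
    """One observation captured automatically when a user acts in the tool."""
    task_id: str
    actor: str
    action: str                 # e.g. "started", "completed", "reassigned"
    timestamp: datetime
    duration_seconds: Optional[float] = None  # derived later, never typed in


@dataclass
class EventLog:
    """Append-only log that accrues a training set as work happens."""
    events: List[WorkEvent] = field(default_factory=list)

    def record(self, task_id: str, actor: str, action: str) -> WorkEvent:
        event = WorkEvent(task_id, actor, action, datetime.now(timezone.utc))
        self.events.append(event)
        return event


# The user simply clicks "start" and "complete" as they normally would;
# the resulting log is the training data, with no extra manual entry.
log = EventLog()
log.record("TASK-42", "alice", "started")
log.record("TASK-42", "alice", "completed")
```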

If we can capture the data efficiently and train a system to provide the insights we need to make better decisions, we have a chance of optimizing a problem that has traditionally been seen as “too hard”. We can address optimization directly rather than, at best, adopting tactics that provide flexibility around decision making that is often wrong.

Ky Nichol is the CEO of Cutover. 

Cutover, headquartered in London, leads the way in managing change events with the Cutover platform to enable the complex human orchestration, collaboration and communications required for critical change events and disaster recovery tests. Cutover is used globally by major financial and enterprise organizations to remove risk and increase efficiency. Customers include Capita, Deloitte, Nationwide Building Society, Barclaycard and Barclays Bank.
