How to scale AI with a high degree of customization




In a previous post, I outlined four challenges to scaling AI: customization, data, talent, and trust. In this post, I’m going to dig deeper into that first challenge of customization.

Scaling machine learning programs is very different from scaling traditional software because ML systems have to be adapted to fit each new problem you approach. As the data you’re using changes (whether because you’re attacking a new problem or simply because time has passed), you will likely need to build and train new models. This takes human input and supervision. The degree of supervision varies, and that variation is critical to understanding the scalability challenge.

A second issue is that the humans involved in training the machine learning model and interpreting the output require domain-specific knowledge that may be unique. So someone who trained a successful model for one business unit of your company can’t necessarily do the same for a different business unit where they lack domain knowledge. Moreover, the way an ML system needs to be integrated into the workflow in one business unit could be very different from how it needs to be integrated in another, so you can’t simply replicate a successful ML deployment elsewhere.

Finally, an AI system’s alignment to business objectives may be specific to the group developing it. For example, consider an AI system designed to predict customer churn. Two organizations with this same objective could need vastly different implementations. First, their training datasets are going to be structured differently based on how their Customer Relationship Management (CRM) system’s data is organized. Next, each organization may have different domain-specific knowledge of the impact of seasonality — or other factors — on the sale of specific products that is not readily reflected in the data; they would need to bring in humans to optimize those parameters.
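The two differences above can be made concrete in a small sketch. Here, two hypothetical organizations normalize differently structured CRM exports into one churn-feature schema, and a seasonality flag encodes domain knowledge that is not present in the raw data. All field names and the peak-season months are illustrative assumptions, not a real CRM schema:

```python
# Hypothetical sketch: two orgs with differently structured CRM data
# mapped to one shared feature schema for a churn model.

def normalize_org_a(record):
    # Org A's CRM exports flat fields (assumed names).
    return {
        "customer_id": record["id"],
        "monthly_spend": record["spend"],
        "months_active": record["tenure_months"],
    }

def normalize_org_b(record):
    # Org B nests billing data and reports tenure in days (assumed).
    return {
        "customer_id": record["account"]["number"],
        "monthly_spend": record["billing"]["avg_monthly"],
        "months_active": record["tenure_days"] // 30,
    }

def add_seasonality(features, month, peak_months=frozenset({11, 12})):
    # Domain knowledge the data alone doesn't carry: flag peak-season
    # rows so the model can learn different churn behavior there.
    features["is_peak_season"] = 1 if month in peak_months else 0
    return features

row_a = normalize_org_a({"id": "A1", "spend": 120.0, "tenure_months": 14})
row_b = normalize_org_b({"account": {"number": "B7"},
                         "billing": {"avg_monthly": 80.0},
                         "tenure_days": 400})
print(add_seasonality(row_a, month=12))
```

The point of the sketch: the model code can be shared, but the normalization and the seasonality knowledge are organization-specific and need humans who know that domain.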

And those are just the technical considerations. Other considerations arise on the business process side. An online digital services company will look at a customer churn problem on a near real-time basis, requiring its AI system to deal with streaming datasets and quick inference timelines. But a boutique apparel shop may have the luxury of working with monthly or quarterly churn numbers, so its AI systems can be made to work with batches of data rather than streaming datasets, considerably reducing the complexity of the deployment.
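The two deployment shapes can be sketched side by side. This is a minimal illustration, with a placeholder scoring rule standing in for a trained model; a real streaming system would add windowing and state management:

```python
# Illustrative sketch of batch vs. streaming churn scoring.

def score(customer):
    # Placeholder model: flag customers inactive for 60+ days.
    return 1.0 if customer["days_inactive"] >= 60 else 0.1

def batch_churn(customers):
    # Boutique retailer: score a monthly snapshot in one pass.
    return {c["id"]: score(c) for c in customers}

def streaming_churn(event_stream):
    # Online service: score each event as it arrives; a generator
    # keeps latency low without holding the full dataset in memory.
    for event in event_stream:
        yield event["id"], score(event)

monthly = [{"id": "c1", "days_inactive": 75},
           {"id": "c2", "days_inactive": 3}]
print(batch_churn(monthly))
for cid, risk in streaming_churn(iter(monthly)):
    print(cid, risk)
```

Same model, same objective, but the surrounding infrastructure (schedulers and warehouses versus stream processors and low-latency serving) differs substantially between the two shapes.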

Due to the unique technical and business process requirements each business faces, it’s clear that customization is key for any high-output AI deployment. Buying off-the-shelf solutions that are not optimized for your specific needs means a compromise on performance and outcomes.

The cost of having to “re-compose” AI systems every time, for every problem, for every customer is not just the cost of systems and human hours but also the cumulative cost of the time lag between starting a new AI project and being able to glean value from that implementation. This is why most AI Centers of Excellence set up in large organizations fail to deliver on their initial expectations, even though they’re a necessary part of building customized AI capabilities. On top of that, once an AI system is live and in production, maintaining, optimizing, and governing it is another ongoing challenge.

Nonetheless, it is possible to customize AI projects at scale. What it requires is a portfolio approach to your AI strategy. Here’s what that approach looks like:

1. Build a modular AI infrastructure layer for reuse and repeatability. Easier said than done, this means addressing model-building tools, libraries, and integrated development environments strategically. Left unchecked, the vast array of options and researcher/engineer preferences can lead to an architectural nightmare. Successful organizations I have worked with put a foundational infrastructure strategy in place, through a process of standardization and modularity. That means a standardized set of recommendations for training and inference computing infrastructure (cloud vs. on-premises, GPUs vs. CPUs), a standard set of libraries, model packaging recommendations, and API-level integration requirements for all ML development within the organization. The goal is to modularize to accelerate time to value through reuse, but without compromising flexibility.
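One way to make such a packaging standard concrete is a shared interface that every team's model must implement, so serving infrastructure never depends on a specific framework. The interface and class names below are hypothetical, a sketch of the idea rather than a prescribed design:

```python
# Hypothetical organization-wide packaging contract: any deployable
# model exposes the same minimal interface, whatever framework built it.

from abc import ABC, abstractmethod

class ModelPackage(ABC):
    """Contract every packaged model must satisfy."""

    @abstractmethod
    def predict(self, features: dict) -> float:
        """Return a score for one feature dictionary."""

    @abstractmethod
    def metadata(self) -> dict:
        """Owner team, framework versions, training data snapshot."""

class ChurnModelV1(ModelPackage):
    def predict(self, features):
        # Stand-in for a real trained model: risk grows with inactivity.
        return min(1.0, features.get("days_inactive", 0) / 90)

    def metadata(self):
        return {"owner": "growth-team", "framework": "sklearn==1.4"}

pkg = ChurnModelV1()
print(pkg.predict({"days_inactive": 45}))  # 0.5
```

Because every package looks the same from the outside, serving, monitoring, and integration code is written once and reused, which is exactly the modularity-for-speed trade the step above describes.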

2. Foster collaboration across the organization. This can be achieved with two specific steps. First, build an internal marketplace for all ML and data assets. This means any team across the enterprise can contribute their ML development for reuse, with clear instructions for use. In addition to being a great way to manage the outputs of AI investments, this also drives organizational knowledge-building and creates a forum where people can enhance each other’s innovations. Second, empower both your data scientists and non-technical users to rapidly experiment with and deploy different use cases. In addition to having a library of tools, techniques like AutoML may help here. Reducing the operational complexity of packaging ML models and lowering the barrier to experimentation are requirements for this.
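At its core, such a marketplace is a registry that teams publish to and search. The sketch below shows the minimal shape of that idea; the function names, asset names, and storage URI are all illustrative assumptions (a real deployment would use a model registry product rather than a dictionary):

```python
# Minimal sketch of an internal marketplace for ML assets.

marketplace = {}

def publish(name, owner, artifact_uri, instructions):
    # Teams register a packaged model plus clear usage instructions.
    marketplace[name] = {
        "owner": owner,
        "artifact_uri": artifact_uri,
        "instructions": instructions,
    }

def discover(keyword):
    # Simple keyword search over asset names and their instructions.
    return [name for name, meta in marketplace.items()
            if keyword in name or keyword in meta["instructions"]]

publish("churn-scorer-v2", "growth-team", "s3://models/churn/v2",
        "Send normalized CRM rows; returns churn probability.")
print(discover("churn"))  # ['churn-scorer-v2']
```

The usage instructions attached to each entry are what turn a file store into organizational knowledge: another business unit can evaluate and reuse the asset without tracking down the original authors.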

3. Time-bound your AI experiments. We’ve all heard about the dire success rates for ML and AI projects. Beating these odds requires a healthy experimental environment focused on innovating around new problems and business use cases with a rapid path to validating hypotheses (deciding which meet the criteria to get into production). It’s critical to plan these experiments in short development sprints, with very clear criteria that can be continuously evaluated to decide whether it makes sense to move forward with the project. One approach here is to evaluate all of your AI projects/use cases across two vectors — the expected business value and the time it takes to implement in production (due to complexity in data acquisition, domain expertise needed, etc.) — and use this as a guide to prioritize projects across a timeframe. It’s important to clearly define thresholds around quantified expected business value, cost/time to get into production, and availability of data and expertise.
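The two-vector prioritization above can be sketched as a simple scoring rule: apply hard thresholds first, then rank the survivors by value per unit of time-to-production. The weights, thresholds, and project figures below are illustrative assumptions, not a prescribed formula:

```python
# Sketch of two-vector project prioritization with hard thresholds.

def priority_score(expected_value, months_to_production,
                   min_value=100_000, max_months=6):
    # Thresholds first: drop projects below the value floor or past
    # the time ceiling before ranking anything.
    if expected_value < min_value or months_to_production > max_months:
        return 0.0
    # Higher expected value and a shorter path to production rank first.
    return expected_value / months_to_production

projects = {
    "churn-model": (500_000, 3),
    "doc-classifier": (150_000, 8),   # exceeds the time ceiling
    "demand-forecast": (300_000, 2),
}
ranked = sorted(projects,
                key=lambda p: priority_score(*projects[p]),
                reverse=True)
print(ranked)  # ['churn-model', 'demand-forecast', 'doc-classifier']
```

Re-running this scoring at the end of every sprint, with updated estimates, is one way to make the "continuously evaluated" criteria operational rather than a one-time planning exercise.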

Customization is critical for getting results with AI — but it doesn’t have to slow you down. If you put the right modular infrastructure in place and if business units across your organization can align to deliver AI initiatives with a focus on rapid iteration and experimentation, customization can be the great accelerator and the ultimate key to achieving AI at scale.

Ganesh Padmanabhan is VP, Global Business Development & Strategic Partnerships at BeyondMinds. He is also a member of the Cognitive World Think Tank on enterprise AI.
