Five Considerations when Getting Started with Artificial Intelligence

Olly Downs, SVP, Data, Analytics & Machine Learning, BARK

You are the enterprise CIO, you have got your business onto the cloud, and the fundamentals are running.  The business is pressing you on when it will start realizing the “big benefits” your vendor promised.  Your leadership team asks the hard-hitting question: “When can we bring AI capabilities to our platform?”

Here are five questions to ask yourself before you commit to an answer.

1. Are there clear use cases for AI in my business?  Are there reasons to believe AI will solve those use cases well?

Whatever you do – don’t deploy AI capabilities for the sake of it.  Building and deploying an AI-enabled use case is a heavy lift.  Developing and implementing AI capabilities, even using “off the shelf” services, is an iterative process, with higher uncertainty on when and how well it will deliver.

If the use case is well understood, decouple the overall experience being built from the AI capability itself, and stand up a business-logic-driven capability taking the same inputs and generating similarly structured output as your intended AI-driven solution.  Is that good enough?  Baseline the performance of that capability - how much better would the AI-driven version of the capability have to perform to make it worthwhile?  Your product managers should help you answer this question and work with your business stakeholders to develop a roadmap of valuable use cases that can be delivered.
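As a sketch of that decoupling, consider a rule-based recommender with the same interface as the intended AI-driven one, plus the lift the AI version would have to clear. All names and data here are hypothetical, not BARK's actual systems:

```python
# Hypothetical baseline: a business-logic recommender with the same
# inputs and outputs as the planned AI-driven version.

def rule_based_recommend(user, catalog, k=3):
    """Recommend the k most popular items the user hasn't bought."""
    candidates = [item for item in catalog if item["id"] not in user["purchased"]]
    ranked = sorted(candidates, key=lambda item: item["popularity"], reverse=True)
    return [item["id"] for item in ranked[:k]]

def lift(model_metric, baseline_metric):
    """Relative improvement the AI model must deliver to justify its cost."""
    return (model_metric - baseline_metric) / baseline_metric

catalog = [
    {"id": "chew-toy", "popularity": 0.9},
    {"id": "tug-rope", "popularity": 0.7},
    {"id": "treats", "popularity": 0.8},
]
user = {"purchased": {"chew-toy"}}
print(rule_based_recommend(user, catalog, k=2))  # ['treats', 'tug-rope']
```

Because the baseline and the model share one interface, swapping the AI version in later requires no change to the surrounding experience.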

During my first year at BARK, we identified an opportunity to transform the business metric we use to optimize our product recommendations to better align with our objective to improve average-order-value and broaden engagement with our product lines through cross-sell.  This initiative fit well with our highest level strategy as a business and resolved merchandising challenges our stakeholders were having in trying to achieve these objectives through manual efforts.

2. Do I have the team to make this happen?

Four disciplines are required in a team focused on delivering an AI use case end-to-end: shaping the use case and its requirements, developing the capability, implementing it at scale, and operating and optimizing it.

1. Data Science –  Businesses tend to overload the asks of the data scientist, which can result in challenging operationalization, scale, monitoring or maintenance, mismatches in delivery of requirements, and challenges with business communication.  Keep your data scientists focused on the process of discovering the data signals and appropriate algorithmic approaches to deliver a great model.

2. Product Management – This skillset is critical for identifying appropriate use cases, detailing requirements and determining a roadmap for developing solutions.  Your product manager is critical in helping you and the team avoid pitfalls, like selecting a use case that is ill-suited to the application of AI, and successfully deliver business impact.

3. Data Engineering – For a successful enterprise AI initiative, get the data driving a model flowing efficiently through the enterprise.  The goal is to present faithful representations of the required model feature data, with appropriate latency, converging from multiple different data systems where necessary, and to keep them consistent with the data and signals used for human decision-making in the business.

4. Machine Learning Engineering – Once you have developed a model that is validated to solve an important, well-formulated business problem, and have the required data assets flowing through your systems, the ML engineer can further optimize your AI model; wire up model inference; monitor and measure input data, model scores and any customer or system feedback/action; and implement model retraining/optimization and validation at a cadence appropriate to the use case.

At BARK, the critical piece for us has been augmenting our Product Management bandwidth with resources familiar with the non-linear nature of enterprise AI initiatives and playing a critical role in helping the team and business prioritize the “we could do” use cases into “we must do” use cases.

3. Do I have the tools to make this happen?

Your cloud vendor likely sold you on their end-to-end AI capabilities and even their off-the-shelf AI-enabled services.  However, there are some capabilities you need that may not be part of your cloud platform’s core services and that are critical to buy or build internally:

 Experimentation – In addition to measuring and monitoring model metrics, you will want to measure the business impact of your model and test the business impact of model improvements or alternative approaches to the use case side-by-side in a randomized A/B testing setting.  In some enterprises, experimentation platforms are already in place for testing and evaluating changes in customer or user experience.
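The core of such a platform is randomized, sticky assignment. A minimal sketch (the experiment and variant names are illustrative):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "model_v2")):
    """Hash the user and experiment IDs into a variant bucket.

    Deterministic hashing keeps each user in the same arm across
    sessions while splitting traffic roughly evenly between arms.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

With assignment in place, you can compare a business metric such as average order value between arms using a standard significance test, rather than judging the model on offline metrics alone.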

 Measurement and Monitoring (“ML Observability”) – Your ML engineer will need the tooling to monitor and detect changes in the statistics of the features going into the model, since such changes may impact the quality of its inferences.  They will also want to monitor those inferences and validate ongoing model accuracy – all to determine whether the currently live model continues to be effective in delivering on the use case and whether model retraining or renewed exploratory model development is required.
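A toy sketch of one such check, assuming a simple z-style test on a single feature's mean (real observability tooling tracks many statistics per feature and per model score):

```python
from statistics import mean, stdev

def feature_drift(training_values, live_values, threshold=3.0):
    """Flag drift when the live mean falls more than `threshold`
    standard errors from the training-time mean of the feature."""
    mu, sigma = mean(training_values), stdev(training_values)
    standard_error = sigma / (len(live_values) ** 0.5)
    z = abs(mean(live_values) - mu) / standard_error
    return z > threshold
```

Alerts from checks like this are what tell the team a retraining cycle, or a return to exploratory model development, is due.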

 Feature Store – Many cloud vendors’ “machine learning operations (MLOps)” platforms make it easy for your ML engineers to deploy a model as a container-based and easily scalable web service. The part they may skip over is the management and orchestration of the data flows into the model, and keeping the feature computation logic consistent between model training and model operation/inference.  A feature store will allow your team to manage and reuse feature computations and logic in a single place, and take away a significant source of deployment effort, deployed-model bugs and impaired performance for your ML and data engineers.
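The essential idea can be sketched in a few lines: each feature's computation is registered once under a stable name, then reused by both the offline training pipeline and the online inference path. The feature names and data shapes here are hypothetical:

```python
# Hypothetical minimal feature store: logic registered once, shared by
# training and inference so the two cannot silently drift apart.
FEATURES = {}

def feature(name):
    """Decorator registering a feature computation under a stable name."""
    def register(fn):
        FEATURES[name] = fn
        return fn
    return register

@feature("order_count_30d")
def order_count_30d(user):
    return sum(1 for o in user["orders"] if o["age_days"] <= 30)

@feature("avg_order_value")
def avg_order_value(user):
    orders = user["orders"]
    return sum(o["value"] for o in orders) / len(orders) if orders else 0.0

def feature_vector(user, names):
    """Build a model input row; identical at training time and inference time."""
    return {name: FEATURES[name](user) for name in names}
```

A production feature store adds storage, versioning and low-latency serving on top of this registry idea.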

4. Do I have the data to make this happen?

The signals and data flows that an AI model depends on in production are one of the most challenging and most recently developed areas of supporting technology.  Data scientists and ML engineers may have trained a model with great performance when validated “offline,” but when they get ready to expose that model as a service for inference, it will likely require a different flow of data.

In developing a model, the data scientist will have synthesized different data assets related to the business problem and determined those required to produce a model with high accuracy and explainability – typically based on historical data.  Depending on the use case, this historical data may have been synthesized to represent an operationalized use case in real time. For example, you want to make recommendations to an app user that are contextualized both by information they have shared with you about prior purchases and product interests and by how their current app session has been evolving. In this case, you need to be able to capture, accumulate and process session behavior in real time and make it available to the model. Your data engineers will need to help translate your data scientists’ training data development into operationalizable data flows that converge in your feature store.
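The session side of that example can be sketched as an accumulator that merges in-session behavior with the historical profile into one feature payload. The field names are illustrative, not an actual production schema:

```python
from collections import Counter

class SessionFeatures:
    """Accumulate in-session behavior and merge it with a historical
    profile into a single feature payload for the model."""

    def __init__(self):
        self.category_views = Counter()

    def record_view(self, category):
        """Called on each product view as the session evolves."""
        self.category_views[category] += 1

    def to_features(self, profile):
        """Combine live session signals with prior-purchase history."""
        top = self.category_views.most_common(1)
        return {
            "past_purchase_count": len(profile["purchases"]),
            "session_view_count": sum(self.category_views.values()),
            "session_top_category": top[0][0] if top else None,
        }
```

In production this accumulation would live in a low-latency store so that the feature payload is available at inference time, within the same feature store the training pipeline uses.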

At BARK, we’ve doubled down in this area, building out our Analytics Engineering and Business Intelligence resourcing and tools to deliver a semantic layer of well-curated data and metrics in support of every major function of our business, from Marketing to Supply Chain.  We view this as an investment in enabling great human decision-making in the near term and the acceleration of AI use cases in the long term.

5. Can I deliver business impact quickly, and set the business up for repeatable business impact with multiple use cases?

It’s common for an enterprise’s first handful of AI-enabled use cases to be delivered as “solutions” – often a wobbly stack of software and manual process, highly crafted to the use case at hand, with a high burden to operate and maintain.  This can mean the next use cases are as hard as or harder to develop and implement than the currently-live ones – since current resources can be occupied with operation and maintenance. 

While getting a first use case to show business value is important, set the objective that your team will pursue a roadmap delivering three or more high-impact, distinct business use cases.  That goal will motivate the incremental effort of leveraging and building more platform-like capabilities, rather than a single-use solution, along the way to delivering your first use case.

At BARK, we’ve used three data science use cases (product recommendation, demand forecasting and natural language processing) to help us understand the breadth of needs from our AI Platform capabilities. This has shown us the path to move from our first AI solution, “add-to-box” product recommendation, to a platformed capability for operating ML models.
