Writer's pictureChelsea Wilkinson

Scaling AI pragmatically, particularly for PE-owned companies & investors

I wonder how many headlines exalting the virtues and advancements of AI you’ve read this past week?

 

I wonder how many of those headlines were actually written by AI! But that’s an article for another day.

 

Rather, my observation is that it’s hard not to get caught up in the hype and sheer potential of AI – particularly generative AI.

 

Which is why McKinsey’s recent article “Moving past generative AI’s honeymoon phase” resonated with our experience that there are several ‘hard truths’ companies face in the transition from experimental AI pilots to scalable operations.

 

And – like McKinsey – we strongly endorse the need to move beyond the initial allure of AI's potential to a more pragmatic focus on production readiness and value creation.

 

Below are our approaches to mitigating McKinsey’s seven hard truths of scaling AI:

 

1. Focus on Impactful Use Cases

Instilling an R&D mindset is vital when developing AI use cases, so that non-performing pilots are rapidly eliminated and resources (re)allocated to high-impact initiatives that promise substantial returns.

 

Also, as strategic data consultants, we advocate for companies to craft a holistic Data & AI Strategy to guide this process of prioritisation and provide the ‘north star’ for KPI impact measurement. After all, simply delivering a data model doesn’t shift economic outcomes; that only happens when behaviours are changed, decisions made, and actions taken.

 

2. Integration over Individual Parts

As generative AI solutions scale, the complexity of integrating and orchestrating various components increases exponentially. These components include multiple models, vector databases, prompt libraries, and applications. Each new addition creates a ripple effect, adding to the overall system complexity. 

 

To manage this, we prioritise effective orchestration by embedding domain and workflow expertise into the step-by-step management of models, data, and system interactions. This process is supported by end-to-end automation and advanced observability tools to ensure accuracy, compliance, and scalability. In short, building an MLOps function is vital.
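
For the technically minded, here is a minimal sketch of the shape such an orchestrated pipeline can take in Python: each step is wrapped with a simple observability hook before the next runs. The step names and stubbed calls (retrieve_context, build_prompt, call_model) are illustrative assumptions, not our production code; a real pipeline would hit a vector database and a model API, with far richer monitoring than timing logs.

```python
import logging
import time
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orchestrator")

def traced(step: Callable) -> Callable:
    """Wrap a pipeline step with basic observability: timing plus logging."""
    def wrapper(payload: dict) -> dict:
        start = time.perf_counter()
        result = step(payload)
        log.info("%s took %.3fs", step.__name__, time.perf_counter() - start)
        return result
    return wrapper

@traced
def retrieve_context(payload: dict) -> dict:
    # Stub standing in for a vector-database lookup.
    payload["context"] = ["doc snippet A", "doc snippet B"]
    return payload

@traced
def build_prompt(payload: dict) -> dict:
    # Stub standing in for a prompt-library template.
    payload["prompt"] = f"Answer using {payload['context']}. Q: {payload['question']}"
    return payload

@traced
def call_model(payload: dict) -> dict:
    # Stub standing in for an LLM API call.
    payload["answer"] = "stubbed model response"
    return payload

def run_pipeline(question: str) -> dict:
    # Orchestration: steps run in a fixed, observable order.
    payload = {"question": question}
    for step in (retrieve_context, build_prompt, call_model):
        payload = step(payload)
    return payload

print(run_pipeline("What drove Q3 churn?")["answer"])
```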

 

3. Cost Management

Managing hidden costs in scaling AI is essential. We actively support our clients with ‘build or buy’ decision-making when it comes to tools, talent, and resourcing. Our expertise in cost/budget analysis, change management, and optimisation helps portfolio companies manage and minimise AI-related expenses effectively, while realising the economies of scale (and learning) from developing reusable assets (see point 7).

 

Furthermore, with a wide network of expert delivery partners, we can scale our team up or down, ensuring we have the right expertise, at the right time, and at the right competitive price point for each client’s ambition.

 

4. Streamline Tools & Tech

We advocate for a streamlined data technology stack, taking a selective approach to tools and platforms that best serve business goals, where replaceability is king! Being software and technology agnostic also means our advice is based on what we believe is best for you, mitigating technical debt and vendor lock-in.
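
To make ‘replaceability is king’ concrete, here is a minimal Python sketch of the kind of adapter pattern we have in mind: application code depends on a thin, vendor-neutral interface, so swapping providers means rewriting one adapter rather than the whole stack. The vendor names and methods below are hypothetical.

```python
from abc import ABC, abstractmethod

class TextModel(ABC):
    """Vendor-neutral interface: business logic depends on this, never on a
    specific provider's SDK, so providers stay replaceable."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(TextModel):
    def complete(self, prompt: str) -> str:
        # In practice this adapter would wrap vendor A's SDK call.
        return f"[vendor A] {prompt}"

class VendorBModel(TextModel):
    def complete(self, prompt: str) -> str:
        # Switching vendors means rewriting this adapter only.
        return f"[vendor B] {prompt}"

def summarise(model: TextModel, text: str) -> str:
    # Application code sees only the interface.
    return model.complete(f"Summarise: {text}")

print(summarise(VendorAModel(), "monthly board pack"))
print(summarise(VendorBModel(), "monthly board pack"))
```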

 

5. Building Multifaceted Teams

Forming cross-functional teams that blend technical prowess with business acumen drives AI initiatives to fruition. However, few small- to mid-cap businesses have these skills in-house. So, in many client engagements, we actively hire data teams and always entrench knowledge-sharing rituals to create value beyond models.

 

6. Data Prioritisation

When it comes to data management, we focus on the right data, not perfect data. This is all underpinned by a ‘business data twin’ (BDT), which ensures the most relevant, high-quality data is ‘analytics ready’ to drive AI models. Our BDT approach also means that both commercial and technical teams have a shared understanding of data sources, data flows, and data outputs.
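
We won’t unpack the full BDT methodology here, but as a loose, hypothetical sketch of the underlying idea: a shared, machine-readable catalogue that records where each dataset comes from, who owns it, what it feeds, and whether it is analytics ready. All names and fields below are illustrative assumptions, not DataDiligence’s actual BDT.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """One entry in a shared data catalogue that commercial and technical
    teams can both read."""
    name: str
    source_system: str
    owner: str
    feeds: list = field(default_factory=list)  # downstream models and reports
    analytics_ready: bool = False

catalogue = [
    DatasetRecord("orders", "ERP", "Finance", feeds=["churn_model"], analytics_ready=True),
    DatasetRecord("web_events", "CDP", "Marketing", feeds=["churn_model"]),
]

# A shared view of what is (and is not) ready to drive AI models.
for record in catalogue:
    print(f"{record.name}: ready={record.analytics_ready}, feeds={record.feeds}")
```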

 

7. Reusable Assets

Encouraging the development of modular, reusable components accelerates AI deployment across a company, resulting in material economies of scale and learning. 
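
As a simple illustration of what ‘modular and reusable’ means in practice (assumed names, not actual playbook code): a single configuration-driven component can serve many use cases, so each new deployment adds a config entry rather than a new codebase.

```python
from dataclasses import dataclass

@dataclass
class UseCaseConfig:
    """One reusable, tested component; each use case supplies configuration
    rather than its own code."""
    name: str
    system_prompt: str
    temperature: float = 0.2

def build_request(cfg: UseCaseConfig, user_input: str) -> dict:
    # Shared logic, written and hardened once, reused everywhere.
    return {"system": cfg.system_prompt, "user": user_input,
            "temperature": cfg.temperature}

support_bot = UseCaseConfig("support", "Answer customer queries politely.")
contract_qa = UseCaseConfig("contracts", "Extract clauses from contracts.", 0.0)

print(build_request(support_bot, "Where is my order?"))
print(build_request(contract_qa, "Find the termination clause."))
```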

 

Plus, for savvy private equity investors, a mindset of reusability can quickly be converted into a Data & AI Playbook to share and hasten adoption – and value – across a portfolio of companies. 

 

 

With over 150 data projects behind us, DataDiligence has the experience to overcome the ‘hard truths’ of scaling AI, and has cultivated pragmatic ways of working that emphasise reusable code and systems to accelerate the development and value of responsible AI and generative AI use cases.


By focusing on these areas, we help our clients move from the potential of AI to practical, scalable solutions that drive real business value.


