
PE and AI: Weighing the Risks and Rewards

By Nestor Kok – These days, the subject of AI is never far from any discussion table. While the initial public availability of large generative language models (such as ChatGPT and Bard) caused much consternation in both the public and private sectors, the current consensus on AI use in non-tech-oriented businesses has become far more favourable. Companies worldwide, large and small alike, have begun taking steps to incorporate artificial intelligence into their business models.

In the private equity field, the state of play is no different. Contrary to appearances, the worlds of private equity and artificial intelligence can be easily bridged. Much like any other business sector, the uses and possibilities for AI in the PE workplace are manifold… given enough time and capital for testing and development.

According to S&P Global, most private equity companies are currently focusing on how to help their portfolio companies respond to the effects of generative AI becoming more accessible to the general public. This can either be proactive, i.e. helping portfolio companies seize opportunities to leverage AI to become more valuable, or defensive, i.e. ensuring that their portfolio companies’ valuations are not significantly impacted by the widespread availability of AI.

“Many funds are just stopping there,” said Richard Lichtenstein, a Bain & Co. expert partner currently leading a team building AI-powered software tools for the firm’s private equity clients. “Many funds are saying, ‘Find those companies, help them figure this out.’” Given this approach, the most common current uses of AI in the PE workplace are limited to automating labour-intensive tasks, such as drafting emails to clients or responding to inquiries from prospects and clients via AI chatbot. AI-powered lead engagement is another, more processor-intensive task — using AI to sift through and learn from data on leads from LinkedIn (or other business-oriented profiles such as Crunchbase), and then using that data to automate the writing of highly personalised lead generation emails.

With private equity employees already being saved countless hours thanks to the automation of tasks like these, the question some might ask next is, “Why not go further?” Hypothetically, the availability of large-scale generative AI models that can be trained on thousands of gigabytes of data in a matter of days should make it easy to automate PE firms’ biggest tasks, namely due diligence and insight/risk analysis. But while public-use tools like ChatGPT and general media coverage of AI developments may make large-scale implementation of artificial intelligence look like a breeze, the reality is hardly that simple.

All artificial intelligence models need to be trained on data to perform the tasks they are given. In the case of creating an AI tool for automated data and risk analysis, the AI would need to be given access to a PE firm’s proprietary data from portfolio companies, in addition to data obtained outside private markets. However, giving an AI model access to this data poses a huge security risk for PE firms, as this would introduce a point of vulnerability into the digital security measures used to keep this proprietary data out of the hands of a firm’s competitors.

“The biggest hurdle, [private equity firms] feel, is risk governance,” says Akash Takyar, CEO of LeewayHertz Technologies, a consulting and AI software development firm. Data breaches are no small matter, and PE firms usually preside over gigabytes upon gigabytes of proprietary data. Right now, for many firms, the trade-off between data security and due diligence automation does not seem worth the cost.

On the other hand, a small number of companies are looking to harness AI to aid the decision-making process. AI tools for risk analysis and investment decision-making in the private equity sector are indeed in development, by teams such as the one headed by Richard Lichtenstein at Bain & Co. However, AI software development and customisation is still a young and expensive field. For many smaller PE firms, the cost of licensing self-service AI tools or partnering with the leading AI companies that have developed this tech may prove prohibitive.

Even for the firms who have enough capital to shoulder these costs, or who can laterally transfer AI development skills to their workflow via an AI-focused portfolio company, there is also the matter of time needed to train and test the AI decision-making tool rigorously. “Before you’re ready to give [the AI tool] a vote on your investment committee, you probably want to experiment with it a lot and test it and see where it’s biased and try to correct for those biases,” explains Lichtenstein. 

This reflects a danger seen even in the most commonly used AI tools today. All generative AI models carry biases both obvious and subtle, since the data they are trained on comes from humans, who often act on their own personal biases unconsciously. Any AI-generated recommendation therefore still requires input and checking from colleagues and fellow investment committee members. Even ChatGPT’s official FAQ page states that the bot is not free from bias, and recent studies have shown that ChatGPT’s answers often reflect gender and political biases that businesses might want to avoid altogether. Hence, even after the requisite testing period for any AI decision-making tool, the private equity workplace could (and arguably should) remain only partially automated.

Ultimately, artificial intelligence and private equity can indeed work hand in hand, but within limits. As in any workplace, successful integration of AI advancements requires human intervention. While AI automates processes and provides insights, human judgement remains crucial in contextualising those results, checking for biases, and adapting strategies as market dynamics evolve.
