PwC Has Set Aside $1,000,000,000 for AI

[AI-generated image of a yellow forest]

We’ve used that PwC Chad image way too many times recently so have this AI-generated fantasy forest instead.

It wasn’t that long ago that Big 4 accounting firms were cagey about staff playing around with ChatGPT on company equipment, consumed by the fear of sensitive client information being fed into the AI black hole. But then they got over it, and both PwC and KPMG proudly announced proprietary AI tools, leading the way in what will no doubt be a transformative time for professional services. In PwC’s case, the new AI on the block was Harvey, a ChatGPT-based platform that uses natural language processing, machine learning and data analytics to automate and enhance various aspects of legal work. PwC’s Global Tax & Legal Services (TLS) says Harvey will catalyze the ability of Legal Business Solutions professionals to deliver comprehensive, cost-efficient and market-relevant solutions to our clients. That’s a direct quote btw, if you couldn’t tell. Harvey, which is backed by the OpenAI Startup Fund, may even end up bringing in its own business, as PwC is working with the startup to take the platform to market “to help clients further streamline their in-house legal processes.”

But they didn’t stop there. Yesterday, PwC US announced plans to invest an eye-watering one billion dollars over the next three years to “expand and scale its artificial intelligence (AI) offerings and help clients reimagine their businesses through the power of generative AI.” This investment, says the press release, builds on PwC’s long-standing commitment to AI, strengthening its ability to deliver human-led and tech-powered solutions and to build trust and drive sustained outcomes in line with its global strategy, The New Equation. Again, that’s clearly a direct quote.

The firm is partnering with Microsoft to create a scalable offering using GPT-4 and Microsoft’s Azure OpenAI service.

“We are at a tipping point in business and society where AI will revolutionize how we work, live and interact at scale,” said Mohamed Kande, Vice Chair, US Consulting Solutions Co-Leader and Global Advisory Leader, PwC. “PwC has long been a pioneer in responsible AI and this latest investment and collaboration with Microsoft will help our people and clients realize the augmented productivity and new growth opportunities associated with generative AI, doing so in a responsible way while driving the right results.”

Although we did not see any press releases about it, it seems PwC had already been using Azure OpenAI for clients in various industries including insurance, aviation, and healthcare. These solutions have successfully enabled clients to save time and costs while helping accelerate revenue, the firm says.

In its Responsible AI framework, PwC lays out some risks associated with AI use in its current form, all things worth considering as we speed toward a future in which busy work is practically eliminated thanks to these clever tools. Let’s review them quickly.

  • Performance:
    • Risk of errors
    • Risk of bias and discrimination
    • Risk of opaqueness and lack of interpretability
    • Risk of performance instability
  • Security:
    • Adversarial attacks
    • Cyber intrusion and privacy risks
    • Open source software risks
  • Control:
    • Lack of human agency
    • Detecting rogue AI and unintended consequences
    • Lack of clear accountability
  • Economic:
    • Risk of job displacement
    • Enhancing inequality
    • Risk of power concentration within one or a few companies

The next two risk sets are particularly interesting, if not slightly unsettling: societal and enterprise.

The widespread adoption of complex and autonomous AI systems could result in “echo-chambers” developing between machines, and can have broader impacts on human-human interaction.

Societal risks include:

  • Risk of misinformation and manipulation
  • Risk of an intelligence divide
  • Risk of surveillance and warfare

AI solutions are designed with specific objectives in mind, which may compete with the overarching organisational and societal values within which they operate. Communities have long informally agreed on a core set of values for society to operate against. There is a movement to identify sets of values, and thereby the ethics, to help drive AI systems, but there remains disagreement about what those ethics may mean in practice and how they should be governed. Thus, the above risk categories are inherently ethical risks as well.

Enterprise risks include:

  • Risk to reputation
  • Risk to financial performance
  • Legal and compliance risks
  • Risk of discrimination
  • Risk of values misalignment

Oh, and while PwC is helping clients understand the risks and benefits of AI, it will also “modernize its internal platforms to embed this new, secure generative AI environment, building on its existing foundation of using AI to deliver productivity gains across tax, audit and consulting services to clients.” Isn’t it funny that the firm that was still using Lotus long after everyone else migrated to Office and Google is now at the forefront of AI?

PwCers can look forward to a focus on upskilling as part of My+, all in the service of using AI “in order to work faster and smarter.” Sure you’ll all be excited about that.

“We are excited that PwC will utilize Azure OpenAI Service to transform the way they work, and to deliver innovative customer solutions that take advantage of the world’s most advanced AI models, backed by Azure’s trusted enterprise-grade capabilities and AI-optimized infrastructure,” said Eric Boyd, Corporate Vice President, AI Platform, Microsoft. “Our collaboration with PwC and OpenAI will be a game-changer that opens the floodgates for businesses to experience generative AI applications in a safe and secure manner.”

Exciting times we live in.