This stock photo might belong on r/itsaunixsystem but we’re quickly running out of AI stock photos so just deal.
Saw something interesting in Fortune yesterday and thought it worth sharing, as it gives us a look at PwC’s AI upskilling plans and demonstrates how difficult it is to train your people on a technology that’s moving faster than any technology before it.
Here’s what Fortune said:
For consulting firm PricewaterhouseCoopers LLP, that means rolling out mandatory training to its entire US workforce over the course of five months, starting in August. Given the concern among workers about what AI means for their jobs, PwC’s US Chief People Officer Yolanda Seals-Coffield said the first step is demystifying the technology. “The sooner we can get out and start to teach people about this technology, the sooner we can dispel some of that,” she said.
The company is dividing its workforce into three layers based on how deeply each needs to understand the new technology. The first and the broadest is mandatory training to bring all employees, regardless of role, up to speed on generative AI basics: what it is, how it works, best practices and how to use it ethically and responsibly.
More narrowly defined second and third tiers consist of software engineers, who need more technical training in order to integrate AI into internal systems, and senior leaders, who need a thorough understanding so that they can help clients transform their own businesses. “We don’t want and don’t need to have 75,000 deep subject matter technologists. That’s not the goal,” Seals-Coffield said.
Though the training roadmap is detailed, the firm explicitly chose not to extend it past December. “Quite frankly we didn’t go beyond that because we think the technology will continue to evolve,” Seals-Coffield said. “We want to make sure that we’re not stuck and committed to something that by January will need to be completely redone.”
This training got a mention in PwC’s April press release about investing $1 billion in AI, as did PwC’s Responsible AI Framework, a set of warning flags for the various ways AI can go wrong. Here’s one set of risks from that framework, in case you’re worried about sleeping too well at night:
Societal risks include:
- Risk of misinformation and manipulation
- Risk of an intelligence divide
- Risk of surveillance and warfare
AI solutions are designed with specific objectives in mind which may compete with overarching organisational and societal values within which they operate. Communities often have long informally agreed to a core set of values for society to operate against. There is a movement to identify sets of values and thereby the ethics to help drive AI systems, but there remains disagreement about what those ethics may mean in practice and how they should be governed. Thus, the above risk categories are also inherently ethical risks as well.
Don’t sweat it, we can trust PwC. They told us so.
PwC chief products and technology officer Joe Atkinson, fresh off a trip to the World Economic Forum in Davos, said in May that the firm would be retraining its people with new technical skills so they’d be able to keep a job, though their work might look very different from what they’re doing now.
Like it or not, the robot revolution is upon us. It’s going to be an awkward couple of years as historically way-behind companies (looking at you, accounting firms) struggle to keep their people ahead of the curve without fully knowing what the curve might look like next month or even next week.