Humans, AI, and Organizational Upheaval

The general thought expressed in earlier postings here is that incorporating artificial intelligence into investment processes means figuring out what machines do best and what humans do best and structuring accordingly.

A book by Matt Beane, The Skill Code: How to Save Human Ability in an Age of Intelligent Machines, adds an important layer of consideration to that notion.

The transfer of skills

Beane’s ideas are focused on the relationships between experts and novices — and the risks to organizations when traditional modes of interaction are disrupted as artificial intelligence and other computing capabilities are substituted into processes.

His examples come from a variety of fields, including investments.  For example, if you have become a financial planner, you likely started with some basic training, much of which was book learning.  Then, “you got involved in practicing financial planners’ work, helping them in limited ways in the beginning, more complicated ways as you went on, and ultimately helped to mentor newbies as you were about to complete your training.”  That cycle of development, the passing of knowledge from those with expertise to those without it, is at the heart of much organizational learning.

The transfer mechanism is imperfect, of course.  The “experts” may not have the expertise that is assumed, or they may be poor at (or uninterested in) conveying it to others.  Or they may feel that they don’t have the time, given the demands of their roles.  Instead of an apprenticeship of sorts, in some cases the relationship amounts to no more than the passing on of a few heuristics and war stories, rather than a mechanism for propagating valuable knowledge across time.

Beane sees the bond between experts and novices as foundational for success, raising a concern:

In millions of workplaces, we’re blocking the ability to master new skills because we are separating junior workers from senior workers, novices from experts, by inserting technology between them.  In a grail-like quest to optimize productivity, we are disrupting the components of the skill code, taking for granted the necessary bundling of challenge, complexity, and connection that could help us build the skill we need to work with intelligent machines.

Reinvention

With the increasing availability of AI applications, leaders are faced with significant questions about whether and how to remake their organizations to take advantage of the possibilities.  Among the issues:

~ Some kinds of organizations, including most asset managers, have heavily promoted the consistency of their process.  That narrative will have to be reworked to accommodate change.  That’s a good thing overall, but it will be tricky for many.

~ Errors related to AI applications that reveal a lack of understanding of their workings can result in regulatory scrutiny, recriminations from clients and stakeholders, and career risk.

~ Greenwashing has made those doing due diligence more sensitive to claims about ESG capabilities and processes.  In a similar way, “AI-washing” will be a top-of-list concern for anyone vetting an organization.

~ Many large organizations have been aggressive in adding AI staff, making it hard for others to hire the expertise needed to implement an appropriate strategy.

Those concerns will be at the forefront, but Beane’s book is a reminder that culture, social fabric, and organizational learning will be disrupted as AI capabilities are introduced.  Leaders must think holistically about the implications of those moves for the future of the organization.

The dynamics of learning

Beane covers a number of important concepts regarding learning in organizations:

Experiential learning.  While a base level of training is required, “experiencing the complexity of a situation is often better for skill development than significant explicit instruction.”  Providing detailed procedures to follow might seem to be the best way to convey how to do something, but it can actually inhibit learning.  At each stage of development:

The skill code thrives in “goldilocks” territory:  not too much complexity, not too little.  Not too little direction or information, not too much.

Expertise can sometimes inhibit learning.

[There] is the blindness that comes with the “seen one, seen ’em all” phenomenon:  after a while, if you’ve got solid skill, it’s all too easy to fit your mental models onto most any complexity to predict what’s coming.

Talent capture.  Sometimes, “domain expertise is hoarded, and silos of skill are protected,” leading to unfulfilled employees and a lethargic organization.

Shadow learners move things forward.  By observing how things are working and seeing what could be made better, “shadow learners” come up with new ways of improving existing processes.  That requires a willingness to see past the existing norms and rules to make good things happen.  Most people are reluctant to push against established ways, especially in cultures that discourage such input from underlings.  But some people can’t help themselves — and they are the ones who drive progress.  (There are always latent ideas of worth in an organization, and people who have an eye for improving things may struggle to be heard.  Great leaders find ways to unlock those hidden assets.)

Inverting the learning.  In some situations, someone toward the top of a hierarchy needs to learn from those further down the ladder, in what is called an “inverted apprenticeship.”  Since it’s odd — with the senior person being the one “messing up, asking silly questions, struggling on task” — there are risks involved if that person worries about losing face because of a lack of knowledge.  A common example is when there are technology capabilities that would be helpful for a senior person to master, but junior employees are the ones with the necessary knowledge.  In those cases, the roles are reversed.  (Beane uses an example from investment banking to illustrate the general principle.)

A new party at the table

Consider the possible scenarios over time for the integration of AI into an organization’s processes.  You have the potential down the road for “humans teaching an AI to teach humans to teach an AI to . . .”

It is the organizations that are already primed for learning that will have the best chance of success in that confusing new environment, but even they will face challenges as the shift to new processes occurs.

While the natural (and necessary) inclination will be to build AI expertise in your organization, laying the groundwork for a new cultural and collaborative framework will be just as important.

All of the learning dynamics cited above will come into play; existing team members will have to adjust to meaningful changes in methodology and communication; new roles will be created; and new skill sets will be required.  In short, an upheaval.

Are you ready?


To delve more deeply into impending issues of organizational change, please reach out.

Published: November 4, 2024

To comment, please send an email to editor@investmentecosystem.com. Comments are for the editor and are not viewable by readers.