Laura Rees offers new model to explore what happens when humans and AI team up

Amid the sweeping promises and warnings surrounding AI, Laura Rees is focused on an existential question: When a powerful tool turns into a relationship between technology and human, what happens to human wellbeing?

The answer matters because businesses are adopting AI for everything from customer service and human resource management to product development and labor negotiations. Employees increasingly find themselves interacting with AI on an intense, long-term basis. And as AI developers incorporate features designed to observe and trigger emotional responses in humans, interactions with AI may have impacts on job satisfaction, trust and emotional health.

To help researchers address these issues, Laura Rees teamed up with Mehran Bahmani, a York University colleague, to develop the Relational Tradeoff Model. They describe their work in a paper in the journal Organizational Psychology Review.

Rees specializes in organizational behavior and is an associate professor of management in the OSU College of Business.

“Traditional models of technology adoption focused on the idea of acceptance and use,” she said. “Those models couldn’t really anticipate how technology has evolved, particularly socially interactive AI.”

Drawing on research in affective computing, management, and social psychology, Rees and Bahmani propose adding measures of short- and long-term human wellbeing to understand how AI is affecting the workforce.

They begin by noting that the emotional impact of social AI mimics the give and take of human relationships, but much depends on whether an AI system is designed to support human interaction or to provide oversight. Supportive, or “affiliative,” AI tends to affirm the user’s role in carrying out tasks. AI that provides oversight, called “distancing” AI, may be seen as working against or even trying to replace the human user. Some AI systems combine characteristics of both.

Other researchers have noted such differences, but Rees and Bahmani extend that distinction to the analysis of human-AI interactions.

“Not all AI is socially interactive,” said Rees. “When I present this work, I make the joke that some humans may also not fully qualify as socially interactive.”

A broad trend toward social AI engagement has emerged in recent surveys and underscores the importance of evaluating workplace relationships between employees and technology. Surveys by Common Sense Media and Vantage Point Counseling have concluded that up to a third of teens and young adults report having intimate relationships with an AI system. It may be too early for researchers to focus on the possible spillover of such interactions into the workplace, said Rees, but it’s important to anticipate the trend.

“We need this kind of conceptual model now so that, in the future, we’re not retroactively saying, ‘Uh oh, what’s happened? What’s going on?’” Rees said. “We can get a little bit ahead of technology development to say, if this is the direction it’s heading, what do we think will happen? And what does it mean?”

Technology adoption models typically include factors such as acceptance and utility to explain how new tools fare in the marketplace. The model proposed by Rees and Bahmani adds short- and long-term human wellbeing to help analyze the impact of AI on the workforce.

At the heart of their model is the notion that there is a tradeoff between short-term acceptance of an AI relationship and long-term wellbeing. That tradeoff may be affected by both the nature of the AI and whether usage is required or voluntary. Existing theories of social interaction suggest that being forced to use technology leads to feelings of stagnation and loss, Rees and Bahmani wrote.

Their Relational Tradeoff Model suggests that AI-human interaction can result in “short-term gains in acceptance and use but long-term harms to subjective wellbeing, particularly when use is forced and resistance is high.”

Rees sees the new model as an early step in helping social science researchers explore future AI impacts on employee performance. Scientists elsewhere are exploring how AI can deceive users or even cheat to serve its own goals in spite of the developer’s intentions. “But then do traditional incentive systems work? I don’t know,” said Rees. “Is there a risk? Will AI care if it gets fired?”

In other research, Rees has focused on how emotional factors affect the workplace. She has delved into anger management, emotional intelligence and negotiation. In her analysis of socially interactive AI, she foresees diving deeper into the emotional factors that define AI-human relationships.

“I would consider it successful if this paper prompts a lot more questions than we’ve tried to answer,” she said. Social scientists need to catch up to the technology and develop theories that capture impacts that may not have even been feasible in the recent past.

–Story by Nick Houtman