Canadian companies’ AI policies aim to balance risk with reward

When talent search platform Plum noticed that ChatGPT was making waves in the tech world and beyond, it decided to go directly to the source to explain how staff could and couldn’t use the generative AI chatbot.

ChatGPT, which can turn simple text instructions into poems, essays, emails and more, put together a draft document last summer that took the Kitchener, Ont.-based company about 70 per cent of the way to its final policy.

“There was nothing there that was wrong; there was nothing there that was crazy,” Plum CEO Caitlin MacGregor recalled. “But there was an opportunity to be a little more specific or make it a little more customized for our business.”

Plum’s final policy, a four-page document drawn up last summer based on ChatGPT’s draft and advice from other startups, advises staff to keep customer and proprietary information out of AI systems, review everything the technology spits out to verify its accuracy and attribute any content it generates.

This makes Plum one of several Canadian organizations codifying their stance on AI as people increasingly rely on the technology to boost their productivity at work.

Many were prompted to develop policies by the federal government, which published a set of AI guidelines for the public sector last fall. Now, dozens of startups and larger organizations have adapted them to their own needs or are developing their own versions.

These companies say their goal is not to limit the use of generative AI, but to ensure that workers feel empowered enough to use it responsibly.

“It would be a mistake not to harness the power of this technology. It has a lot of opportunities for productivity and functionality,” said Niraj Bhargava, founder of Nuenergy.ai, an Ottawa-based AI management software company.

“But on the other hand, if you use it without putting up barriers, there are a lot of risks. There are the existential risks to our planet, but then there are the practical risks of bias and equity or privacy issues.”

Striking a balance between the two is key, but Bhargava said “there is no single policy that works for all organizations.”

If you’re a hospital, you may have a very different answer to what’s acceptable than a private sector technology company, he said.

However, there are some principles that frequently arise in the guidelines.

One is to avoid feeding customer or proprietary data into AI tools because companies cannot guarantee that such information will remain private. It could even be used to train the models that power AI systems.

Another is to treat everything the AI spits out as potentially false.

AI systems are far from infallible. Tech startup Vectara estimates that AI chatbots make up information at least three per cent of the time and, in some cases, as much as 27 per cent of the time.

A B.C. lawyer admitted in court in February that she had cited two cases in a family dispute that were fabricated by ChatGPT.

A California lawyer similarly discovered accuracy issues when he asked the chatbot in April 2023 to compile a list of jurists who had sexually harassed someone. It incorrectly named an academic and cited a Washington Post article that did not exist.

Organizations that develop AI policies also often address transparency issues.

“If you wouldn’t pass off something someone else wrote as your own work, why would you pass off something ChatGPT wrote as your own work?” asked Elissa Strome, executive director of the pan-Canadian artificial intelligence strategy at the Canadian Institute for Advanced Research (CIFAR).

Many say people should be informed when AI is used to analyze data, write text or create images, videos or audio, but other cases are less clear-cut.

“We can use ChatGPT 17 times a day, but do we have to write an email saying it every time? Probably not if you’re calculating your travel itinerary and whether you should go by plane or car, something like that,” Bhargava said.

“There are many harmless cases where I don’t think I should disclose that I used ChatGPT.”

It’s unclear how many companies have explored all the ways staff could use AI and conveyed what is and isn’t acceptable.

An April 2023 study of 4,515 Canadians by consulting firm KPMG found that 70 per cent of those who use generative AI said their employer has a policy around the technology.

However, an October 2023 survey by software company Salesforce and YouGov found that 41 per cent of the 1,020 Canadians polled said their company had no policies on using generative AI for work. About 13 per cent said it had only “vaguely defined” guidelines.

At Sun Life Financial Inc., staff cannot use external AI tools for work because the company cannot guarantee that financial, health or client information will be kept private when these systems are used.

However, the insurer lets workers use internal versions of Anthropic’s AI chatbot, Claude, and GitHub Copilot, an AI-based coding assistant, because the company has been able to ensure both comply with its data privacy policies, said chief information officer Laura Money.

So far, she’s seen staff use the tools to write code and put together memos and video scripts.

To get more people experimenting, the insurer has encouraged staff to sign up for a free, self-paced online course from CIFAR that teaches the principles of AI and its effects.

Of that move, Money said: “You want your employees to be familiar with these technologies because they can make them more productive, improve their work lives and make work a little more fun.”

About 400 workers have signed up since the course was offered a few weeks ago.

Despite offering the course, Sun Life knows its approach to technology must continue to evolve because AI is advancing so quickly.

Plum and CIFAR, for example, released their policies before generative AI tools that go beyond text to create images, audio or video were widely available.

“There wasn’t the same level of image generation as there is now,” MacGregor said of the summer of 2023, when Plum launched its AI policy with a hackathon that asked staff to write poems about the business with ChatGPT or experiment with how it could solve some of the company’s problems.

“An annual review is definitely necessary.”

Bhargava agreed, but said many organizations still have catching up to do because they don’t yet have a policy.

“The time has come to do it,” he said.

“If the genie is out of the bottle, it’s not like we can think ‘maybe next year we’ll do this.'”

This report by The Canadian Press was first published May 6, 2024.

Companies in this story: (TSX:SLF)

