Leaders discuss the right speed for innovation and regulation at Remarkable AI conference

Toronto-based artificial intelligence (AI) hub Vector Institute held its latest annual AI conference, Remarkable, earlier this week.

BetaKit is here to break down a few of the most interesting points that panellists made regarding reinforcement learning, Canada’s AI compute capacity, and striking the right balance between caution and innovation.

Getting it wrong can hurt—but so can not moving fast enough

A common thread during the conference’s second day was the concern that AI regulation and adoption can be harmful if implemented badly, but that there is also some risk associated with moving too cautiously.

Laura Gilbert, head of AI for government at the Ellison Institute of Technology Oxford, said AI regulation is a “very difficult area to get right” given the rapid pace of AI development.

“Any regulation that looks to protect us against something in the future in any sort of specificity, means that we will set up regulation that is not flexible enough, that’s not future-proof and could actually put us at risk,” Gilbert argued during a fireside chat, noting that “drawing back and not rushing into [AI regulation] has been important.”

RELATED: An AI report to distract you from tariffs

Foteini Agrafioti, RBC senior vice-president of data and AI and chief science officer, said during one panel that the same is true when it comes to corporate adoption of AI. She warned other companies against going “all in” on AI investments without adequate research and consideration. “Test up front, validate your hypothesis,” Agrafioti said. “We learned that in a hard way, many, many times.”

In a separate discussion, CAN Health Network founder and chair Dante Morra noted that not moving quickly enough carries its own risk. Morra argued that Canada’s healthcare system is moving “way too slow” when it comes to AI adoption, and believes it can move faster while still doing so safely.

“The paradigm of where the risk is, is completely wrong,” Morra said.

“Every single day, our access goes down,” he added. “Every single day, our chance to win in the new healthcare economy is less. There’s an existential challenge of adoption, but when you’re running a big organization, you’re more worried about the reputational risk of something going wrong with an AI company, so I think we actually have to completely tilt the table here.”

Reinforcement learning will be big

While generative AI is all the rage right now, Deloitte chief science officer Ian Scott believes that reinforcement learning (RL) also has an important role to play in terms of enterprise AI adoption, and thinks businesses ought to prepare accordingly.

RL is a technique for training agents through trial and error, with rewards for success. In RL, “rewards” are calculated mathematically: numbers are assigned to desirable outcomes, and the algorithm runs until it maximizes the cumulative reward, eventually learning to complete tasks in the most desirable and efficient manner.
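The trial-and-error loop described above can be sketched in a few lines of Python. This is a minimal, illustrative example of Q-learning (a classic RL algorithm), not any particular company’s method; the toy “corridor” environment and all parameter values here are invented for illustration. The agent starts at state 0 and receives a reward of +1 only when it reaches the goal at state 4.

```python
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # step left or step right
EPSILON = 0.1         # exploration rate (the "trial and error" part)
ALPHA = 0.5           # learning rate
GAMMA = 0.9           # discount factor for future rewards

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    # q[state][action] estimates the reward expected from taking
    # that action in that state
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state = 0
        while state != N_STATES - 1:
            # occasionally explore at random; otherwise exploit
            # the current best estimate
            if rng.random() < EPSILON:
                a = rng.randrange(2)
            else:
                a = 0 if q[state][0] > q[state][1] else 1
            nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
            reward = 1.0 if nxt == N_STATES - 1 else 0.0
            # nudge the estimate toward reward + discounted future value
            q[state][a] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][a])
            state = nxt
    return q

q = train()
# after training, the learned policy should prefer moving right
policy = ["left" if row[0] > row[1] else "right" for row in q[:-1]]
print(policy)
```

The reward signal alone, propagated backwards through repeated episodes, is enough for the agent to learn the optimal behaviour (always move right) without ever being told the rule explicitly.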

RELATED: New Turing Award winner Richard Sutton calls doomers “out of line,” talks path to human-like AI

This week, University of Alberta and Alberta Machine Intelligence Institute’s Richard Sutton won the Turing Award, shared with longtime collaborator Andrew Barto, for his pioneering work in RL. RL has played a role in the training of ChatGPT, OpenAI’s popular chatbot, through a technique called RL from human feedback (RLHF).

Scott said in a panel that, given the limitations of generative AI, Deloitte and other players are using a lot of RL right now. “[RL] is going to be big, and I think we need to build an enterprise capability around it,” he added.

Our compute does not compute

The Deloitte executive also said more domestic computing capacity—which Vector president and CEO Tony Gaffney and others have called for—is at the top of his wishlist. Scott lamented the time it takes to get access to graphics processing units (GPUs), the chips that frequently power the expensive and energy-intensive computers needed for AI.

Maksims Volkovs, TD Bank senior vice-president and head of AI (and co-founder of TD-owned Layer 6), echoed Scott’s assertion. “If I had 100,000 GPUs, I would be so much faster,” Volkovs said during the panel discussion.

At the moment, computing capacity remains a key limiting factor in AI. OpenAI co-founder and CEO Sam Altman has said a lack of compute is delaying the company’s products, noting last month that it had run out of GPUs. Experts have argued that Canada’s underinvestment in computing power threatens the country’s AI advantage.

RELATED: In 2024, Canada struggled to find its place in the global AI race

Speaking in reference to current geopolitical tensions and the ongoing trade war, Scott argued that, “as a nation, we have to solve access to compute fast—sooner rather than later.”

The federal government committed $2 billion to expand Canada’s computing capacity last spring via the Canadian Sovereign AI Compute Strategy. But that figure represents a drop in the bucket compared to the sums of money being poured into the space by tech giants and Canada’s peers.

Feature image courtesy of the Vector Institute. Photo by Jennifer Jenkins.

The post Leaders discuss the right speed for innovation and regulation at Remarkable AI conference first appeared on BetaKit.

