The dramatic rise of artificial intelligence and its potential for profound change has led to calls for increased regulations. Kevin Hebner, Global Investment Strategist with TD Epoch, joins Kim Parlee to discuss the difficulties in trying to regulate a technology that’s still evolving.
Transcript
Kim Parlee: One of the dominant themes to emerge from this year’s World Economic Forum in Davos was artificial intelligence and, specifically, how to regulate it. It was a testament to AI’s rise and its potential impact on literally everything — our jobs, even our personal sense of privacy. My next guest has released part three of a five-part series on the topic, entitled “AI, How to Regulate an Emerging Technology.” Kevin Hebner is Global Investment Strategist at TD Epoch. And we are thrilled to have him come here in person. Nice to see you.
Kevin Hebner: Thank you, Kim.
Kim Parlee: So we’ve been talking about AI for a while now. And you’ve been schooling us, I would say, in terms of the potential for AI and where it’s going. We’re talking about regulation of AI today. But I want to start off with just why. Why is there — what is the reason? What are the concerns around the need to regulate AI?
Kevin Hebner: So in the previous times we’ve talked, we talked about the opportunities coming with AI, what it’s going to mean for productivity, a big boost — well, I mean, for some sectors — education, health care, legal services, and so on. So big benefits out there. But there are some potential harms. We already know that AI tends to hallucinate sometimes, that there’s bias with it, that there are some privacy concerns, for example, with facial recognition.
There are copyright issues. There’s a big lawsuit between The New York Times and OpenAI right now. And there are also some more insidious things that could happen. AI could be used to create different types of bioweapons. It could be used to mount an AI-created cyber attack, for example.
So there’s a host of reasons. And this is true with any new technology. There are some benefits, and also, there are some potential harms. The idea is to put a regulatory framework in place so that you can get the benefits but without too many of the harms coming through.
Kim Parlee: Which we’ll talk about whether that’s possible and in terms of how that’s put together. Maybe we could start with what’s been done so far. What are, I’ll say, the nascent regulations that have been put in place?
Kevin Hebner: Yeah, so there’s been a US Executive Order that came through on October 30. It was 100 pages. It affects 50 agencies. There are 150 rules. It’s pretty enormous. There’s a question as to whether President Biden has the constitutional authority to do what he did. And I do think that is questionable.
But it’s so amorphous and the technology is so early days, it’s not really clear those regulatory actions will have a big effect. Probably the most important feature is that if an AI model is sufficiently big, so GPT-4 or larger, the company promoting it has to do red team testing. So that was — you’d have a —
Kim Parlee: What is red team testing?
Kevin Hebner: So you have an independent team within your company. And you come in, and you try to see if the model will do bad things if prompted aggressively enough. So will it have bias? Will it violate privacy? Will it create a cyber weapon or a bioweapon, things like this? And then you report those results to the federal government. So that is a requirement that all the major platforms have signed on to. And that’s probably the most important impact of the Executive Order.
Kim Parlee: So I think, when you describe the potential — I mean, this is the thing. When something has exponential potential in pretty much every industry and every direction, then, to use the Wayne Gretzky phrase Bill Gates invoked at Davos, “skate to where the puck is going,” well, the puck’s going everywhere in all directions. So how on earth do you even get your head around how to manage this?
Kevin Hebner: Yeah, so I think it’s a good metaphor. You skate to where the puck is going; you want to regulate where the technology is going. But we have no idea where the technology is going. If you think even 15 months ago, when we were thinking about AI, we thought it would affect, first, blue-collar physical work, then maybe white-collar knowledge workers, and then, finally, creative work.
But just in 15 months, that’s been turned upside down. It’s going after creative people — writers, coders, artists, composers, some knowledge workers, people in the banking sector, and so forth. And then it looks like blue-collar physical workers will be —
Kim Parlee: Protected.
Kevin Hebner: Yeah. So even in 15 months, our understanding of how AI is going to play out has totally changed. And Sam Altman, head of OpenAI, and just about everyone else agrees that, as with every new technology we’ve had over the last 500 years, you really have no clarity for a long time how it’s going to play out. And so to regulate where the puck is going, it strikes me as quite premature.
Kim Parlee: So you mentioned that regulation has been around to help grow industries, I’ll say, responsibly for a long time. And you cited, even in your report, railroads and those types of things. So let’s just pretend that the same frameworks could apply here. What are some of the challenges, I would say, for doing this? I think there are three common mistakes you talk about that can be made.
Kevin Hebner: Yeah. And so if you look at the history of industry regulation in the US, initially with railways around 1870, then automobiles, airplanes, nuclear power, and so forth, typically, there’s a lag of about 10 to 20 years from when you get a commercially viable product until you get a regulatory framework. And so we had the first commercially viable product a couple of months ago. So I think things are quite early.
So one type of error would just be to move too early, before it’s clear what the product’s going to be. A second error is that you create a regulatory framework that benefits the incumbents and really entrenches them. Right now, there’s very little AI expertise in the federal government in Canada or the United States or anywhere. So the people coming up with the details of the rules will be the industry itself. And those rules will be written to favor incumbents and keep out up-and-coming companies that could threaten their position.
And then a third is that you do put in place a set of regulatory rules, and it becomes hard and fast. And it both prevents you from receiving the benefits of the new technology and does nothing to reduce the harms. So you get the worst of both worlds. And looking at the history of technology over the last 150 years, there are lots of examples of each of those three errors being made.
Kim Parlee: I want to ask you about a couple of charts that you have in the report. The first one is showing the total private investment in AI. This is in billions of dollars we’re showing. When you bring this up, why is that important? What is notable about this?
Kevin Hebner: Yeah. So I think in terms of US exceptionalism, US has dominated the computer age, the internet age, the iPhone age, and the cloud. And now it looks like the US will continue to dominate in the AI age. And it’s not because Americans are inherently smarter. I think we all know that’s not true. And in fact, many of the people leading the developments and leading the companies are not Americans. But America has a number of advantages.
One is the VC ecosystem. So there’s lots of funding. There’s lots of private investment, as this chart shows. A second advantage is a very light regulatory touch. So in terms of the trade-off between innovation and safety, America will tend to favor innovation, whereas places like Europe will tend to favor safety. So it does look like this will continue to be a US-centric, which often means a California-centric, technology.
Kim Parlee: And at the same time, we’ve got a chart here where you’re showing the AI’s exponential growth. This chart is amazing and scary all at the same time.
Kevin Hebner: Yes. And I think it gets scary when you think of the number of parameters in these models. And that is just with structured data. You go to unstructured data. You go to images. You go to videos. The amount of data is going to be growing a hundredfold, and a hundredfold again. And so models get bigger and bigger. We go from 7 billion parameters to 100 billion parameters and beyond.
And the amount of computing and the expense to run these things, it means that it’s going to be a very small number of companies dominating. And that will probably be, in terms of platforms, three to four, similar to what we’ve had with the internet, the cloud, iPhone, and so forth.
Kim Parlee: The one thing that — and again, you address it in your paper — is this tends to be a winner-take-most landscape.
Kevin Hebner: Yes.
Kim Parlee: Yeah. So tell me how that plays out.
Kevin Hebner: So from the perspective of an investor?
Kim Parlee: Yes.
Kevin Hebner: Yes? So when we’re looking at digital technology, say, over the last 25 years, we’ve seen increased concentration in the market. Smaller and smaller numbers of companies win. And that reflects the fact that, if you think of the digital tech part of the market, their margins, their return on equity, have doubled, sometimes tripled, over the last 25 years, whereas the rest of the market is basically flattish.
And we’ve seen that even going forward for this year. Consensus has earnings growth for the Magnificent Seven, a small number of tech names, up 55%, whereas earnings growth for the rest of the market is about 4.8%.
Kim Parlee: That’s astounding when you just say that.
Kevin Hebner: Yeah, so we have this enormous bifurcation of the market that’s been going on for 25 years and continues. So within that, from the perspective of investors, the platforms look interesting. This could be Google (GOOG), Microsoft (MSFT), or Meta (META), formerly Facebook. The big platforms will continue to be interesting. And then you have the picks and shovels, say, the semiconductor companies. Within semiconductors, you have design companies like NVIDIA (NVDA). There are quite a few of those, many based in California. They look interesting.
You have the equipment companies. ASML (ASML), based in the Netherlands, is the big one. But there are also Canon (OTCPK:CAJPY), Tokyo Electron (OTCPK:TOELF), and Advantest (OTCPK:ATEYY), based in Japan. And then you have the fabs. There are a small number of fabrication companies: TSMC (TSM) in Taiwan, Samsung in Korea, and Intel in the US. So the semiconductor space looks interesting.
And then there are the applications. One application company that’s done well recently is Adobe (ADBE). It’s making creative digital content, and it’s a great platform. There are many companies like Adobe that are publicly traded, and literally hundreds of others coming up through the pipeline and starting to get funded at the VC stage.
And then there are industrial companies that are aggressively implementing AI. One example of that we use is Deere (DE). And it’s hard to think of Deere, an agriculture company, as an AI play. But they’ve been very aggressive, hiring software engineers and implementing AI into their combines. Combines do lots of things: tilling, planting, watering, fertilizing, and harvesting. And they use AI intensively, for example, on the watering side, looking at moisture levels.
Kim Parlee: Deciding what needs to go there.
Kevin Hebner: And I think this is the interesting part: AI broadens throughout the equity market and the economy as you get more companies like Adobe and more companies like Deere aggressively investing in AI.
Kim Parlee: Again, the list is long, and it’s exciting. I’ve only got about 30 seconds, Kevin, but I want to ask you, where do antitrust and regulators like the FTC start to dig in? Because all these players — the big ones — are going to get even more entrenched, I assume.
Kevin Hebner: Yeah. And all the big platform companies have been buying hundreds of companies over recent years, with very few instances of the FTC or DOJ stopping them. And it looks like, going forward, there’s pretty close to a green light for that to continue.
Kim Parlee: Yeah. And I guess people always have to kind of manage their own risk in terms of their own portfolios. But any watch-outs maybe to think about in the AI space?
Kevin Hebner: What do you mean by a watch-out?
Kim Parlee: Just, I mean, the growth seems very positive. Is there anything that could change that? I guess it’s mainly the regulatory side, in terms of how fast it comes in?
Kevin Hebner: I think it’s very unlikely that the regulatory side will come in more quickly than expected. The EU, China, the US, they’re all trying to do this. But it takes an awfully long time. The other risk would be if we get super disappointed and AI can’t do the things we believe it can do. But all indicators are that AI continues to expand its capabilities even more quickly than we expected. So there is always a risk that things go wrong. But right now, the skies look pretty good.
Kim Parlee: Yeah. Kevin, such a pleasure. Thanks so much.
Kevin Hebner: Thanks, Kim.