There will be newer jobs that are created, there will be jobs which are made better, where some of the repetitive work is freed up in a way that you can express yourself more creatively. You could be a doctor, you could be a radiologist, you could be a programmer; the amount of time you're spending on routine tasks versus higher-order thinking, all that could change, making the job more meaningful. Then there are jobs which could be displaced. So, as a society, how do you retrain and reskill people, and create opportunities?

The last year has really brought out a philosophical split in the way people think we should approach AI. You could frame it as safety first versus business use cases first, or accelerationists versus doomers. You're in a position where you have to bridge all of that philosophy and bring it together. I wonder what you personally think about trying to bridge those interests at Google, which is going to be a leader in this field, into this new world.

I'm a technology optimist. I have always felt, based on my personal life, a belief in people and humanity. And so overall, I think humanity will harness technology to its benefit. So I've always been an optimist. You're right, with a powerful technology like AI, there is a duality to it.

Which means there will be times we will boldly move forward, because I think we can push the state of the art. For example, if AI can help us solve problems like cancer or climate change, you want to do everything in your power to move forward fast. But you definitely need society to develop frameworks to adapt, be it to deepfakes or to job displacement, etc. This is going to be a frontier. No different from climate change. This will be one of the biggest things we all grapple with for the next decade.

Another big unsettled thing is the legal landscape around AI. There are questions about fair use, and questions about being able to protect the outputs. It seems like it's going to be a really big deal for intellectual property. What do you tell people who are using your products to give them a sense of security that what they're doing isn't going to get them sued?

Not all of these topics will have easy answers. When we built products like Search and YouTube and stuff in the pre-AI world, we were always trying to get the value exchange right. It's no different for AI. We are definitely focused on making sure we can train on data that is allowed to be trained on, consistent with the law, and on giving people a chance to opt out of the training. And then there's a layer above that about what is fair use. It's important to create value for the creators of the original content. These are important areas. The internet was an example of it. Or when e-commerce started: how do you draw the line between e-commerce and regular commerce?

There'll be new legal frameworks developed over time as this area evolves, is how I would think about it. But meanwhile, we will work hard to be on the right side of the law and make sure we also have deep relationships with many providers of content today. There are some areas where it's contentious, but we are working our way through those things, and I am committed to working to figure it out. We have to create that win-win ecosystem for all of this to work over time.

Something that people are very worried about with the web now is the future of search. When you have a technology that just answers questions for you, based on information from around the web, there's a fear that people may no longer need to visit those sites. This also seems like it could have implications for Google, and I wonder if you're thinking about it in terms of your own business.