One of the biggest areas to watch, of course, will be generative AI, particularly how it changes social media, political campaigning, and the fight over election misinformation. This confluence of new tech and big elections is also happening while the social media industry is going through major changes, including shifts in moderation approaches, legal battles, cuts to trust and safety teams, and platform shake-ups.
This is all poised to make the future of the fight against misinformation murky, to say the least. It’s a topic my colleagues and I take very seriously and have covered extensively in the past. And recently in MIT Technology Review, former Google boss Eric Schmidt penned an op-ed that lays out what he calls “a paradigm shift for social media platforms”:
The role of Facebook and others has conditioned our understanding of social media as centralized, global “public town squares” with a never-ending stream of content and frictionless feedback. Yet the mayhem on X (a.k.a. Twitter) and declining use of Facebook among Gen Z—alongside the ascent of apps like TikTok and Discord—suggest that the future of social media may look very different. In pursuit of growth, platforms have embraced the amplification of emotions through attention-driven algorithms and recommendation-fueled feeds.
But that’s taken agency away from users (we don’t control what we see) and has instead left us with conversations full of hate and discord, as well as a growing epidemic of mental-health problems among teens … Now, with AI starting to make social media much more toxic, platforms and regulators need to act quickly to regain user trust and safeguard our democracy.
Schmidt goes on to lay out a six-point plan social media companies can follow to confront the moment. One thing I was happy to see him cite is the importance of provenance information, which I have written about a few times previously. It’s an insightful and useful piece that I’d definitely urge you to read!
This is the last Technocrat of 2023, and I’ll be back in your inbox in January. In the meantime, over the next few weeks we’ll be publishing more stories about what’s to come in technology in 2024, so be on the lookout for those. And if you want to catch up on some past stories that you may have missed, here are just a few of my favorites from my colleagues in 2023:
What I learned this week
Microsoft’s Bing AI chatbot, renamed Microsoft Copilot, got election information wrong one third of the time, according to a new study from nonprofits AI Forensics and AlgorithmWatch. Will Oremus in the Washington Post writes that the study results “reinforce concerns that today’s AI chatbots could contribute to confusion and misinformation around future elections as Microsoft and other tech giants race to incorporate them into everyday products, including internet search.” Here’s a reminder to not rely on generative AI for news!