But researchers I’ve spoken with over the past few months say the 2024 US presidential election will be the first with widespread use of micro-influencers: people who don’t typically post about politics and who have built small, highly engaged audiences, often dominated by a single demographic. In Wisconsin, for example, such a micro-influencer campaign may have contributed to record voter turnout in last year’s state supreme court election. The strategy lets campaigns plug into a specific group of people via a messenger they already trust. And beyond posting for cash, influencers help campaigns understand their audiences and platforms.
This new messaging strategy seems to operate in something of a legal gray area. There are currently no clear rules on how influencers must disclose paid political posts or indirect promotional material (say, when an influencer posts about attending a campaign event but the post itself isn’t sponsored). The Federal Election Commission has drafted guidance on the issue, which several groups have urged it to adopt.
While most of the sources I’ve spoken with have talked about the growth of this trend in the US, it’s also happening in other countries. Wired wrote a great story back in November about the impact of influencers on India’s election.
Digital censorship
Crackdowns on speech by political actors are, of course, nothing new, but they are on the rise, and their increased precision and frequency are the result of technology-enabled surveillance, online targeting, and state control of online domains. The latest internet freedom report from Freedom House showed that generative AI is now aiding censorship and that authoritarian governments are increasing their control of internet infrastructure. Internet blackouts, too, are on the rise.
In just one example, recent reporting by the Financial Times shows that the Turkish government is tightening internet censorship ahead of elections in March by directing internet service providers to limit access to virtual private networks (VPNs).
More broadly, digital censorship is going to be a critical human rights issue and a core weapon in the wars of the future. Take, for example, Iran’s extreme censorship during protests in 2022, or the ongoing partial internet blackout in Ethiopia.
I’d urge you to keep a close eye on these three technological forces throughout the new year, and I’ll be doing the same—albeit from afar!
On a personal note, this is my last Technocrat at MIT Technology Review, as I’ll be leaving to pursue opportunities outside of journalism. I’ve loved having a home in your inboxes over the past year and am humbled by the trust you’ve given me to cover stories of immense importance, like how police are surveilling Black Lives Matter protesters, the ways technology is changing beauty standards for young girls, and why government technology is so hard to get right.
Stories about how technology is changing our countries and our communities have never been more important, so please keep reading my colleagues at MIT Technology Review, who will continue to cover these topics with expertise, balance, and rigor. I’d also encourage you to sign up for our other newsletters: The Algorithm on AI, The Spark on climate, The Checkup on biotech, and China Report on all things tech and China.
What I am reading this week
- OpenAI has removed its ban on military use of its AI tools, according to this great report by Hayden Field at CNBC. The move comes as the company begins work with the US Department of Defense on AI projects.
- Many of the world’s best and brightest are in Davos this week for the World Economic Forum, and Cat Zakrzewski says the talk of the town is AI safety. I really enjoyed her insider look in The Washington Post at the tech policy concerns that are top of mind.
- Researchers from Indiana University Bloomington have found that large language models from OpenAI and other companies power some malicious websites and services, such as tools that generate malware and phishing emails. I found this write-up by Prithvi Iyer in Tech Policy Press really insightful!
What I learned this week
Google DeepMind has created an AI system that is very good at geometry, a field that has historically been hard for artificial intelligence. My colleague June Kim wrote that the new system, called AlphaGeometry, “combines a language model with a type of AI called a symbolic engine, which uses symbols and logical rules to make deductions.” She says the system is “a significant step toward machines with more human-like reasoning skills.”
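If that description feels abstract, here is a minimal toy sketch of the general neuro-symbolic idea: a proposer (standing in for the language model) suggests an auxiliary fact, and a symbolic engine applies logical rules to deduce consequences. To be clear, this is purely illustrative; none of the names, rules, or structure below come from DeepMind’s actual system.

```python
from itertools import product

def deduce(facts):
    """Forward-chain one toy rule to a fixed point:
    parallel(a, b) and parallel(b, c) => parallel(a, c)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        # product() materializes its input, so adding to `facts` mid-pass is safe
        for (a, b), (c, d) in product(facts, repeat=2):
            if b == c and a != d and (a, d) not in facts:
                facts.add((a, d))
                changed = True
    return facts

def propose(facts):
    """Stand-in for the language model: suggest an auxiliary fact."""
    return ("m", "n")  # hypothetical construction: m is parallel to n

known = {("l1", "m"), ("n", "l2")}   # given: l1 ∥ m and n ∥ l2
goal = ("l1", "l2")                  # to prove: l1 ∥ l2

known = deduce(known)
if goal not in known:                # deduction alone is stuck...
    known.add(propose(known))        # ...so the "language model" adds m ∥ n
    known = deduce(known)            # and the symbolic engine closes the gap

print(goal in known)  # True
```

The division of labor is the point: the proposer makes the creative leap the rule engine can’t, and the engine grinds out the airtight deductions the proposer can’t be trusted to make on its own.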