If 2022 marked the beginning of the generative AI boom, 2023 marked the beginning of the generative AI crisis. Just over a year after OpenAI introduced ChatGPT, breaking the record for the fastest-growing consumer product, it appears to have also helped break the record for the fastest government intervention in a new technology. The US Federal Election Commission is investigating fraudulent political advertisements, Congress is demanding oversight of how AI companies produce and label the training data for their algorithms, and the European Union enacted its new AI Act with last-minute changes to address generative AI.
Despite their novelty and speed, generative AI's problems are achingly familiar. OpenAI and the competitors racing to build new AI models face the same issues that have plagued social platforms, the previous era-shaping technology, for more than two decades. Companies like Meta never won their battles against misinformation, shady labor practices, and nonconsensual pornography, to name just a few of social media's unintended consequences. Now those same problems are resurfacing with a challenging new AI twist.
In some cases, generative AI firms are built directly on faulty infrastructure installed by social media businesses. Facebook and others came to rely on low-wage, outsourced content moderation workers, often in the Global South, to keep content like hate speech, nudity, and violence at bay.
That same workforce is now being called on to help train generative AI models, sometimes for similarly low pay and under harsh working conditions. Because outsourcing places critical parts of a social platform or AI firm administratively at a distance from its headquarters, and sometimes on a different continent, researchers and regulators may struggle to get a complete picture of how an AI system or social network is built and managed.
Outsourcing can also obscure where a product's real intelligence lies. Was a piece of content removed by an algorithm or by one of the many thousands of human moderators? When a customer service chatbot helps a customer, how much credit goes to the AI and how much to a worker in an overheated outsourcing center?
There are also parallels in how AI businesses and social platforms respond to criticism of their harmful or unforeseen consequences. AI companies talk about implementing "safeguards" and "acceptable use" policies on generative AI models, just as platforms have terms of service governing what material is and is not permitted. And, as with social network rules, those AI policies and procedures are proving easy to circumvent.
It is unclear whether chatbot providers can make their products reliable enough to avoid the reactive loop seen on social networks, which continually but ineffectively police the fire hose of fresh, harmful content.
Although companies such as Google, Amazon, and OpenAI have pledged certain fixes, such as adding digital "watermarks" to AI-generated video and images, experts caution that these measures, too, are easily circumvented and are unlikely to be long-term solutions.
In response to worries that fake videos of US political candidates could disrupt 2024 election campaigns, Meta and YouTube adopted policies requiring AI-generated political ads to be prominently labeled. But those policies apply only to ads; they do not cover the many other ways phony material can be made and spread, nor do they require safeguards such as watermarking generated photos and video.
What already appears to be a bleak prognosis is made worse by the fact that platforms have begun cutting the resources and teams needed to detect harmful content, according to Sam Gregory, program director of the nonprofit Witness, which helps people use technology to promote human rights. Major technology companies have laid off tens of thousands of workers over the past year. "You're reducing the capacity, both within companies and civil society, to pay attention to how it's used deceptively or maliciously."
Even though the US Congress and regulators around the world appear eager to respond to generative AI more quickly than they did to social media, AI regulation still lags far behind AI progress. That means the new wave of generative AI startups has little reason to slow down for fear of fines.