It’s Time to Take “Corrupt AI” Seriously
Recently your social media feed may have been flooded with headlines on the advances in AI or even AI-generated images. Text-to-image models such as DALL-E 2 and Stable Diffusion have become hugely popular. GPT-4, developed by OpenAI, is now the world’s best-performing large language model. ChatGPT, OpenAI’s chatbot built on GPT-4’s predecessor, reached 1 million users in its first week, a rate of growth much faster than Twitter, Facebook, or TikTok.
As AI demonstrates its ability to craft poetry, write code, and even pollinate crops by imitating bees, the governance community is waking up to the impact of artificial intelligence on the knotty problem of corruption. Policy institutes and academics have pointed to the potential use of AI to detect fraud and corruption, with some commentators heralding these technologies as the “next frontier in anti-corruption.”
Amid all the excitement, it can be easy to lose sight of the fact that AI can also produce undesirable outcomes due to biased input data, faulty algorithms, or irresponsible implementation. To date, most of the documented negative repercussions of AI have been unintentional side effects. However, new technologies present new opportunities to willfully abuse power, and the effect that AI could have as an “enabler” of corruption has received much less attention.
A 2023 Transparency International working paper I coauthored introduces the concept of “corrupt AI”—defined as the abuse of AI systems by public power holders for private gain—and documents how these tools can be designed, manipulated, or applied in a way that constitutes corruption.
Politicians, for instance, could abuse their power by commissioning hyper-realistic deepfakes to discredit their political opponents and increase their chances of staying in office. The misuse of AI tools on social media to manipulate elections through the spread of disinformation has already been well documented.
Yet corrupt AI does not just occur when an AI system is designed with malicious intent. It can also take place when people exploit the vulnerabilities of otherwise beneficial AI systems. This becomes of greater concern with the significant push worldwide toward digitalizing public administration. Algorithm Watch, for instance, concluded that citizens in many countries already live in “automated societies” in which public bodies rely on lines of code to make important social, economic, and even political decisions.
Digitalizing government services has long been recognized as a way to reduce officials’ discretion in decision-making and thereby constrain opportunities for corruption. Yet, as our paper demonstrates, replacing humans with AI brings novel corruption risks. Here are four reasons why the risk of “corrupt AI” should be taken seriously.
1. Deniability and Dissonance
People are more likely to behave in a corrupt manner when they are less likely to get caught, such as when they can hide behind plausible deniability. The risk of individuals breaking ethical rules to reap illicit benefits is even higher in circumstances where they are not directly confronted by victims—in other words, when there is a large psychological distance to the people affected by their unethical behavior.
According to research in behavioral science, the deployment of AI systems could heighten both risk factors. The complexity and autonomy of these systems, whose outputs often cannot be traced back to the input data in any way a human can follow, could make it easier for corrupt manipulation of the technology to escape detection. At the same time, the introduction of AI tools as an intermediary in decision-making processes can increase the psychological distance between perpetrator and victim.
The healthcare sector is one example of an area where these risk factors can undermine the potential benefits of AI. Doctors and other health sector workers are already being trained to use algorithms to help detect diseases and to assist with healthcare-cost estimates. Yet there is some indication that these systems can be easily fooled. By changing just a few pixels or the orientation of an image, doctors can trick AI image-recognition systems into producing faulty results, such as classifying a harmless mole as cancerous, in order to prescribe expensive treatment. Healthcare workers can similarly reap benefits by manipulating AI systems to classify patients as high-risk and high-cost. These concerns are not hypothetical: researchers writing in the journal Science have already warned about such adversarial attacks.
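To make the mechanism concrete, here is a minimal sketch of the kind of pixel-level perturbation attack described above, written against a generic PyTorch image classifier. The model, labels, and epsilon value are illustrative assumptions, not details from any real medical system or study.

```python
# Minimal sketch of a pixel-level adversarial perturbation (FGSM-style),
# showing how tiny input changes can flip a classifier's output.
# The model, labels, and epsilon below are hypothetical placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return a copy of `image` nudged by at most `epsilon` per pixel in the
    direction that increases the model's loss, i.e. toward misclassification."""
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = F.cross_entropy(logits, true_label)
    loss.backward()
    # Each pixel moves by at most `epsilon`, usually imperceptible to a human.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Hypothetical usage:
#   clean_pred    = model(mole_image).argmax(dim=1)          # e.g. "benign"
#   adversarial   = fgsm_perturb(model, mole_image, label)
#   attacked_pred = model(adversarial).argmax(dim=1)         # may flip to "malignant"
```

The point of the sketch is only to show how small the manipulation can be relative to the size of its effect on the output.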
2. Scaling Up to Affect Millions
The second reason to take the risk of corrupt AI seriously is its potential to increase the scale of damage caused by an act of corruption. If you bribe a person, you might influence 100 people; if you corrupt an algorithm, you can affect millions.
“Algorithmic capture” describes how AI systems can be manipulated to systematically favor a specific group. For example, tweaking the code of algorithms used in electronic procurement or fraud detection programs can steer lucrative public contracts to cronies or conceal wrongdoing by well-connected entities. While bribing an individual is usually about breaking the rules of the game to get illicit special treatment, corrupting an algorithm by bribing its developer or manipulating its code changes the rules of the game entirely. If an AI system is distorted to allocate resources in a particular way, such as licenses, permits, or tax breaks, a new corrupt “rule” becomes embedded in the entire system.
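As a deliberately simplified illustration of algorithmic capture, the hypothetical sketch below shows how a single buried condition in a procurement-scoring function can redirect contract awards. The vendor IDs, weights, and bonus are invented for illustration and do not describe any real system.

```python
# Hypothetical sketch of "algorithmic capture": a procurement-scoring
# function with a hidden tweak that systematically favors one bidder.
# All names and numbers are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Bid:
    vendor_id: str
    price: float          # total bid price in currency units; lower is better
    quality_score: float  # 0-100; higher is better

def score_bid(bid: Bid) -> float:
    """Ostensibly neutral scoring: weigh quality against price."""
    score = 0.6 * bid.quality_score - 0.4 * (bid.price / 1_000)
    # A single buried line changes the "rules of the game" for every tender
    # the system ever evaluates -- this is the corrupt manipulation.
    if bid.vendor_id == "CRONY-042":
        score += 15.0
    return score

bids = [Bid("CRONY-042", 960_000, 65.0), Bid("HONEST-007", 940_000, 70.0)]
winner = max(bids, key=score_bid)
print(winner.vendor_id)  # the favored vendor wins despite a worse offer
```

Unlike a one-off bribe, the tweak applies to every future tender the system evaluates, which is what makes the scale of harm so much larger.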
3. Fewer People to Blow the Whistle
The third reason is that replacing humans with AI in public administration reduces reporting and whistleblowing potential. When decision-making authority shifts toward AI, there are fewer people involved who could report instances of corruption. Moreover, humans working in settings where algorithms do the policing and reporting might receive less training, and thereby lose the skills and knowledge needed to detect and report cases of corruption.
4. Secret Code and Concealed Corruption
The final risk factor is opacity. When AI systems are implemented without involving citizens, and code and training data are not disclosed, the threat of corrupt abuse of these systems is higher. For example, investigative efforts have documented biases in face-detection algorithms, as well as AI systems used for hiring decisions.
Suppose the people developing and implementing such systems want to intentionally encode biases that favor certain demographic groups at a systemic level. In that case, the secrecy of code and data makes such deliberate abuse of algorithms very difficult to detect reliably. As most AI tools are developed by the private sector rather than state entities, reluctance to disclose commercially sensitive information, such as training data and underlying code, is widespread and hinders the auditing of these algorithms.
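For a sense of what independent auditing could look like once code and data are disclosed, here is a hypothetical sketch of a basic selection-rate disparity check across demographic groups. The field names and the four-fifths threshold (a common screening heuristic) are assumptions chosen for illustration, not a prescribed audit standard.

```python
# Hypothetical sketch of a check an outside auditor could run if decision
# data were disclosed: compare selection rates across groups to flag a
# systematically favored demographic. Field names are invented.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of dicts with 'group' and 'selected' keys."""
    totals, selected = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        selected[d["group"]] += int(d["selected"])
    return {g: selected[g] / totals[g] for g in totals}

def disparity_flags(decisions, threshold=0.8):
    """Flag each group whose selection rate falls below `threshold` times
    the best-treated group's rate (the 'four-fifths' screening heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}
```

Such checks are only possible, of course, when outsiders can actually see the decisions the system has made.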
In authoritarian regimes marked by a weak rule of law, even AI systems created to curb corruption can be abused for corrupt purposes. Take, for instance, the “Zero Trust” project implemented by the Chinese government to identify corruption among its workforce of over 60 million public officials by letting AI algorithms cross-reference 150 databases, including public officials’ bank statements, property transfers, and private purchases. While nominally intended to raise red flags that could indicate corrupt behavior, this kind of digital surveillance infrastructure can easily be abused by those who control it to advance their narrow private interests or further their political agenda.
What Can Be Done?
As ever broader swathes of our lives become regulated by AI, what safeguards can be put in place to ensure that we are not exposed to illicit—and often undetectable—abuses of power? Besides general suggestions like strengthening the rule of law, arguably the most promising countermeasure is facilitating checks and balances, ideally as an integral part of the development and deployment process.
One concrete challenge here lies in enforcement. How can private companies and public bodies be compelled to submit to oversight processes that may involve outsiders?
An important step would be to establish transparency regulations that mandate responsible sharing of code and data. Privacy can be safeguarded by releasing data in masked form; techniques like differential privacy add carefully calibrated noise so that individuals cannot be re-identified while the data can still be meaningfully analyzed. By increasing accessibility, such transparent digital infrastructure facilitates code audits, as it allows independent data scientists to inspect both code and data.
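As a rough sketch of the kind of masked release referred to above, the example below applies the Laplace mechanism, a basic building block of differential privacy, to a single aggregate count. The dataset, the query, and the epsilon value are illustrative assumptions.

```python
# Minimal sketch of the Laplace mechanism: noise is added to an aggregate
# query so that no single record can be confidently inferred, while the
# overall statistic stays useful for auditing. Data and epsilon are
# illustrative only.
import numpy as np

def private_count(records, predicate, epsilon=0.5):
    """Return a noisy count of records matching `predicate`.

    A count query has sensitivity 1 (adding or removing one person changes
    the result by at most 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical audit question: how many automated decisions were overridden?
decisions = [{"overridden": i % 7 == 0} for i in range(10_000)]
print(private_count(decisions, lambda r: r["overridden"]))
```

The design choice matters: the smaller the epsilon, the stronger the privacy guarantee and the noisier the published statistic, so regulators would need to set it deliberately.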
And it’s crucial that everyone has access, not just state authorities. Involving civil society, academics, and other citizens in the development, deployment, and improvement of AI systems is key—because oversight in public administration is vital to ensure these tools serve the public interest.
Nils Köbis is a senior research scientist at the Center for Humans and Machines, Max Planck Institute for Human Development.
Reprinted with permission from Nils Köbis / Transparency International.