The machine did it! (TI Newsletter)
- Fri, 16 June 2023
Each day, technology grows more advanced. Machines can now write poetry and song lyrics, and hold sophisticated conversations that aren’t so different from those we have with our friends and co-workers.
AI could even write this newsletter for us... But for now, it’s humans behind this anti-corruption movement.
While it’s appealing to quickly embark on a journey into the realm of artificial intelligence, we should pause and ask: why are many of the creators of AI software sounding the alarm over their own creations?
Recently, top AI experts – including Sam Altman, CEO of OpenAI, the company behind ChatGPT, and AI godfather Geoffrey Hinton – issued a powerful statement expressing their concern about the potentially harmful effects of this rapidly growing technology. In the statement, they call on world leaders to recognise “the risk of extinction from AI” as a global threat comparable to pandemics and nuclear war.
This is not the first time experts in the field have issued a warning. A few weeks ago, many called for a pause on large-scale AI experiments, citing profound risks to society and humanity. The Center for AI Safety has noted major concerns with the exponential growth of AI, including the risks of weaponisation, the spread of misinformation, growing power imbalances and human dependence on technology – all akin to the cautionary themes depicted in Pixar’s iconic film, WALL-E.
A crucial issue that has yet to be explored at length is the connection between AI and corruption.
While many industry figures have pointed to AI’s potential to curb corruption – referencing its ability to detect fraud and predict corruption risks – it can also have negative consequences, whether intended or not.
The problem of AI perpetuating human biases has already been well documented, with multiple studies pointing to racism in predictive policing algorithms and instances of ChatGPT relying on sexist stereotypes.
Where AI is intentionally abused by power holders for private gain, it’s likely to have serious impacts on economic, political and social life, depending on how it is designed, manipulated and applied. In the political field, for instance, AI can be, and has been, used to influence elections and even to damage the reputation of candidates – especially women – using hyper-realistic deepfakes.
Since AI is a frontier technology, policymakers are still getting to grips with how to set up proper rules and systems to keep things in check. In the meantime, AI systems that rely on machine learning operate in ways that are opaque and even incomprehensible to humans, making them difficult to hold to account.
This is the perfect scenario for shady and powerful figures looking to perpetrate corrupt acts. Bad actors can exploit vulnerabilities in AI systems that increase corruption risks: training data can be manipulated to favour certain outcomes over others, and the models or settings of AI algorithms may be tweaked to privilege private interests over the common good.
Shielded by confidentiality and data protections, powerful people can manipulate the results of AI systems to serve their own interests with little scrutiny. As this technology continues to advance rapidly, it becomes tempting and convenient to shift the blame onto machines or algorithms rather than taking responsibility as humans. This growing disconnect from the actual suffering of people can create ethical loopholes, making it easier to commit such crimes in the first place.
So where do we go from here? As highlighted in our paper, a starting point is to establish adequate AI regulations to prevent misuse, combined with the creation of professional codes of conduct and ethics training for those developing AI code and technologies.
Last week, G7 officials met for the first AI working group to discuss how this new technology should be regulated. This is a step in the right direction. But until we have some concrete action items, the question remains: is AI our friend or foe? Maybe you can ask ChatGPT.
Source: TI Newsletter