
Artificial Intelligence Needs More than Consensus
In pursuit of financial gain, tech companies appear to blur the lines between ethical and unethical uses of Artificial Intelligence, according to respected academics and researchers.
By Archana Khatri Das
The capability of artificial intelligence to perceive, reason, learn and solve problems, much like human cognitive functions, is improving the productivity and efficiency of operations across multiple domains, including transportation, agriculture, education, medicine, healthcare, criminal justice, safety and national security.
According to a recent report by global consultancy Accenture, businesses could increase their profitability by an average of 38% by 2035 if they incorporate AI into their operations. The report added that the introduction of AI could generate additional gross value added (GVA) of USD 14 trillion across 16 industries in 12 economies.
Many AI applications, such as facial recognition, automated learning, language translation and processing, robotics, game-playing and recommendation engines, are growing in usage, with many more in advanced stages of development and expected to reach the market soon.
While businesses are capitalising on AI to gain meaningful consumer insights and maximise productivity, the absence of a uniform global law on ethical usage has given industry researchers and academics cause for alarm over the potential misuse of the technology.
Most prominently, Elon Musk, Bill Gates, the late Stephen Hawking and Tim Berners-Lee have all joined a growing chorus warning of the potential dangers of unrestrained AI usage.
According to philosopher Thomas Metzinger of Johannes Gutenberg University Mainz in Germany, big tech indulges in what he calls “ethics washing” and wields too much influence over proposed industry regulation of AI. In Metzinger’s view, big tech companies’ ethical debates on AI are red herrings designed to delay policy formation and regulation worldwide.
Metzinger was also part of the EU expert group that tabled the ‘Ethics Guidelines for Trustworthy AI’, a benchmark for organisations developing and deploying applications that use Artificial Intelligence. Paul Nemitz, Principal Advisor in the European Commission’s Directorate-General for Justice and one of the architects of the European Union’s General Data Protection Regulation, agrees with Metzinger’s position. In his 2018 paper ‘Constitutional Democracy and Technology in the Age of Artificial Intelligence’, Nemitz posits that Artificial Intelligence is completely controlled by the tech industry.
Nemitz also claims that big tech players wield excessive power through their hold on money, the infrastructure of public debate, vast stores of public and personal data, and R&D in AI, all of which may threaten the survival of democracy.
The primary charge of ethical misuse levelled at big tech’s use of Artificial Intelligence centres on a lack of transparency in data handling. Everything from how algorithms are built to the stages of research and investment by big tech appears to be shrouded in secrecy.
Some experts argue that even in seemingly innocuous situations, using AI without disclosure (chatbots, for example) amounts to a deceptive practice. Currently, no law makes it mandatory for organisations to disclose when a machine is engaging a human.
For example, Artificial Intelligence applied with malicious intent can supercharge propaganda campaigns: generative adversarial networks (GANs), a machine-learning technique in which two models are trained against each other, enable the creation and dissemination of forged audio, video, text and images with unprecedented ease. The same techniques can be used by bad actors to deploy highly customised phishing attacks. US-based startups like Meograph, Topaz Labs and Modulate have commercialised this kind of technology and currently offer the market the ability to create duplicated and synthetic media.
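To make that mechanism concrete, the sketch below shows the adversarial training loop at the heart of a GAN: a generator learns to forge samples while a discriminator learns to flag them, until the forgeries become hard to distinguish from real data. This is a minimal illustration in PyTorch on toy one-dimensional data, not a reconstruction of any commercial system; all names, sizes and parameters are illustrative.

```python
# Minimal GAN sketch (PyTorch): a generator learns to mimic a 1-D
# Gaussian distribution while a discriminator learns to tell real
# samples from forged ones. Illustrative toy example only.
import torch
import torch.nn as nn

latent_dim = 8

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1)
)
discriminator = nn.Sequential(
    nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: N(3, 0.5)
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator step: label real samples 1, forged samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator call forgeries real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```

Scaled up from toy numbers to images, audio or video and trained on large datasets, this same adversarial dynamic is what makes convincing forgeries cheap to produce.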
The AI Now Institute, an interdisciplinary research centre, is currently tracking the impact of AI on human rights, labour and machine-learning bias. Separately, the Stanford-led AI Index Report tracks, collates, distils and visualises data related to AI.
In addition, OpenAI and the World Economic Forum have highlighted what they dub a ‘diversity crisis’ in AI: the fear that, if human biases are allowed to creep into AI algorithms unchecked, the result may be social upheaval.
The AI Now 2019 report highlighted how facial recognition systems miscategorise non-white people and discussed chatbots learning misogynistic and racist language. Reports of bias creeping into AI algorithms against various communities appear to be on the rise.
Algorithmic decision-making is set to become pervasive in the near future. Unless AI applications have inbuilt mechanisms to ensure their algorithms are free of bias, there is a substantial risk that systemic injustice becomes, so to speak, codified.
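As a hedged illustration of what such an inbuilt check might look like, the sketch below audits a model’s decisions for demographic parity, comparing positive-outcome rates across groups. It assumes pandas; the dataset, column names and threshold are hypothetical.

```python
# Minimal bias-audit sketch: compare approval rates across groups
# (demographic parity). Data, columns and threshold are hypothetical.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   0,   0,   1,   0,   1],
})

rates = decisions.groupby("group")["approved"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"demographic-parity gap: {gap:.2f}")
if gap > 0.1:  # illustrative threshold
    print("Warning: approval rates differ materially across groups.")
```

Real audits are considerably more involved, not least because different fairness metrics can conflict with one another, but even a simple rate comparison like this can surface the kind of systemic skew the report describes.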
On the governance front, many countries have rolled out plans to encourage the use and development of AI-based technology, yet have steered clear of any clear-cut guidelines to regulate it.
Global bodies like the World Economic Forum (WEF), the Organisation for Economic Co-operation and Development (OECD) and the European Union (EU) have published broad recommendations outlining how laws might govern AI, including protections for human rights, security, safety and fairness. The recommendations also call for transparency, trustworthiness and accountability in R&D and in the maintenance and use of consumer data.
The proposed recommendations may pave the way for a global consensus on a common policy governing artificial intelligence. How such a policy might be enforced is another matter for consideration.