Author: Mark Tsagas, Senior Lecturer in Law, Cybercrime & AI Ethics, University of East London

As artificial intelligence (AI) becomes more powerful – even being used in warfare – there’s an urgent need for governments, tech companies and international bodies to ensure it’s safe. And a common thread in most agreements on AI safety is the need for human oversight of the technology. In theory, humans can operate as safeguards against misuse and potential hallucinations (where AI generates incorrect information). This could involve, for example, a human reviewing content that the technology generates (its outputs). However, there are inherent challenges to the idea of humans acting as an effective check on computer systems, as a…