Author: Lucy Sparrow, Lecturer in Human-Computer Interaction, The University of Melbourne

Imagine scrolling through social media or playing an online game, only to be interrupted by insulting and harassing comments. What if an artificial intelligence (AI) tool stepped in to remove the abuse before you even saw it?

This isn't science fiction. Commercial AI tools such as ToxMod and Bodyguard.ai are already used to monitor interactions in real time across social media and gaming platforms, detecting and responding to toxic behaviour.

The idea of an all-seeing AI monitoring our every move might sound Orwellian, but these tools could be key to making the internet a safer place. However, for AI…
