In 2017, the city of Rotterdam in the Netherlands deployed an artificial intelligence (AI) system to predict how likely individual welfare recipients were to commit fraud. In analysing the data, the system learned biased patterns: it disproportionately flagged as “high risk” people who were young, identified as female, had children, or had low proficiency in Dutch. Rotterdam suspended the system in 2021 after an external ethics review, but the episode demonstrates what can go wrong when governments adopt AI systems without proper oversight. As more local governments turn to AI in an effort to provide real-time, personalised services for residents, a…