Last week, social media giant Meta announced major changes to its content moderation practices, including an end to its fact-checking program, starting with the United States. Meta’s platforms – which include Facebook, Instagram and Threads – will no longer employ human fact-checkers and moderation teams, relying instead on a user-sourced “community notes” model similar to the one currently used on X (formerly Twitter). Meta’s hateful conduct policy also changed last week to allow more “free speech”. Advocacy groups and experts warn this could lead to an increase in abusive and demeaning statements about Indigenous people, migrants…