In 2014, the British philosopher Nick Bostrom published a book about the future of artificial intelligence (AI) with the ominous title Superintelligence: Paths, Dangers, Strategies. It proved highly influential in promoting the idea that advanced AI systems – “superintelligences” more capable than humans – might one day take over the world and destroy humanity. A decade later, OpenAI boss Sam Altman says superintelligence may be only “a few thousand days” away. A year ago, Altman’s OpenAI co-founder Ilya Sutskever set up a team within the company to focus on “safe superintelligence”, but he and his team have now raised a…