Author: Aidan Kierans, Ph.D. Student in Computer Science and Engineering, University of Connecticut

Ideally, artificial intelligence agents aim to help humans, but what does that mean when humans want conflicting things? My colleagues and I have developed a way to measure how well the goals of a group of humans and AI agents align. The alignment problem – making sure that AI systems act according to human values – has become more urgent as AI capabilities grow. But aligning AI to humanity as a whole seems impossible in the real world, because everyone has their own priorities. For example, a pedestrian might want a self-driving car to slam on the brakes if an…