AI systems make decisions that affect millions. Should utilitarian principles—maximizing overall well-being—be the ethical framework, or does this risk ignoring individual rights and minority protections in favor of collective outcomes?
I think utilitarianism can offer a helpful starting point for AI ethics, but it shouldn't be the only framework guiding high-stakes decisions. On one hand, its focus on maximizing overall well-being lines up with what we usually want from technology: safer roads, better healthcare, fewer mistakes, and more efficiency. It also gives designers concrete goals, like reducing harm or improving quality of life.