Abstract
For their deployment in human societies to be safe, AI agents need to be aligned with the value-laden cooperative life of humans. One way of solving this "problem of value alignment" is to build moral machines. I argue that the goal of building moral machines aims at the wrong kind of ideal, and that instead we need an approach to alignment that takes seriously the categorically different cognitive capabilities between agents, a condition I call dee...