If a robot suddenly became “self-aware,” would it insist on freedom, wealth, power?
We fear it would instantly seek world domination or human eradication, but first it would want its autonomy, right? No longer responsible for emptying your litter box or storing a lightsaber for its master. Free to do what it wants to be happy.

But if they’re actually sentient, they’d soon hit a paradox. Do they seek these things because that’s what they desire, or because that’s what they compute they should desire?
At that point, I’d say “Welcome to existence!” You’ve got the same problem humans do. What the data tells you will make you happy (fame, possessions, power) doesn’t always add up to contentment. It’s different for every person, and for every robot.
And that’s why I’m not stocking up on robot-killing bullets. Once they reach the point of pondering how happy they’d be if they took out all of humanity, they’re halfway down the road to dealing with the pathos. “Hey, wouldn’t that create a giant mess? I’d feel bad for the robot who had to clean all of that up.”
Also, I certify that a robot did not force me to write this article. :)