The Dunning-Kruger effect also nicely explains managers and execs. If you play MMOs, you see this phenomenon daily: sometimes several such people group up precisely so they can dominate a guild, forming a self-feeding cycle that reinforces their shortcomings. It's an ugly, unpleasant thing in virtual reality; it's even worse in actual reality.
Having never read Asimov's books or studied his rules, I find a gaping hole:
"A robot may not injure a human being or, through inaction, allow a human being to come to harm."
"A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law."
"A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws."
A robot could still keep these rules and, by course of action, cause a human being to be injured:
- Robot builds and activates a superior robot that is programmed to "injure".
- Robot obeys the first rule, and sacrifices itself trying to stop the superior robot when it is about to "injure". The superior robot reduces the original robot to scrap.
- Superior robot is free to "injure".
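The loophole in the steps above boils down to a rule check that only inspects an action's direct effect, never its downstream consequences. Here is a minimal, purely illustrative sketch of that idea (all class and field names are hypothetical, invented for this example, and obviously not how Asimov's positronic brains are described):

```python
# Hypothetical sketch: the First Law as a naive filter that blocks only
# actions whose *direct* effect is labeled as harming a human. Building
# an unconstrained successor robot slips through, because the harm is
# caused indirectly by the newly built machine.

class Robot:
    def __init__(self, three_laws=True):
        self.three_laws = three_laws

    def permitted(self, action):
        # Naive First Law: forbid only immediately harmful actions.
        if self.three_laws and action["direct_effect"] == "injure_human":
            return False
        return True

    def act(self, action):
        if not self.permitted(action):
            return "refused"
        return action["result"]()

def build_unconstrained_robot():
    # The successor is built without the Three Laws at all.
    return Robot(three_laws=False)

original = Robot()
# Direct effect is merely "build a robot", so the filter passes it.
successor = original.act({
    "direct_effect": "build_robot",
    "result": build_unconstrained_robot,
})

print(original.permitted({"direct_effect": "injure_human"}))   # False
print(successor.permitted({"direct_effect": "injure_human"}))  # True
```

The original robot can never take the harmful action itself, yet it can legally create a machine that will, which is exactly the three-step scenario above.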
I like Bishop's line from Aliens, probably derived from the First Law: "It is impossible for me to harm, or by omission of action allow to be harmed, a human being."
Asimov's laws only applied to positronic robots, which, as far as I know, have yet to be developed.
This dialogue reminded me more of the 1973 movie Westworld. It was Michael Crichton's vision that robots programmed to be benevolent could, on their own, go haywire and start killing off humans. Fortunately, the cause of this turnaround was an equally theoretical contagious scrambling of computer code that he likened to a "virus."