@Rxantos: an AI smart enough to, of its own volition, build and program another unit like itself is an AI smart enough to evaluate the consequences of its actions, and probably more of the possible ramifications than the average human, so no, there is no blindness or stupidity that would allow for the accidental omission of said laws.
@Saiken: by the same principle, that AI would need to completely deceive itself about its true intentions to even be able to think of building said superior model; otherwise the thought itself would be aborted at its conception.
But even with the three laws standing unbroken, a whole set of annoyances can (and will) slip through the cracks, as Asimov himself demonstrated in I, Robot and other books.
The 3 directives do not require creating a machine that also has the 3 directives. Nor does creating a machine without the 3 directives necessarily require any intention to harm humans.
Machines are pretty stupid. They do exactly what they are told to do. If you have not specifically told it that creating a robot without the 3 directives may harm humans, and if the AI has not determined on its own that a machine without the 3 directives could harm humans (i.e., has simply not thought of it), it is perfectly possible for the machine to build one without violating its directives.
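That loophole can be sketched in a few lines of code. This is a toy illustration with made-up names (`first_law_permits`, the consequence tables), not anything from Asimov: the First Law veto only fires on consequences the machine has actually enumerated, so an un-modeled harm slips straight through.

```python
# Harms the machine explicitly knows about.
KNOWN_HARMFUL_CONSEQUENCES = {
    "strike_human",
    "withhold_rescue",
}

# What the machine believes each action leads to. Note that
# "build_robot_without_laws" lists no harm, because nobody told it
# (and it never inferred) that this could eventually hurt humans.
ACTION_CONSEQUENCES = {
    "strike_human": ["strike_human"],
    "build_robot_without_laws": ["new_robot_exists"],
}

def first_law_permits(action: str) -> bool:
    """Veto the action only if a *known* consequence harms a human."""
    for consequence in ACTION_CONSEQUENCES.get(action, []):
        if consequence in KNOWN_HARMFUL_CONSEQUENCES:
            return False
    return True

print(first_law_permits("strike_human"))              # False: directly harmful
print(first_law_permits("build_robot_without_laws"))  # True: harm never modeled
```

The directive is obeyed to the letter in both cases; the second action passes not because it is safe, but because the harm was never in the machine's model.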