Isaac Asimov's Three Laws are wonderful, and I am totally convinced that they will prevent robots from ever doing us harm, based on a single simple assumption: that every government leader is a peace-loving pacifist totally committed to these principles. Umm...
No, sadly, I am quite sure that governments will build autonomous killing machines. Indeed, I believe autonomous drones, allowed to launch attacks without anyone a few thousand miles away even pressing a button, are already under development. Robot soldiers that do the same will follow close behind.
"An artificial intelligence is capable of programming itself. While the directives can be implanted in the machine, there is nothing in the three rules that prevents a robot from creating another robot without those rules."
Surely building a machine with the intent of killing humans violates the First Law.
They might be able to build it, since doing so does not directly harm a human, but the "inaction" clause of the First Law would oblige them to stop it before it kills.