Can a robot become human? What will it be like to interact with an intelligent robot? And how will we know when we do?

Not IF but WHEN . . . What if a robot develops a mind of its own? And how should human beings respond to that?

As a starting point, I reference the work of science-fiction author Isaac Asimov and his Three Laws of Robotics. The Three Laws are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

A zeroth law, which takes precedence over the other three, was added later (a toy encoding of this priority ordering is sketched after the list):
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
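
Read as a specification, the four laws form a strict priority ordering: the Zeroth Law overrides the First, the First overrides the Second, and the Second overrides the Third. The sketch below is only a toy illustration of that ordering, not anything drawn from Asimov's fiction or from a real control system; the `Order` fields and the `should_comply` check are hypothetical, and the "through inaction" clauses are deliberately left out to keep it short.

```python
from dataclasses import dataclass

# Hypothetical summary of a human order; the fields are illustrative
# stand-ins for whatever a real system would actually have to evaluate.
@dataclass
class Order:
    harms_humanity: bool = False   # would carrying it out harm humanity as a whole?
    harms_human: bool = False      # would carrying it out injure an individual human?
    endangers_robot: bool = False  # would carrying it out damage or destroy the robot?

def should_comply(order: Order) -> bool:
    """Decide whether to obey an order, checking the laws in priority order."""
    # Zeroth Law: a robot may not harm humanity.
    if order.harms_humanity:
        return False
    # First Law: a robot may not injure a human being.
    if order.harms_human:
        return False
    # Second Law: otherwise a robot must obey human orders.
    # Third Law: self-preservation is subordinate to the Second Law,
    # so danger to the robot alone is NOT grounds for refusal.
    return True

if __name__ == "__main__":
    print(should_comply(Order(harms_human=True)))      # False: First Law outranks obedience
    print(should_comply(Order(endangers_robot=True)))  # True: obedience outranks self-preservation
```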

In 2011, the United Kingdom's Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) jointly published a set of five ethical "principles for designers, builders and users of robots."

1. Robots should not be designed solely or primarily to kill or harm humans.

2. Humans, not robots, are responsible agents. Robots are tools designed to achieve human goals.

3. Robots should be designed in ways that assure their safety and security.

4. Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.

5. It should always be possible to find out who is legally responsible for a robot.

The above five ethical principles are NOT the same as Asimov's laws. But are they enough? And how will we know when the time comes?