Armed robots that could target and kill humans autonomously should be banned before they are used in warfare, campaigners said.
The potential future weapons would be able to select a victim on the battlefield without any human intervention, crossing moral and legal boundaries, Human Rights Watch warned.
The so-called 'killer robots' could also be used by autocrats in deadly attacks on their own people, and would be incapable of exercising compassion, campaigners say.
Steve Goose, director of the arms division at Human Rights Watch, said: "Lethal armed robots that could target and kill without any human intervention should never be built.
"A human should always be 'in-the-loop' when decisions are made on the battlefield.
"Killer robots would cross moral and legal boundaries, and should be rejected as repugnant to the public conscience."
He spoke out as the organisation launched its global Stop Killer Robots campaign calling for a pre-emptive and comprehensive ban on fully autonomous weapons.
It suggests the prohibition could be achieved through an international treaty and national laws.
During the past decade, the use of unmanned armed vehicles or drones has dramatically changed warfare.
Now rapid advances in technology permit nations with high-tech military capabilities - including the UK, US, China, Israel and Russia - to move towards systems that would give machines greater combat autonomy, Human Rights Watch said.
But if one or more countries choose to deploy such weapons, others may feel compelled to abandon policies of restraint, leading to a robotic arms race, it argues.
Goose said: "Many militaries are pursuing ever-greater autonomy for weaponry, but the line needs to be drawn now on fully autonomous weapons.
"These weapons would take technology a step too far, and a ban is needed urgently before investments, technological momentum, and new military doctrine make it impossible to stop."