The pursuit of Artificial Intelligence (AI) carries the ethical problems inherent in any technological progression. AI also opens up a variety of other ethical problems that humanity will have little time to consider. Whether machines will reach the level of human intelligence is not a question of "if" but of "when". Current predictions place Strong AI within the next 10 - 40 years.
Technological progress is accelerating exponentially, and obsolescence is occurring at unprecedented rates. Whether or not that obsolescence is deliberately built into products, the trend is not only causing difficulty for consumers but also creating huge implications for the future of legal rights for machines.
To make accounting for the ethical issues of AI even harder, mankind is hard at work across different AI fields despite having only vague definitions of what "intelligence" and "understanding" actually comprise, let alone any serious consideration of the ethical issues inherent in the field. When will we cross the line and say, "Yes, model 4040 has rights because it is intelligent and has emotional simulation; model 4020, on the other hand, was merely intelligent and doesn't deserve any rights"? One machine is given rights and another is not. What happens if a machine that has been given rights later becomes obsolete? Is the best way to handle the issue to bury it and never grant machines societal rights, even though they may be more intelligent than we are and have more feeling than we could even comprehend? That certainly sounds dangerous, and sounds like a good way to piss off a new race.
Ethics as a field is concerned with mapping out the "gray" areas in ways that satisfy societal norms - or, to put it more simply, with figuring out what is right and what is wrong. Despite our intention to categorise everything into neat boxes, there is truly no hope of doing so in any field, let alone AI. "What is right and what is wrong" is based on arbitrary value systems carried through generations of society to help order the chaos that surrounds us. What a society defines as right and wrong is currently whatever is passed by governments and ruling parties - usually in the form of legislation. The problem is that technology is moving at break-neck speed, and definitions of right and wrong have no chance of keeping up.
When the time comes to decide on the legal rights of machines, the question itself will struggle to exist, as we will be facing more chaos than we can handle. There is, however, one ethical question we face right now: if technological progression is leading us to chaos, should we stop it, or is the point moot because the cat is already out of the bag?