Stupid robot article of the week, story of my life edition

Robot Consumers, Grow Up!

The problem is that, especially for Americans, this is about the only way to make robots palatable: we see them either as jokes or as fantastical beings that should do everything for us but never be fully trusted.

Thanks, Bill.

Addendum: The article also links to self-described robot psychiatrist Dr. Joanne Pransky, who, among other things, spoke out against the robot-suicide commercial during the last Super Bowl.

4 Comments

  1. Why shouldn’t they be trusted? Without consciousness, they shouldn’t ever do anything unpredictable (as long as we’re careful in our programming); that’s ultimate trust.

  2. Sorry, that was probably rude. I was just confronted with an apparent paradox: your statement suggests that you have never worked with computers before, but the fact that you are posting on the internet proves otherwise.

    To put a finer point on it, I’ll let Turing speak for me:

    It has for instance been shown that with certain logical systems there can be no machine which will distinguish provable formulae of the system from unprovable, i.e. that there is no test that the machine can apply which will divide propositions certainly into these two classes. Thus if a machine is made for this purpose it must in some cases fail to give an answer. On the other hand if a mathematician is confronted with such a problem he would search around and find new methods of proof, so that he ought to be able to reach a decision about any given formula. This would be the argument. Against it I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy. In other words, then, if a machine is expected to be infallible, it cannot also be intelligent.

    The very idea that a machine ought to be infallible is, in Turing’s own words, unfair to the machine. (A sketch of the diagonal argument behind this follows the comments.)

  3. Heh, I see your point. However, the key words are “careful in our programming”: I would hope that if and when we have robots on a scale where most people will recognize them AS robots (i.e. motile, humanoid, communicative, etc.), the programming is done with a bit more care and precision than, say, Windows Vista.
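
A quick note on the Turing passage quoted in comment 2: his argument rests on essentially the same diagonal trick that rules out a perfect halting decider. Below is a rough, illustrative sketch of that trick in Python; the names (decides, contrarian) are made up for the example, and the point is precisely that no infallible decides can actually be written.

    # Rough sketch of the diagonal argument behind Turing's point:
    # suppose a perfect decider existed, then build a program it has
    # to get wrong. Names here are illustrative, not a real library.

    def decides(program, argument):
        """Hypothetical oracle: True iff program(argument) would halt.
        No total, always-correct version of this can exist."""
        raise NotImplementedError("an infallible decider cannot be written")

    def contrarian(program):
        # Do the opposite of whatever the oracle predicts about
        # running `program` on its own source.
        if decides(program, program):
            while True:   # oracle said "halts", so loop forever
                pass
        return            # oracle said "loops", so halt immediately

    # Running contrarian on itself forces any candidate `decides` to be
    # wrong one way or the other; a machine that is never allowed to err
    # cannot answer every such question.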
