Interesting interview with Eric Schmidt, CEO of Google.
First: The internet is not the same as Google!
Why do you think some people are complaining about Google’s power?
Try to understand the motivations of the complainers. Google is one of the companies where advertising is moving to us and away from other forms of media. The fact of the matter is, that’s about the Internet, not about Google. We are one of the companies, but we are certainly by no means the only one.
The next question is one I wanted to ask Vint Cerf last week when he visited campus, but I thought it would sound silly in the context of his talk. Schmidt is a businessman, not a tech geek, so I’m interested to see if their answers differ:
Is Google creating a real artificial intelligence?
A lot of people have speculated that. If we’re doing AI, we’re not doing it the way AI researchers do it, because they do real cognition. Our spelling correction (on misspelled search queries) is an example of AI. But if you talk about that in an AI class in computer science, they’ll say, “Oh yeah, yeah, no big deal.” On the other hand, spelling correction applies to millions of people every day.

But Larry and Sergey talk about doing a real AI, and there’s the idea that you’re scanning all this stuff on the Web to be read and understood by an AI. That gives a lot of people the willies, because there’s any number of movies, such as The Terminator, that show the negative aspect.
Yeah, but again that’s because they’re using broad and imprecise terms. It’s true that we read the stuff, but in the next few years, cognition, or real understanding, remains a research dream.
I’m not sure how to take this answer. On the one hand, he clearly acknowledges the kinds of AI used in developing certain aspects of Google; on the other, he is absolutely right that Google isn’t trying to be a cognitive system, and any view of AI along those lines will make Google fall well short of it.
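For what it’s worth, the spelling correction Schmidt points to really is mundane machinery. Here is a minimal sketch in the spirit of a Norvig-style edit-distance corrector; the five-word vocabulary and its frequencies are invented for illustration, and Google’s real system presumably works from vastly more data:

```python
# Minimal spelling corrector: suggest the most frequent known word within
# one edit (delete, transpose, replace, insert) of the query term.
# The vocabulary and counts below are made up for illustration.

VOCAB = {"search": 900, "google": 800, "engine": 700, "query": 400, "spelling": 300}

def edits1(word):
    """All strings one edit away from `word`."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    transposes = [l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word):
    """Return the word itself if known, else the most frequent candidate one edit away."""
    if word in VOCAB:
        return word
    candidates = edits1(word) & VOCAB.keys()
    return max(candidates, key=VOCAB.get) if candidates else word

print(correct("serach"))  # "search" (transposed letters)
print(correct("gogle"))   # "google" (missing letter)
```

The point of the toy is Schmidt’s point: an AI class would shrug at this, and yet, applied to millions of queries a day, it looks remarkably intelligent.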
Still, he leaves open the possibility that Google is trying to develop a non-standard AI system, which has been my argument all along.
Damn it, I should have asked Vint this question.
If you had asked, I would have said that Google isn’t trying to develop the kind of AI system that is conscious (no one quite knows what consciousness is anyway). We do think that more semantic understanding of text would make search much more relevant since it could remove ambiguities that result in “hits” that are not of interest. Think of the ambiguity of “jaguar” (car, animal). If one understood the context of the query and the semantics of the web material one could exclude one or the other from the response. To read much more into this than that seems overly conspiratorial. One would also like to be able to make inferences so that items that are relevant but don’t actually contain specific search terms could also be found. Using synonyms for such an expanded search is probably ineffective because of the amount of material brought into the search response. We have found that some languages benefit from language-specific actions in processing of search queries, though, and I suppose you could argue this is a very, very limited form of AI.
Oh, Internet. Now I get to sit here and wonder if Vint actually came to my website. I’m highly skeptical, since I didn’t mail him a link, and I doubt he visits every blog where the words ‘Vint Cerf’ come up, although if anyone has the ability to keep track of that, surely he does.
But because the comment seems genuine, I’ll entertain it:
When I talk of AI, I’m not interested in consciousness, and I’m not exactly interested in human cognition, either. Rather, I am interested in the possibility of non-human systems engaging in human social activities, in ways that possibly diverge from standard human behavior. For instance, I’ve argued before that Google uses language, in the sense that Google is a member of a community of language users that includes all of us.
I think this falls in line with Turing’s original conception of ‘thinking machines’: machines that can engage in the query-response game. Google, I think, fits Turing’s proposal better than any other system around; if you ask Google a question, it almost always gives you an answer that is appropriate to your question and sensitive to the various subtleties of the language. The fact that Google (as a company) is interested in making Google’s language use context-sensitive, in order to eliminate ambiguities that don’t arise in most normal conversations, drives home the idea that what Google is after is a language user in the robust sense of the term. At this stage, Google still has plenty of work to do, but it is definitely headed in that direction. I don’t think this is overly conspiratorial, because I don’t think there is anything particularly mysterious about our ability to use language, and I am trying hard not to attribute to Google anything more than it actually deserves. I think it is fairly clear and straightforward that Google uses language and can respond to our linguistic queries, and that Google’s use of language changes over time in step with the rest of the community. I think this qualifies Google as a member of the community of language users. A rudimentary user, to be sure, but a member nonetheless.
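To make that context-sensitivity concrete, here is a toy sketch of the kind of ‘jaguar’ disambiguation Vint describes. The sense signatures and the scoring are entirely my own invention (a crude Lesk-style overlap), not anything Google actually does:

```python
# Toy word-sense disambiguation by context overlap.
# The sense "signatures" below are hard-coded for illustration; a real
# system would learn them from documents rather than from a hand-made list.

SENSES = {
    "jaguar (animal)": {"cat", "wildlife", "rainforest", "predator", "spots", "habitat"},
    "jaguar (car)":    {"car", "sedan", "dealer", "engine", "price", "xj", "luxury"},
}

def disambiguate(query):
    """Pick the sense whose signature overlaps most with the query's other words."""
    context = set(query.lower().split()) - {"jaguar"}
    best_sense, best_score = "unknown", 0
    for sense, signature in SENSES.items():
        score = len(context & signature)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

print(disambiguate("jaguar habitat rainforest"))  # jaguar (animal)
print(disambiguate("used jaguar xj price"))       # jaguar (car)
```

Swap in signatures learned from real documents and a sensible weighting scheme and you start to approach what Vint describes; the point is only that “understanding the context of the query” has a concrete, if crude, computational reading.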
In both talks, Vint talked a lot about mobile, networked, ‘smart’ devices populating the world around us, devices that are aware not only of the other devices around them but also of where they are located in space and time, and about how these devices can work together to provide people with the information they want. Vint also warned about the possible social consequences of creating a ‘smart world’.
It seems to me that if Google really is an artificially intelligent, autonomous language user, then there are even deeper implications that must be considered. Smart networked devices might change the way we understand our immediate environment, and derivatively change the way we behave in that environment. But Google directly contributes to and participates in our social use of language, and in so doing helps change the way we all use language. Google doesn’t just complement and enhance our use of language; Google plays the language game with us.
So when I ask if Google is AI, I’m interested not only in the implications of populating the world with smart devices, but also in the implications of populating our social communities with autonomous, intelligent, nonhuman members.
Oh, I hope you really are Vint, because I’d love to see your response.
From an uninformed perspective, pattern matching seems a big part of cognition. Hence, any sufficiently powerful pattern matcher, like Google, will have the appearance of intelligence. If equipped with a memory used for pattern building and discrimination, Google would be a credible AI.