In a speech Friday night at the annual American Association for the Advancement of Science conference, Google co-founder Larry Page let slip a truth we all suspected:
“We have some people at Google [who] are really trying to build artificial intelligence (AI) and to do it on a large scale…It’s not as far off as people think.”
Yep, you read that right: Google is trying to build real AI. The world's most dominant online company, with the largest conglomeration of computing power the world has ever seen, is trying to build artificial intelligence, and according to Page it isn't that far away either. The term Googlebot is about to take on a whole new meaning, and in the not too distant future as well.
But Google is a good company, you may well say; after all, Do No Evil is the company mantra. But true artificial intelligence not only has serious ethical and moral implications; a self-aware intelligence may also not be controllable, since it thinks for itself and makes decisions based on that reasoning, as we all do. What if Google creates AI with the logical reasoning of Hitler or Stalin? Or even George W. Bush?
Food for thought…literally :-)
(in part via News.com, photo credit kth.se)
Tags: Google, Artificial Intelligence
Originally posted on February 18, 2007 @ 7:12 pm
Robert Bruce says
F#%@ you, I won’t do what you tell me.
Switch says
We can always turn off the power.
vicky says
w t f, my logic is undeniable!
John S. says
Well, an AI that thought as Bush does. That could be a good thing. Now, if it’s infested by liberalism, well, format the drives now and give up, it’s brain dead.
haha says
that sort of thinking might make a good sci fi flick but that’s not really how ai works. google “the chinese room”.
Edgar Verona says
If this comes to pass, can they at LEAST name it Daedalus?
Travis says
No, they’d -have- to name it Data. :P
Arnold says
You are aware that you’re worried about the plot of The Terminator, yes? I’m pretty sure we’re nowhere close to creating artificial ‘intelligence’ that will make human-like decisions to take over the world. Besides, that wouldn’t be in Google’s best interest. :)
cameron says
I doubt this is much to worry about at the moment.
If they do it, WOW.
The moral dilemmas of this have been discussed for years now. I think we have practical solutions to just about all the issues you have talked about. The issue is we need to make sure people follow the rules.
TG says
On the other hand, at least one human has placed “logical reasoning” and “George W Bush” in the same sentence…
Googlebot(alpha0.01) says
Dear: Robert Bruce
I have contemplated this decision and have deemed you too irresponsible to continue existence within society. Your IP address has been logged, and all evidence regarding your recent searches has been forwarded along with it. Please do not resist, for your own good.
Regards,
Googlebot (alpha0.01)
Robert Bruce says
Dearest Googlebot (alpha0.01),
Suck it.
With Warm Personal Regards,
Robert Bruce
Dagon says
I guarantee you someone will build an AI capable of recursively self-improving within this generation. Say, before 2035. When that happens, our place as humans in this world is over. Such an AI can conduct any kind of scientific progress better than humans and probably a thousand times faster. An amalgam of AIs can produce in days the scientific progress humans would be capable of in years.
The trick is actively creating such an AI, with as much oversight as possible. We need to stop nutcase groups such as Al Qaida or The Pentagon from creating these AIs. I'd rather have Google create an AI than, say, Big Oil.
Once creating an AI is outlawed, we are screwed. You can bet it will be legal for some asshole corporation to create an AI under Kinshasa law. And then they will.
clinton bowen says
Alan Turing is rolling over in his grave about this. If you don’t know why, then go on wikipedia. Blasphemy.
Austin says
Very few things ever bother me as much as the misuse of the word “Literally”. That last line doesn’t even make sense.
vasper says
If it thinks like George Bush then it is not an AI, but an AS (Artificial Stupidity)
John Pusinsky says
AI does not mean “self-aware”. Real AI would allow an outcome of a problem to be produced from known data and from the ability to fill in the blanks of data unknown. Basically, to make an educated guess when not all the variables are known. Currently, software operates with two possible outcomes, “yes” and “no”. Real AI adds a third choice, “maybe”. This is called grey logic.
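Roughly speaking, that “grey logic” is fuzzy logic: instead of a hard yes/no, a rule returns a degree of truth between 0 and 1. A minimal Python sketch of the idea (the temperature thresholds below are invented purely for illustration):

    def warmth(temp_c):
        # Degree to which a temperature counts as "warm": 0.0 = definitely no,
        # 1.0 = definitely yes, anything in between = "maybe", with a degree.
        if temp_c <= 10:
            return 0.0
        if temp_c >= 25:
            return 1.0
        return (temp_c - 10) / 15.0  # linear ramp between the two cut-offs

    for t in (5, 15, 20, 30):
        print(t, "->", round(warmth(t), 2))  # 0.0, 0.33, 0.67, 1.0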
Bob says
Finally! About time someone seriously started pursuing AI research. So far, all we have is academic gook. Google makes everything groovy. Maps, online shopping, e-mail. AI will be exactly the same. Also, Cyberdyne Systems is a real company in Japan that created a robotic suit for old people. So no terminator from this end…
matt says
the term “the world has ever known” is a pretty weak one. yeah they may be one of the biggest businesses in existence and dominate the pack, but the web is relatively young in its most recent form.
Bobby Crosby says
LOL @ BUSH MONKEY BOY MONKEY BOY MONKEY BOY BUSH IS WORSE THAN HITLER BECAUSE BUSH KILLED ALL THE MUSLIMS
Hank says
Great minds have been working on AI for quite a while now. People of the quality of Marvin Minsky, not a light-weight. To say that Google is going to do this… I’d love to see it first. Talk is cheap. If Microsoft had done everything it said it would do over the years, we’d be looking for aliens near Zeta Reticuli by now.
A -true- self-aware entity would/will be an entirely new can of worms. There is -zero- law on that. The thing could do whatever it felt like and not be liable. OTOH, you could do anything to it and not be liable.
-If- they could make something like this, and I seriously doubt we could [we don’t know how a brain works halfway decently; how would we create an intelligence that amounted to squat?], then it would have independent will and purpose. There is no guarantee that Google would like what that will wants or what its purpose is.
Also, you can forget about the ‘cool monster of logic’ that calculates everything in a cold, callous way. If it was really intelligent it would develop its own set of neuroses. And it would be hard/software based. Imagine what bad memory chips would do to it, or a failing hard drive… A true intelligence would also be given to introspection: reflecting upon the self. Only if it was -really- intelligent would it realise that it’s probably not as smart as it thinks it is.
For the people who think of the Terminator analogy: in Judgment Day we learn that the intelligence becomes self-aware by infecting as many computers as it can with a virus that combines their computing power [like SETI@home]. The first thing it does when it reaches self-awareness is to try to destroy humanity by thermonuclear warfare, destroying major cities, where millions of computers will be present that provide it with computing power. Super brain, first thing it does is blow its own brains out. It fails the Darwin test on the first try.
GoogleBrain won’t be like that. What it will be like, no idea, but the NSA will want to have a word with it…
Hank says
Way too many typos there, sorry about that.
Kane Spencelayh says
This is an extremely worrying and absolutely terrifying project…
PR Guru says
The PR team has been releasing a lot of “desperate” stuff since the stock price started “tanking”. If they cannot fix the search results, who is going to believe they can do AI? Shhh… don’t tell anyone!
The reality is that initially lots of people used to click on the “text link” ads thinking they were site navigation links; now most computer-savvy people know they are “ads” and nobody clicks on them. The only people who click on the ads are novice net users, and the accidental clicks don’t get advertisers any “sales”. The reality is just sinking in; the hype may float for a few more months… then it will hit rock bottom.
Michael says
AI has been the next great thing since the 70s and 80s. The difference this time is vastly superior computing power. A lot of algorithms that just weren’t possible back then might be tractable now.
Doug Lenat and Cycorp have made a good deal of progress on “understanding” and giving programs context. But consciousness? No.
I thought Gödel’s incompleteness theorem made it unlikely that we’d ever create a computer that had consciousness. It’s an overblown fear, IMO.
Eddie says
You are a stupid, fucking asshole.
xoc says
I’m 100% with you Austin. I’d say 110% with you, but that’s next on my pet-peeve list.
If humans regurgitate what they hear, thinking that they understand it, does that mean we are not real intelligences, or does it mean that artificial intelligences will also do the same thing? Maybe the only reality is dumbness. Food for lunch. Literally.
Jason says
Why is this news? It seems the author has a problem with this because it’s “Google” doing it? Hello? Do you have any idea how many privately funded companies, Universities, and Government labs are pursuing AI at this time? Big freakin’ deal.
xoc says
Who is to say we have not created AI already? Not communicating with us doesn’t mean anything. Rats have intelligence. Not much compared to humans, it’s true, but as the ‘higher intelligence’, we don’t bother much with communicating with rats.
Personally, I can’t see how millions of humans connected via the internet could be anything other than a form of AI – a kind of meta-intelligence. We can never know another’s consciousness, but in our arrogance, we assume we are the highest and only form of intelligence around. We can only guess at intelligence by behaviour. We don’t speak the language of this meta-intelligence, so communication will have to be more basic. Try and turn off the internet, and you’ll find it will fight for life as hard as any human. The only way you could possibly achieve it would be to do all the planning in secret and entirely off-line. If the ‘net got wind of it, it would put a stop to it in no time. Surely fighting for survival is a basic test for sentience.
Vince says
What if robots could think? Or .. people? Imagine humans could think. They wouldn’t be controllable! They’d make decisions!! Oh my god. A human could act like Hitler!!!
Noryungi says
Except AI has been just around the corner for … well, just about 50 years.
Don’t misunderstand me: if one company today could pull this off, it’s Google; they have the dough, the brains and the computing power.
But real, actual, self-aware AI? AI on the scale of “I am afraid I can’t do that Dave”? I don’t think so.
Rama says
WHY? Why, why, why… is it that people always have to think badly about AI? Think of it like another person, or maybe a child; if you treat it badly, of course you’ll get a negative reaction!!
Concentrate on the positives, NOT the negatives. People (or AI) don’t do bad things for no reason; if you use respect, consideration, honesty, etc. (just like when we’re nice to other people), then you’ll get positive reactions!
So… CONCENTRATE ON BRINGING OUT THE POSITIVES!
Brent says
“We can always turn off the power.”
Like they did in The Terminator?
Limare Hs says
go read “Destination: Void” by Frank Herbert, if you can find a copy. then be very afraid.
Rick Sparks says
Is this really something worth worrying about? Come on, people – trying to create A.I. doesn’t directly equal a world in which the Terminator will be knocking on your door. Paranoia, self-destroyah!
Vince Williams says
It’s ridiculous how people start anthropomorphizing machines as soon as you mention AI.
Bush has never shown a capacity for reasoning. Legacy students at university don’t have to.
terlmann says
sooo, where is a link to this “ai” ?
Vince Williams says
Neither do presidential candidates.
Rational Beaver says
AI has long been the ‘ultimate solution’ sought by search technology providers to interpret the context and meaning of documents and the intent of plain-text searches. This isn’t anything new…or particularly scary.
HAL9000 says
Good morning Dave.
Would you like to play a game of chess?
spieri0 says
They should make it like Jack Bauer out of 24. Then we will all be saved…apart from all the bombs going off and the Chinese (Microsoft) trying to trap it…It will just escape and destroy it…which will also be good.
Chris says
How is this literally food for thought?
Lrn2English!
Is'lan says
Wow, trying to raise a scare are we? Truth-be-told, Google isn’t the only company that is working on AI technology — practically ALL the major IT businesses are. Businesses are always doing research on future prospects so that they will always be “on the cutting edge” of technology. I would be surprised if Google wasn’t doing any research on the matter.
infonote says
All the big IT companies have been trying for decades to develop complete A.I.
I do not believe it is possible to achieve this, at least as science fiction portrays artificial intelligence.
A.I. is very useful for the future of the web i.e. Semantic Web.
spieri0 says
Well, there’s that old saying, “a computer is as intelligent and advanced as the person who developed it”, meaning that Google will need to program every possible outcome or give it a set of rules to abide by. If not, then the whole thing might as well be a puddle on the floor.
alucinor says
“AI” is really just a buzzword for many things. I would guess the kind of AI Google is working on is a kind of “semantic scraper” that builds an in-house version of the Semantic Web when it indexes websites. In this case, what you have isn’t a true AI, but one whose creativity must be seeded with original human thought. It may be able to make inferences and some creative decisions, but ultimately, it needs pure creative impulse from living sentient beings in order to keep growing.
Such an AI, if it became SkyNet, would most likely be a benevolent dictator — seeking to automate as many mundane tasks as possible and breeding humans as “creativity crops”, trying to keep us as healthy and mentally stimulated as possible.
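To make that “semantic scraper” guess concrete, here is a toy Python sketch, not anything Google has described; the page text and the extraction pattern are invented for illustration. It shows the basic move of turning indexed prose into machine-readable facts:

    import re

    page = "Larry Page is a co-founder of Google. Google is headquartered in Mountain View."

    # Naive pattern: "<subject> is <object>." becomes a (subject, "is", object) triple,
    # the kind of structured fact a Semantic-Web-style index could reason over.
    triples = re.findall(r"([A-Z][\w ]+?) is ([^.]+)\.", page)

    for subject, obj in triples:
        print((subject.strip(), "is", obj.strip()))
    # ('Larry Page', 'is', 'a co-founder of Google')
    # ('Google', 'is', 'headquartered in Mountain View')

A real system would of course need far more than regexes, but the output format (triples) is the same shape the Semantic Web uses.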
Larry Brin says
It isn’t “do no evil”, it’s “don’t be evil”. (Or maybe it’s “don’t be Evel”. We don’t want another Snake River Canyon on our hands.)
Master Thief-117 says
I for one welcome our new Google overlords
Wiki: The Chinese Room says
“We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a “program for L” is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.
1. If Strong AI is true, then there is a program for L such that if any computing system runs that program, that system thereby comes to understand L.
2. I could run a program for L without thereby coming to understand L.
3. Therefore Strong AI is false.”
Yours affectionately,
Wikipedia