Real Time Search Event

>> MAYER: Good morning. Thank you for joining us here at the Computer History Museum. We're

so excited to be here, surrounded by so many wonderful pieces of computing history.

And it's an especially perfect backdrop for today's search event, which focuses on the

future of search and the innovation that's happening in search. At Google, when we think

about the future of search, we think about four key elements: modes, media, language,

and personalization. What do I mean by each of these? Modes really refers to modalities.

How do people search? Today, the predominant mode is that people type keywords into desktop

computers. When we look at the future of search, we think there'll be many more modalities.

What happens with mobile phones? What happens if you can talk to the search engine? What

happens if you could give it concepts or pictures that the search results could cue off of? We're really excited about the fact that in the future there'll be many

different ways of searching. And when you look at how search will grow in the future

and change, we think these many different modalities are really what will drive search

forward and really grow overall adoption to even greater numbers than we see today.

On the flipside, there is media. Media is what is appearing within the search results.

And the Web, of course, has gotten very, very rich; books, movies, news, local, maps, all

on the Web. And so, our search results have to mirror that same richness that we feel

on the Web. So we're constantly looking at how can we make Google more comprehensive

and more relevant with regard to media. So it's not just about 10 blue links; it's about

the best answers. Media also leads the next piece, which is language. We really are working

at Google to try and branch across languages, break down the language barrier because this

focus on language and translation is what unlocks the Web. Today, we are able to translate among 51 different languages, across all known pairs of those 51 languages. And we have 173 local

domains, because we really foresee a world in the future where you can search and find

your answers wherever they exist in the world, whatever language they're written in. And the

final component that we see in the future of search is personalization. We don't know

a lot about what the search engine will look like 30 years from now, 50 years from now,

and it's because search changes so quickly that we--it's hard to actually pinpoint what

that future looks like. But one thing we do know is that the results will be more personalized.

They'll know what you are an expert in. They'll know who you're friends with and where you're

located. And, ultimately, our results will become more rich and more relevant to our

users because of that personalization. And there is a fifth component to the future of

search, which is the rate of progress. There has to be a consistent rate of innovation

pushing us towards this future. And we've always been a company that likes to launch

early and often and iterate. And we've done lots and lots of search features over the

years. And in October, we released a new blog series called "This Week in Search." And "This Week in Search" chronicles each week all of the new user-visible features that appear

in search. So, our users get to see the latest and greatest, and they also get a sense of

this pace that's driving us towards the future of search. And since October 2nd, in the past

67 days, we've actually launched 33 different search innovations. So that's one innovation

every two days. And if you look at the innovations, they include things like the new homepage,

Flu Shot Finder, personalized search on mobile, and they all fit neatly into these four categories

of media, of personalization, modes, and languages. And we're really excited about this overall

focus that we have on search, this rate of innovation. And today's event really drives

us forward even further increasing that rate of innovation and new features that we're

launching. Today's event will focus mostly on modes and media. And so, without further

ado, I want to introduce our master of modalities who's driving our Mobile Innovation, vice

president of engineering, Vic Gundotra. >> GUNDOTRA: Well, thank you very much, Marissa.

You know, I'm very excited to have a chance today to speak to you about innovations in

mobile search. But before I dive into those, I'd like you to join me and just take a step

back for a moment. Think about the technological innovation that was so profound that it changed

mankind. What comes to your mind? Maybe you think of Gutenberg's printing press. Maybe

you think of the steam engine. Maybe you think of electricity. And it's true; those innovations

really change the course of human history. But what's interesting is that at their outset,

the full impact of those innovations was not understood. You know, Gutenberg was broke

within a few years. The first mass-produced newspapers that came from Gutenberg's printing

press happened many, many decades later, or who could have predicted that the steam engine

would lead to the Industrial Revolution or that the invention of electricity would one

day lead to the Internet and to microwaves. At Google, we argue that that same dynamic

may be happening in the personal computer space. You know, PCs are, depending on how

you count, 27 to about 33 years old. In other words, we are just in the third decade of the personal

computer revolution. And it may be that only now have our eyes become open to what the

possibilities may be. In fact at Google, we see three major trends converging in combination

with mobile phones that enable new scenarios that absolutely excite us. Let me talk to

you about those trends then I'll get in and show you some of those scenarios. Now, the

first trend is Moore's Law, or computing. Now, I am in my 40s. All I've ever known in my

life is Moore's Law. Ever since I was a kid and I had electronic toys to the computers

I used as a teenager, I knew one thing for sure: that next year, whatever device I had

would be better, faster, cheaper. It's the ubiquitous, pervasive law we know as Moore's

Law. And it's a powerful trend in computing. That's trend number one, computing. The second

trend is far more recent. That is the trend of connectivity. You know, think back just

as recently as a decade and a half ago. Think back to 1995. If someone came to you in 1995

and said, "Look, there's a time that's coming that billions of devices, every device would

be connected to every other device." Would you have believed that? You know, the more

technical you are, the more cynical you likely would have been. And, you know, if you think

in 1995, the best file servers of the day, things like Novell NetWare, you know, they

handled a few thousand simultaneous connections, and the idea that you'd have billions of connections

seemed a little bit absurd. Today, of course, we take it for granted. The Internet has happened.

You don't even bat an eye when someone next to you takes out their cell phone and controls

their TiVo at home. It is an amazing change, this change of connectivity that has swept

the world. Computing, connectivity--and the third trend is the most recent, and that is

the emergence of powerful clouds. Now, when I say cloud, I mean huge amounts of compute

resources that programmers have at their disposal, data centers the size of football fields

that host massive amounts of computational power and the ability to manipulate huge data

models. When you combine these three things: computing, connectivity, and the cloud, and

then you think about what's happening with mobile devices, you get something very, very

interesting. Let's talk about mobile devices. Let me go grab a mobile device here. This

is an Android device, a common smartphone. And when you think about something like this, it's

got built-in sensors. It's got a camera. It's got a GPS chip. It's got a speaker. It's got

maybe an accelerometer. You know, by themselves, these sensors are not that extraordinary.

Some of you may say, "So what? This camera isn't very special. My Nikon or Canon camera

from 10 years ago surpasses this in quality." And you could be right. Or the microphone--what's

the big deal about the microphone? You know, the microphone I use in my church or synagogue,

much better than this microphone. But not when you compare this in the context of computing,

connectivity, and the cloud. You see, when you take that camera, and you connect it to

the cloud, it becomes an eye. That microphone connected to a cloud becomes an ear. And in

fact, what we're going to show you this morning is innovations that are the combinations of

those three trends and mobile phones that allow you for the first time to do powerful

new things including search by sight, search by location, and search by voice. Let's begin

with search by voice. Now, some of you may remember that it was about a year ago that

we introduced Google--Google Voice Search available for the iPhone in the Google Mobile

App. And during 2009, we worked very, very hard to improve the accuracy and the recognition

rates of that voice recognition. You might be surprised at how good it's gotten. Let

me show you. This is Google Mobile App, and it's running this Voice Recognition Software,

Google Search by Voice. You simply bring it to your ear and speak a query. So, let me

try a query, something like pictures of Barack Obama with the French President at the G8

Summit. You know, unlikely that you would ever type in a query that long. And there

you have it. Isn't that amazing? Now, as impressive as that query is, it becomes even more impressive

when you think about what just happened. What happened was we took your voice file, the

digital representation of your expression, sent it up to the Google Cloud where it was

broken down into phrases, sentences, words, and those were compared against the billions

of daily Google queries we get. A probability score was associated with all the potential

textual representations, the textual expressions of what we thought you said, they were ranked,

and then the best ones were sent back to your phone all within what you saw here in fractions

of a second. Really amazing, amazing work.
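To make that pipeline concrete, here is a minimal sketch of the final ranking step, assuming (hypothetically) that the recognizer has already produced candidate transcriptions with acoustic confidence scores and that aggregate query-log counts are available; the names, data, and scoring below are illustrative, not Google's actual implementation:

```python
# Hypothetical sketch: re-rank candidate transcriptions of a spoken query
# by combining the recognizer's acoustic confidence with a prior derived
# from how often each text appears in the query logs.

def rank_transcriptions(candidates, query_log_counts):
    """Return candidates sorted best-first by acoustic score times prior."""
    total = sum(query_log_counts.values()) or 1

    def score(candidate):
        text, acoustic_confidence = candidate
        # Crude popularity prior; a real system would use far richer
        # language models over phrases, words, and sentences.
        prior = query_log_counts.get(text, 0) / total
        return acoustic_confidence * (prior + 1e-9)

    return sorted(candidates, key=score, reverse=True)

# Two hypotheses from the recognizer, with made-up confidences and counts.
candidates = [("pictures of barack obama at the g8 summit", 0.81),
              ("pictures of a rock of alabama at the gate summit", 0.84)]
query_log_counts = {"pictures of barack obama at the g8 summit": 90_000}
best_text, _ = rank_transcriptions(candidates, query_log_counts)[0]
print(best_text)  # -> pictures of barack obama at the g8 summit
```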

Now, we've done more than just work on improving the accuracy of Voice Search. We've also been adding languages. You heard Marissa

talk about our focus on languages. Last month, we announced Mandarin Voice Search.

Now, think about our customers in China, our users in China, who have struggled to enter

text with a notoriously difficult Mandarin character set on their mobile phones, now being able to use just their voice. Let me do a demo for you. Same app, Google Mobile App, available from

the App Store, except in this case, I'm going to show you the Mandarin version of this particular

app. Okay. There you go. So you see that it's in Mandarin and I'll try my only Mandarin

query that I'm capable of doing. [SPEAKS FOREIGN LANGUAGE] That was McDonald's in Beijing.

Let's see if we get it. There we go, all the McDonald's in Beijing. Okay. Now, I'm very happy to announce that

today, joining English and Mandarin, we have a new language we'll be supporting, and

that is Japanese. Now, instead of me trying to say McDonald's in Japanese, we thought

we'd have a native Japanese speaker, Toshi--Toshi, please come up on stage--and do a real demonstration

for you. So, Toshi, thank you very much. So, you're going to do a query. What kind of query

are you going to do? >> TOSHI: So, my first query, I want to search

for the pictures of Kyoto's Kiyomizu Temple. >> GUNDOTRA: Okay. Let's see you do this query.

>> TOSHI: Sure. Okay. Here's the screen. [SPEAKS FOREIGN LANGUAGE].

>> GUNDOTRA: Fantastic. Okay. So, that was a great query. But why don't you try a query,

a longer query, a query that you probably wouldn't type? And why don't you tell us what

it is before you do it. >> TOSHI: I want to try Voice Search for my

favorite restaurant around the Google's Tokyo office.

>> GUNDOTRA: Okay. Your favorite restaurant by the Google office in Tokyo. Okay.

>> TOSHI: So, I'm going to search for the By Address. [SPEAKS FOREIGN LANGUAGE].

>> GUNDOTRA: Oh, wow. Fantastic. Thank you. Thank you, Toshi. You can see why I've--when

I practiced trying to learn that, I gave up. So, thank you, Toshi. You know, as I mentioned

at the beginning, we really do get the sense that we are just now beginning to sense the

possibilities. In fact, our dreams at Google go way beyond what you just saw--as Marissa

mentioned earlier, in addition to voice recognition, Google also has massive investments in translation

from one language to another. In fact, we have huge compute infrastructure for translation.

Imagine the following scenario. Imagine that you could speak to your phone in one language

that Google could recognize what you said, take the text of what you said, feed it to

our translation infrastructure, translate it to a different language, and have it then

come back to the device using the text-to-speech engine on the device and have it play back

in a different language. In other words, it would be almost a real-time interpreter. Would

that be great? Let me show you a demo. What we're about to show you is a technological

demonstration of a capability that we hope to deliver in 2010. This is a technology

preview. Let me try something. Hi. My name is Vic. Can you show me where the nearest

hospital is? English up to the cloud, translated back into Spanish.

>> [SPEAKS FOREIGN LANGUAGE]. >> GUNDOTRA: Okay. You know what? I apologize,

Randy. I didn't have this thing plugged in. Let me try this again, okay? Hi. Can you show

me where the nearest hospital is? English up to the cloud, back to Spanish, back down

to the device. >> [SPEAKS FOREIGN LANGUAGE].

>> GUNDOTRA: Okay. Now, that's just an early indication. You can imagine us working to

get latency down even faster. We also demonstrated English, Mandarin, Japanese. In 2010, you

will see us dramatically accelerate our efforts and support many, many more languages. Our

goal at Google is nothing less than being able to support all the major languages of

the world, okay? So that's Search by Voice.
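To picture the interpreter loop Vic just demoed, here is a hedged sketch that chains recognition, translation, and text-to-speech; the three service functions are hypothetical stubs standing in for the cloud APIs and the on-device TTS engine:

```python
# Hypothetical sketch of the speech-to-speech chain: audio is recognized
# as text in the source language, the text is machine-translated, and the
# result is spoken by the device's text-to-speech engine.

def recognize_speech(audio: bytes, language: str) -> str:
    """Stub for a cloud speech-recognition call."""
    return "Can you show me where the nearest hospital is?"

def translate_text(text: str, source: str, target: str) -> str:
    """Stub for a cloud translation call."""
    return "¿Puede mostrarme dónde está el hospital más cercano?"

def speak(text: str, language: str) -> None:
    """Stub for the device's text-to-speech engine."""
    print(f"[TTS {language}] {text}")

def interpret(audio: bytes, source: str, target: str) -> None:
    text = recognize_speech(audio, source)             # English up to the cloud
    translated = translate_text(text, source, target)  # translated in the cloud
    speak(translated, target)                          # back down to the device

interpret(b"<recorded audio>", source="en", target="es")
```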

Let's talk about location. You know, we were reviewing some market research from Japan, and we found the research quite surprising.

What the research showed was that, in Japan, consumers were holding onto their cell phones

24 hours a day and keeping them as close as one meter away. And we thought, "Wow, those

Japanese, they must really love their cell phones to have that cell phone with them all

the time." Until I went home, fell asleep, rolled over on my bed, looked at my night

stand, and right on my night stand was my phone. In fact, we're not that different than

the Japanese. You know, I suspect, if you look around now, either you have the phone

in your hand, it's in your purse, it's in your bag, you know, the phone, because its

location is likely your location, has become the most intimate and the most personal of

all personal computers. At Google, in our engineering teams, we want to make location

a first-class component of all the products we build. Why? Well, it speaks to the personalization

point that Marissa talked about earlier. In addition to personalizing the results we bring

back to you, we can also just deliver them faster. For example, we included My Location

in Google Mobile Maps. Maybe some of you use Google Mobile Maps. Our data shows that, on

average, we save about nine seconds per user because when you open up Google Mobile Maps,

we know your location and render that location immediately. But we want to do so much more

than that. In fact, let me show you some of the things that we're doing to use location

in other properties. How many of you use Google Suggest? Okay. Lots of hands go up. Our data

shows that over 40% of mobile queries that come into Google are queries that result from

a user selecting a suggest item. Now, how could that be improved if we use location?

Let me show you. I have two separate iPhones here that I'm going to do this demonstration

on. Let me just get them started up. And in both cases, I'm going to show you a future

version of the Google.com homepage that supports this suggest. Okay. So that's one. And that's

two. Let me just make sure you can both see them. Now, we've done something here, we've

hard-coded this phone to believe, as you can see, that it's in Boston.

And we've hard coded this phone to believe it's in San Francisco, okay? So, let's start

doing a query. How about RE, what would you expect? Well, there comes the suggest options,

Red Sox, Red Sox schedule, which makes total sense in Boston. Let's do exactly that same

query in San Francisco. And what do you think will show up? RE, REI, one of the most popular

retailers in San Francisco. Isn't that great? Customized suggest, based on location.
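As a rough illustration of that behavior, here is a toy version of location-aware suggest: one shared candidate list, re-ranked by how often each completion is chosen in the user's city. The candidate list and click counts are invented for the example:

```python
# Hypothetical sketch: the same prefix yields different suggest rankings
# depending on the city the query comes from.

CANDIDATES = ["red sox", "red sox schedule", "rei", "real estate"]

CITY_CLICKS = {  # invented per-city selection counts for each completion
    "boston":        {"red sox": 9_000, "red sox schedule": 4_000, "rei": 300},
    "san francisco": {"rei": 8_000, "real estate": 2_500, "red sox": 900},
}

def suggest(prefix, city, limit=3):
    """Return prefix completions ranked by local popularity."""
    clicks = CITY_CLICKS.get(city.lower(), {})
    matches = [c for c in CANDIDATES if c.startswith(prefix.lower())]
    return sorted(matches, key=lambda c: clicks.get(c, 0), reverse=True)[:limit]

print(suggest("RE", "Boston"))         # ['red sox', 'red sox schedule', 'rei']
print(suggest("RE", "San Francisco"))  # ['rei', 'real estate', 'red sox']
```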

Of course, there are other things we can do; for example, product search. This is the time of year people do

lots and lots of shopping. But maybe you're like me. Maybe, in addition to just shopping

online, there are times that you wish you could actually know if that product happened

to be available locally or could we at Google combine your location with inventory feeds

from retailers and tell you if that product was available locally? That's exactly what

a future version of Google product search will do. Let me do a search here for a product.

How about something like a Canon Camera? How about a Canon EOS Camera? I'll do a search

here. Let's wait for those results to come back. Now, you'll notice that the top two results

have something very special. You see that blue dot? That blue dot says those two products

are in stock nearby. And if you select in stock nearby in combination with our partners

who are sharing inventory data with us, you can see that Best Buy has this camera 1.3

miles away, and it's available at Sears, which is about 7 miles away. Isn't that great?
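One way to picture the "in stock nearby" feature is as a join between product results and retailer inventory feeds, filtered by distance from the user. The sketch below is hypothetical: the coordinates, store data, and 10-mile cutoff are all invented:

```python
# Hypothetical sketch: match a product against inventory feeds shared by
# retail partners, keeping only stores within a radius of the user.

from math import asin, cos, radians, sin, sqrt

def miles_between(a, b):
    """Great-circle (haversine) distance between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 3956 * 2 * asin(sqrt(h))

INVENTORY = [  # (store, (lat, lon), product, in_stock) -- invented feed rows
    ("Best Buy", (37.423, -122.096), "canon eos", True),
    ("Sears",    (37.324, -122.031), "canon eos", True),
    ("Best Buy", (40.762, -73.972),  "canon eos", True),  # other coast
]

def in_stock_nearby(product, user_loc, radius_miles=10):
    """Return (store, miles) pairs that stock the product near the user."""
    hits = [(store, miles_between(user_loc, loc))
            for store, loc, item, stocked in INVENTORY
            if stocked and item == product]
    return [(s, round(d, 1)) for s, d in sorted(hits, key=lambda h: h[1])
            if d <= radius_miles]

print(in_stock_nearby("canon eos", (37.414, -122.078)))
# -> nearest stores first: Best Buy about a mile away, Sears about 7 miles
```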

All right, let's go back and talk about the Google.com homepage again. Have

you ever asked yourself, what's nearby now? Maybe you get to an event a little early.

You have a few minutes to spare; you're in a new place. And you go, I wonder what's nearby?

That simple query is very hard to answer. You know, think about how you might solve

it today. Maybe, you go into your car. Maybe, you'll use your car's navigation system and

go through the rigid categories they provide. Maybe you'll type in an address onto a map;

but the query is so difficult because your location is the query. And what we're going to do is

make location a first-class object right within the Google.com homepage. In fact, let me show

you a brand new feature called "Near Me Now" on Google.com mobile. Right there, "Near Me

Now" and I simply select "Near Me Now" and look at that, it knows I'm at the Computer

History Museum and shows me categories of places or interests that I might want to go search for. But we can do even better than that; what we really want to do is take your

location, send it up to the Google cloud, have the Google cloud reverse geocode your

lat-long location, understand the business or place of interest that you're at, look

around you for all other relevant places, rank them appropriately for their popularity,

and then send those back to you, back down to your phone in a fraction of a second. That

would tell me what's nearby.
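A toy sketch of that round trip, with an invented places table: reverse-geocode the lat-long to the closest known place, then rank everything else around it, here using ratings as a stand-in for popularity:

```python
# Hypothetical sketch of "Near Me Now": resolve the phone's coordinates to
# a known place, then rank other nearby places by popularity.

PLACES = [  # (name, (lat, lon), rating) -- invented data
    ("Computer History Museum", (37.4143, -122.0774), 4.7),
    ("Shoreline Amphitheatre",  (37.4267, -122.0806), 4.5),
    ("In-N-Out Burger",         (37.4166, -122.0919), 4.4),
]

def reverse_geocode(lat_lon, max_error=0.002):
    """Return the known place closest to the coordinates, if close enough."""
    name, (lat, lon), _ = min(
        PLACES,
        key=lambda p: (p[1][0] - lat_lon[0]) ** 2 + (p[1][1] - lat_lon[1]) ** 2)
    if abs(lat - lat_lon[0]) < max_error and abs(lon - lat_lon[1]) < max_error:
        return name
    return None

def near_me_now(lat_lon):
    """Name where the user is, plus other places ranked by rating."""
    here = reverse_geocode(lat_lon)
    others = sorted((p for p in PLACES if p[0] != here),
                    key=lambda p: p[2], reverse=True)
    return here, [name for name, _, _ in others]

print(near_me_now((37.4144, -122.0775)))
# -> ('Computer History Museum', ['Shoreline Amphitheatre', 'In-N-Out Burger'])
```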

But watch what happens: I'm going to click that down arrow right by Explore, right here, and boom. There you go, all the places right around

the Computer History Museum. In this case, we've not only answered the question of what's

nearby now but, if you look at the ratings, we've also answered the question

of what's good nearby. Now, of course, we realize that you may do the search on more

than just the Google.com homepage. And I'm very happy to announce that, today, we have

a new version of Google Mobile Maps for Android, available today in the Android Marketplace.

And among the many features in this new product is this What's Nearby feature. You simply go to wherever you want to look, any place on the map; it doesn't

have to be any particular place. You just long press. So I'll just long press, select,

and then, say what's nearby, and exactly that same feature is available today on Google

Mobile Maps for Android. Now, let me switch and talk about search by sight. You know, of

the human senses, sight or vision is considered the most complex. It's estimated that up to

two-thirds of our brain, of the human cortex, is involved in the processing of visual images.

And it's widely believed by scientists that if we could figure out how the human brain

processes images, that would be a monumental step forward in understanding how the brain

actually works. Well, I'm very happy to announce a new product available in Google Labs today,

a product called Google Goggles, which represents our

earliest efforts in the field of computer vision. Google Goggles allows you simply to

take a picture of an item and use that picture, the picture of whatever you took, as the query.

Now, before I demonstrate the product, let me tell you a story on how I use the product.

Obviously, we "Googlers" test product before we make them available. And I was doing my

job testing the product. I had a friend, a couple, call me. They were scheduled to come

over for dinner, but they called and said they were running late. They were stuck in

traffic. In fact, they were stuck in traffic for one hour. So, when the doorbell rang and

I opened the door, both the husband and wife had to use the bathroom desperately. And as

I opened the door, they said, "Sorry Vic, we've been stuck in traffic, we need to go

use the restroom." I very happily pointed to where the restroom was and the wife handed

me the gift that they had brought, a bottle of wine. And she said, "While we're in the

bathroom, open this bottle of wine." Well, I did what you would probably do. I pulled

out Google Goggles and took a picture of the bottle of wine. Maybe you've wondered:

is this bottle of wine any good? Oh, you know, I got a result sent back; it said that the

wine has hints of apricot and hibiscus blossom, just then the door opened and my friend came

out of the restroom. Of course, she's not a "Googler," so I took the confidential product

we were testing and I put it away. She said, "Please, please pour the wine," and so, I

poured the wine. And she said, "It's my favorite wine, you know, what do you think of it?"

And I tasted it and I said, "It's got hints of apricot and hibiscus blossom." She was

blown away. They thought I had a great wine palate. I'll be honest with you, I don't even

know what hibiscus blossom is. You know, let me show you a demonstration of that. I happen

to have that bottle of wine, not that exact one, but the same bottle. Let's try it, let's

take a picture. Let's launch Google Goggles and let's see what happens. Now, the lighting

conditions are less than optimal. I'm getting a huge amount of glare, so I don't know how

this is going to work, but we will try. So, let's see here, you--let me take an image,

a picture of this bottle and then we'll bring it over here. Okay, oops, well, it helps if

I actually get an image, so sorry, I moved that. Let's try that again. One more time,

and I will try to hold my hand steady; surprising, holding my hand steady wasn't a problem during

rehearsal. Okay, and then you see, it's scanning, attempting to recognize what it sees here

and, in this case, it has gotten some of the matching items. And, of course, as you use

it and as we get more and more data, it'll become more and more accurate; pretty exciting

work. Now, you may think, "Well, Vic, I'm not going to take a picture of a wine bottle,

if that's all the Google Goggles does." Well, it does a lot more. It recognizes things like

CD covers, movie posters, books, barcodes, lots and lots of categories. You know, for

example, imagine that you're traveling in Japan and you come across this landmark. Now,

you don't speak Japanese, but you do know that's a famous landmark. How would you go

about getting any information about it? Well, using Google Goggles, you could just

take a picture of it, and ask Google to help you. Let's try that. Let me come over here

and let me try to take a picture of that landmark. I'll pretend I was there. Okay, I got a picture.

Let's come back over here and show you the results. It's analyzing and there we go. How

about that? It accurately recognizes that landmark. Now, once again, it's incredibly

impressive when you understand what's going on. In this case, the image is being sent to the Google servers, the Google cloud. There, our vision algorithms

are analyzing the image and looking for objects that it can detect. Those objects have signatures

that are created and then those signatures are matched up against an index that has over

a billion images in it. The best matches are ranked and then sent back down to your device

all in a fraction of a second, showing you really the power of devices that are connected

to the cloud.
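To make that matching step concrete, here is a deliberately tiny sketch in which a normalized intensity histogram stands in for the image "signature"; real systems use learned visual features and an index of over a billion images, so everything below is illustrative only:

```python
# Hypothetical sketch: compute a signature for the photo, compare it
# against pre-computed signatures in an index, and return the best match.

def signature(pixels, buckets=4):
    """Toy signature: normalized histogram of 0-255 intensity values."""
    hist = [0] * buckets
    for p in pixels:
        hist[min(p * buckets // 256, buckets - 1)] += 1
    n = len(pixels) or 1
    return [h / n for h in hist]

def similarity(a, b):
    """Histogram overlap: 1.0 for identical signatures, 0.0 for disjoint."""
    return sum(min(x, y) for x, y in zip(a, b))

INDEX = {  # stand-in for an index of over a billion image signatures
    "famous landmark": signature([200, 210, 190, 220, 205] * 20),
    "wine label":      signature([40, 50, 45, 60, 200] * 20),
}

def best_match(photo_pixels):
    query = signature(photo_pixels)
    return max(INDEX, key=lambda name: similarity(query, INDEX[name]))

print(best_match([205, 215, 195, 218, 200] * 20))  # -> famous landmark
```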

Labs and available today?" Well, we put it in Labs for two reasons; one, because of the

nascent nature of computer vision. We really are just at the beginning here, and the technology

is just getting under way. The second reason is because of the scope of our ambitions.

Google Goggles today works very well on certain types of objects in certain categories. But

it is our goal to be able to visually identify any image over time. Today, you have to frame

a picture and snap a photo; but in the future, you will simply be able to point at an object, as simply and easily as pointing your finger. And we'll be able to treat it like

a mouse pointer for the real world. Of course, we're a long way from that today. But today

marks the beginning of that visual search journey. And we strongly encourage those of

you with Android phones to download that product, give us feedback and help us really grow.

So let me wrap up here. You know, we really are at the beginning of the beginning. You

know, if you think about the mobile announcements that we talked about today, everything from

Japanese voice search to a new version of Google Mobile Maps that allows you to search

nearby or the upcoming changes to the Google.com homepage or even something like Google Goggles,

all of these are powerful demonstrations of what happens when you take a sensor-rich device

and you connect it to the cloud. Yes, it could be that we are really at the cusp of an entire

new computing era, an era where devices will help us explore the world around us, devices that can understand our own speech or help us understand others, devices that may even augment our

own sense of sight by helping us see further. We hope you're as excited as we are about

these mobile announcements. While it is just the beginning, the possibilities ahead inspire

us. So, thank you. Marissa, please. >> MAYER: So, you can see, search engines

that understand where you are in the world, search engines that understand you when you

talk to them, even search engines with eyes. These are the things that are going to change

the interface for search fundamentally, as we move forward. The other thing that will

fundamentally change the interface for search is media. Remember, media is what we refer

to in terms of the richness of the Web, and the way that richness needs to be reflected

on the results page. And it's not obvious, but media is really fundamentally a relevance

problem. Can you find the right information? Can you rank it the right way? And if you

look at the way media has evolved inside Google Search over the past 11 years, it's pretty

interesting. We started with just Web sites, URLs, a list of 10 URLs that needed

to be ranked. Then we evolved towards universal search bringing books, news, video, local

information, images, products, blogs, all onto the results page. But there again, there's

all kinds of interesting relevance questions: When should those genres appear at all? Where

should they appear? Which one from those genres? Which item from those genres should be surfaced

on the results page? It's a huge relevance challenge. And then think about the Web today;

the Web is hugely participatory. It's hugely fresh. I heard this morning that in Tahoe,

people woke up to two feet of snow. And I've heard that some of you were caught in it on your way home; I'm jealous. But what's interesting is: was it two feet, was it one foot, was the

snow blowing or not? And there are some official sources for that, but they don't always get

it right. Yet, there are user updates out there that do get it right. But how do you

find them? And how do you cut through all of the updates that aren't relevant? That's why

we have one of our foremost relevancy experts here in the company to talk about some of

the challenges with media and relevancy. Please help me welcome Amit Singhal, Google fellow.

>> SINGHAL: Thank you very much, Marissa. So, we are here at this wonderful Computer

History Museum today. And before I get to today's big announcement, it's just fitting

that we take a moment and talk about the history of information flow. Now, I've worked in the

field of search for almost 20 years now. And what we are going to announce today is one

of the most exciting things I have seen in my career. But let me first take a moment

and talk about how information has flowed over the centuries. So thousands of years back, people

got most of their information by word of mouth. Sitting around campfires, from their tribe,

from their village. Kids would walk up to the village elders, who had all the knowledge,

and say, "Hey, grandpa, should I eat those beans?" And the grandpa would say, "No, no,

no, them are poisonous, okay?" And the kids who listen eventually became grandpas and

pass that knowledge along generations. And it took generations for knowledge to get from

one point to another point geographically. And that was clearly not fast enough. Then

Gutenberg invented the movable-type printing press, and this process of information dissemination

was parallelized. An author could write a book, and thousands of copies could be printed. And then they were sent on horseback and by boat around the world. Village

elders around the world now had the power of that knowledge. And believe it or not, up until some 20, 30 years back, that was the primary mode of information transfer. And even though

great strides were made in printing technology and transportation technology, their physical limits still made it so that for information to get from an author to the consumers took

weeks, months and sometimes even years and that was clearly not fast enough. And then

came the Internet. And what I'm showing you here is one of the early Google servers that

is displayed here at the museum. And, now, suddenly, the world changed because billions

and billions of documents were available to millions and millions of users just through

the Internet, through search engines. And that was a great revolution. In the early

days of Google, when I got here nine years back, we used to crawl that information every month, and we would put up a new index every month. People would call it the Google Dance. And, clearly, a month was not fast enough. And then the Web grew, and we started crawling every few days, then every day, and then every few hours, to now, when we actually can crawl every few minutes. But, clearly, in today's world, that's not fast enough. In today's

world, the world is producing information from around the globe every second by tweeting,

by posting other updates, by creating Web pages, by writing blogs; you name it. Information

is being created at a pace I have never seen before. And in this information environment,

seconds matter. Let me just use a metaphor to explain my point, the old metaphor of a

library. Imagine a library with billions and billions of books. The librarian, just when

the librarian understood the billions and billions of books in his library, the librarian

realizes that there are a hundred million new books coming in every day. Now, the librarian

finally figures out how to deal with a hundred million new books arriving at his library

every day so that he can tell the patrons what they should look for. And just when he

mastered that process, the librarian realizes that there are a hundred million people

running around in his library, adding pages, deleting pages, writing notes, adding things

to books, and guess what? If they didn't find a book on what they were looking for, they wrote a 140-character note and handed it to the librarian saying, "Not in your library."

And that's the information environment today. And imagine in this library, a patron

walks up to the librarian and says, "Hey, I need to learn about President Obama." And

the librarian says, "You need that book, that book, that article, that image, that speech,

that video. And, oh, by the way, 17 people just left me a note that President Obama has

been talking to the Democrats in a special session on Sunday about health care and, by

the way, he'll probably be in Oslo on Thursday receiving Nobel Prize," and so on and so forth.

Now, imagine that librarian does all that in under half a second without breaking a

sweat. As of today, that's what Google can do. We are here today to announce Google Real

Time Search.

Google Real Time Search is Google's relevance technology meeting the real-time Web. Now,

I can't emphasize this enough--Marissa said this just now--relevance is the foundation

of this product. It's relevance, relevance, relevance. There's so much information being

generated out there that getting relevant information to you is the key to the success of a product

like this and that's where we, as Google, come in because for 11 years, that's what

we have done. Rather than talking about this more, let me just show you. I have with me Dillon Casey, the product manager for this product. Dillon, what's happening out there?

>> CASEY: Well, as you mentioned, we weren't the only ones working yesterday, so one of

the great things about this new product is we can actually see what people are talking

about with regard to Obama right now. >> SINGHAL: So, Dillon types Obama into Google--Google's results page comes up and, wow, look at that, results just came in, in Real-Time; this page

has come to life. Do you see that?

These are results coming into Google's results page as they are produced on the Real-Time Web out there, okay? Our users will get their results on the results page as they are produced

in Real-Time out there. This is the first time ever any search engine has integrated

the Real-Time Web into the results page. So, let us show you a little more of this. Dillon, why don't we click on the latest results link out there? When Dillon clicks on the latest

results link out there, he's taken to a full page of Real-Time Results as they are produced

on the Web out there coming into this page. You see, there's a tweet that just came in. Here's another Real-Time page that we crawled from Answers.Yahoo.com seconds ago, another tweet that just came in seconds ago and--hey, Matt--man--what are you tweeting out there? Our good friend Matt just tweeted and, guess what, it just showed up in Real-Time as he tweeted. This is the power of Real-Time Search. Now, let me just take three examples to demonstrate

what you have seen out here. So we have been testing this product internally with our Googlers

and as we have been testing this product, I have received good examples from my friends

about how they are experiencing this product. One time, one Googler had heard that GM's

car sales were finally stabilizing, so she typed GM into Google and, of course, our results

were right there--the biggest news of the day was indeed that GM's car sales had stabilized. However, she noticed this latest results for GM section while she was searching, and read that GM CEO Fritz Henderson had just stepped down seconds ago. Now, this was the information she needed right then, and this is the power of Real-Time Search. Now, on the results page,

as I said, this is the first time we are presenting Real-Time Web on the results page--what you

see in this Real-Time section is a scrollbar to the right, so that if a Real-Time result

just scrolled past you, you can go back and go forward. In addition, what you see here is that, seconds ago, we had crawled an article from Business Insider and that was presented to the Googler, and there was also an update from Twitter.com, a tweet talking about Mr. Henderson stepping down. Now, what you observe here is that this is the whole Real-Time Web; this

is the comprehensive Real-Time Web, with tweets, with news articles, with blogs, and so on and

so forth. Let me take another example. One of the Googlers was searching for Bank of

America, and, of course, our results were very relevant but the key part there was that

the Real-Time Web was buzzing about how Bank of America had decided to repay its TARP

loan and all those results started showing up in this latest results section that I've

been talking about; but if you click on the latest result for Bank of America, as Dillon

did for the query Obama, you are taken to this full page of Real-Time results, and what you notice here is this special new link that we are launching today under our search options. We are very happy to launch this new search option, called latest results, which is available to you by clicking through the latest results section or by opening the search options on the top left of the results page and then clicking on this new feature. And once you click on this new feature, you are given information

from the Real-Time Web, the comprehensive Real-Time Web: from Twitter, from the Wall Street Journal, from a Website, TheLoop21.com, and so on and so forth--comprehensive Real-Time

Web results coming to Google's results page in Real-Time. Now, let me take a third important

example: one of the Googlers was visiting home for Thanksgiving break in Maryland and wanted to get the H1N1 vaccine, and had heard that his old high school was administering the H1N1 vaccine. So he typed "H1N1 vaccine Arundel," the high school's name, and was taken to the results page, which had very relevant results saying "Free vaccine to be distributed at schools." So, very fresh results around Thanksgiving, telling the Googler that vaccines would be

free and available at this high school but the Googler already knew that. He wanted to

know how long the lines were, what else is happening. So the Googler clicked on this

show options link that I just talked about and, having clicked on that, got the new options

panel, and on this options panel, we are very happy to announce today that we are adding a new updates link. And by clicking on this updates link, you will get all the tweets and other updates coming into Google's systems in Real-Time. In this particular case, when the Googler clicked on this updates link, the Googler got one very, very relevant tweet saying that the high school had run out of vaccines--we saved him a trip and he was

totally impressed. Now, for such hyper-local queries, where maybe one person is tweeting on their cell phone or very few people are saying something, Real-Time Search becomes

incredibly powerful because it shows you exactly what you need in your geography when you need

it. This was one single tweet, and it became available to that Googler right on Google's page. So why don't I show you this page live? Hey, Dillon, what's happening out there, man?

>> CASEY: Well, actually I hate to admit, I've been up here kind of surfing around while

you were talking but, you know, I don't know about you but I'm really excited about Google

Goggles and I've been watching what people are saying about it, it's--it's so cool. So

I just put the query in and I hit search and there's people talking about it right now,

in fact... >> SINGHAL: Wow, look at that. Vic, dude,

when did you announce it? How many minutes back? And here we are on Google's results page, the Real-Time Web brought to our users for something you heard from Vic just now. Now, that's

incredibly exciting. And, as you can see, we can go to the full Real-Time page and

you can see the entire Real-Time Web for Google Goggles being brought to our users right at

Google's results page, and the page that we have been talking about. Hey,

Dillon--hey, man, what are you doing? We are in the middle of the most important launch

of the year and you're playing with your cell phone, man? Grow up.

>> CASEY: Right, sorry. >> SINGHAL: What's happening?

>> CASEY: Sorry, Amit, a little guilty pleasure. You know, I've been kind of following the

Tiger story, and it turns out this also works on mobile. I'm getting updates from the app.

>> SINGHAL: What? It does? Then why don't you share it with all of us?

>> CASEY: Okay, forgive me but you can see right here.

>> SINGHAL: Wow, look at that. This is Google's Real Time Search on mobile phones. So we are

very happy to announce today that Google's Real-Time Search is available on iPhone and

Android. And wherever you are, if you need your information now, just pull out your smartphone, type a query into Google.com, and you will get the Real-Time Web in your palm right away.

And that's the power of mobile Real-Time Search. At this point, I am also happy to announce

that our Google Trends page is coming out of labs--it's leaving labs, it's graduating

from labs. We are very happy. And we have added to it this new hot topics section that

Dillon is showing you right now. In the hot topics section, you will see what the Real-Time Web is generating right now, what information is coming from the Real-Time Web into

Google systems and, by clicking on one of those queries, you will of course see Google's

Real-Time Search Results. In addition, we have added a window down there where you can

type your query and see Real-Time Search Results. Now, before we go there, let me just say one

thing: we are rolling out the Google Real-Time Search product over the next couple of days, and over the next couple of days some users may not have access to this product as we roll the binary out. However, you can always go to this new google.com/trends page, and by clicking on one of the hot topics, you will get to see Google's Real-Time Results, or you can type your own query under more hot topics. So, Bernanke's speech is what's happening?

>> CASEY: Yeah, yeah. While you were giving your speech, Bernanke just gave a huge,

gigantic speech and the stock market went up.

>> SINGHAL: Wow, it happened right now? >> CASEY: Yeah.

>> SINGHAL: Man, I should call my broker. What's happening to my money now? Does anyone know?

My broker is not here. So what you observed here is this new google.com/trends page which

will take you to our new feature right away. Okay, so, we are all technologists here. I

have been working in the field of search for almost 20 years. We all love technology, you

love technology and I would be cheating you if I didn't tell you what went into building

a product like this. Let me just tell you, we literally had to develop dozens of new

technologies to make Google Real-Time Search as relevant as it is. Technologies like language models: we had to model whether a certain tweet was a genuine information-carrying tweet or just a weather buoy sitting out there tweeting automated tweets. We had to

develop query fluctuation models: if query volumes fluctuate at a certain rate, then something becomes eligible for you to see on your results page. We had to develop the Real-Time content generation fluctuation model: if there's suddenly a lot

of content about Bernanke's speech or the stock market, something just happened.
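As a hedged illustration of what such a fluctuation model might look like, here is a toy burst detector that flags a term when its per-minute count jumps well above its trailing baseline; the window size, threshold, and counts are all invented:

```python
# Hypothetical sketch: flag a query (or content stream) as "bursting" when
# the current interval's count far exceeds the recent average.

from collections import deque

def make_burst_detector(window=12, factor=3.0):
    """Return an observer that flags counts above factor x trailing mean."""
    history = deque(maxlen=window)

    def observe(count):
        baseline = sum(history) / len(history) if history else None
        history.append(count)
        return baseline is not None and count > factor * baseline

    return observe

observe = make_burst_detector()
per_minute = [40, 38, 45, 41, 39, 44, 40, 42, 400]  # queries/min for a term
for count in per_minute:
    if observe(count):
        print(f"burst detected at {count} queries/min")  # fires at 400
```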

Now, these are some of the most exciting technologies that we have developed to build

a product like this. Today, we are processing over a billion documents a day generated by

the Real-Time Web out there, and within seconds we have analyzed and understood those documents and tweets and updates. And within seconds we see a user query, which we may have never seen before--we haven't ever seen one-third of the queries that we will see today. So you take

documents that you've never seen before, you take queries that you have never seen before

and you merge them together and filter for relevance and bring it to the users within

seconds, and that's what Real-Time Search is all about.
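A minimal sketch of that streaming merge, using a toy in-memory inverted index: a document indexed seconds ago is immediately matchable by a query never seen before (ranking, spam filtering, and scale are omitted):

```python
# Hypothetical sketch: index fresh documents the moment they arrive so a
# brand-new query can match brand-new content within seconds.

from collections import defaultdict

index = defaultdict(set)  # term -> ids of documents containing it
docs = {}

def ingest(doc_id, text):
    """Index a freshly crawled document, tweet, or update immediately."""
    docs[doc_id] = text
    for term in text.lower().split():
        index[term].add(doc_id)

def search(query):
    """Return documents containing every query term (AND semantics)."""
    terms = query.lower().split()
    hits = set.intersection(*(index[t] for t in terms)) if terms else set()
    return [docs[d] for d in hits]

ingest("t1", "the high school just ran out of H1N1 vaccine")
print(search("h1n1 vaccine"))  # matches the update indexed seconds earlier
```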

Now, at Google, we talk about the four pillars of search: comprehensiveness, relevance, user experience, and speed. And

let me tell you one thing; I've worked in this field for a long time--I've worked at

Google for nine years and as the information world has exploded, as the amount of information

out there has exploded--we are getting hundreds of millions of new items

every hour, the importance of relevance has gone through the roof. Everything is important.

Comprehensiveness is important. User experience is important. Speed is important. Indeed,

relevance is important, but as the amount of information out there has grown as much as it has, and is growing at the pace at which it is growing, relevance has become

the critical factor in building products like this. Now, let me just recap what we just talked about. So today we are very proud to announce Real-Time Search: a new latest search option in the Google search options, a new updates search option in the Google search options, a mobile version of our Real-Time Search, and the new Trends page leaving Labs with a new hot topics section that will give you Real-Time Results. And I'm incredibly

proud of what we have built. As I said, information is now getting to you not in generations, not in years, not in months, not in days, not in hours, not even in minutes--it's getting to you within seconds. And I'm incredibly proud of

what we have built but at Google, we are never satisfied. It takes about one-tenth of a second

for light to go around the world and, at Google, we will not be satisfied until that

is the only barrier between you and your information. Thank you. Let me hand it back to Marissa.

Thank you, Dillon. >> MAYER: Thanks. So, this is the first time that updates have been integrated into the search results, and we actually have the most comprehensive set of updates, too. We're so excited about this product. With that said, we didn't want

to rest on those laurels. We actually have two new exciting partner announcements. The

first of those announcements is with Facebook. Facebook will be providing us with a feed of updates from their public profile pages, also known as Facebook Pages, and these will

be appearing in Google's Real-Time Search. The second new partner we have to announce today is MySpace. MySpace will be providing us with a feed of updates from all of their users--any updates that are public--and these updates will also be appearing in Google's

Real-Time Search. We have support from our partners here today. I want to thank Biz Stone

and the team from Twitter for being here today, and also Jason Hirschhorn, the chief product officer of MySpace, and his team. Thank you very much for coming

and supporting the launch today. In conclusion: last year, on our tenth anniversary, we published a blog post on the future of search, and that laid out the vision for modes, and media, and language, and personalization. And I think when you look at today's announcements--search

engines that have eyes, search engines that can understand you when you talk, search engines

that know where you are and search engines that know what is happening anywhere in the

world and can bring it to you in Real-Time. It's amazing to see how far we've come on

realizing that vision in just one short year. And with that, I'd like to welcome Amit and

Vic back to the stage and we'll take some questions. Thanks.

>> BENNET: And we're also going to take some questions here in a second from online as

well while we set up. And if folks have questions here, feel free to step up. We have microphones

just right here. But, we're going to start with an online question first that's coming

from The Guardian in the UK. This is for you, Vic. We've got to put you on the hot seat first. It

says, "Given Google's acquisition of Neven, to what extent can Goggles recognize faces?"

>> GUNDOTRA: It's a great question. Hartmut Neven is in the room, I believe. Hartmut,

are you here? There he is. Hartmut Neven had a company that we acquired and he is the leader

of this particular project currently at Google. His previous company did some pretty

amazing work around face recognition. And the technology that we built with Google

Goggles is very general. Of the billions of images in the index that we do recognize,

"faces" is one of those objects. However, for this particular product, we made the deliver

product decision not to do facial recognition. At Google, we deeply respect the user's privacy,

and we still want to work through issues around user opt-in and control. And so while we have

the underlying computer science technology to do facial recognition, we decided to delay

that until we have more safeguards in place. >> BENNET: That's great. And again, if folks

have questions here, just go forward, put up your hand. And if you could just introduce

yourself, so that people who are listening in online can know where the question is coming

from. >> TANAKA: Hi, this is Akito Tanaka from the

NIKKEI. I had a question regarding Real-Time Search. What is the advertising opportunity

in that area? >> SINGHAL: So, right now, we are concentrating

on bringing the most value to our users with all the wonderful partnerships that Marissa

just announced, and our partnership with Twitter. And I believe that this space is very, very

young. As time goes on, new models will develop, and all the companies that we are talking

about are experimenting with multiple models of how to generate revenue from all this wonderful

real-time information that the world is producing. I think all the companies like Twitter and

others have added tremendous value to the world, because we can figure out what the ski conditions are in Tahoe right now. And I can figure out what the traffic is like in

Bangalore right now. And you name it, right? There's so much information out there. And

as long as the product brings value to users, I think new models will arise and various revenue streams will emerge. And you will see a revolution in that space over the next

few years. >> WATERS: Rich Waters in the Financial Times.

Can you tell us more about in Real-Time Search how many sources you're crawling? How often

you're crawling? Are you taking feeds from Twitter and Facebook, and other places? And

longer term, you know, how much real-time information you can be able to run?

>> SINGHAL: That's a great question. So, we are crawling a lot of content. As I said,

right about a billion pages a day. We are crawling every day. We are crawling from many, many sources. We are going out to the Web, and we are crawling all those sources, all the good sources out there. Definitely all the news sources, but not just the news sources.

If a company announces a new product in a press release, yes, we will get it. If

a blogger writes a blog about something, yes we will get it. And we will do that within

seconds. So, we are crawling, we are casting a very wide net. The key here is comprehensiveness

of real-time information, and integration with Google's search results. Those are the

two key design principles behind this product. And indeed, we are taking

feeds from our partners, Twitter, and going forward very soon from MySpace and Facebook.

And we would like to get as much information as there is out there via feeds or any other

mechanism, because our objective is to build the most comprehensive Real-Time Search out

there. >> MAYER: And I should also add that on our

latest mode, we actually do have other update providers, including FriendFeed, Jaiku, Identi.ca,

and TWiT Army. And as Amit said, we'll be working to bring Facebook and MySpace into

that functionality over the next few weeks. >> BENNET: Well, we have further questions

from here. Oh yeah, go ahead. >> HELEN: Hi, Helen Malbay [PH] with the Financial

Times Germany. What about the availability of Real-Time Search on your non-US sites?

>> SINGHAL: That's another great question. So, we at Google strongly believe in all of

our products becoming international rapidly. This first launch is available in all English-speaking locales. That would be the UK, Canada, India, Australia, and New Zealand, and so

on and so forth. And very soon, some time in Q1, we are planning on launching many new

languages. So, this is one of our top priorities in this project. Our first priority was launching

it, stabilizing it, making our infrastructure and relevance work, because that, as you can

imagine, has been a hard challenge. And as I showed you with all the new technologies

that we have had to develop, that was our first focus: building a product that would

bring value to the users. And going forward very soon, we are going to rapidly internationalize

this product. >> BENNET: So, we're going to take one more

online question here. This is another one for you, Amit, about relevance. How do you

prevent spammers from taking advantage of the Real-Time search results? This is from

Steven Bivins [PH]. >> SINGHAL: This is, you know, this is something

that I know something about having run Google search for about nine years. We have the best

systems in place to prevent gaming of the system, okay? Our spam lead is sitting out here

with us, Matt Cutts. And Matt runs the best spam prevention team that there is out there.

And we have had experience with this for so long, we have developed algorithms inside

that can see what's going to happen next and counteract almost before it happens. Matt's

team has developed some of those algorithms. And real-time is moving--for us, real-time

is moving from minutes to seconds, and we are already in the game of running the system

that's minutes fresh. And we do a great job; you find the Google results that we crawled a few minutes back very useful. And Matt and his team and the rest of the team at Google are

experts in this area. We know many things about it, and that's how our real-time product

is already so relevant. >> CURTIS: David Curtis. A quick question on customization: are you going to be able to integrate social media accounts so I can customize the prioritization of Real-Time search results by the people who I care about? So in other words, using Facebook Connect or LinkedIn or Twitter to sort of prioritize, or even let me selectively choose, like, the real-time results that I want to deal with.

>> SINGHAL: You are just picking questions out of my mind. That's so wonderful. We are

very excited about what's happening. And the key thing that we are excited about is we

are just getting started. You mix into this real-time search all the key principles that Marissa talked about and Vic talked about--localization, personalization--and now you

have a real-time product that everyone would like to use. In addition, recently you

have noticed we have been experimenting with social search in our labs. Marissa launched

Social Search a few weeks back. And that is an angle of personalization where you see

results from your social circle alongside the most relevant results from Google. Now,

you can just imagine when you merge these multiple technologies that we have developed

here, from Real-Time Search to social search to location-based search, what you will

get is a product that you would love, and that's coming very soon to a Google near you.

>> BENNET: Let's see, Glenn up here in the front? >> GLENN: I'm really curious [INDISTINCT].

Could you further expand on the distinction? Facebook is going to give you the public
feeds, which is the stuff that members have already designated as okay to be seen by anyone,
and then MySpace is giving you all the feeds, I mean instead of...

>> MAYER: Right. >> GLENN: So, that's anything that a MySpace

member will get, please. >> MAYER: So, Facebook has a product called

Facebook Pages, which are special public profiles for specific entities. The feed that they're
providing us will have the updates that come from those pages. For MySpace, it covers all

of their users' profile pages and any update that's designated as public. So if I were
a user on MySpace and I set my updates to be public, those would all be coming

to us in the feed. >> GLENN: [INDISTINCT]

>> MAYER: That's right. So users can decide what they'd like to have offered by
this feed to Google and to search, and how broadly they really want to share it on
the social network in general, by using the privacy controls available on each of those

networks. >> MCCRACKEN: Harry McCracken with Technologizer.

You talked about how you have these partnerships with a bunch of major sources of real-time

content. Do you need those relationships and those feeds to do this, and what would happen

if there was something that people are excited about that you did not have a relationship

with? >> MAYER: I think that overall our goal is

always comprehensiveness. Our mission is to organize the world's information and make it universally

accessible and useful. And we really do mean the world's information. As Amit can attest

to, you get better results when you have more items to choose from, when you can analyze

them, understand how they relate to each other. So the more comprehensive we can be, the better

we can serve our users. And that's why we've had such a focus on making our real-time results

that we're launching today already the most comprehensive available. And we're taking

that even further with the MySpace and Facebook partnerships. In the future, if there was

something else, obviously we'd want to partner with and include those sources as well.

>> BENNET: We have a question from Danny up in the front here.

>> DANNY: Can you go back and clarify with Twitter what financial deals are there, if

there are any, and then the same for Facebook and MySpace. Are there ad deals that you're

paying for this stuff or is it just the goodness of their hearts or what?

>> MAYER: We cannot disclose the financial details of any of the deals.

>> DANNY: [INDISTINCT] >> MAYER: I can't. I'm sorry.

>> DANNY: [INDISTINCT]. If we go back to MySpace, we've got Murdoch saying that he wants you

guys to pay him to carry his news results, right? But then, with MySpace, either they've

just decided to give you this stuff because they think it makes sense and that's his company,

and apparently that information is free for them to hand out, or you're actually paying

for it. So, it seems reasonable to ask whether or not somebody is getting something out of

this financially. Even if you can't give the details, it's either they're doing it for

free or they're not doing it for free. Can't you give us that?

>> MAYER: Yeah, I'm sorry, we can't confirm. >> SINGHAL: We don't exclude any source, any

source of real-time information; we would really like to have it integrated with our system.
And we let our relevance algorithms decide which updates or tweets, blogs, news stories,
or web pages to serve to the user. So at Google, we are all about comprehensiveness

and we will accept all sources of real-time information and we'll build the most comprehensive

real-time search. >> But are those--I mean, are you applying

the same relevance algorithms? You write the algorithms, so you know what they can do.

And are you applying the same algorithms you applied to normal web search just on a faster--on

a faster footing, or how have you had to change it?

>> SINGHAL: So as I was talking about earlier, we have had to develop at least a dozen new

technologies to make Real-Time Search work as well as it does, because clearly in Real-Time

Search, you need to have models of information fluctuation. As information fluctuates out

there in the real-time world, and as information fluctuates in our query stream, you have to

react to that. And clearly, those fluctuations are useful for traditional web search. But

for real-time search, they are just incredibly critical. So, I wouldn't say it's exactly

the same algorithm that we can use, because to work with this amount and this pace of

information generation, you have to develop these new technologies. And what we have developed,

some of these are just amazing technologies, right? I've worked in this field so many years

and I didn't think we would develop these technologies so fast at the rate that we did.

So, I'm incredibly proud of how much relevance we have brought to the product based on the

technologies we have developed using our experience with relevance.
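For readers wanting a concrete picture of what a "model of information fluctuation" might look like, here is a minimal, hypothetical sketch; the function name, the window size, and the numbers are all illustrative assumptions, not Google's actual system:

```python
def burst_score(per_minute_counts, window=60):
    """Hypothetical fluctuation signal: compare the most recent
    per-minute volume of a query or topic against its trailing
    average. A score well above 1.0 suggests a spike happening now."""
    recent = per_minute_counts[-1]
    history = per_minute_counts[-window - 1:-1] or [1]
    baseline = sum(history) / len(history)
    return recent / max(baseline, 1e-9)

# Example: a topic averaging ~2 mentions/minute suddenly hits 40.
print(burst_score([2, 3, 2, 1, 2, 40]))  # roughly 20x baseline
```

A real system would combine many such signals, but the core idea of reacting to sudden deviations from a baseline is what makes fluctuations usable for ranking.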

>> BENNET: So we're going to take one more question from online for Vic. The question

comes from Luke Wilson with more--I don't know if it's the Luke Wilson, but from a Luke

Wilson--with more Android phones and possibly even Google Android hardware, does Google

intend to reduce support for non-Android devices? >> GUNDOTRA: Absolutely not. You know, our

desire is to reach our customers on whatever platform they're on. And today, there are

a variety of smartphone platforms, like the iPhone (and Apple is a strong partner of ours),
like BlackBerry and Nokia. At times, we choose different priorities in terms of which one

we do first. But it is our goal to reach as many as possible based on the technical capabilities

of that underlying platform. >> BARAK: Hi, Sylvie Barak from Hexus. I wanted

to know, first of all, do you feel that your Real-Time Search will be the death of journalism?

And second of all, everyone knows that knowledge is power. Does that make Google the most powerful

company in the world? >> SINGHAL: So, let me answer it to the best

of my ability. Your questions are clearly very loaded. So, journalism has its role and

it always will have that role. Information is indeed power, and what you in this room

are doing, are empowering the world, as we speak, with information about what's happening

here right now. So, I can't even think about putting those two words together, death of

journalism based on Real-Time Search, because you bring so much value to the world that

this value has to be brought to the world. And regarding your second question, our goal

at Google has always been to bring timely information to our users. And clearly we are

empowering our users with the information that they need now. We have been in this business

for 11 years. We get the information, we do our special relevance work, and we bring it

to our user. So, I think it's all about user empowerment. I personally have felt empowered

many times when I had the knowledge in my hand through Google and I walked into situations

that I had to. So it's all about user empowerment. And Real-Time Search is the next step in that

direction. >> MAYER: Yeah, I will just add that, you

know, our purpose is really around facilitation and reference. Getting people to do their

search and getting them off of our site and on to where the information exists in its

native form as quickly as possible. And so from that, like I said, I take a little
bit of exception with the premise of the question that we have the information; we don't. The web has the information,
and we want to get the users back out to the web as efficiently as possible.

>> BENNET: It's probably worth noting too that Google sends billions of clicks each
month to news publishers, and this will be yet another channel through which to send

those clicks. Yeah. >> HELLER: Hi, Michael Heller. I'm with the

Google Technology User Group. I just--my question is around the integration of real-time results

with the rest of Google results. I saw the latest results within the web page. And I'm just wondering
what your vision is for the long term. From a user perspective, how much should the user be thinking
about whether something is a real-time result versus some other kind of result, and where are you
trying to go in terms of that direction? >> SINGHAL: Another great question. The power

of our universal search is that users don't have to think about whether they should be

searching here or there. They should be searching in the Google search box, and all information

that is relevant to them at that moment should surface on the search results. We have made

great strides in universal search, integrating books, videos, images, news, blogs, and so

on and so forth into Google Search. We don't think of it as this search or that search.

We think of it as Google Search. Whatever needs to be seen by the user now should be

integrated on Google's results page. And Google Real-Time Search is just the newest

feature of Google Universal Search because this is the genre that's very relevant in

today's web and we have just brought it right to our users with our integration with Google

Search. >> MAYER: Yeah, I would just add one example

there, which I think drives home how much of a relevance problem this really is. So for

example, think about when you're searching for a product. So, apparently, a few weeks

ago there was a massive stroller recall, and what's nice is, when we were actually dog-fooding
this (this is what we call using it internally), when we were dog-fooding Real-Time Search, we
did a search for that stroller, and not only did you get places where you could go and buy
it, but you were also alerted to the fact that there had been a recall, which I would
argue, for users, is a very important piece of information. If you're about to go

and make that purchase, you want to know that there's been a major news issue with this

or potentially a major safety issue with it. And I think that that shows the power of
Real-Time Search. And in that case, that tweet, that news article is incredibly relevant,

and that's why we surface it on the same page as our search results.

>> Marissa, you just mentioned that it's still Google's idea to get users as fast as they

can to other pages on the web. We've seen some changes though from Microsoft and Yahoo

that seem to think that creating pages from information around the web is a better way
to go and gives people answers. You also mentioned answers. Can you give a sense of where Google
lands on that, creating that kind of user interface or sort of still having the algorithm
be king? >> MAYER: I think that our view is that we

overall believe that the web thrives on openness. And so, the reason that we have this amazingly

rich set of data to search and provide on our search results is because the web is open

and there's like a huge amount of participation. To say, "Oh, you know, we'll develop the most
authoritative page on this," I think is problematic, because it does make it a much

more enclosed system and we want to be much more inclusive. That said, certainly as we

evolve search, there comes a point when you do want to be referring to things potentially

more as entities. Now here's a restaurant. What can you tell me about it? Right? And

we do that by offering heterogeneous search results. So this will really allow you to

see not only the canonical page for that restaurant, but also reviews, et cetera. I think we'll

play with the user interface with that, but again, the point is still that the best information,

the richest information is out there on the web. We want to get people there faster, but

we don't want to hold them on what we would call the most authoritative page or host that

page. >> SHANKLAND: Yeah, this Stephen Shankland

from CNET News. I appreciate your focus on relevance and recency. But in my observation,

a lot of times those things--they don't necessarily go together. Do you have any way

of putting truth into the equation? There are a lot of times where--a lot of situations where

time goes by and the truth seeps out. So, is there a way that you can actually assure

people that they're not getting connected to rumors and things that are potentially

factually wrong very quickly? >> SINGHAL: No, this is a very good question

and a very, very tough scientific problem that the research community is also thinking

about and we are also thinking about. And right now, a straightforward answer to your

question is we emphasize quality and relevance. And that often brings the truth out. I say

often because there may be occasional cases when the truth is somewhat grey and not black

and white. In which case, it can be debated. But it's a very hard problem because language

understanding is still an unsolved problem. We have made great strides in language understanding

at Google. However, it's still an unsolved problem. We are very excited about the algorithms

we are developing to understand language, but what you are talking about is, in some

sense, the grand challenge for language understanding. So, we are excited about the strides we have

made but that's our ultimate objective years down the road to get to that point. And that's

why after having worked in the field for 20 years, I come into work every morning like

a kid going to a candy store, because I get to work on all these things.

>> MOORE: Patrick Moore. >> GARY: I'm sorry. I'm speaking. This is

Gary. I'm with SearchWise [PH]. Actually we are hosting some Real-Time information on

our website. So, is there any way that we can submit our channels to Google Real-Time
Search, you know, as a resource? >> SINGHAL: Sure. Please, talk to Dillon,

who's sitting here after the event. >> GUNDOTRA: Yup.

>> GARY: That's also a general question for a lot of Real-Time hosting Websites.

>> SINGHAL: Right. >> MAYER: I think it makes sense at some point

to have a standard API and a standard feed that we accept and I think we will be moving

in that direction as we evolve the product. >> GARY: Okay, thanks.

>> PATRICK MOORE: Patrick Moore. One of the questions that I've noticed come up on Twitter,
and I think it should be repeated: a lot of energy and effort is spent on PageRank.

What does this do in Real-Time Search? It seems like, you know, you've got a real issue
here. How are people going to deal with this? How does PageRank impact the Real-Time Search results?

>> SINGHAL: So, PageRank is a very important piece of our ranking relevance technology

that we use for our entire Web search. And PageRank is indeed one of the hundreds of

factors that we use in our ranking system. And for Real-Time Search, we have all those

hundreds of factors. Some may not be as powerful in the context of Real-Time Search, some maybe

more powerful. In addition to Page Rank and those numerous other signals, we have had

to add these technologies that I talked about. Like, language modeling, right? How do you

model language so that it actually finds good, relevant information? And so, PageRank

is always a very important piece of our technology including Real-Time Search. We have just had

to develop many new things to make Real-Time Search as relevant as it is.
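For the technically curious: "language modeling" in retrieval usually means scoring a document by the probability that its language model generated the query. The sketch below is a generic illustration of that textbook technique (Dirichlet-smoothed query likelihood), not the algorithm Singhal is describing; every name and constant here is hypothetical:

```python
import math
from collections import Counter

def query_likelihood(query_terms, doc_terms, background_probs, mu=2000):
    """Textbook query-likelihood scoring with Dirichlet smoothing.
    Short, bursty real-time posts carry very little text, so smoothing
    against a background corpus model does most of the work."""
    counts = Counter(doc_terms)
    doc_len = len(doc_terms)
    score = 0.0
    for term in query_terms:
        p_bg = background_probs.get(term, 1e-9)  # corpus-wide P(term)
        p = (counts[term] + mu * p_bg) / (doc_len + mu)
        score += math.log(p)
    return score

# Example: score a short update against the query "bay bridge".
bg = {"bay": 0.0004, "bridge": 0.0003, "cable": 0.0001}
update = "cable broke on the bay bridge".split()
print(query_likelihood(["bay", "bridge"], update, bg))
```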

>> MAYER: And I would add that PageRank is really about finding authoritative pages.

One of the more fascinating things that we've seen or we're beginning to see inside some

of the real-time data is that authoritativeness exists there as well, and there are signals
that indicate it. So, for example, retweets and replies and the structure of how the people

in that ecosystem relate to each other, you can actually use some of our learnings from

PageRank in order to develop a, say, you know, an updates rank and/or an updater rank for

the specific people who are posting. So, this is something we're beginning to experiment

with. It was interesting to see that same parallel where PageRank looks at links. You

can actually look at the very mechanisms inside of these update streams and, in a sense,
assess authoritativeness in the same way.
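Mayer's "updater rank" idea maps naturally onto the same power-iteration math as PageRank, just run over a graph of people instead of pages. The following is a speculative sketch of that analogy, assuming an edge (a, b) means user a retweeted or replied to user b; none of this is Google's published code:

```python
from collections import Counter

def updater_rank(edges, damping=0.85, iters=50):
    """PageRank-style authority over a who-endorses-whom graph.
    An edge (a, b) means a retweeted or replied to b, so endorsement
    mass flows from a to b, just as links pass rank between pages."""
    users = sorted({u for edge in edges for u in edge})
    n = len(users)
    out_deg = Counter(a for a, _ in edges)
    rank = {u: 1.0 / n for u in users}
    for _ in range(iters):
        new = {u: (1.0 - damping) / n for u in users}
        for a, b in edges:
            new[b] += damping * rank[a] / out_deg[a]
        # users who endorse no one leak rank; spread it back uniformly
        leaked = 1.0 - sum(new.values())
        for u in new:
            new[u] += leaked / n
        rank = new
    return rank

# Example: two users repeatedly retweeting a third.
print(updater_rank([("alice", "carol"), ("bob", "carol")]))
```

>> BENNET: We're going to take one more online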

question from Eric Wester from wikiHow, who's asking: If there's an option to disable the

scrolling feature; he says he realizes there's a pause, but what about disabling it altogether?

>> SINGHAL: So, we really have experimented tremendously with this user interface for

Real-Time Search. And based on early positive and good feedback like this, we did introduce

the "Pause" button. And after a lot of experimentation, I think the current interface is serving its

purpose of conveying to you the Real-Time nature of your query and providing the Real-Time

results. And we are always experimenting with everything that we do, including user
experience, not just for Real-Time Search, but for everything else. And we will be working
on the user experience of the entire search system going forward. Clearly, Real-Time

Search is the new feature we are very excited about, and we would do a lot more work in

the user experience direction of Real-Time Search going forward.

>> MAYER: And I think we also have to acknowledge this is very early in the evolution of the feature,

and we don't know that this is exactly the right user interface. It may ultimately change

and we also always want to honor our users' preferences. So, if they have expressed

a preference to us, we don't want to later say, "Well, hey, here's a whole new way the

user interface could work and how do we interpret that option there." A few years ago when we

first put PDFs into our search results, we had the PDFs somewhat imbalanced, never showing
up too frequently. And lots of users mailed in and said, "Can we--can I just turn off
having PDFs in the search results?" And we said, "Actually, you know, bear with

us. The relevance of PDFs will get much better." And sure enough, it did. And we didn't want

to have this sort of mechanism where you turn it off and then we have to re-introduce it

later once it was better. We actually think that the Real-Time Search relevance is already

very good, but that--we can anticipate changes to these search results. So, a "Pause" was

the right compromise. >> KENNEDY: Nell Kennedy. I'm wondering about

time to locate on the geo search. Can you talk about your coverage for non-orbital sources
of location data worldwide, through your scanning MAC addresses and looking at cell

towers right now? And have you done any--have you done any work looking at how Galileo could

possibly change how you do search internationally? >> GUNDOTRA: When you say Geo Locate, you

mean a feature like My Location? >> KENNEDY: Correct.

>> GUNDOTRA: Okay. So, our time to locate varies depending upon the source of the geo

data. So, on a cell phone, if it's GPS data, cell phones depending on make and model can

sometimes take a very long time, up to 20 minutes, to get their initial GPS fix. In
those cases, we fall back to using the cell tower, and if the phone has an A-GPS chip, A-GPS
gives us assisted GPS. That assist comes from using the unique identifier of the cell

tower, and we're able because of a very large database to almost instantaneously give you

a reasonably accurate fix until we can get the true lat-long once the GPS kicks in. So,

that gives you that very, very fast experience in Google Mobile Maps.
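The fallback chain Gundotra describes (an instant but coarse cell-tower fix while waiting for a true GPS lock) can be summarized in a few lines. This is a toy sketch under stated assumptions; the Fix type, the accuracy figure, and the tower database are all invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    lat: float
    lng: float
    accuracy_m: float  # rough radius of uncertainty, in meters
    source: str

def locate(gps_fix: Optional[Fix], cell_tower_id: Optional[str],
           tower_db: dict) -> Optional[Fix]:
    """Hypothetical fallback chain: use a true GPS fix when the chip
    has locked on, otherwise return an instant but coarser position
    from a cell-tower database, as described above."""
    if gps_fix is not None:
        return gps_fix  # precise lat-long from the GPS receiver
    if cell_tower_id in tower_db:
        lat, lng = tower_db[cell_tower_id]
        return Fix(lat, lng, accuracy_m=1500.0, source="cell-tower")
    return None  # no usable location source yet

# Example: no GPS lock yet, but the serving tower is in the database.
towers = {"310-26-1234": (37.4142, -122.0774)}
print(locate(None, "310-26-1234", towers))
```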

>> KENNEDY: And do you feel like you have good coverage internationally for those types

of location sources, other than cell--other than GPS?

>> GUNDOTRA: The coverage has grown by an order of 92 this year. And, obviously, the

more and more phones that carry Google Mobile Maps increase our coverage. So, we're very
happy with most places internationally; there are a few very rural areas where we continue
to drive for better coverage. But the rate of growth is very encouraging. We will
have broad coverage. >> KENNEDY: Will Google's Real-Time Search

API get supported in the Google Search API in the future? I would rather like to see that
for long-tail adoption. >> SINGHAL: So, that's a great idea. We haven't

yet looked into the details of that. And I assure you, we'll be looking at the details

of that going forward. >> BENNET: Yeah, one more question online,

which is just, when is all the stuff going to be live both in the mobile front end and

on the Web front? >> GUNDOTRA: So, I'll take the mobile front

stuff. Live today is Japanese Voice Search. Brand new in the Android marketplace is Google

Mobile Maps Version 3.3. Or if it isn't live now, it will be live in the next few hours.

And also, Google Goggles, as you saw from the Twit, is already available in the Android

marketplace. Some of the new innovations that you saw on the Google homepage, like Near

Me Now, those are in the coming weeks ahead. We can't exactly predict those dates because

of some of the holidays, but it's very imminent, in the near future. Things like

Product Search are probably a few more weeks beyond that. But that gives you a timeline

for everything we discussed except one thing, that fascinating demo I showed where the device

was able to translate from English to Spanish. That was a technology concept demo. You'll

see the first products from Google that start to include that technology some time in Q1.

>> SINGHAL: So, for Real-Time Search, we are starting the rollout process today. By the

end of the day, some percentage of Google users will start seeing Real-Time Search.

And in the coming days, we will complete that rollout as our systems roll out to the various

data centers that we have. For now, if you want to access Real-Time Search, please go
to google.com/trends and you can click on the new "Hot Topics" panel on the left, or
type your query under that panel in the new search window we have added, and you will
get Google's Real-Time results now. >> You don't get at the Website [INDISTINCT].

>> SINGHAL: We can check that. We'll make sure. >> I have heard that...

>> SINGHAL: No, I'm sure. You know, maybe some of the binary we just did needed.

>> [INDISTINCT] before that. >> SINGHAL: Okay. We'll check it right away.

Someone's checking it. >> JONES: Bobby Jones from the Guardian, again.

I wanted to ask Vic about the visual search. You said there are a billion images already

in the computer vision data. I'm just wondering who--whether Google owns, like, the canonical

encyclopedic entry of what that image is, or whether you determine what that is by, you
know, a Web search algorithm. You know, is the Empire State Building determined by everyone
on the Web saying it's the Empire State Building, or you deciding that's what it is?

>> GUNDOTRA: So I could give you a very compelling answer but I would be remiss as the person

sitting directly to your left is the engineering lead behind it. And so, in this case, if you

could hand over the microphone to Neven--no, Hartmut. Hartmut will give you the answer.

>> NEVEN: Actually, one of the most interesting parts of our system is a technology called

"Unsupervised Learning." So, essentially, the algorithms will go out and build a model

for visual recognition in a completely unsupervised manner, based on photos we find. And then
a model, let's say in your example, for the Empire State Building, will emerge as a reflection
of what's on the Web.
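One common unsupervised approach in this spirit is to cluster local image descriptors from many unlabeled photos into a "visual vocabulary", with the model emerging from the data rather than from human labels. The sketch below shows plain k-means doing exactly that; it is a generic illustration, not Neven's actual pipeline:

```python
import numpy as np

def build_visual_model(descriptors, k=64, iters=20, seed=0):
    """Toy unsupervised step: k-means over local image descriptors
    (rows of `descriptors`, e.g. 128-d SIFT-like vectors) harvested
    from many web photos of the same landmark. The cluster centers
    form an emergent 'model' with no human labeling involved."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # assign every descriptor to its nearest center
        dists = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

# Example: 1,000 random 128-d descriptors stand in for real features.
feats = np.random.default_rng(1).normal(size=(1000, 128))
model = build_visual_model(feats, k=16)
```

>> JONES: Okay. My follow-up question then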

was: Would it be possible to Google-bomb visual search?

>> NEVEN: In principle, yes, but, I mean, let's say, we have techniques to prevent things

like this. >> JONES: Okay.

>> BENNET: So, I think we have time for just a couple more questions. One over here in the

back, yeah. >> MACENA: So, currently--I'm Chris Macena.

I'm sorry. You're doing a lot with text-based status updates. I'm curious if you're looking

to expand that to other types of activities that people are doing on the Web that are

being recorded through other types of social networking services and systems, again, just

beyond status updates. >> SINGHAL: Indeed. We are very excited about

what the future holds. Today, we are starting our new Real-Time Search with text-based status

updates and the rest of the Real-Time Web. And we are learning about how to do relevance

in this world and we have done a very good job of getting you relevant real-time results.

As time progresses, we have image search technology, we have video search technology, and we will

be accepting all those forms of real-time information. Some of the greatest information

I've seen in real time is held in Twitpic, for example, right? The cable broke on the Bay

Bridge or something like that. There's excellent information there and, indeed, we will be

integrating that going forward. >> MAYER: As also, MySpace, one of our partners,

is already looking at how they can take some of the non-textual updates and make them available

to us. >> RIPER: Hi. I'm Van Riper. I am actually

one of the co-leaders of the Silicon Valley Google Technologies User Group. But in my

day job, I work for Crillon [PH], which does product-based local search and real-time
inventory look-up. So I was kind of curious--I should be looking for another job. Could you
say a little more about the availability of the integration of product availability in
your results that you mentioned early on, before all this real-time stuff? It's just very
exciting. Early on--yeah, the early demo mentioned something about being
able to get real-time product availability, product inventory.

>> GUNDOTRA: Yes. >> RIPER: And when you're--when you then answered

the follow-up question about availability of stuff, you didn't even mention that so

I was just curious. >> GUNDOTRA: Oh, I'm sorry. That will be integrated

into our product search sometime in Q1. The partners I demonstrated there were Sears and

Best Buy. And, obviously, we're working with many other retail partners to get inventory

data so that we can combine that with the user's location and deliver that experience.

>> RIPER: Got it. Thanks. >> GUNDOTRA: Yeah, sure.

>> SINGHAL: Okay. Let me just take a moment. And Dillon tells me that google.com/trends

should be working now. Maybe we hit a minor glitch so we will check that again. But please

test it. >> SINGLE: Ryan Single from Wire.com, again.

Could you give a sense of what relationship there is between the Real-Time Search Index

and the Google Web Index, and whether one is feeding into the other or not?

>> SINGHAL: Another great architectural question. Google Search Index over the years has evolved

to be updated every few minutes, even like, you know, within a few seconds. And Real-Time

Search Index is just a continuation of that technology that we have been building for

many years now to do things like Google News Search, or even crawling the Web very fast.

The new thing that we have added into this is update receiving and indexing and merging

with the index. And that has been a great new technology because, at that end, we have

built some of the technologies I showed you for modeling how information is flowing in

the system.
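The "update receiving and indexing and merging" step Singhal mentions amounts to an inverted index that can accept a stream of documents and serve its freshest postings. Here is a deliberately tiny sketch of that idea, with all names hypothetical:

```python
import time
from collections import defaultdict

class RealtimeIndex:
    """Toy streaming inverted index: updates are ingested the moment
    they arrive, and queries can pull the newest postings, the
    'seconds fresh' slice that gets merged with the main web index."""
    def __init__(self):
        self.postings = defaultdict(list)  # term -> [(timestamp, doc_id)]

    def ingest(self, doc_id, text):
        ts = time.time()
        for term in set(text.lower().split()):
            self.postings[term].append((ts, doc_id))

    def latest(self, term, k=10):
        # newest first; a real system would also apply relevance signals
        return sorted(self.postings[term], reverse=True)[:k]

idx = RealtimeIndex()
idx.ingest("tweet-1", "Cable broke on the Bay Bridge")
print(idx.latest("bridge"))
```

>> BENNET: So I think that's all the time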

we have for now. We'll stick around if you all have questions here. And thanks so much

for coming. >> SINGHAL: Thank you.

[END]
