Saturday, May 18, 2013

How Google, with your help, is overhauling its maps

Google Maps users supply critical data that the company uses to customize maps based on the destination selected and to create cinematic fly-bys of popular locations.


Yatin Chawathe, engineering director for the Google Maps Web platform, speaks at Google I/O 2013.
SAN FRANCISCO -- Google's mapping service relies on mammoth data centers, vast quantities of satellite imagery, and a fleet of Street View cars. But it also relies on you.
At the Google I/O developer show here on Friday, Google engineers described how they've overhauled Google Maps, and two areas in which information from Google users is key to that.
First, using anonymous data collected from people using Google Maps on mobile phones, it picks the best navigation routes. Second, using photos people upload to its Panoramio and Picasa photo services, it generates immersive tours that swoop around popular attractions.
"We can take advantage of all the work you do," said Yatin Chawathe, engineering director for Google Maps Web platform. "It's a shared responsibility."

Chawathe and Jonah Jones, the user experience design leader for Google Maps, gave a behind-the-scenes look at just how the new Google Maps works. Google made the desktop Web browser version of the new service available to Google I/O attendees, and others can sign up to get it, but it's not yet available on mobile.

The new Google Maps makes several changes, including a search bar that hovers over the upper left of the map rather than a bar all the way across the screen above it. If you search for something or click an item on the map, Google Maps pops up a rectangular card with information about it.
There's a subtler change, too, though: Google redraws the map with a focus on that item. Roads that lead toward it get brighter colors, broader widths, and bolder text. Roads that are secondary to the task fade and lose labels.
The new Google Maps interface, here showing the vicinity of Pullens Gardens, London, but without the gardens themselves indicated as a point of particular interest.
The new Google Maps adjusts to spotlight route information specific to a particular destination. Note how navigationally relevant streets are bolder while others have faded.
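A minimal sketch of what that destination-aware restyling might look like, assuming each road segment already carries a relevance score for the selected destination. The RoadSegment class, the thresholds, and the style values below are illustrative assumptions; the article does not describe how Google actually scores or renders segments.

```python
# Hypothetical sketch: restyle road segments by relevance to the selected
# destination. Thresholds, field names, and style values are illustrative
# assumptions, not Google's actual implementation.
from dataclasses import dataclass


@dataclass
class RoadSegment:
    name: str
    relevance: float  # 0.0 (irrelevant) .. 1.0 (leads straight to the destination)


def style_for(segment: RoadSegment) -> dict:
    """Map a relevance score to rendering attributes."""
    if segment.relevance >= 0.7:
        # Primary approach roads: brighter, wider, labeled in bold.
        return {"color": "#ffcc00", "width": 6, "label": segment.name, "bold": True}
    if segment.relevance >= 0.3:
        # Secondary roads: normal styling.
        return {"color": "#e8e8e8", "width": 3, "label": segment.name, "bold": False}
    # Everything else fades into the background and loses its label.
    return {"color": "#f5f5f5", "width": 1, "label": None, "bold": False}


# Example segments (names and scores are made up for illustration).
segments = [
    RoadSegment("Amelia St", 0.9),
    RoadSegment("Penton Pl", 0.5),
    RoadSegment("Minor Mews", 0.1),
]
for seg in segments:
    print(seg.name, style_for(seg))
```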

The idea, Jones said, is to combine the comprehensive map data Google has with the sort of highly customized information you might get when a person draws a map of how to get to a particular park.
"We wondered how we could build a map that feels customized for you but is also fit for purpose," Jones said.
So how does Google do it?

Help with navigation

It begins with a search query, of course. As soon as a person indicates what destination they're interested in, Google analyzes people's real-world navigation patterns in the vicinity.
Once a person indicates a destination on a map, Google finds all driving data for the vicinity. It then ranks that data by popularity to spotlight the most useful navigational routes. Here, the lesser routes are shown in light blue and the top picks in dark blue.

"Google has access to significant amounts of anonymized data about how millions and millions of users use Google Maps every day," Chawathe said. It analyzes each segment of road for popularity to gauge how people would get to a particular place -- Pullens Gardens in London, in his example.
After Google figures out which routes get visual priority on the map, it adds a second layer of contextual information, spotlighting local details that are likely to be relevant. For example, it shows the names of cross streets near the destination.
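A rough sketch of that popularity-ranking step, under the assumption that the anonymized navigation data can be reduced to a list of trips ending at the destination, with each trip a sequence of road-segment names. The data shapes, the example trips, and the split into top picks versus lesser routes are illustrative, not Google's actual pipeline.

```python
# Hypothetical sketch: rank road segments near a destination by how many
# anonymized trips ending there traverse them. Trip data shape and the
# "top picks" vs. "lesser routes" split are illustrative assumptions.
from collections import Counter


def rank_segments(trips_to_destination: list[list[str]]) -> Counter:
    """Count how many trips ending at the destination use each road segment."""
    counts = Counter()
    for trip in trips_to_destination:
        counts.update(set(trip))  # count each segment once per trip
    return counts


def split_by_popularity(counts: Counter, top_n: int = 5):
    """Top picks would get dark-blue styling; the rest stay light blue."""
    ranked = counts.most_common()
    return ranked[:top_n], ranked[top_n:]


# Made-up trips for illustration.
trips = [
    ["A3", "Walworth Rd", "Amelia St"],
    ["A3", "Kennington Ln", "Amelia St"],
    ["Walworth Rd", "Amelia St"],
]
top, rest = split_by_popularity(rank_segments(trips), top_n=2)
print("top picks:", top)
print("lesser routes:", rest)
```

Counting each segment once per trip keeps one unusually long trip from dominating the ranking.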
The new look will spread to mobile devices, though neither Jones nor Chawathe said when.
"As time progresses, you're going to see this kind of experience mirrored across all sorts of devices," Jones said.
Jonah Jones, the user experience design leader for Google Maps, speaks at Google I/O 2013.

Building immersive photo tours

Chawathe and Jones also described how Google handles photos in the new Google Maps to construct virtual fly-bys of sites. These tours are designed to look like what a cinematographer might produce with a moving camera, Chawathe said.
"Once you figure out how to get somewhere, you want to get to know that place better," he said, so Google tries to provide "a rich, immersive feeling for how these places look in the real world."
Google begins with photos supplied by users, then analyzes them with Google's computer-vision algorithms.
"We try to figure out exactly where that photo was taken in the real world, and where in the real world every single pixel in every one of these photos corresponds to," Chawathe said.
The result is a 3D "point cloud" that links the real world to a large number of photos -- tens of thousands of them for a site such as Notre Dame Cathedral in Paris, he said.
To create a photographic fly-by of a location, Google generates a "point cloud" that records each spatial point of a location and knows what pixels from thousands of photos correspond to that point. It then picks top photos, as judged in part by human users of Google services, to choose which shots to incorporate.
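A simplified sketch of the kind of data structure such a point cloud implies: each reconstructed 3D point records which photos observed it and at which pixel. The classes and field names here are assumptions for illustration; Google has not published its actual representation.

```python
# Hypothetical sketch: a point cloud that ties each reconstructed 3D point
# back to the photos (and pixel coordinates) that observed it. Field names
# and structure are illustrative, not Google's actual representation.
from dataclasses import dataclass, field


@dataclass
class Observation:
    photo_id: str           # which user photo saw this point
    pixel: tuple[int, int]  # (x, y) location of the point in that photo


@dataclass
class Point3D:
    xyz: tuple[float, float, float]  # position in world space
    observations: list[Observation] = field(default_factory=list)


@dataclass
class PointCloud:
    points: list[Point3D] = field(default_factory=list)

    def photos_covering(self, point_index: int) -> set[str]:
        """All photos in which a given 3D point is visible."""
        return {obs.photo_id for obs in self.points[point_index].observations}


# Made-up example: one point seen by two photos.
cloud = PointCloud(points=[
    Point3D(xyz=(2.1, 0.4, 15.3), observations=[
        Observation("photo_0001", (640, 320)),
        Observation("photo_0482", (212, 455)),
    ]),
])
print(cloud.photos_covering(0))  # {'photo_0001', 'photo_0482'}
```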

Picking top photos

Google users again help out for the next step, selecting the photos. "You don't want to really sit through those tens of thousands of photos," Chawathe said, but Google users have done a lot of the hard work already.
"For each photo, we rank them on how popular they are -- how many people looked at that photo, how many liked that photo," he said.
Google picks a few of these good shots, mixing close-ups with more distant views and screening out duds such as family photos that don't focus on the site itself. And to make the tour smoother, it drops in interstitial photos.
Some human labor is used to screen photos for suitability, but the company expects computers to take over more and more of the work.
"The first pass is purely algorithmic. We do perform some manual moderation," Chawathe said. "That will go down over time as we gain more confidence and gain more signals to feed into our algorithms."

A schematic version of a cocktail-napkin map with the navigational highlights a human might draw. It's this sort of customized information that Google wanted to build into the new Google Maps.

by Stephen Shankland
