Monday, July 30, 2007

The 3D Revolution: Cartography for the masses

The New York Times has a report describing the rise of inexpensive or free Internet-based software tools that have turned cartography into an amateur activity. You may have heard the term "mash-up" used to describe pointer-filled Web maps built on Google Maps (ChicagoCrime.org, for example), but the article focuses on the impact of newer tools and services such as Microsoft Collections, Google's "My Maps," and MotionBased. Professional cartographers have taken notice:
The maps sketched by this new generation of cartographers range from the useful to the fanciful and from the simple to the elaborate. Their accuracy, as with much that is on the Web, cannot be taken for granted.

"Some people are potentially going to do really stupid things with these tools," said Donald Cooke, chief scientist at Tele Atlas North America, a leading supplier of digital street maps. "But you can also go hiking with your G.P.S. unit, and you can create a more accurate depiction of a trail than on a U.S.G.S. map," Mr. Cooke said, referring to the United States Geological Survey.

April Johnson, a Web developer from Nashville, has used a G.P.S. device to create dozens of maps, including many of endurance horse races -- typically 25-to-50-mile treks through rural trails or parks.
I've discussed how GPS and other consumer electronics might change the way we find out about events. In my "Meeting the Second Wave" essay from February, I described the following usage scenario, involving timestamped images tagged with GPS data and automatic image tagging (two code sketches after the excerpt make the filtering concrete):
In the second wave of new media evolution, content creators and other 'Net users will not be able to manually tag the billions of new images and video clips uploaded to the 'Net. New hardware and software technologies will need to automatically apply descriptive metadata and tags at the point of creation, or after the content is uploaded to the 'Net. For instance, GPS-enabled cameras that embed spatial metadata in digital images and video will help users find address- and time-specific content, once the content is made available on the 'Net. A user may instruct his news-fetching application to display all public photographs on the 'Net taken between 12 am and 12:01 am on January 1, 2017, in a one-block radius of Times Square, to get an idea of what the 2017 New Year's celebrations were like in that area. Manufacturers have already designed and brought to market cameras with GPS capabilities, but few people own them, and there are no news applications on the 'Net that can process and leverage location metadata -- yet.

Other types of descriptive tags may be applied after the content is uploaded to the 'Net, depending on the objects or scenes that appear in user-submitted video, photographs, or 3D simulations. Two Penn State researchers, Jia Li and James Wang, have developed software that performs limited auto-tagging of digital photographs through the Automatic Linguistic Indexing of Pictures project. In the years to come, auto-tagging technology will be developed to the point where powerful back-end processing resources will categorize massive amounts of user-generated content as it is uploaded to the 'Net. Programming logic might tag a video clip as "violence," "car," "Matt Damon," or all three. Using the New Year's example above, a reader may instruct his news-fetching application to narrow down the collection of Times Square photographs and video to display only those auto-tagged items that include people wearing party hats.
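
To make the scenario concrete, here is a minimal Python sketch of the kind of filter such a news-fetching application might run. Everything here is a hypothetical stand-in: the Photo record, the all_photos index, the 80-meter "one block" radius, and the Times Square coordinates; the distance math is the standard haversine formula.

    from dataclasses import dataclass
    from datetime import datetime
    from math import asin, cos, radians, sin, sqrt

    @dataclass
    class Photo:
        url: str
        taken: datetime                # timestamp embedded at the point of creation
        lat: float                     # GPS latitude from the image metadata
        lon: float                     # GPS longitude from the image metadata
        tags: frozenset = frozenset()  # labels applied by an auto-tagger (see below)

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance in meters between two lat/lon points."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = (sin((lat2 - lat1) / 2) ** 2
             + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6,371 km

    def photos_near(photos, lat, lon, radius_m, start, end):
        """Photos taken within radius_m of (lat, lon) during [start, end)."""
        return [p for p in photos
                if start <= p.taken < end
                and haversine_m(p.lat, p.lon, lat, lon) <= radius_m]

    all_photos: list = []  # stand-in for whatever photo index the application keeps

    # One city block is roughly 80 meters; the coordinates are Times Square's.
    hits = photos_near(all_photos, 40.758, -73.9855, 80,
                       datetime(2017, 1, 1, 0, 0), datetime(2017, 1, 1, 0, 1))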
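The tag-narrowing step from the essay is then just one more filter over the same records. The tag strings are, again, whatever a hypothetical auto-tagger might emit:

    def with_tags(photos, required):
        """Keep only photos whose auto-applied tags include every required tag."""
        required = set(required)
        return [p for p in photos if required <= set(p.tags)]

    party_hats = with_tags(hits, {"party hat"})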
The party hat example may still be years away, but the New York Times article describes how "geotagging" has already become a reality, thanks to features offered by Flickr and Google Earth.
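
Flickr's public API already supports exactly this kind of geographic and temporal query. A hedged sketch follows, using parameter names from Flickr's published flickr.photos.search method as I understand it; the API key is a placeholder you would obtain from Flickr, and the 0.1 km radius approximates a city block:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    # flickr.photos.search accepts geographic and date filters; radius is in km.
    params = urlencode({
        "method": "flickr.photos.search",
        "api_key": "YOUR_API_KEY",            # placeholder; issued by Flickr
        "lat": 40.758, "lon": -73.9855,       # Times Square
        "radius": 0.1,                        # roughly one block
        "min_taken_date": "2007-01-01 00:00:00",
        "max_taken_date": "2007-01-01 00:01:00",
        "has_geo": 1,                         # only photos carrying location data
        "format": "json", "nojsoncallback": 1,
    })
    with urlopen("https://api.flickr.com/services/rest/?" + params) as resp:
        photos = json.load(resp)["photos"]["photo"]
    print(len(photos), "geotagged photos found")

Google Earth reads the same sort of location metadata from KML files, so a response like this could in principle be rewritten as placemarks and overlaid on the globe.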
