Showing posts with label Media. Show all posts

Thursday, December 22, 2022

Favorite holiday traditions, and revisiting Ebenezer Scrooge

Japanese Santa Claus illustration, 1914
Nearly everyone has a favorite holiday tradition. Small children gravitate toward opening presents. Some people love shopping for presents, or wrapping them in a very creative way. For our daughter, it is decorating the tree. Our son still likes to leave out cookies for Santa, even though he is a teenager!

For several Jewish families on our street, it's lighting the menorah. For some Catholic neighbors, it's attending midnight Mass.

Music is popular, from religious songs to modern seasonal hits. Performances of The Nutcracker are frequently sold out in Boston. One of Nicole's colleagues has attended the Boston Pops annual holiday concert every year since 1987!

One of my favorite traditions is watching a film version of A Christmas Carol. There are more than a dozen movies dating back to the early 1900s, ranging from by-the-book renditions to musicals to modern adaptations. I prefer the 1951 black-and-white film starring Alastair Sim.

This week, I read Dickens' original novella, A Christmas Carol. In Prose. Being a Ghost Story of Christmas. I expected it to be dated in language and plot, but that was not the case. Indeed, it is a gripping story that brings together horror, drama, and even a touch of humor - I couldn't stop reading! You can read it online here. Take a look at the vivid description of Ebenezer Scrooge on the opening page:

Oh! But he was a tight-fisted hand at the grindstone, Scrooge! A squeezing, wrenching, grasping, scraping, clutching, covetous, old sinner! Hard and sharp as flint, from which no steel had ever struck out generous fire; secret, and self-contained, and solitary as an oyster. The cold within him froze his old features, nipped his pointed nose, shrivelled his cheek, stiffened his gait; made his eyes red, his thin lips blue; and spoke out shrewdly in his grating voice. A frosty rime was on his head, and on his eyebrows, and his wiry chin. He carried his own low temperature always about with him; he iced his office in the dog-days; and didn’t thaw it one degree at Christmas.

Then we come to Scrooge and Marley, Ebenezer's place of business and the single-minded focus of his life - making as much money as possible, no matter the cost to the people around him:

The door of Scrooge’s counting-house was open that he might keep his eye upon his clerk, who in a dismal little cell beyond, a sort of tank, was copying letters. Scrooge had a very small fire, but the clerk’s fire was so very much smaller that it looked like one coal. But he couldn’t replenish it, for Scrooge kept the coal-box in his own room; and so surely as the clerk came in with the shovel, the master predicted that it would be necessary for them to part. Wherefore the clerk put on his white comforter, and tried to warm himself at the candle; in which effort, not being a man of a strong imagination, he failed.

No wonder "Scrooge" has become a byword for a miserly tightwad! I won't spoil the ending, but A Christmas Carol is a story of redemption and of finding humanity in the most unlikely of characters.

Dickens' story was a hit the moment it was released in England in December 1843, nearly 180 years ago. It would have been a story treasured by generations of our English-speaking forebears, first in book form, then on the stage, and later in film.

Whatever holiday tradition you observe, enjoy the time to celebrate and connect with family and friends.

Saturday, March 14, 2015

The fall of tech media and the rise of PR


I was employed as a journalist from 1994 to 2010, with breaks in 1996, 1999, and 2002-2005. I worked for a TV station, a newspaper, a trade magazine, and then in various online news ventures. In this post, I will share a short history of the decline of traditional media, and how many talented news veterans have ended up working for PR.

In October 1999, I began working in the tech media. This was the height of the dot-com era, when magazines were as thick as phone books and money was pouring into high tech advertising. From 2000-2002, after the first dot-com bubble burst, the industry experienced the first wave of mass layoffs. At that time, the newspaper and magazine sectors were still relatively strong and were able to absorb some displaced writers and editors, but some started to go over to the "dark side" (PR). At my company it was also possible for senior writers and editors to go into research, which was seen as a more respectable alternative career path than corporate PR.

There was a slight recovery from 2003-2004, but then an interesting thing started to happen: a steady trickle of slow-motion layoffs, consolidations, and other cost-cutting measures. The weaker pubs began to fail as demand for print advertising dried up, and events began to feel the heat too as new entrants muscled their way into the scene. In some cases, staff were shifted to growing online units, but overall there was a net loss of staff in editorial, ad operations, and events.

Starting around 2005 or so, I began to notice a curious thing: many of the 30-something journalists in my organization were voluntarily moving to industry. Some went to work for PR agencies, but in many cases they moved to the in-house marketing units of large tech companies -- Microsoft, CA, Bose, etc. Certainly the pay and benefits were attractive, but my own sense was that there didn't seem to be much of a future in journalism. Why keep a job that offers little chance to advance and will probably end in a layoff?

People stopped using the term "the dark side" around that time. It's hard to take an ethical stand about the purity of the profession when people are getting laid off or taking salary cuts while serious journalism is sacrificed for the sake of pageview-heavy slideshows and blogs.

Sunday, March 10, 2013

Back in the saddle

So the 廢物 show in Taipei went well. Really well. The best place to see photos is on the band's Facebook page. Someone took professional-grade video and when the final cut is ready, I'll share it on this blog.

Since returning to the States, I've jumped back into the In 30 Minutes guides. I incorporated in January, launched the first title (about Online Content Marketing) by another author in February, and am busting my ass this month to expand marketing and distribution. I'm also looking for new guidebook authors. In other words, I am in full-fledged Lean Media mode. You can follow the action on @ilamont.

Until next time ...

Friday, March 23, 2012

Trader Joe's pink slime statement



I thought I would share this note, as an example of the importance of the Web when it comes to customer communications. Right now there is a huge wave of concern over "pink slime", a beef additive that consists of gristle, connective tissue, and "fatty bits of leftover meat." Naturally no one wants to have this in their hamburgers or tacos, and consumers are justifiably concerned. I've been reading some of the coverage showing supermarkets falling over themselves to swear off pink slime, but noticed a name missing from the list: Trader Joe's.

That's funny. TJ's is big on quality. But they're not on that list. And when I searched the Trader Joe's website and checked the "Customer Updates" page, there were many notes about food quality and sourcing issues, but nothing on pink slime (see screenshot below). Was Trader Joe's hiding something?

Trader Joe's pink slime - no statement on TJs website

So I emailed the company and quickly received a reply. Here's Trader Joe's statement about pink slime:

Thank you for taking the time to contact us about the "pink slime" issue. Our ground beef is 100% pure beef with nothing added. Please be reassured that this is not something that would be permitted in our products-- NO pink slime in any of our beef (or any of our products with beef listed in the ingredients). We stand buy our integrity and our dedication to our customers.

At Trader Joe's, food safety is of the utmost importance, and we take seriously the work done to ensure our products are wholesome and safe; after all, we're customers, too - and we would not sell anything we would not eat, drink, or use ourselves. Therefore, we only work with reputable suppliers, many of which are actually generally much smaller in comparison to other markets, just so that we can ensure the quality and integrity of our products. We also have third party audits of our products and vendor facilities to ensure that our standards are met.

It was signed "Kerry, Customer Relations".

I bothered to email the company, but how many people didn't, and simply assumed that because there was no statement on the website or in the media, TJ's does add pink slime?

Clear, up-front communication is a must in cases like these. It's vital to get the statement on the website and out to the media and bloggers covering the issue ... or risk your reputation and sales suffering as a result.

Update: Trader Joe's finally updated the website on March 26 with a statement about ground beef, probably a week or two after they started hearing from customers (I had emailed them 4 days before). It took them that long to write a 100-word statement and post it to the Web? Ridiculous.


Monday, December 05, 2011

Outer Limits, Moody Street: A 20th-century shop thrives in the digital age



Comic book stores are one of those 20th century retail holdouts that will continue to hold on to their tight little niche. I realized this as I was browsing the aisles of Outer Limits, located on the bustling old-school shopping district along Moody Street in Waltham, Massachusetts.

Economists and e-commerce experts may be skeptical. How is it possible, they might ask, that a store that specializes in analog media and obscure toys, carries tens of thousands of dollars worth of inventory, and is generally regarded as a lifestyle business has any hope in the plugged-in, digital age?

My answer: It's not just that Outer Limits has an amazing collection of sci-fi toys, pop-culture memorabilia, Mad books, Dungeons & Dragons manuals, 45 RPM records, and (of course) several thousand comic books.

It's also because the collection is browsable and tactile in a way that eBay and Amazon are not.

It's because Outer Limits leverages these online channels to support customers outside of eastern Massachusetts -- and does so with near-perfect customer satisfaction rates.

It's because the owner, Steve, can answer questions about practically any obscure comic book author/artist, and has samples or collections of many of them.

It's because the store has a wide range of customers, mostly males from about age 5 to 50, but some women, and many foreign visitors.

And it's because customers can find things that they probably wouldn't even know to look for on most e-commerce sites. To wit:

Angry Birds stuffed toy? Check!

Darth Vader bobblehead? Check!

Model Clone Wars troop carrier? Check!

Newly published collection of Spy vs. Spy escapades? Check!

Die-cast Aston-Martin car from an old 007 movie? Check!

Collections of seemingly every well-known American comic book character, from Archie to the X-Men? Check!

Complete Neal Adams collection, from the 1960s to the 1990s? Check!

Large plastic Godzilla action figure? Check!

Large plastic Mecha-Godzilla action figure? Check!

I am not the only fan. I was in the shop recently and it was packed with kids, teens, and adults. Everyone was finding something that interested them. And as long as there is a supply of unique items that tug at people's sense of nostalgia, pop culture, and fun, Outer Limits will continue to hold on to its special niche.

The shop is located at 437 Moody Street in Waltham, Massachusetts (two doors down from the popular Patel Brothers Indian supermarket).

Monday, June 27, 2011

WorldTV - our MIT Media Lab final project

One of the more interesting class projects I took part in during my last semester at MIT was our MIT Media Lab final project for MAS 571 ("Social TV: Creating New Connected Media Experiences"). The project was called WorldTV; with my teammates (Jungmoo Park, MBA '11, and Giacomo Summa, MSMS '11), I created a pretty slick video demonstration of the proposed software UI. The video was shown at the MAS 571 demo day at the Media Lab (you can watch it below), and we wrote an accompanying concept paper that we are preparing for an IEEE CCNC workshop. In the following post, I'll describe not only what WorldTV is, but its genesis and some of the reactions we've received so far.
WorldTV is a television app and accompanying mobile app for browsing user-generated video from one's social circle, as well as event video produced by strangers that ties into one's news and cultural interests. Instead of using traditional browsing methods -- scrolling through channels or searching for videos -- the proposed service uses a 3D globe as a navigational tool. WorldTV is aimed at people with global networks: people with friends, relatives, and colleagues in other countries; people who spend a fair amount of time travelling; or people who are interested in the news or culture of other countries.

The concept had great appeal to the entire team, not only because of our backgrounds (Giacomo is from Italy, Jungmoo is from Korea, and I spent most of the 1990s living overseas) but also because all of us have observed the exponential growth of user-generated video and realize its power and appeal to ordinary people. In 2006, I wrote about the potential of geotagged, time-stamped online photos to give insights into local events. I expanded the idea to include tweets and user-generated video in a proposal for my Linked Data Ventures class called PPP (PixPeoplePlaces). When I began the Social TV class, I took the PPP concept even further with user-generated video, emphasizing the social aspect of plotting event video on a local map (this was the basis of my first assignment for Social TV -- you can see the poster here).

Developing WorldTV - our MIT Media Lab final project

I envisioned all of these ideas as Web apps displayed on a computer monitor. For one of the early poster sessions for the Social TV class, Giacomo independently came up with a different approach. He asked, why not use a full-sized television screen to display a map of the entire earth with hot spots that reflected breaking hard news events that might be captured by amateur shooters? (This happened as anti-authoritarian demonstrations were breaking out across the Middle East in early 2011). Instead of being a "Lean Forward" experience (something that requires user input or interaction, such as a video game) this would be a "Lean Back" experience, in which the viewer could sit on the couch and take in the video. Giacomo also considered how video could be differentiated on the global map with different sized or colored markers, and how "likes", social networks, or newspaper articles could determine what appeared on the screen. He called it "WorldTV".

There was clearly some overlap between our ideas, and we decided to team up for the final project. We expanded the concept to include not only video from breaking news in other countries, but also cultural events (festivals, parades) and entertainment (sports, performances, etc.). The social filter would not only display streaming/recent videos from one's social circle, but could also reflect the collective interests of the social circle.

WorldTV business model

An additional requirement for the final project was a business model. I had already been thinking about using phone and laptop cameras as a way for ordinary people to access amateur expertise all over the globe, for a price. Examples of amateur expertise might be a power user demonstrating how to use a new gadget, an experienced business owner giving advice on registering a company in a certain state, or a native speaker offering foreign-language conversation practice. I dubbed the scheme Real Time Requests (RTR). A live auction and reputation system would determine prices paid by people seeking expertise, and match them up with sellers. We decided to fold it into the proposal. The idea debuted at another MAS 571 poster session in April:


Jungmoo, who had a background as a professional television reporter for a Korean broadcaster, was intrigued by our poster session presentation and joined the team. Our next task was to take the concept and make a demo to show at demo day at the MIT Media Lab in the last week of class in May. For the final deliverable, we didn't have the skills to produce a working prototype. However, we did have the skills to produce a software mockup and accompanying video demo.

The team got to work. I created a simple WorldTV television UI using HTML and CSS, built the maps with Google Earth, and mocked up a mobile UX on an iPhone "remote". Giacomo wrote the script and starred in the video. Jungmoo took the raw video and graphics and used his professional editing skills to create a really slick video demo, which is shown below:



We presented the video and an accompanying slideshow on the business model last month at the Media Lab. Our Media Lab instructors, Marie-José Montpetit and Henry Holtzman, invited a group of industry pros from major cable and national broadcasters (including NBC and WGBH) to watch all six student presentations. After seeing our team present, one of the NBC visitors was interested in the idea of "shared experiences." Giacomo explained that user-generated video around sporting events and concerts could populate the global view, depending on how one's filters were set up. This prompted another executive, who I believed was from HBO, to question the legality of using amateur concert video. I responded that copyright law was decades behind the technological and social reality, but she was skeptical. I then said that there would always be artists who want to exercise strong control over this content, but there were also many artists who recognized the value of fan content to generate additional interest or loyalty, and in my opinion, the latter group would have a competitive advantage. But as I thought about it later, it was clear that addressing the entertainment industry's copyright concerns would be a huge issue, regardless of how outdated the laws are.

Our team also heard from Henry, who thought the Real Time Requests business model was really a separate concept that did not match WorldTV. We agreed. Jungmoo and Giacomo had actually raised the same concern in our planning discussions, but I felt we needed a business model that did not involve standard subscriptions. Henry noted that a subscription might actually work for some people.

So what's next for WorldTV? All members of the team have graduated, and none of the industry visitors seemed interested in taking it further. We hope, however, that if our paper is accepted to the IEEE CCNC '12 conference, it might get some traction. In the draft that we are now preparing, I outlined the "Future Work" required to make WorldTV a reality:
The next steps for WorldTV would be to create a working prototype using Google Earth, YouTube and Facebook APIs, the Android or iPhone SDKs, and other existing software and hardware components. Besides using the prototype to evaluate functionality and performance, ordinary users in the target audience (people having global networks) could also test the system with an eye toward determining which features and use cases hold the most promise. When the product is ready for wider distribution, identifying suitable “TV App” platforms and partnerships could take place. In the long run, creating a scalable architecture with its own API and opening up WorldTV to outside developers (much like Facebook and Twitter have done) would help unleash the greatest potential of the platform. This would require significant investments, but in the long run would help realize innovations for the next age of television.
If the paper is published, I will share a link in this space. In addition, if anyone is interested in learning more or helping to develop the idea, my contact information can be found here.

Monday, April 25, 2011

Spark Capital investor on Twitter: "Depending on what day it is, they’re profitable"


On April 13, Todd Dagres of Spark Capital came to speak at our New Enterprises class. Todd is a former instructor of the class, and an established venture capitalist -- his VC bio lists Akamai and a host of investments in other networking companies dating back to the 1990s. He is also one of the much-lauded early investors in Twitter, and after his presentation, he fielded questions from the class. I raised my hand, and popped two questions that I thought were relevant to the discussion about building a profitable enterprise (our assignment that week was a go-to-market strategy for our own business ideas): I asked Todd how the Twitter team sold him on the costs and the revenue potential, and whether or not the company was profitable.

Todd responded:
“The second question I can’t answer. I can say that ... let’s put it this way: Depending on what day it is, they’re profitable. So they’re generating lots of revenue (see, that’s the revenue right there). And depending on how much comes in that day, they can be profitable. So they are monetizing, put it that way.

As far as, did they have a plan? Absolutely. They had a plan that said, 'we're going to grow subscribers, we're going to monetize subscribers.' So they had a plausible plan. By the time we invested, they had decent momentum. They had under ... (trails off)

When we invested, by the way, you’ve got to understand. It’s not like it is now. Back then when we invested, Facebook was a fraction of what it is now. Zuckerberg had yet to be on the cover of a magazine, and Twitter had, probably when we invested, 600,000 subscribers. Which if you look at it, 600,000 subscribers, that’s a lot. 'Why did you wait until they had 600,000 subscribers?' Back then, it wasn’t as obvious as it is now that you can monetize the subscribers.

When we made the Twitter investment, there were articles in the press that outnumbered the other articles in the press, relative to social networking. And basically what the articles said on the negative side, is 'social networking advertising CPMs suck.' And I even saw one table that said, 'here is the CPM hierarchy.' And down at the bottom, along with the nastiest stuff you can imagine, was social networking advertising CPMs. And the reason given was, 'you have no idea what people care about on social networks. But if they go to a car site, ho ho ho!’

So cars, financial services, things like that had the highest, high tech blogs and magazines and things like that had the highest CPMs, and way down at the bottom was social networking because no one could understand why advertisers would advertise against social networking, ‘you never know what you are going to get.’ Someone talks about what they did last night, 'who cares, who would ever care about that?' But as we know now, Facebook knows a lot about you, and they can target ads against you, better than probably Google can. All of a sudden, social networking CPMs have gone way up, and it’s pretty obvious. ... (trails off)"
I wasn't able to ask a follow-up question, but there are problems with some of the arguments he used to defend Twitter and its revenue potential:

  1. Facebook may know a lot about its users, but Twitter does not. Real names and other demographic data are not required for registration. Many people on Twitter deliberately obscure their identities.
  2. Facebook CPMs may be higher, but not by much, and surely not approaching the levels that I see Federated Media charging vendors to post display advertising in its network of online publishers (food blogs currently command $5-$12 CPMs, and Business Insider gets >$20 for display advertising). In my own small advertising tests using Facebook's self-serve advertising platform, I paid $0.14 CPMs in February 2010 and $0.20 CPMs in April 2011.
  3. If it wasn't obvious back in 2006 or 2007 that it was possible to monetize Twitter's subscribers, why invest in the company?
  4. Regarding the claim that "depending on how much comes in that day, they can be profitable": Such a defense would never be accepted by the current instructors for New Enterprises (Bill Aulet and Howard Anderson) for our class projects. It's also the sort of thinking that got lots of people in trouble in the late 1990s. Private market trading has valued Twitter at close to $8 billion, not based on real earnings or a plausible business model, but rather the premise that the people behind Twitter will somehow figure out a way to make it work. They haven't so far.
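To make the CPM comparisons above concrete, here is a quick back-of-the-envelope sketch. The dollar rates are the ones cited above; the impression count and the `ad_spend` helper are made-up illustrations:

```python
# CPM = cost per mille, i.e. the price of 1,000 ad impressions.

def ad_spend(impressions, cpm):
    """Total cost of serving `impressions` ads at a given CPM rate."""
    return impressions / 1000 * cpm

# 500,000 impressions at the $0.14 CPM I paid on Facebook's
# self-serve platform in February 2010:
print(round(ad_spend(500_000, 0.14), 2))   # 70.0 dollars

# The same impressions at a $20 CPM (the Business Insider
# display-advertising rate mentioned above):
print(round(ad_spend(500_000, 20.0), 2))   # 10000.0 dollars
```

The gap between tens of dollars and ten thousand dollars for the same audience is the whole point of the CPM hierarchy Todd described.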
Todd never mentioned the most compelling argument for investing in Twitter, and indeed it is seldom mentioned in the classes we have on building innovative businesses: a VC's primary responsibility -- or, should I say, fiduciary duty -- is to produce returns for their funds. If the exit is tied to the actual revenue or profitability of their portfolio companies, that's great. But if not, who cares, as long as the exit is for a comfortable multiple of the initial investment?

As we have seen many times throughout history with everything from tulip bulbs to dot-coms selling bags of dog food, if enough people think that a popular product or service will be The Next Big Thing, a bevy of dreamers, flippers, and suckers will surely come a-running. Of course, the problem with the sale of unprofitable companies at logic-defying valuations is someone will end up getting burned. While the party is still going, however, no one wants to hear the sour notes. In Twitter's case, its status as a game-changing platform creates value on a different dimension for partners in the ecosystem. But this value does not easily translate to revenue, which once again takes us back to the question of Twitter's long-term value to investors.

More blog posts about my MIT experience:

Saturday, March 19, 2011

Solutions for the academic/mass media divide

"Why don't journalists link to primary sources?" The question was posed on Hacker News, and was based on a Guardian article lamenting sloppy reporting in the Daily Mail and Telegraph.

It prompted me to write the following response in the Hacker News thread:

If you asked most reporters whether they used primary sources, they would say yes, and point to the interviews that they conduct.

But if you were to point out that primary sources also include published research, almost to a man or woman they would say A) they don't have the time to read it, B) they don't have access to the journals, or C) they are not aware the research exists. A few might concede D) even if they had access, they wouldn't be able to understand the research, which points to the fact that most journalists didn't major in science or technology in college, and academic writing can be difficult to penetrate.

Of the above factors, I think C presents an opportunity for academics and startup publishers. On the academic side, it's pretty clear that the traditional method of reaching out to reporters via press releases and personal contacts is becoming less viable as newsrooms cut staff and the remaining writers have less time to network/talk with sources (travel budgets to attend conferences are very restricted these days) and write up stories based on those encounters.

Some researchers have seized upon blogging as a great way to not only reach their peers, but also a wider audience, and of course, other media (including journalists, specialist blogs, etc.). Group blogs written by researchers and experts are another great way to highlight new research and discuss ideas, too. Terra Nova ( http://terranova.blogs.com/ ) is one example focused on virtual worlds; I am sure the audience here knows of many others.

But the problem with individual and group blogs is they are still largely unknown outside of a relatively small group of people. In order to make a mass audience connection, there needs to be a way for these ideas to be presented in newspapers and television reports (which is how many people still learn about the world around them), or on media websites.

Arrangements to republish blog content, or for blog authors to prepare easy-to-understand summary reports for a mass-media audience, are possibilities, but the processes and incentives need to be worked out -- preferably in a way that takes the load off of editors, who don't have the time to find the right bloggers and deal with freelance contracts and payment issues. One startup idea would be to create a "marketplace" to match publishers with experts who can deliver an informed report about a specific scientific topic (for instance, how a boiling water reactor works). Another avenue for a startup would be to set up a "science wire service" which prepares timely, relevant coverage (including blogs, video, and features) about new research and developments every day. Media companies could subscribe to the service and editors could browse it and use as much as they like, just as they do with Reuters, Bloomberg, AP, etc.

As for the specific issue of not including links, this partly relates to the awareness and access issues mentioned above, but also to the fact that content management systems used at many newspapers and magazines are optimized for print publishing, not online publishing. Inserting links typically has to be done *after* the article has been written, often by different editors or producers who know how to use Wordpress/Drupal/homegrown tools. I think there's a startup opportunity here as well, but unfortunately it also requires a rethinking of newsroom processes and control.

The debate reminded me of my "Source Blocks" idea from 2008. It never caught on, for the simple reason that most writers (including me) are too lazy to manually include them. But that could be an opportunity for another new media product ...

Tuesday, March 01, 2011

Social TV poster #1: PeoplePixPlaces

(Update: This concept has evolved further and turned into a final project called WorldTV, complete with a software demo and video) From the Social TV class I'm taking this semester at the MIT Media Lab: A social TV application based on news. I came up with PeoplePixPlaces, a Web-based application that gives a window into local news, using geocoded video, pictures, and tweets, as well as individual users’ own social lenses. The poster explains the concept in more detail:


The genesis of the idea predates MAS 571. Last semester in 6.898 (Linked Data Ventures), I proposed a similar project, PixPplPlaces. The one-sheet vision:


“People want to know a lot about their own neighborhoods.”

- Rensselaer Polytechnic Institute Professor Jim Hendler, discussing Semantic Web-based services in Britain, 10/18/2010

While superficial mashups that plot data about crime, celebrity sightings, or restaurants on street maps have been around for years, there is no service that takes geotagged tweets, photos, and videos, along with their associated semantic context, and plots them on a map according to the time the information was created. The idea behind PixPplPlaces:

• Index some publicly available location-based social media data in a Semantic Web-compatible form
• Plot the data by time (12:25 pm on 10/24/2010) and location (Lat 42.33565, Long -71.13366) on existing Linked Data geo resources
• Bring in other existing Linked Data resources (DBPedia, rdfabout U.S. Census, etc.) that can help describe the area or other aspects of what's going on, based on the indexed social media data
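As a rough illustration of the indexing-and-plotting step in the bullets above, here is a minimal sketch in plain Python rather than a real triplestore. The record fields and the hour/grid-cell bucketing scheme are my own illustrative assumptions, not part of the actual proposal:

```python
from collections import defaultdict
from datetime import datetime

def bucket(item, cell_size=0.01):
    """Key an item by the hour it was created and a coarse lat/long grid cell."""
    hour = item["time"].strftime("%Y-%m-%d %H:00")
    cell = (round(item["lat"] / cell_size), round(item["lon"] / cell_size))
    return (hour, cell)

# Hypothetical geotagged social media items near the example
# coordinates used above (Lat 42.33565, Long -71.13366):
items = [
    {"kind": "tweet", "lat": 42.33565, "lon": -71.13366,
     "time": datetime(2010, 10, 24, 12, 25)},
    {"kind": "photo", "lat": 42.33571, "lon": -71.13370,
     "time": datetime(2010, 10, 24, 12, 40)},
]

index = defaultdict(list)
for item in items:
    index[bucket(item)].append(item)

# Both items fall in the same hour and grid cell, so a map view for
# that time and place retrieves them with a single lookup:
print(len(index), "bucket(s)")  # 1 bucket(s)
```

A production version would emit these as Semantic Web-compatible triples against Linked Data geo resources, but the core operation of grouping by (time, place) is the same.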

Potential business models:

• Professional services: News organizations can embed PPP mashups of specific neighborhoods on their websites, add location-based businesses who are their ad clients, or use the tool as an information resource for journalists -- what was the scene at the site of a fire on Monday evening, just before the fire broke out? Lawyers, insurance companies, and others might be interested in using this for investigations.
• Advertising services: A suggestion from Reed - "a source of ads/offers in Linked Data format - for the sustainability argument as a business. Maybe in the project you can develop an open definition that would let multiple providers publish ads in the right format that you could scrape/aggregate and then present to end users? If you demonstrate a click-wrap CPC concept you might be able to mock it up by scraping ads from Google Maps or just fake it."

To be researched:

• Is social media geodata (geotagged Flickr photos, geolocated Tweets) precise enough to be plotted on a map?
• Should this be a platform or a service?
• How can the data be scraped, indexed, or made into "good" Semantic Web information?
• Would any professional organization -- news, legal, insurance -- pay for it?
• How viable is the advertising model in a crowded field chasing a (currently) small pool of clients?
The Semantic Web requirements for the 6.898 project and emphasis on tweets and photos gave the tool a different flavor than the Social TV version; in addition, I didn't consider the possibility of using "social lenses" to filter the contributions of people in the user's social circle. But for both projects, I recognized that the business case is weak, not only in terms of revenue, but also in terms of maintaining a competitive advantage if open platforms and standards are used.

Incidentally, I first had the idea for a geocode-based application for user-generated content back in 2005 or 2006. My essay Meeting The Second Wave explains the original idea:

In the second wave of new media evolution, content creators and other 'Net users will not be able to manually tag the billions of new images and video clips uploaded to the 'Net. New hardware and software technologies will need to automatically apply descriptive metadata and tags at the point of creation, or after the content is uploaded to the 'Net. For instance, GPS-enabled cameras that embed spatial metadata in digital images and video will help users find address- and time-specific content, once the content is made available on the 'Net. A user may instruct his news-fetching application to display all public photographs on the 'Net taken between 12 am and 12:01 am on January 1, 2017, in a one-block radius of Times Square, to get an idea of what the 2017 New Year's celebrations were like in that area. Manufacturers have already designed and brought to market cameras with GPS capabilities, but few people own them, and there are no news applications on the 'Net that can process and leverage location metadata — yet.

Other types of descriptive tags may be applied after the content is uploaded to the 'Net, depending on the objects or scenes that appear in user-submitted video, photographs, or 3D simulations. Two Penn State researchers, Jia Li and James Wang, have developed software that performs limited auto-tagging of digital photographs through the Automatic Linguistic Indexing of Pictures project. In the years to come, autotagging technology will be developed to the point where powerful back-end processing resources will categorize massive amounts of user-generated content as it is uploaded to the 'Net. Programming logic might tag a video clip as "violence," "car," "Matt Damon," or all three. Using the New Year's example above, a reader may instruct his news-fetching application to narrow down the collection of Times Square photographs and video to display only those autotagged items that include people wearing party hats.

For the Social Television class, we have to submit two more ideas in poster sessions. I may end up posting some of them to this blog ...

Thursday, February 03, 2011

A new world order, attributed to the Internet

"Governments and their security forces are afraid of the people now. The new generation, the generation of the Internet, is fearless. They want their full rights, and they want life, a dignified life.”

- Shawki al-Qadi, an opposition lawmaker in Yemen

Source: "In Cairo Streets, A Fight For The Arab Future", The New York Times, February 3, 2011

Sunday, January 23, 2011

Google's spam and content farm problem is not "better than it has ever been"

(Update below) I use Google Alerts to keep abreast of certain topics, such as my MIT program and mentions of my name. The automated search results that are emailed to me are interesting, particularly the ones based on my name. They almost always look like this:


The results are garbage, filled with contextually unrelated and bizarre terms such as "Lamont Arizona" or "lamont rupture disk." The links in the screenshot above take users to a page filled with links about Georgia real estate and references to many random terms (including my name), while the Danish site contains scores of random terms that include my last name (such as product or business names -- "Lamont auto", etc.), but no links. They contain no useful information about me or any other topic, yet pages like them are generated every week and added to Google's index. I looked back at my email archive, and found that pages created more than a year ago are still active.

What is their purpose? Pages like this are machine-generated spam, designed to get eyeballs on pages filled with advertisements, or to boost the search-engine ranking of linked sites. Google's wonderful search engine depends on language in headlines and page text as well as inbound links to determine which sites deserve to be at the top of search engine results when certain terms are typed in. That's great for quality sites which have lots of inbound links and deserve to be at the top because they are likely to be the most relevant and useful for users.

The problem is that the system has been effectively reverse-engineered by spammers and content farms, who add little, if any, value with spam pages and poorly written trash or copied content boxed in by ads and affiliate links. In some cases, the pages don't contain any ads, just lots of links to other pages that someone wants to raise in the search engine rankings. Links and search-engine ranking translate to money if the keyword is popular or relates to something that people research online with the intention of buying. The ultimate prize for the spammers and content farms is getting their garbage on the first page of Google search results. The fact that quality pages (or the original content) that people are more likely to be interested in get pushed down or off the page is of little concern to them.
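To make the mechanics concrete, here is a toy, PageRank-style link-scoring iteration in Python. It is a simplified illustration of link-based ranking, not Google's actual algorithm (which weighs many other signals), but it shows why link schemes work: pages that pile inbound links onto a target raise its score. The graph and page names are invented for the example.

```python
def rank(links, iterations=50, d=0.85):
    # links maps each page to the list of pages it links to
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    score = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}
        for page, targets in links.items():
            if targets:
                # A page passes a share of its score to each page it links to
                share = d * score[page] / len(targets)
                for t in targets:
                    new[t] += share
            else:
                # A page with no outbound links spreads its score evenly
                for p in pages:
                    new[p] += d * score[page] / n
        score = new
    return score

# Five machine-generated spam pages all linking at one target are enough
# to outrank a page with a single legitimate inbound link:
graph = {"blog": ["quality"], "quality": [], "target": []}
graph.update({f"spam{i}": ["target"] for i in range(5)})
scores = rank(graph)
```

In this toy graph, "target" ends up with a higher score than "quality" purely because of inbound link volume, which is exactly the behavior the spammers exploit.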

So I read with some interest a post by Google's Matt Cutts on the company's recent efforts to fight the problems described above. He said:
January brought a spate of stories about Google’s search quality. Reading through some of these recent articles, you might ask whether our search quality has gotten worse. The short answer is that according to the evaluation metrics that we’ve refined over more than a decade, Google’s search quality is better than it has ever been in terms of relevance, freshness and comprehensiveness.
Say what? I don't know what sorts of metrics Google is using, but I see poor quality results showing up in almost all of the searches that I perform. Not always on the first page -- on a Google search for my own name, there are enough high-quality results (my blogging and social networking activity, plus references to other people with the same name) to push the garbage off the first page of search engine results. Still, starting at the top of page 3 of the results, I see results for bogus/scammy paid ringtone and "fast download" schemes attached to copyrighted technical mp3s that I produced for Computerworld when I was an editor there. Besides being illegal, such pages are not the original source of the mp3s (the pages on Computerworld.com are), yet the scraped content repackaged into paid services ranks higher than the originating site, which offers the mp3s for free.

For popular terms, however, the garbage routinely outranks the real thing. For instance, when you search for "online education," affiliate garbage dominates the first page of results. I am sure practically everyone reading this post has had a similar experience: attempting to conduct some serious research using Google and being presented, on the first page of results, with trick sites or utter drivel. It wastes users' time, and in some cases gives people false or misleading information. Organized hacking and crime rings have also joined the party -- for one of the bogus mp3 sites I observed, the credit card payment server is located in Russia. How many innocent people have attempted to pay for something through this system, and have ended up having their credit card information stolen or malware downloaded to their computers?

I also found it strange that Google is bragging about quality while the main problems that people were criticizing the company for in December and January -- low-quality content farms and scrapers -- still clog up search results (see one humorous example described on this Hacker News thread). Certain keywords are basically useless to find quality content on the first page of results (Wikipedia sometimes ranks high, but the quality is often questionable). SEO-driven content farms have simply taken over.

To Cutts' credit, he tried to respond to some of the questions and criticisms on Hacker News, but it is premature to crow about "quality" when spam, low-quality information, and other garbage fills search engine results. The problem clearly has not been fixed.

Update: Matt Cutts and I debated the definition of "quality" and what an increase in spam and content farms does to Google's quality metrics. Part of the Twitter thread can be accessed here.

Disclosure: I am not a spammer or content farm, but I do use Google Adsense and Amazon Associates, and rewrite my headlines to improve their search engine ranking. 

Sources and research: Google, Paid Content, Techmeme, my own experience.

Saturday, October 23, 2010

Disruption: Broadcast news vs. the humble iPod touch

The Newslab blog recently posted about the differences between "professional" video shot with TV crews and video created with mobile devices. Judging by the tone of the article, CNN and others are experimenting with such tools, but are doing so in a very cautious manner. It prompted me to leave a comment, in which I said:
I recently started using a 4th generation iPod touch (I bought the 32GB model on Amazon for $280), which has a decent video camera built in, to shoot simple clips/interviews. [This] blog post demonstrates what I was able to produce:



Note that the only editing I did on the interview consisted of trimming the ends off the clip, an ability which is included in the iPod's camera application. From within the app, I uploaded it to YouTube, and then switched to my laptop to embed the YouTube clip on my blog post. A few days ago, I bought a $4 app in Apple's mobile app store called "ReelDirector" that lets me mix clips, add titles, switch transitions, and even add music.

With the cheap price and high level of functionality of these devices, there's no excuse for not trying out mobile video. Is it pro quality? Of course not. But it's certainly enough to do newsgathering and interviews on the fly. And, the gear fits in your pocket and can be operated by the journalist -- no need for expensive cameras, extra crew, and extra overhead to get the story out.

It's apparent that there is still a lot of resistance in the broadcast news industry to using cheap mobile devices, laptop cameras, or any production process that's not "pro." In the mid-1990s, I worked in a TV newsroom, and know that the prevailing attitude among many broadcast journalists (and crews) is a near obsession with making sure only the best-looking people and best-looking footage appear on screen. At the time, our reporter/cameraman teams would spend three or four hours every morning shooting tape and setting up interviews, and the remainder of the day editing down the footage and doing voice-overs. The result? One or two 2-minute clips per team per day.

Long after I had transitioned to online, the Flip video camera came out, and was a hit. Until the Flip, consumer video cameras from Sony and JVC tended to have complicated user interfaces designed by Japanese engineers. The Flip did away with 90% of the UI clutter, and had just five buttons, a flash drive to store 60 minutes of video, and a flip-out USB plug to transfer video files to PCs. It was also very cheap -- just $125. I enthusiastically began using one for reviews and interviews, and evangelized it to everyone in the Computerworld newsroom. This was in 2007. However, the weak point with the Flip was the lack of good editing software, which forced us to turn to professional video staff for more complex editing tasks. Never mind the information or images captured by the Flip -- there was more than a little skepticism from the pro video people about the jerky, poorly lit footage, tinny audio, and the fact that there were compatibility issues with the expensive AVID editing suites they used.

Now, the Flip looks positively ancient compared to the iPod touch with its simple editing tools and wireless uploading. The iPhone and the iPod touch have the potential to turn many online text-based journalists -- and even people who have never worked in a newsroom or been trained as journalists -- into effective online video journalists.

The professional broadcast community may not get it right now, but they will get the message soon enough when lots of quality work is performed by jackknife journalists and amateur producers, and audiences make it clear that expensive modes of production are not a prerequisite for their attention.

Monday, October 11, 2010

Baseline Scenario: Why China is unwilling to revalue the yuan

An essay of mine was published earlier this month on the Baseline Scenario economics blog. It talks about China's yuan policy, and some of the reasons why Beijing will push back against external efforts to let the renminbi appreciate against other world currencies.

The essay has generated a lot of discussion, both online and off. It will be interesting to see the end result of the current back-room negotiations over exchange rates, but I am highly skeptical of talk that China will let its currency float. We heard the same hopeful talk about China freeing the yuan from the dollar peg back in 2005 and Beijing's "commitment to allow the yuan to be set by the interaction of demand and supply forces." But the peg remained, and China only let the yuan appreciate slowly (about 17% in five years). For reasons that I explained in the essay, I think the best that the U.S. and other countries can hope for is a very gradual appreciation of the yuan that does not threaten China's export-driven growth or social stability.

Friday, September 03, 2010

Murdoch's paid content experiment: #fail

A high-profile experiment to force people to pay for online news has failed on several counts, writes Ian Burrell in the Independent. The Times, a rival newspaper owned by Rupert Murdoch, put up a pay wall earlier this year. As expected, traffic dropped through the floor. But even more worrying was the impact on another key revenue stream: advertising. Writes Burrell:
Faced with a collapse in traffic to thetimes.co.uk, some advertisers have simply abandoned the site. Rob Lynam, head of press trading at the media agency MEC, whose clients include Lloyds Banking Group, Orange, Morrisons and Chanel, says, "We are just not advertising on it. If there's no traffic on there, there's no point in advertising on there."

He warns that newspaper organisations have less muscle in internet advertising campaigns than they do in print. "Online, we have far more options than just newspaper websites – it's not a huge loss to anyone really. If we are considering using some newspaper websites, The Times is just not in consideration."

The other problem for The Times is that there is no niche content differentiation. Its specialty is national and international news as well as conservative political commentary -- content that is readily available for free elsewhere. In my opinion, Murdoch's niche titles with high-value readers, such as the Wall Street Journal, will probably fare better, especially if competitors also use pay walls. But, as I said earlier in the summer, "news, commentary, and analysis in most other fields is rapidly becoming a commodity in a sea of information alternatives." Attempts to make people pay more for a commodity product will invariably fail.

Link: PaidContent

Sunday, July 25, 2010

The supply/demand curve for paid news

A few weeks ago I discussed how long-form journalism was a poor fit for mobile devices. In addition, long-form content such as features is increasingly tough to consume on the Web, where information/entertainment distractions are never farther than a hyperlink, bookmark, or tweet away.

But Frédéric Filloux's excellent Monday Note brings up another factor that's going against lengthy essays, features, and videos: A shift in consumption habits among young Web users (Digital Natives) who have grown up with the technologies. He writes:
"... The fastest is the best. Forget about long form journalism. Quick TV newscasts, free commuter newspapers, bursts of news bulletins on the radio are more than enough. The group will do the rest: it will organize the importance, the hierarchy of news elements, it will set the news cycle’s pace."
The "group" is the trusted social circle which serves as an echo chamber and information channel. Trust is vitally important to this group, Filloux writes, and most corporate media doesn't have it.

The other essay that is worth a quick scan is Jeff Jarvis' analysis (via Mediagazer) of the recent Conde Nast corporate shakeup and the company's admission that advertising cannot be counted on to support operations (I wonder how much they paid McKinsey for that piece of advice?). As for Conde Nast's plan to recoup revenue through high-priced online subscriptions, Jarvis' incredulous reaction nails it:
The problem is going to be that there is only *more* competition in content and so trying to suddenly charge *more* flies in the face of basic economics. The absurdity of the strategy struck me yesterday as Amazon tried to sell me a subscription to Time for 28.8 cents an issue while Time is trying to sell its iPad issues for $4.99 and I see no reason to buy either. In what world do these economics make sense?
I agree with Jarvis (I had a similar reaction to Conde Nast's "How about a buck a click" quote when I saw it), but wouldn't it be interesting to do a back-of-the-napkin supply/demand curve to model what's going on with online information and plans to charge for it? I'm pretty sure the demand curve would show a small number of people who don't care about price and will gladly register for paid news or download an expensive mobile news app; a large (but not huge) population who won't pay much and are extremely sensitive to price increases; and then the majority of the population who won't pay anything. If plotted, it would look something like this:
"P" represents price on the Y axis, and "Q" represents quantity on the X axis. Both lines continue off the page. In my microeconomics class, demand levels were depicted as being linear or slightly concave. But for online news, I believe demand at high price levels is very low, and rapidly drops as the price increases (iPad news app developers, take note!). It flattens out as prices approach zero, but that shouldn't be much consolation for providers. The price level is too low to support operations unless there is a massive audience, but Q hits $0 much too soon -- there's simply too much free news out there, plus many other free or low-cost alternatives (see "Quality vs. junk journalism"? Or news vs. other information/distractions?), meaning that the Q value for paid news will always be low.

What's missing from this chart? The supply curve. Typically, it slopes in the opposite direction from demand, starting near the origin of the graph (where the x and y axes intersect) and moving up and to the right in a more or less straight fashion, reflecting the fact that as prices increase, more Q (i.e., more supply) will be made available to sell and consume.

But for paid news, I can't quite figure out how to draw it. Any product in a capitalist economy should follow the basic supply curve pattern described in the previous paragraph. But in the online news industry, publishers are making so much content -- including high-quality content -- available for free. How do you draw that? (Economists or readers with a better understanding of how the theory works in situations like this, please feel free to weigh in below, in the comments section)

In some cases, publishers are offering content for free on some platforms, while attempting to charge for it on others. Conde Nast's Wired is a perfect example: Wired has $10-$12 annual subscriptions, which are sometimes issued for free (I pay nothing now -- is my zip code that good?), or you can pay the $5 newsstand price. Wired's paid and verified circulation is 754,574, according to the Conde Nast media kit. Or, you can read most content online for free. Quantcast reports more than two million people do that every month. Then you have the Wired iPad app, which launched with lots of fanfare earlier this year at $5 an issue, but was almost immediately discounted the following month. The total number of June iPad issues sold the first month? 95,000. If you plot the online/digital users on the demand curve, it would look something like this:
Theoretically, supply and prices should dial down to meet demand and reach a state of equilibrium. I don't see how this can happen for paid online news, as long as there is an open system of information exchange that allows for a constantly growing mass of content of all types, most of it free. There may be some exceptions in niche topics such as finance. But news, commentary, and analysis in most other fields is rapidly becoming a commodity in a sea of information alternatives. Add to that factors such as the consumption patterns cited by Filloux, and a never-ending stream of new platforms and information products, which fragments the audience even further. In such an environment, the prospects for getting people to pay for news are very limited.

Saturday, July 17, 2010

"Quality vs. junk journalism"? Or news vs. other information/distractions?

I thought I would continue my discussion of the competitive environment for news, after spotting "What the audience wants" isn’t always junk journalism on the Nieman journalism blog (once again, it's something I found through Mediagazer, which has replaced Romenesko as my main source of news about news).

Laura McGann's Nieman blog post suggests that "coverage based on clicks" is a recent phenomenon. I would argue that it actually has many parallels in the traditional media world. Ever since revenue has been tied to metrics, publishers (and journalists) have employed various tactics to boost circulation/viewership. Long before there were "10 worst movie villains" slideshows on the 'Net, broadcast news had "sweeps week" and newspaper publishers realized that crime news, investigative reporting, and 96-point headlines helped attract eyeballs and drive advertising/circulation revenue.

Also, in the always-connected mobile and Web world, news organizations should understand that it's not always a question of audiences wanting quality vs. junk journalism. It's a question of people wanting information, or sometimes wanting a distraction. Newspaper publishers -- including the New York Times and The Washington Post -- realized this a long time ago, and offered funnies, crosswords, sports, recipes, and sudoku to their print readership, in addition to high-quality journalism.

But online and on mobile devices, the focus is still on hard news and quality journalism. This is despite the fact that audience members clearly want other types of information besides news. They want updates from their social circles, information about personal or professional interests, product-related data, etc. -- as well as entertainment and other distractions such as playing games, listening to music, looking at photographs, and playing videos.

Another way of looking at the situation: publishers are competing with sites, services and products that they seldom considered as rivals in the old days. In the oil spill example given in the Nieman blog, the people who tuned into the spill coverage and later tired of it may have switched to other quality journalism stories and sources afterward. But I suspect that many turned to "junk journalism," and even greater numbers turned to information/entertainment sources that aren't even "news."

Is that a bad thing? I don't think so. Quality news and other types of information and entertainment shouldn't be seen as mutually exclusive. In print this isn't an issue -- no one thinks the Boston Globe is committing sacrilege by publishing the funnies, automobile reviews, MLB stats, and photographs of rich people attending fundraisers and other social functions. The Globe recognized years ago that readers want this information and these distractions, just as Yahoo has recognized that people nowadays have certain online information and entertainment needs, ranging from hard news to online games. It's a tough place for Yahoo to compete -- there are too many competing services, CPMs are low, and there are added costs to operating the services. But it's something that sets Yahoo apart from other publishers, in the eyes of audiences and advertisers.

Tuesday, July 13, 2010

Thoughts on the online information market, and why e-readers won't save journalism

For some time, I've been wanting to write more about online news organizations and the competitive environment they now find themselves in. I was finally prompted to do so after reading an essay about e-readers by the Columbia Journalism Review's Curtis Brainard (linked from Mediagazer). Judging by its length and lack of a search engine-friendly title ("A Second Chance") it's clearly intended for a print audience. And, while Brainard brought up some valid points about the new e-reader technologies, he missed the boat in terms of the content that is best suited for mobile devices. Here's the comment that I left on the CJR.org website: 
No, e-readers won't save journalism -- at least not the kind that the  author and the Columbia Journalism Review practice.

Consider the people reading this essay. What percentage of readers are consuming it on an e-reader, iPod, iPad, Android phone, or any other mobile device, relative to the percentage of readers who are looking at it on a PC or laptop screen? I suspect the mobile:PC ratio is quite small -- maybe just a few percentage points, if that (perhaps the CJR can let us know?). I further believe that even among those who are looking at it on a mobile device or e-reader, very few are reading it from start to finish. Like many publishers, the Columbia Journalism Review is still oriented toward long prose pieces that are a poor fit for mobile devices or the people who own them. Who is going to read a 4,546-word analysis (the length of this essay) on a small screen, or even a 1,000-word news article? How many would be willing to shell out subscription fees for long-form Time, Wired, or WaPo print content on a Nook or iPhone?

Even short-form content may be a stretch, when there are so many other free and low-cost distractions available on mobile devices. Publishers no longer have a monopoly on information or entertainment, like they did a decade ago, when tabloids, metro newspapers, books, magazines and CD walkmans were the staple on subway cars. Now when I look around at my fellow commuters, I see people playing games, listening to mp3s, texting, watching videos, checking Facebook for updates, and sometimes even looking at a newspaper or mobile news app. If people don't want to read a 2,000-word feature, or don't feel like paying for news (print or mobile), they still have too many free/cheaper options to choose from -- options that they didn't have before, because the technology wasn't widely available.
This last point about competition deserves a little additional commentary. I've been thinking a lot about the competitive environment for online news. Publishers assume people want news, when in actuality many of them just want information. Increasingly, it's not so important where that information comes from.

Consider product news. In the mid-1990s, when a new Apple product came out, the channels for information were far more restricted. People depended on the news media (and advertising) to find out about these products, because the only other channels were word-of-mouth and retail outlets. I don't need to pay to learn about the iPhone 4 from the New York Times' David Pogue (or, for that matter, any other news publisher), when I can get facts about the product from Apple.com, video from YouTube, and numerous opinions from blogs, message boards, and my online networks -- all for free. What extra value is Pogue delivering? Certainly, he's a great writer and is accurate and unbiased (well, most of the time), but is he worth $1 in print or a $n monthly subscription? I don't think so. Neither will most other people, when they are prompted to pay online or on their iPad.

Update March 2016: Six years after writing this post, I frequently read news articles on my iPhone. I also own a small-screen e-reader (a Kindle Paperwhite), but I have found it difficult to read books or other long-form content on it. That said, millions of other people have adjusted to the new e-reader technology. I also have a digital publishing business that produces how-to guides, but sales of ebooks have actually leveled off or declined in most marketplaces, while paperback sales have continued to grow. I don't think e-readers are the savior for long-form content.

Sources and research: Columbia Journalism Review, New York Times/Bits, EdibleApple.com, Mediagazer

Image: New York Times iPhone app on the iPod touch. Creative Commons Generic License 2.0 -- you are free to use it for commercial or noncommercial work, as long as you credit the source (Ian Lamont) and link back to this blog post.

Saturday, June 26, 2010

Why not rename AOL?

A thought, after reading this piece about AOL and its new content strategy: Why not rename the company?

Ask most people what they associate with the AOL brand, and I suspect the answer will probably be "dial-up Internet access," "Time-Warner's multibillion-dollar mistake", or maybe "my mom's email address." No one associates it with premium content. Indeed, because of these negative associations, the brand is probably a liability for premium content -- and the premium advertisers AOL needs to get on board to make CEO Tim Armstrong's vision work.

While I'm on the topic, I also wonder why Armstrong and team haven't done more to capitalize on one of AOL's most important assets -- AIM. The instant messaging program is used by tens of millions of people every day, and is seen as an indispensable tool by many of them. Compared with other premium content producers on the Web, it's a differentiator that none of them can easily duplicate. Why not leverage AIM in new and exciting ways to support the premium content push?

Image: Gilgongo/Flickr, creative commons generic 2.0 license

Thursday, May 27, 2010

Google's views of the news business: No magic bullet here

James Fallows, writing for The Atlantic, has published a fascinating but mistitled feature on Google's approach to the news industry ("How to save the news", June 2010 issue.)

I say "fascinating" because it's probably the most complete public airing of the company's views of the news industry to date. Fallows interviewed at least a half-dozen executives -- from engineers to senior news executives like Krishna Bharat and CEO Eric Schmidt -- about their views of news processes and advertising.

I say "mistitled," because judging by the account in The Atlantic, there are no convincing strategies or products in Google's pipeline that will save the news industry as we know it. As you'll see below, I think Google is missing (or underestimating) several key points about the nature of the news business.

The article is several thousand words long and, strangely, contains not a single link or commenting interface. If you're pressed for time, there is a summary by AllThingsD's Peter Kafka located here. I've left the following comment on the AllThingsD article, relating to a few elements that Fallows missed (or Google avoided discussing?) that I think are crucial to understanding the display advertising ecosystem and news:

I read the Atlantic piece, and was struck by a couple of things.

First, local business display advertising, which is an important part of many newspapers' revenue models, was not discussed. If local newspapers are counting on sufficient online display ad revenues, they (and Google) will need to address the problem that relatively few local businesses are savvy enough about, or sufficiently interested in, online display advertising, even though Google (and Facebook, ESPN, etc.) have tried their darndest to make it easy for them.

Here's something else to consider: the only way most newspapers and other local publications (including phone books and magazines) have been able to sign up local businesses for print campaigns is by employing boots on the ground to cold-call these companies, visit them in person, and pressure them into buying ads. That is not the Google way.

Another issue I think the Atlantic author did not press Google on concerns the massive oversupply of online information sources for people to turn to: media from all over the world, company websites, social networks, shopping sites, forums, etc. The increase in the number of available pages on which to serve ads, combined with the decreasing amount of time people will spend on news sites because they are too busy updating their Facebook feeds or browsing Wikipedia, will depress the ad rates for news pages that carry display advertising.

One last thing: Eric Schmidt was quoted as saying the following:

"In the future model, you’ll have subscriptions to information sources that will have advertisements embedded in them, like a newspaper. You’ll just leave out the print part. I am quite sure that this will happen."

To me, this seems like a 1990s vision of the future of news -- basically, duplicate online what newspapers are doing in print. While this is already happening, it's not working. Considering the two points above and the many other online trends working against the news industry, I am skeptical of Schmidt's vision as stated in The Atlantic.

The "boots on the ground" reference is something I picked up from a few pros who've been working in PR and marketing in the Boston area for years: most local business ads are sold by salesmen working the phones or going door-to-door in neighborhoods. The commissions are low, so volume is key -- they work quickly and try to close as many sales as possible per day.

It's apparently a point of frustration for restaurants, florists, boutiques, and other stores -- they don't like the pestering or the costs, and they can't easily see the value of most print campaigns. But they often give in. They recognize that they should be doing some marketing, to bring new business in the door or to be recognized as part of the community.

While some are interested in trying out online ads or seeing customers referred by Yelp, Foursquare, or Groupon, many others don't want to get involved in the technology, or aren't familiar with how online display advertising works or how it can benefit them. Some businesses are not keen on online ads because they have grown up with traditional marketing practices and/or think their target market won't see them. (I actually heard this from a national manufacturer of outdoor equipment a few years ago -- most of its customers are in their mid-40s or older, so why bother with online?)

Finally, while some traditional newspaper companies have tried hard to migrate local businesses to online advertising, I still see some major failures in the marketplace, including the Boston Globe, which is owned by the New York Times Company. Relatively few of its print advertising customers are converted to online display ad customers. Part of this could be reluctance on the part of vendors and local businesses to make the jump, but I also suspect the sales organization still operates in a 20th-century frame of mind, where big commissions are tied to print and online is an afterthought.

Sources and research: Interviews with various Boston-area PR/marketing agencies, Boston.com, The Atlantic, AllThingsD, ilamont.blogspot.com

Monday, May 24, 2010

Associated Content: What is Yahoo thinking?

So Yahoo is buying Associated Content. The price tag? Rumors say it's between $90 million and $100 million.

A strategy based on low-quality, commodity content is bad for any brand, but because this product is tied so closely to SEO, Yahoo is really putting itself at the mercy of Google -- its classic foe in the search arena. A fundamental change in Google's search algorithm or SERP design could really hurt Associated Content.