Showing posts with label UI. Show all posts

Friday, August 11, 2017

Amazon KDP survey: the improvements I suggested

So I received an email from Amazon's KDP program asking me to take a short survey. I've been using KDP for years, but in the past few months Amazon's self-publishing program has gotten a lot of grief from participants over rampant scams, ranging from ebook box-set trickery used to make money and establish "bestseller" status, to bogus borrows and fishy promotions that game rank and revenue. The scams take money from readers as well as from honest authors trying to play by the rules and publish good books.

But those aren't the only problems. When I was prompted with the following question, I had five specific suggestions:

Survey question: What would you like us to work on next that would improve your KDP experience?

My response:
  1. Get rid of transmission fees. This made sense when people downloaded books to their Kindles over 3G. Now that most downloads happen over wifi, it's a bogus charge that cheats authors and publishers.
  2. Stop using a misleading UI that tricks people into signing up for KDP Select.
  3. Please stop the constant needling to lower prices.
  4. Please do a better job of screening out bogus authors using Wikipedia, Fiverr, or illegally copied sources to "write" books.
  5. Please find and punish people who are outright ripping off readers and other authors with scams and other tricks. It's not enough to remove their ranking. Kill their account and prevent them from opening up a new account tied to the same bank account. Money spent on these scams is not fair to readers or authors who are playing by the rules.
Did I miss anything?

Monday, March 26, 2012

Blogging for 10 years

Ten years ago today, I started my first blog. I was taking IT classes at Boston College at the time, and one of my instructors (Aaron Walsh, now a friend and advisor) told me about this new, powerful concept in online publishing called "blogging". Although he suggested using one of the nascent online blogging platforms at the time, I was getting my online career off the ground and decided to create my own blog, using hand-coded HTML and CSS. The result was this:


This early blog didn't last -- hand-coding pages and FTPing files to my personal site was too much of a pain -- but I liked the format of short observations with links and later photos. I started a Blogger blog in 2004, which I still maintain today (you are looking at it right now). I also expanded to many more locations on the Web. Since 2002, I have written thousands of posts that have appeared on ilamont.com, digitalmediamachine.com, Computerworld, The Industry Standard (no longer online), Harvard Extended, Ipso Facto, Terra Nova, and MIT.

What will the next 10 years bring? Surely a lot more, as content creation moves into the mobile sphere and new ways of gathering and presenting content appear on the scene. It's been a ton of fun, and I can't wait to see what happens next ...

Saturday, February 11, 2012

UI, UX, and MVP ... oh my!

"User experience" (UX) and "user interface" (UI) are terms from the world of software and hardware design. But these terms should be intimately familiar to anyone who uses gadgets, software or the Web.

Ever been frustrated by your television remote control's assortment of buttons and symbols? Or by a confusing website with dozens of links on the front page, where it's difficult to find what you're looking for, even though you know it's there? Blame bad UI. It's endemic to the television and camera industries, but individual companies such as Research in Motion (maker of the BlackBerry) and GoDaddy (a large Web host/domain registrar) are notorious for terrible user interface design.

At the other end of the spectrum are technologies that not only look good but also make your life easier by minimizing unnecessary clicks, buttons, and engineer-centric features. The experience is often so good that you want to recommend it to your friends. The simplicity of Google's search engine and most Apple gadgets (such as the iPod touch and click-wheel iPod nano, pictured above) fits into the "good UX" category.

But design and technology clash for early-stage technology companies, which are often trying to get an MVP ("minimum viable product") out the door and into the hands of users as quickly as possible. It's easy for design to fall by the wayside. But maybe it shouldn't be. I've written in the past about MDP, or Minimum Delightful Product. The idea comes from Adam Berrey, who had this criticism of MVP:
"In the consumer world 'viable' isn't really compelling. It's like someone in the ICU. They are alive, but not really fun to hang out with."
He's right. Further, MVPs are targeted at early adopters rather than mainstream users, meaning that the feedback loops will be based on a different set of users than the people you want to attract. That's not to say an MVP can't evolve into something delightful, but for a product aimed at mainstream users, why not start with great design?

Image: iPods, circa 2010 and 2007. I am licensing this picture under a Creative Commons Generic 2.0 license. Please credit Ian Lamont and link to ilamont.com if you use this picture.

Friday, February 03, 2012

Mobile app competition and App Store SEO

One of the most interesting things I discovered as I was researching Craigslist tools was the hyper-competitive landscape in both the iPhone App Store and Android Market. Craigslist doesn't make its own app, so dozens of developers have hacked together mobile apps that piggyback on top of Craigslist's Web-based service. It's led to an oversupply of similarly named apps, all doing the same thing. Take a look at the screenshot below, from the iPhone App Store:

Craigslist apps in iTunes

How can all of the apps have the same name? They don't -- if you look closely, you'll notice that different punctuation has been added:
  • Craigs-list
  • Craigslist
  • Craigslist.
  • Craigslist!
  • Craigslist`
The developers used these names because A) they know end users will search for Craigslist in the App Store and B) they want to rank higher in the results than the competition. It's extreme SEO, except it uses the iPhone App Store search engine instead of Google. Note, however, that unlike Google results, Apple App Store rankings depend on human inputs: every app in the Apple App Store has been reviewed by human beings before being accepted, which suggests that Apple either isn't paying attention or doesn't really care what shows up in the search results.
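A toy sketch makes the incentive concrete: if a store ranked results mainly by how closely an app's name matches the query, punctuation-only variants of "Craigslist" would all tie at the top. (Apple's real ranking algorithm is unpublished; the scoring function and the distinctly named competitor below are purely illustrative.)

```python
# Toy model of name-match search ranking (Apple's actual algorithm is
# unpublished; this sketch and the "Bargain Board" competitor are made up).
import re

def name_match_score(query: str, app_name: str) -> float:
    """Compare lowercased, alphanumeric-only forms of the names."""
    q = re.sub(r"[^a-z0-9]", "", query.lower())
    n = re.sub(r"[^a-z0-9]", "", app_name.lower())
    if not q or not n:
        return 0.0
    if q == n:
        return 1.0          # exact match once punctuation is stripped
    return 0.5 if q in n else 0.0

apps = ["Craigs-list", "Craigslist", "Craigslist.", "Craigslist!",
        "Craigslist`", "Bargain Board"]
ranked = sorted(apps, key=lambda a: name_match_score("Craigslist", a),
                reverse=True)
# Every punctuation variant normalizes to "craigslist" and ties for the
# top score; the app with a distinct name drops to the bottom.
```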

The Android Market is similarly crowded with apps having similar names and functionality. To further confuse things, Craigslist Mobile on Android is made by a completely different company than Craigslist Mobile for the iPhone.

I don't know who is copying whom, but it makes a mess for users.

Sunday, August 28, 2011

Visualizing professional LinkedIn networks

This is cool. LinkedIn has developed a visualization for members' professional LinkedIn networks. I spotted an example on a blog that I was browsing, and was curious to see what turned up for me. I was kind of surprised by the results:

linkedin data visualization

What's going on here? In a nutshell, my map reflects two major networks I am a part of: My MIT Sloan Fellows class (blue) and IDG Enterprise (orange).

The Sloan Fellows group of about 100 people is so densely connected (nearly everyone is connected with everyone else in the group) that the lines between them form a nearly solid mass of blue. A few people on the outside of the blue mass are MIT students from other programs (e.g., the two-year Sloan MBA) who have a few connections with others in my class.

The orange network consists of former colleagues at IDG Enterprise -- mostly editors and technical staff but also some business executives. There are also many lines between them, but not nearly to the same degree as the Sloan Fellow network. That's because the IDGers are more likely to be connected to people in their own publications (Network World, Computerworld, The Industry Standard, etc.) and/or to people having similar roles (editors with editors, developers with developers, etc.). As I worked at three IDG publications and in many cross-functional teams from 1999 to 2010, I am pretty well connected across these groups.
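The visual difference between the two clusters comes down to graph density -- the share of possible pairwise connections that actually exist. A quick sketch, with illustrative numbers rather than counts from the actual map:

```python
# Graph density: the fraction of possible pairwise connections that
# actually exist. All numbers below are illustrative, not counts taken
# from the actual InMaps visualization.
def density(nodes: int, edges: int) -> float:
    possible = nodes * (nodes - 1) // 2  # complete-graph edge count
    return edges / possible

# ~100 Sloan Fellows, nearly everyone connected to everyone else:
sloan = density(100, 4700)   # max possible is 4,950 edges
# IDG colleagues, connected mostly within publications and roles:
idg = density(120, 900)
# sloan comes out near 1.0 and idg nearer 0.1: the denser cluster
# renders as a nearly solid mass of lines, the sparser one as a web
```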

Smaller LinkedIn groups: Who are they?

There are some interesting small groups near the center of my galaxy. They aren't necessarily close to me; rather, it appears smaller networks (light orange, green, and magenta) or single connections (gray lines) are shown nearer to the center of the visualization. The colored networks include a group of a half-dozen people (green) I met at the State of Play conference in 2007 and connected with on LinkedIn immediately afterwards. I haven't had much contact with any of them since then.

Another group (magenta) consists of high school pals with whom I have had regular contact over the years, but who are barely noticeable on the map because there are only four of them. Actually, there should be five magenta dots, but one of the guys who is connected with me is not connected with any of the others, and therefore shows up as a single gray dot -- even though he happens to be one of my closest friends.

LinkedIn Taiwan

Not reflected at all is my extensive network in Taiwan, which I developed over a six-year period in the 1990s when I lived in Taipei. If it were visualized, it would have about 20-30 people with slightly less density than my IDG network.

Why isn't it there? Two reasons come to mind: the network predates LinkedIn by many years, and many of the people I know from that period of my life are Taiwanese and therefore less likely to be LinkedIn users (social network usage in Taiwan evolved much differently than it did in the U.S. and other countries). While I have connected with about a half-dozen people from my time in Taiwan, they show up as gray dots because I knew them from different settings (social, music, different jobs, etc.) and they are not connected with each other. It's also possible they haven't learned how to leverage LinkedIn, although there are many LinkedIn books available for people who want to understand how to create a profile and build their networks.

Nevertheless, it's an interesting visualization and makes me wonder if I need to develop other professional networks in new ways. You can try it out by visiting InMaps (Update: It's no longer active), which will require you to authenticate through your LinkedIn account.

Wednesday, July 13, 2011

Educational iPhone game development: Our experience with Egg Drop

It's an exciting feeling to be a part of a team that creates something special. It's even more exciting when you see early users not only getting a kick out of the product, but asking to use it again and again.

That was our experience with Egg Drop on the iPhone, an educational game and our student team's final project for 11.127/252/CMS.590, Computer Games and Simulations for Education and Exploration (see also my post on an earlier student project from the same class, "A curriculum for learning computer programming in WoW"). Our assignment, which built on nearly three months of instruction, theory, readings, and other projects, was to design and produce a digital game that is playable for 15-20 minutes. "You should identify clear learning goals and map them onto game dynamics," we were told. Developing the game took about 24 days, from the initial ideation sessions to the final presentation at class demo day.

There is a lot of flexibility in the term "digital game," and the half-dozen student teams in the class pursued all kinds of ideas. On demo day, we saw Terminus, a text-based adventure to teach terminal commands ("Zork meets terminal," was one way of describing it). Another student team created a PC game called Rocketmouse that taught children the fundamentals of gravity.

The class had a lot of Course 6 undergraduates, including some who had written games in the past. But the instructors (Eric Klopfer and Jason Haas) made an effort to balance out the teams with experienced programmers and people who couldn't program, but were able to handle other tasks.

Coming up with an Egg Drop game idea

Our team didn't go into the project thinking that we would make a mobile game. The ideation process started with the class brainstorming on potential learning topics; those ideas were put on a whiteboard and then people could choose which team they wanted to join. Inspired by a recent engineering documentary about the construction of a helipad on top of a wind-blown skyscraper, I suggested doing some sort of construction-based game that would teach basic architectural concepts. At the time, I was thinking of something on a PC or the Web, which would allow for a more sophisticated interface.

Alec, a Course VI classmate with whom I had worked on a “digital gates” board game earlier in the semester, was interested, along with a few other undergraduates. We discussed how to improve the concept. One of the first suggestions was to do it as an iPad game: use a touch-screen interface to build a skyscraper, and then test the strength of the construction against various environmental forces such as wind, earthquakes, and other disasters. Alec came up with a clever twist: How about turning the game into a variation of Angry Birds? Instead of being the birds trying to get at the pigs, the player would be the pig, trying to protect the egg from being knocked down by building a strong enough structure.

The “Reverse Angry Birds” proposal (also known as “Reverse Upset Avians”, or RUA) was put on a whiteboard with about a dozen other ideas. It got some votes from the class, and was chosen as a finalist project. Five people joined the team in all, and we started to refine the idea and discuss the practicalities of implementing it.

One decision that we had to make right away concerned the platform. While the iPad sounded promising, there was a problem: Aside from me, no one had an iPad, which would make life difficult for our developers when it came time to test the app. The iPhone seemed like a better idea, because:
  • Three of us had iPhones or an iPod touch
  • Three of us had Macs, which meant we could work in Xcode, Apple’s developer tool for the iOS SDK
  • Alec had experience developing games on the iPhone platform, and was also familiar with cocos2d, a 2D game engine for the iPhone.
The team agreed that the iPhone/Xcode path was the way to go. Clearly, the one other non-Course VI member and I would be unable to build the game ourselves, but there was room for us to do “code-like” activities, ranging from building artwork and sound files to creating levels in XML. I was capable of doing those tasks (and had some prior experience with level design in our 6.898/Linked Data final project), and could do user testing/QA (I had two young subjects who were willing to pitch in, as described below).

In the proposal document submitted to our instructors, we described the game as follows:
Egg Drop is a physics-based game designed for the iOS platform that attempts to teach basic intuition of physics and stable structures.

Because it is an iOS game, the only way to play Egg Drop (barring a release on the Apple app store) is to download and compile the source. The source of the game is hosted publicly on Github and can be found at:

https://github.com/alect/Digital-Egg-Drop

Learning Goals:
  • Gain a rudimentary understanding of physics, construction and other principles involved in building structures
  • Learn strategies for building stable structures that can survive the elements.
  • Learn to use resources in an optimal way to meet construction goals.
  • Develop the hypothesize -> experiment -> redesign strategy of designing, which is a useful skill in many wider disciplines than construction. The flow of the game should lead the player to use this strategy inherently, and hopefully bring the strategy with them from the game.

Prototyping and game-testing Egg Drop

Our plan was approved, and we got started on RUA. MIT has built up a culture around experimentation and prototyping, and we all got to work pretty quickly. Alec was the lead developer, and took on tasks relating to integrating the physics engines, building the objects and resource manager, and creating a sound engine. He built a working prototype within a few days and uploaded it to GitHub, which let those of us with Macs download it and try it out in Xcode’s iPhone simulator.

Another Course Sixer, Sarah, hadn’t used Xcode or Objective-C before, but got up to speed very quickly. She was responsible for much of the final design as well as an in-game tutorial, which really helped make the game more appealing (you can see the tutorial in the gameplay video at the bottom of this post). She also created the system to import levels in XML format, which made it easy for me to do some age-appropriate level design and implementation on my own for our user testing. Before the XML engine was built, altering levels during testing meant changing values in arrays and arguments in ResourceManager.mm; those changes were difficult to share with the rest of the team and prone to error, so Sarah’s work was very helpful. A third Course Six concentrator, Stephen, didn’t have a Mac (a requirement for Xcode) but worked on artwork, sound files, and documentation. The other member of the team worked on level design.

The game evolved from our original vision of a variation on Angry Birds. Creating the gameplay and artwork for the pigs and birds would have been extremely difficult and time-consuming (we only had a few weeks before demo day on May 10). We settled on a slimmed-down version of the game in which the goal was to build a structure that would protect a single egg from an onslaught of natural disasters at the end of each round. For instance, the kid-friendly level #3 used the following XML as inputs:


On the screen of the iPhone simulator, this translated to an egg resting on the ground plane at the start of the game (posx and posy describe its starting position). The player could place, in order, two vertical wooden planks, a horizontal straw block, and a horizontal brick before the disaster (a meteor falling from the sky, directly on top of the egg) occurred. The only way to survive: placing the two vertical wooden planks on either side of the egg and resting the horizontal brick on top of the planks, over the egg. Any other combination resulted in a broken egg and “game over” for the player.
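Based on that description, a level definition along these lines would fit. Only posx and posy are attested in the post; every other element and attribute name here is an illustrative guess:

```xml
<!-- Illustrative sketch of a level file; only posx/posy come from the
     original post. A level needs the egg's start position, the player's
     placeable blocks (in order), and the end-of-round disaster. -->
<level id="3">
  <egg posx="240" posy="40" />
  <blocks>
    <block material="wood"  orientation="vertical" />
    <block material="wood"  orientation="vertical" />
    <block material="straw" orientation="horizontal" />
    <block material="brick" orientation="horizontal" />
  </blocks>
  <disaster type="meteor" />
</level>
```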

Changing the name of our Egg Drop project

As the game evolved, we dropped “Reverse Upset Avians” and started calling it EggDrop. It was an instant hit with my kids, even before we had meteors and earthquakes. The simple physics of placing planks around the egg was entertaining enough in sandbox mode (see screenshots, below). But when better artwork, different building materials, nails, and other elements were added, it was addictive. My younger child in particular would ask to play it when he came home from school, and after I returned from a long international trip, one of the first things he asked to do was play the game on the iPhone simulator.

One interesting element of game design that came up with the Egg Drop project was the target audience. I thought we should be clear at the outset about whom we were targeting. Segmentation and “Total Addressable Market” exercises are part and parcel of the Sloan way in classes such as New Enterprises. But we ended up taking a much more flexible approach, as described in our proposal:
“One advantage of iOS and other touch devices is that they support a very wide age range. We hope the game will be playable by children as young as five or six while still being entertaining to adults. Young children will most likely reap the most benefit from the educational concepts the game presents. In addition, we found that we could cater levels to fit different age ranges, making the game customizable for all learning levels.”
While age customization was possible, for the purposes of testing we only had two versions: One for us and college-aged friends, and a simpler version for younger elementary school students. I worked extensively on the kid version, and developed new age-appropriate levels based on regular user testing. Here are a few excerpts from my user testing diary, which was submitted as part of our final project:
4/30/11

The kids had a fun time with a modified version of alect-Digital-Egg-Drop-3357c7c (I added about 30 extra block and nail objects, so they could play longer). They definitely get the nailing aspect of the construction, and used it to protect their egg almost immediately.

++++++++++++++++++++++++++++++++

5/4/11

Tested alect-Digital-Egg-Drop-9f0fc79 on my son. This was the first time he had seen the disasters, which he really enjoyed (especially the earthquake, which sometimes sends blocks flying).

I was also surprised to see that he right away figured out the solution to the wind disaster (nailing something to the floor) which vexed me when I saw it the first time.

He also used extensive experimentation to try to solve all of the problems he observed. For instance, for the earthquake, he tried positioning the blocks close to and further away from the egg, nailing different size blocks to the floor, etc. He gave up after 4-5 unsuccessful tries, at which point I showed him how to do it. Then he played to the end (two tall planks).

He noticed and liked the new egg [artwork].

++++++++++++++++++++++++++++++++

5/6/11

Played build alect-Digital-Egg-Drop-d3eb420, which has some memory issues that Alec addressed. However, we noticed a bug after the second level that prevented us from going to the third level -- the level up button didn't respond on the emulator.

The gameplay is fun, and as a proof of concept it is good, but I wonder if the learning couldn't be more robust. Maybe if we had more time ...

++++++++++++++++++++++++++++++++

5/9/11

Building out levels in XML. I am using a Google Docs spreadsheet to track the progressive difficulty of the challenges, and using my own judgment and gameplay to see how they work.

The advantage of using oneself for testing is I can quickly rearrange the blocks or disasters, reinsert them into ResourceManager.mm, and play the new version on the emulator.

I am going to try to introduce it to my son tomorrow morning ... I unfortunately won't see him for the rest of the day.

+++++++++++++++++++++++++++++++++++++++++++++

5/10/11

My son hadn't seen the new designs, so he was very happy to see the artwork. He also liked the meteor, cushion blocks, and the idea of the termites. He got up to speed pretty quickly on the simple progressive levels I set up for him. On the quake level, which requires surrounding the egg with cushions and nailing them together in a certain way, he couldn't solve it, and took an interesting area of experimentation that I hadn't considered -- reinforcing the cushions with wood braces.

The other thing that I am conscious of is the game really has to be customized to age/ability. What appealed to him as a 6-year-old wouldn't appeal to older players.

One thing that’s worth mentioning about the testing is I didn’t need to pressure my kids to help out. Both of them love games. My son has probably tried a few dozen age-appropriate titles on my iPod touch, and regularly returns to the ones that are most entertaining. It was clear that Egg Drop fell into the same league as favorite games such as Angry Birds, Cro-Mag, Fruit Ninja, and the Simpsons game. He simply couldn’t get enough of Egg Drop, even during the early builds when the game was still rough around the edges. Here’s a video of him trying out an early version, about one week into the development process:



Beyond the experience of working on iOS game design, there were several other takeaways from the project. One was being able to participate in a rapid prototyping process integrated with user testing. This combination is held up as an ideal at MIT and elsewhere, but getting the right team and the right testers in place can be difficult. Before coming to MIT, I worked in Web media for years. Even on those rare occasions when my employers had adequate engineering resources in place to develop new products, testing was usually handled in-house and at a very late stage. Sometimes this was because testing was not considered a crucial part of the product development process, but at other times it was difficult to find actual users or the product had to be kept under wraps out of fear of premature leaks or tipping off the competition.

For Egg Drop, not only was the team technologically top-heavy (three out of five were programmers), but we had access to real users in our target audience, which let us observe gameplay, hangups, and other aspects of the user experience. This feedback loop led to better gameplay and helped us eliminate speed bumps and outright bugs at a relatively early stage.

A second takeaway related to gameplay theory. While the Egg Drop project was focused on real gameplay issues and the practicalities of developing a game for a mobile device, I did find myself looking back to some of the research that we had studied in class earlier in the semester, in particular the readings from James Paul Gee. He articulated a lot of modern thinking about models, video games, and learning in his 2008 paper, Learning and Games (e.g., “Video games offer people experiences in a virtual world ... and they use learning, problem solving, and mastery for engagement and pleasure”). His “situated learning matrix” for understanding how context-based learning in games can be applied to the world at large was described in terms of first-person shooters in 3D worlds. But one can see how a modeling experience in a 2D world like Egg Drop (such as my son’s experimentation with reinforcing braces that I observed in the user testing diary) might also be internalized, generalized, and applied to other situations, even if protecting eggs from meteors never figures into his daily life. This ties back to our proposal to "develop the hypothesize -> experiment -> redesign strategy of designing, which is a useful skill in many wider disciplines than construction."

Gee introduced another interesting concept in What Video Games Have to Teach Us About Learning and Literacy. The concept of “Semiotic Domains,” as it applies to video games, basically says that players will find it easier to transition to new scenarios that have similarities to old scenarios they have already encountered. In terms of gameplay, this not only helps explain the continued popularity of RPGs, "shooters," and other genres, but also how specific features work for some gamers and not for others. For instance, my son was already familiar with the iPod touch and physics-based games such as Ragdoll Blaster and Angry Birds, which made it easy for him to get into Egg Drop. However, he was perplexed by the preview of the next object in the upper right corner of the screen. This convention dates from 80s-era games like Tetris, which he had never tried. He therefore applied his own gaming experiences to Egg Drop, and attempted to drag the preview pieces onto the playing area (this can be seen in the video of game testing, above). In a commercial development project, such an observation among many early testers might be a cue to re-evaluate that feature.

A third takeaway from the Egg Drop project concerned the game's design, not only the gameplay but also the artwork. While the cocos2D physics were slick, the graphic elements were very simple (I should know -- I made the bricks and a few other elements using Preview in OS X). But to our young testers, it didn’t matter. The game art was enough to convey the concept, and the gameplay was addictive.

Fourth takeaway: As our instructors mentioned at one point late in the semester, sandbox mode can really work for younger players. I saw proof with my testers on the first few builds, before Alec had integrated the disasters and win states for levels. In the proto-Egg Drop, it was possible to drop a practically unlimited number of horizontal planks around the egg, but there were no disasters or special materials to work with. It didn’t matter. The kids simply liked the physics of the game, which allowed them to fill up the screen and sometimes model strange situations, such as a mountain of planks for the egg to roll down. I have many screenshots from early versions that show the playing area filled with planks:



Now the reality check: The analysis and observations above are based upon an extremely small userbase playing with test versions of the game. The ultimate excitement for Egg Drop would be refining it and releasing it to the wild, to see how a much larger population of players reacts. Of course, “refining it” would involve not only working on some of the issues identified earlier (level design, artwork, etc.) but also considering the original educational vision of the game -- teaching concepts related to construction and physics. We were not able to do enough basic research around how kids might best learn such concepts, which is unfortunate, because I believe the game is a marvelous vehicle for learning. But this also leads to the question of how to balance desired learning outcomes with gameplay. More experimentation would be required.

In the meantime, here’s a video of the gameplay and design, based on the final build in mid-May:



If you are interested in finding out more about the class, take a look at the course website. You may also be interested in reading about another mobile educational game development project I worked on in Linked Data (6.898) last year.

Monday, July 04, 2011

Buttons won't solve the fundamental flaws of Wikipedia's editing policy

Wikipedia is rolling out a new tool called "WikiLove Buttons." The experiment, as explained by Howie Fung, Erik Moeller, and other top editors, is a weak response to a rather significant problem: Ordinary people ("new editors") don't like being shut out of articles, and when their edits are removed (or even savagely put down by experienced editors) they are less likely to want to contribute again. This undermines the crowdsourcing mission upon which Wikipedia was founded, and erodes quality. Unfortunately for the Wikimedia Foundation and its hundreds of millions of users, this roundabout way of showing appreciation for newbie edits via a love button won't solve the problem of condescending uber-editors putting down perfectly good edits based on misguided policies, poor or incomplete understanding of topic issues, or inflated egos.


Ordinarily I wouldn't bother writing about this, but what prompted me to do so was the ReadWriteWeb review of the love button by Marshall Kirkpatrick. I generally like Kirkpatrick's writing, but I really dislike when Wikipedia is unquestioningly held up as a reliable source of information -- especially by people who speak with authority. While it can be a starting place for basic facts, it's hardly a reliable or complete source of information, as I described in my comment left at the bottom of the RWW article:

Disagree with the statement that Wikipedia is an "undeniably good source of information on almost any topic." For some topics, yes. But many others are flawed.

For instance, articles about famous living people are often sanitized by their handlers or supporters. Non-Western topics on English-language Wikipedia are shallow and/or unable to cite primary and secondary sources in other languages. Wikipedia editors do not view blogs as reliable sources, even if the authors are experts in said topic. And attempting to correct mistakes or add information to certain articles often brings up an array of badges, warnings, and restrictions that make it practically impossible for "the crowd" to edit.

As for the new feature, the love icons seem to be designed in a way that they make browsing and contributing more difficult. This may make things better for "top nerds at Wikipedia" but I doubt it will lead to a better product or experience for the rest of us.

Monday, June 27, 2011

WorldTV - our MIT Media Lab final project

One of the more interesting class projects I took part in during my last semester at MIT was our MIT Media Lab final project for MAS 571 ("Social TV: Creating New Connected Media Experiences"). The project was called WorldTV. With my teammates (Jungmoo Park, MBA '11, and Giacomo Summa, MSMS '11), I created a pretty slick video demonstration of the proposed software UI. The video was shown at the MAS 571 demo day at the Media Lab (you can watch it below), and we wrote an accompanying concept paper that we are preparing for an IEEE CCNC workshop. In the following post, I'll describe not only what WorldTV is, but its genesis and some of the reaction we've received so far.
WorldTV is a television app and accompanying mobile app for browsing user-generated video from one's social circle, as well as event video produced by strangers that ties into one's news and cultural interests. Instead of using traditional browsing methods -- scrolling through channels or searching for videos -- the proposed service uses a 3D globe as a navigational tool. WorldTV is aimed at people with global networks: people with friends, relatives, and colleagues in other countries; people who spend a fair amount of time travelling; or people who are interested in news or culture in other countries.

The concept had great appeal to the entire team, not only because of our backgrounds (Giacomo is from Italy, Jungmoo is from Korea, and I spent most of the 1990s living overseas) but also because all of us have observed the exponential growth of user-generated video and realize its power and appeal to ordinary people. In 2006, I wrote about the potential of geotagged, time-stamped online photos to give insights into local events. I expanded the idea to include tweets and user-generated video in a proposal for my Linked Data Ventures class called PPP (PixPeoplePlaces). When I began the Social TV class, I took the PPP concept even further with user-generated video, emphasizing the social aspect of plotting event video on a local map (this was the basis of my first assignment for Social TV -- you can see the poster here).

Developing WorldTV - our MIT Media Lab final project

I envisioned all of these ideas as Web apps displayed on a computer monitor. For one of the early poster sessions for the Social TV class, Giacomo independently came up with a different approach. He asked, why not use a full-sized television screen to display a map of the entire earth with hot spots that reflected breaking hard news events that might be captured by amateur shooters? (This happened as anti-authoritarian demonstrations were breaking out across the Middle East in early 2011). Instead of being a "Lean Forward" experience (something that requires user input or interaction, such as a video game) this would be a "Lean Back" experience, in which the viewer could sit on the couch and take in the video. Giacomo also considered how video could be differentiated on the global map with different sized or colored markers, and how "likes", social networks, or newspaper articles could determine what appeared on the screen. He called it "WorldTV".

There was clearly some overlap between our ideas, and we decided to team up for the final project. We expanded the concept to include not only video from breaking news in other countries, but also cultural events (festivals, parades) and entertainment (sports, performances, etc.). The social filter would not only display streaming/recent videos from one's social circle, but could also reflect the collective interests of the social circle.

WorldTV business model

An additional requirement for the final project was a business model. I had already been thinking about using phone and laptop cameras as a way for ordinary people to access amateur expertise all over the globe, for a price. Examples of amateur expertise might be a power user demonstrating how to use a new gadget, an experienced business owner advising on registering a company in a certain state, or a native speaker offering foreign language conversation practice. I dubbed the scheme Real Time Requests (RTR). A live auction and reputation system would determine the prices paid by people seeking expertise and match them up with sellers. We decided to fold it into the proposal. The idea debuted at another MAS 571 poster session in April:


Jungmoo, who had a background as a professional television reporter for a Korean broadcaster, was intrigued by our poster session presentation and joined the team. Our next task was to take the concept and make a demo to show at demo day at the MIT Media Lab in the last week of class in May. For the final deliverable, we didn't have the skills to produce a working prototype. However, we did have the skills to produce a software mockup and accompanying video demo.

The team got to work. I created a simple WorldTV television UI using HTML and CSS, built the maps with Google Earth, and mocked up a mobile UX on an iPhone "remote". Giacomo wrote the script and starred in the video. Jungmoo took the raw video and graphics and used his professional editing skills to create a really slick video demo, which is shown below:



We presented the video and an accompanying slideshow on the business model last month at the Media Lab. Our Media Lab instructors, Marie-José Montpetit and Henry Holtzman, invited a group of industry pros from major cable and national broadcasters (including NBC and WGBH) to watch all six student presentations. After seeing our team present, one of the NBC visitors was interested in the idea of "shared experiences." Giacomo explained that user-generated video around sporting events and concerts could populate the global view, depending on how one's filters were set up. This prompted another executive, whom I believed to be from HBO, to question the legality of using amateur concert video. I responded that copyright law was decades behind the technological and social reality, but she was skeptical. I then said that there would always be artists who want to exercise strong control over their content, but there were also many artists who recognized the value of fan content in generating additional interest and loyalty, and that in my opinion the latter group would have a competitive advantage. As I thought about it later, though, it was clear that addressing the entertainment industry's copyright concerns would be a huge issue, regardless of how outdated the laws are.

Our team also heard from Henry, who thought the Real Time Requests business model was really a separate concept that did not match WorldTV. We agreed. Jungmoo and Giacomo had actually raised the same concern in our planning discussions, but I felt we needed a business model that did not involve standard subscriptions. Henry noted that a subscription might actually work for some people.

So what's next for WorldTV? All members of the team have graduated, and none of the industry visitors seemed interested in taking it further. We hope, however, that if our paper is accepted to the IEEE CCNC '12 conference, it might get some traction. In the draft that we are now preparing, I outlined the "Future Work" required to make WorldTV a reality:
The next steps for WorldTV would be to create a working prototype using Google Earth, YouTube and Facebook APIs, the Android or iPhone SDKs, and other existing software and hardware components. Besides using the prototype to evaluate functionality and performance, ordinary users in the target audience (people having global networks) could also test the system with an eye toward determining which features and use cases hold the most promise. When the product is ready for wider distribution, identifying suitable “TV App” platforms and partnerships could take place. In the long run, creating a scalable architecture with its own API and opening up WorldTV to outside developers (much like Facebook and Twitter have done) would help unleash the greatest potential of the platform. This would require significant investments, but in the long run would help realize innovations for the next age of television.
If the paper is published, I will share a link on this space. In addition, if anyone is interested in learning more or helping to develop the idea, my contact information can be found here.

Wednesday, June 08, 2011

Why new data visualizations fail to catch on



Eric Hill, a buddy of mine from my old Industry Standard days, sent me a link to a RWW article about a cool new iPad application from Bloom Studio that comes up with an interesting way of visualizing a digital music collection. The app is called Planetary, and here's what it looks like:


Planetary (voiceover) from Bloom Studio, Inc. on Vimeo.

I was impressed with what they've done, but I am afraid it won't go far in the marketplace. At one time I had so much hope for data visualizations changing the way we browse and understand information -- in fact, Eric and I spent a lot of time discussing how Industry Standard site content (news and prediction market data) could be presented in new and potentially useful ways. But in the past several years, after checking out dozens of new interfaces and data visualization schemes, I've come to the conclusion that most will never catch on.


It's not the fault of the designers, but rather the limitations of audiences. For many consumers, simple formats (e.g., longitudinal line graphs, like the inset image of the US$/Euro exchange rate over the past three months) and plain ol' headlines are all they need. I think part of the problem is that grokking a new visualization requires new mental models, and most people simply aren't willing to expend the effort, especially considering the huge amount of information out there and the limited time they have to consume it. I've seen many interesting, creative visualizations, but few ever make it in the marketplace. Planetary is cool, but is a solar system/galactic metaphor for browsing music inherently better than an alphabetically ordered list of artists/albums/songs?


Wednesday, March 16, 2011

The challenges of creating a mobile educational app based on Linked Data


Earlier this month, my iPod touch flashed a warning that the provisioning profile on the test application our team (Sloan Fellow Mads, Course 6 undergraduate Yod, and I) had designed for 6.898 in the fall was about to expire. Before it did, I decided to make a quick video showing the basic design and functionality of our educational app for the iPhone and other iOS devices:

Video: Knowton demonstrated:


While the app was ostensibly designed to teach young children geography facts, the purpose of building it was to show how Linked Data could be used to make an educational application on a mobile device. Mads' original concept was to have an open-ended exploratory app that would let children freely jump from one object to an associated fact. For instance, the child might be interested in a monkey, be able to see a picture and read some information about it, including the facts that it lives in a tree and likes to eat bananas. At that point, the child could either choose to learn about trees or fruit.

This idea is eminently suited to Linked Data, which is essentially a distributed, global-scale database built around Semantic Web standards such as RDF, Turtle/N3, and SPARQL, along with shared definitions and links between repositories. There is an enormous collection of Semantic Web-based data already available, ranging from Wikipedia information to Creative Commons-licensed photos.

I suggested narrowing the focus to geography, as presenting facts about animals and their habitats could be tied to a specific learning outcome. I also designed a rudimentary user interface and flow (see the wireframe below), which was eventually adopted for the exploration part of the app. Yod designed the basic game flow and built several code repositories, including the mobile app (using the iPhone SDK) and a Web app that let editors (us) submit information such as photos and descriptions. Mads devised a business plan.

In a perfect Semantic Web world, it wouldn't be necessary to have the Web app for editors, as SPARQL queries on consistently structured graphs could build the data store, with only a minimum of cleanup and selection (such as choosing the most suitable photos). But we quickly discovered that DBPedia, a popular source of country-level information for local fauna and landmarks, was incomplete. Freebase filled in many of the information gaps, but there were so many differences from country to country that the only practical way to tackle the task of preparing the data for the mobile app was by using the Web interface that Yod created. For geography and many animal photos, we used a source that one of the guest lecturers in class had mentioned, Ookaboo, which contained creative commons and public domain photos. Others were sourced from Flickrwrapper using a feature in Yod's Web application.
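To give a flavor of what those queries look like, here is a rough sketch of a DBpedia SPARQL request for country-level animal facts. The class and property names below are my own illustrative assumptions, not the exact queries from our project:

```python
# Sketch of a DBpedia query for animals associated with a country.
# The dbo:Animal class and dbo:wikiPageWikiLink property are illustrative
# assumptions; real DBpedia modeling varies by topic and may need
# different predicates.

def build_animal_query(country_uri: str, limit: int = 20) -> str:
    """Return a SPARQL query string for animals linked to a country page."""
    return f"""
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?animal ?name WHERE {{
    ?animal a dbo:Animal ;
            rdfs:label ?name ;
            dbo:wikiPageWikiLink <{country_uri}> .
    FILTER (lang(?name) = "en")
}}
LIMIT {limit}
"""

query = build_animal_query("http://dbpedia.org/resource/Kenya")
print(query)
# In practice this string would be sent to the http://dbpedia.org/sparql
# endpoint (e.g., via the SPARQLWrapper library); because the endpoint was
# often slow or down, we pre-fetched results into a local store instead.
```

Pre-fetching the results into the Web app's own store is what made the manual cleanup and photo-selection steps practical.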

But for good "people" photos that could not be easily found in Flickrwrapper using basic search strings, I had to resort to finding Creative Commons-licensed (CC-SA) photos on Flickr itself and copying and pasting URLs into the Web app. Even if we had been able to use Linked Data without the manual workarounds, there is no way we could have run live queries from the mobile app -- not only are mobile network connections unreliable, but we discovered that many of the sites have high latency and/or frequent downtime (DBPedia especially!). As an alternative, Yod built a database that loaded onto the app and was instantly accessible to users.

On demo day on December 7, all of the 6.898 teams gathered in a CSAIL conference room at the Stata Center. Tim Berners-Lee and a group of outside judges watched our demos and listened to our business pitches. TBL's quick assessment of the projects is in the video at the bottom of this post, but we approached him afterwards to ask him about the curation problem. He suggested some AI alternatives. For instance, if Linked Data sources identified "China" as alternately being a country or a person, he said the app could choose the most suitable definition based on the number of returned sites in competing Google searches.
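That search-count heuristic is easy to sketch. The counts below are hypothetical; a real implementation would pull result totals from a search engine API:

```python
def pick_sense(result_counts):
    """Given search-result totals for each candidate sense of an ambiguous
    term, return the sense with the most hits -- the tiebreaker TBL
    suggested for conflicting Linked Data definitions."""
    return max(result_counts, key=result_counts.get)

# Hypothetical result totals for competing searches on "China"
counts = {"China (country)": 1_200_000_000, "China (person)": 85_000}
print(pick_sense(counts))  # → China (country)
```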

TBL asked about photos in Flickrwrapper. Could Flickr ratings be used to choose better-quality photos? Yod said no. TBL suggested that some geocoded logic could be used to get the best Big Ben photo. "Make sure it's 300 meters west at a certain time during the day," he said, and then joked: "But how can you be sure that it's not a photo with Aunt Jenny in the frame?" He speculated that an algorithm could help choose photos based on contrast or some other value.

Video: Tim Berners-Lee reviews the 6.898/Linked Data Ventures class projects (Knowton comments at 2:50)




Tuesday, March 01, 2011

Social TV poster #1: PeoplePixPlaces

(Update: This concept has evolved further and turned into a final project called WorldTV, complete with a software demo and video) From the Social TV class I'm taking this semester at the MIT Media Lab: A social TV application based on news. I came up with PeoplePixPlaces, a Web-based application that gives a window into local news, using geocoded video, pictures, and tweets, as well as individual users’ own social lenses. The poster explains the concept in more detail:


The genesis of the idea predates MAS 571. Last semester in 6.898 (Linked Data Ventures), I proposed a similar project, PixPplPlaces. The one-sheet vision:


“People want to know a lot about their own neighborhoods.”

- Rensselaer Polytechnic Institute Professor Jim Hendler, discussing Semantic Web-based services in Britain, 10/18/2010

While superficial mashups that plot data about crime, celebrity sightings, or restaurants on street maps have been around for years, there is no service that takes geotagged tweets, photos, and videos, along with their associated semantic context, and plots them on a map according to when the information was created. The idea behind PixPplPlaces:

• Index some publicly available location-based social media data in a Semantic Web-compatible form
• Plot the data by time (12:25 pm on 10/24/2010) and location (Lat 42.33565, Long -71.13366) on existing Linked Data geo resources
• Bring in other existing Linked Data resources (DBPedia, rdfabout U.S. Census, etc.) that can help describe the area or other aspects of what's going on, based on the indexed social media data
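A minimal sketch of that plotting step -- filtering geotagged items by creation time and rough proximity to a point -- might look like the following. The field names and thresholds are my assumptions for illustration:

```python
from datetime import datetime

def items_near(items, lat, lon, when, radius_deg=0.01, window_sec=300):
    """Filter geotagged social media items to those created within
    `window_sec` seconds of `when` and within a rough lat/long box.
    (A production version would use proper great-circle distance
    and a spatial index rather than a naive scan.)"""
    hits = []
    for item in items:
        close = (abs(item["lat"] - lat) <= radius_deg and
                 abs(item["lon"] - lon) <= radius_deg)
        recent = abs((item["created"] - when).total_seconds()) <= window_sec
        if close and recent:
            hits.append(item)
    return hits

tweets = [
    {"text": "Fire on Comm Ave", "lat": 42.3357, "lon": -71.1337,
     "created": datetime(2010, 10, 24, 12, 25)},
    {"text": "Lunch downtown", "lat": 42.3601, "lon": -71.0589,
     "created": datetime(2010, 10, 24, 12, 20)},
]
print(items_near(tweets, 42.33565, -71.13366, datetime(2010, 10, 24, 12, 25)))
```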

Potential business models:

• Professional services: News organizations can embed PPP mashups of specific neighborhoods on their websites, add location-based businesses that are their ad clients, or use the tool as an information resource for journalists -- what was the scene at the site of a fire on Monday evening, just before it broke out? Lawyers, insurance companies, and others might be interested in using this for investigations.
• Advertising services: A suggestion from Reed - "a source of ads/offers in Linked Data format - for the sustainability argument as a business. Maybe in the project you can develop an open definition that would let multiple providers publish ads in the right format that you could scrape/aggregate and then present to end users? If you demonstrate a click-wrap CPC concept you might be able to mock it up by scraping ads from Google Maps or just fake it."

To be researched:
• Is social media geodata (geotagged Flickr photos, geolocated Tweets) precise enough to be plotted on a map?
• Should this be a platform or a service?
• How can the data be scraped, indexed, or made into "good" Semantic Web information?
• Would any professional organization -- news, legal, insurance -- pay for it?
• How viable is the advertising model in a crowded field chasing a (currently) small pool of clients?
The Semantic Web requirements for the 6.898 project and emphasis on tweets and photos gave the tool a different flavor than the Social TV version; in addition, I didn't consider the possibility of using "social lenses" to filter the contributions of people in the user's social circle. But for both projects, I recognized that the business case is weak, not only in terms of revenue, but also in terms of maintaining a competitive advantage if open platforms and standards are used.

Incidentally, I first had the idea for a geocode-based application for user-generated content back in 2005 or 2006. My essay Meeting The Second Wave explains the original idea:

In the second wave of new media evolution, content creators and other 'Net users will not be able to manually tag the billions of new images and video clips uploaded to the 'Net. New hardware and software technologies will need to automatically apply descriptive metadata and tags at the point of creation, or after the content is uploaded to the 'Net. For instance, GPS-enabled cameras that embed spatial metadata in digital images and video will help users find address- and time-specific content, once the content is made available on the 'Net. A user may instruct his news-fetching application to display all public photographs on the 'Net taken between 12 am and 12:01 am on January 1, 2017, in a one-block radius of Times Square, to get an idea of what the 2017 New Year's celebrations were like in that area. Manufacturers have already designed and brought to market cameras with GPS capabilities, but few people own them, and there are no news applications on the 'Net that can process and leverage location metadata — yet.

Other types of descriptive tags may be applied after the content is uploaded to the 'Net, depending on the objects or scenes that appear in user-submitted video, photographs, or 3D simulations. Two Penn State researchers, Jia Li and James Wang, have developed software that performs limited auto-tagging of digital photographs through the Automatic Linguistic Indexing of Pictures project. In the years to come, autotagging technology will be developed to the point where powerful back-end processing resources will categorize massive amounts of user-generated content as it is uploaded to the 'Net. Programming logic might tag a video clip as "violence", "car," "Matt Damon," or all three. Using the New Years example above, a reader may instruct his news-fetching application to narrow down the collection of Times Square photographs and video to display only those autotagged items that include people wearing party hats.

For the Social Television class, we have to submit two more ideas in poster sessions. I may end up posting some of them to this blog ...

Saturday, October 23, 2010

Disruption: Broadcast news vs. the humble iPod touch

The Newslab blog recently posted about the differences between "professional" video shot with TV crews and video created with mobile devices. Judging by the tone of the article, CNN and others are experimenting with such tools, but are doing so in a very cautious manner. It prompted me to leave a comment, in which I said:
I recently started using a 4th generation iPod touch (I bought the 32GB model on Amazon for $280), which has a decent video camera built in, to shoot simple clips/interviews. [This] blog post demonstrates what I was able to produce:



Note that the only editing I did on the interview consisted of trimming the ends off the clip, an ability which is included in the iPod's camera application. From within the app, I uploaded it to YouTube, and then switched to my laptop to embed the YouTube clip on my blog post. A few days ago, I bought a $4 app in Apple's mobile app store called "ReelDirector" that lets me mix clips, add titles, switch transitions, and even add music.

With the cheap price and high level of functionality of these devices, there's no excuse for not trying out mobile video. Is it pro quality? Of course not. But it's certainly enough for newsgathering and interviews on the fly. And the gear fits in your pocket and can be operated by the journalist alone -- no need for expensive cameras, extra crew, or extra overhead to get the story out.

It's apparent that there is still a lot of resistance in the broadcast news industry to using cheap mobile devices, laptop cameras, or any production process that's not "pro." In the mid-1990s, I worked in a TV newsroom, and I know the prevailing attitude among many broadcast journalists (and crews): a near obsession with making sure only the best-looking people and best-looking footage appear on screen. At the time, our reporter/cameraman teams would spend three or four hours every morning shooting tape and setting up interviews, and the remainder of the day editing down the footage and doing voice-overs. The result? One or two 2-minute clips per team per day.

Long after I had transitioned to online, the Flip video camera came out and was a hit. Until the Flip, consumer video cameras from Sony and JVC tended to have complicated user interfaces designed by Japanese engineers. The Flip did away with 90% of the UI clutter: it had just five buttons, a flash drive that stored 60 minutes of video, and a flip-out USB plug for transferring video files to PCs. It was also very cheap -- just $125. I enthusiastically began using one for reviews and interviews, and evangelized it to everyone in the Computerworld newsroom. This was in 2007. However, the weak point of the Flip was the lack of good editing software, which forced us to turn to the professional video staff for more complex editing tasks. Never mind the information or images the Flip captured -- there was more than a little skepticism from the pro video people about the jerky, poorly lit footage, the tinny audio, and the compatibility issues with the expensive AVID editing suites they used.

Now, the Flip looks positively ancient compared to the iPod touch, with its simple editing tools and wireless uploading. The iPhone and iPod touch have the potential to turn many online text-based journalists -- and even people who have never worked in a newsroom or been trained as journalists -- into effective online video journalists.

The professional broadcast community may not get it right now, but they will get the message soon enough when lots of quality work is performed by jackknife journalists and amateur producers, and audiences make it clear that expensive modes of production are not a prerequisite for their attention.

Saturday, July 17, 2010

My online math class review: Convenience gets an 'A,' but at what cost?

So I'm going to business school: an intense, full-time, on-campus program at MIT Sloan. Before school started, however, a handful of us who don't have science, finance, or engineering backgrounds were asked to take an online precalculus course at another institution -- the University of California Extension School. This post is my review of that online math class.

Why take an online math class before going to business school to get an MBA? It's an understandable requirement. Our business school curriculum has a very strong quantitative component. Using a scientific calculator and Microsoft Excel is required. Microeconomics, accounting, and data analysis are math-heavy subjects, and even the instructor for the core marketing class has illustrated theory with mathematical equations.  To give you an idea of the types of questions I'm dealing with, here's an example from the practice microeconomics exam:
You have a patent on a drug that has been approved for sale in the U.S. The U.S. demand for this product, for which you are the monopoly producer, is

ln Q = 3.4 - 1.5ln P + 0.5 ln A

where Q is millions of tablets sold, P is the price per tablet, A is expenditure on advertising, and ln denotes a natural logarithm.

If you are maximizing profits, and if the marginal cost of a tablet is $0.90, what price should you charge?
That was actually one of the easier questions using a standard economics equation with price, marginal cost, and elasticity variables. The more difficult ones involved transfer pricing and monopoly pricing, which use calculus. Moreover, I'm in the room with bankers, scientists, engineers, and others who have used math in their day-to-day careers. Even if you don't have this type of professional background, everyone has to be able to keep up to get the most out of the curriculum, and maintain an intelligent and productive level of discussion with classmates and faculty.
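For the curious, the answer falls out of the standard monopoly markup (Lerner) rule: with constant-elasticity demand, the coefficient on ln P is the price elasticity (-1.5 here), and setting marginal revenue equal to marginal cost gives P = MC / (1 + 1/e). A quick check:

```python
# Worked solution to the practice question above.
# A monopolist facing constant-elasticity demand has MR = P * (1 + 1/e),
# so profit maximization (MR = MC) implies P = MC / (1 + 1/e).

mc = 0.90          # marginal cost per tablet
elasticity = -1.5  # coefficient on ln P in the log-log demand equation

price = mc / (1 + 1 / elasticity)
print(f"Profit-maximizing price: ${price:.2f}")  # → $2.70
```

Note that the advertising term drops out of the pricing rule: A shifts demand, but it doesn't change the price elasticity.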

My online math class review - notes from trigonometry section
So why not stick with the concepts covered on the GMAT, which is intended for people heading to business school? GMAT test preparation concentrates on algebra and geometry basics -- essentially 7th-grade math. Precalculus goes beyond typical GMAT review topics, and in my opinion the more advanced review is better suited to business school. For instance, in my microeconomics class, we had to deal with logarithms and graphing supply/demand curves -- two areas that were *not* part of the Barron's or Kaplan GMAT review but were covered extensively in precalculus.

Online math class review - what the curriculum was like

The online precalc class I took was not through MIT, but through the continuing education division of a well-regarded public university -- specifically, the University of California at Berkeley Extension School. The class has existed since at least early 2006 and has apparently been taught by the same instructor the entire time.

On the class bulletin board, I noticed that there were other students from UPenn/Wharton, the University of Chicago, and NYU who were taking the same class before starting their respective MBA or masters programs. The precalc class was a for-credit course, but the credit will not be transferred to MIT. The main purpose of taking the class was to ensure that we come prepared for some of the concepts and exercises that are now being thrown at us, as opposed to checking off a math credit.

So, how was the class?

The curriculum was standard. It started with a basic algebra review, went on to quadratic equations and graphing, spent a few chapters on functions, and ended with three or four chapters on trigonometry. There was one chapter on logarithms, too.

The convenience was great. We could go at our own pace and start at any time during the year. Beginning in April, after putting the kids to bed, I would go downstairs to the living room and spend two to three hours on readings and homework, plus, roughly every week, a chapter test. I never had to deal with driving; homework and tests were completed online. I made very steady progress, aiming to finish the course by the time my full-time MBA program started in Cambridge in early June. I completed the required chapters in about two months and took the proctored final exam in downtown Boston the day after Memorial Day, just a few days before heading to MIT.

I worked very hard at the online precalculus class, and did very well in the homework, tests, and final. Most importantly, I feel that I learned a great deal and came to business school well-prepared. The value of taking precalculus was very apparent in the on-campus microeconomics class, which had a substantial math component. The online math preparation allowed me to focus my attention on economics theory, instead of getting hung up on the calculations in the problem sets.

But there were drawbacks, too. For years, I've heard criticisms from students, faculty, and even supporters of Web-based distance education about the lack of interaction between students and faculty. I can verify that this was indeed the case in the online class that I took. Here's what I observed:
  • There was no shared sense of community, and no effort by the school (the state university that offered the online course) to create one beyond setting up an online message board. Many students used the board to introduce themselves at the start of class, but by the first or second chapter of the book, practically all shared dialogue on the official message board had stopped.
  • The few questions that were posted on the message board -- whether about the course content, the tests, or the online software -- were never answered by the instructor. One sad example: "Can you please provide more information on the final? Will it be similar to the Practice Final? How many questions will there be? Please let me know when you have a chance." I am sure others had the same question, but it was never answered on the board -- although the teacher did answer this particular question in a private email when I asked her.
  • Because old comments from previous students were never removed from the board, it gave the appearance of an abandoned ghost town -- the MySpace of math.
  • The only comments I saw from the teacher were the boilerplate notes at the top of each thread ("please feel free to email me," etc.), which dated from March 2006. Three times she responded to students who introduced themselves, but by mid-2007 even those responses had stopped. There was no public response from the instructor to any question about tests, math problems, or software issues; I don't know whether the students who asked gave up or contacted her by email afterward.
  • The lack of an easy mechanism to ask complex questions was very frustrating. For instance, the trigonometry chapter covered difficult concepts and methods relating to trigonometric functions and equations. In a classroom setting, the instructor would be using the board to work out equations and would be referring to the unit circle while students asked questions. In an online setting, I could use the textbook, online exercises, and pen and paper, but I still had a ton of "why" questions that could not be easily described or diagrammed via email.
  • On the other hand, the teacher was very responsive to questions asked by email. I sent more than 10 specific queries over the course of the semester, most relating to grading errors in the MyMathLab software we used to complete assignments and take tests, or to questions about the final. She responded to every one within 12 hours. This was impressive, considering that many university professors I've dealt with in classroom settings sometimes take days or even more than a week to respond to email from students.
  • Even though she was responsive over email, I never observed any spontaneous communication from the instructor, such as asking about problems or offering "keep up the good work" encouragement.
  • The $170 precalc textbook came with two extra books that were never needed for the class, as well as a login key for the MyMathLab section. Interestingly, the book was reproduced entirely online, eliminating the need to buy a physical text at all.
  • The instructor prepared "lectures", which were actually explanatory essays with diagrams. The quality of these documents was generally quite good -- I'd say they were much clearer than the Sullivan precalc textbook I used.
  • However, the text "lectures" did not encourage a shared dialogue, and seldom, if ever, changed from year to year. I discovered this when a link in one of them directed me to an external website that generated the following error: "Web Hosting from beeb.net closed on 30th June 2008." In other words, the link had been added at least two years earlier and never revised.
  • MyMathLab contained video clips of various concepts and exercises produced by Pearson employees, but there was no classroom video of the instructor. I watched one of the MyMathLab videos, but found they were much less engaging than the free videos produced by Khan Academy.
  • Grading was easy. As long as you studied and understood the concepts and questions likely to appear on the homework and exams (which were driven entirely by textbook content), it was nearly impossible to do poorly. For the online homework on MyMathLab, the system allowed unlimited attempts on each question and even gave step-by-step instructions for solving tricky problems. This does not mirror the homework or testing scenarios typically found in physical classrooms, where you get one chance to get it right and, in the case of tests, cannot have open browser windows or communicate with other people at the same time.
  • The tests followed the textbook lessons very closely, and you were allowed to take practice tests as many times as you liked. The questions that appeared on the practice and real tests were practically identical. Rarely was there a "trick" question.
  • Only the final was proctored (I took it at the New England College of Finance in downtown Boston). There was no monitoring of any other graded content. This, combined with an almost complete lack of student/teacher interaction, made it very easy to cheat on homework and chapter tests.
  • Because the homework and tests corresponded to the textbook lessons, it was more efficient for me time-wise to take notes and practice problems from the textbook and do the homework and tests without even reviewing the redundant (but better-written) "lectures".
  • The textbook had a fair number of word problems, but these almost never appeared on the homework or tests. I wish they had -- the practical application of mathematics is where a lot of people struggle, but it is also the best way of illustrating abstract concepts.
  • Although I know how to use Excel (I actually wrote a book on the subject, a kind of Excel for Dummies alternative), I did not need to use it for class. Most of the time, I used a piece of paper to work out problems and a scientific calculator to check them.
In summary, taking the class basically boiled down to being taught by a textbook and getting university credit for it from one of the top-ranked public universities in the United States. I use textbooks in my real-world classes, too, but the big difference is that classroom sessions include a huge amount of discussion and focused questions on difficult topics, examples, and other areas worth exploring as part of a shared dialogue. In the online math class, there was almost no meaningful student/teacher or student/student interaction. To equate this type of online learning with a real-world classroom experience is a major stretch.

Further, struggling students tend to suffer in an environment where teachers aren't there to help, or even to notice there's a problem. I wonder how many dropped out of my class after attempting to make contact on the online message board, or getting hung up on the software? And in the absence of any monitoring of exams and homework, how many turned to cheating?

On the other hand, the convenience of taking a class at home was addictive. It was very easy to incorporate the class into my home life, without wasting time or money on commuting. And, most importantly, I learned what I set out to learn.

Would I take an online class again? Maybe, if the topic lends itself to rote memorization and hands-on problem solving that does not require interaction with other students or faculty.

But for most college- or university-level subjects, online education is a poor substitute. In my opinion, the most effective learning takes place in the classroom, where you can easily raise your hand, engage in spontaneous discussions with classmates and faculty, turn to the person next to you for clarification, or approach the professor after class or during office hours. Those exchanges practically guarantee an instant response, and they are not constrained by typing, software interfaces, or waiting for a reply.

To give you an example: in my on-campus microeconomics class, I suspect that about 3/4 of us were partially or fully baffled by our professor's first explanation of concepts like two-part tariffs and double marginalization in certain transfer pricing scenarios. The only way we were able to "get it" was through some students raising their hands and asking the professor to explain a particular element, other students sharing experiences from their own careers (with responses from classmates or the professor), and the TA walking through the problem sets in person, fielding more questions from us. Not everyone raised a hand or joined the debates, but everyone in that classroom heard them, and learned something from them.

I doubt 10% of this interaction would have been possible online, even using technologies that allow instant feedback from remote students. It's too easy for people to multitask, read email, or browse the Web while attending class, and unless sophisticated two-way video systems (such as telepresence) are involved, it will be difficult for faculty to pick up important visual feedback cues from the students they are teaching.

A decade from now, there may be better technologies that truly bring the shared classroom experience to people's homes, but the asynchronous, Web-based technologies that seem to dominate the online education sector don't come close to the real thing.

Tuesday, July 13, 2010

Thoughts on the online information market, and why e-readers won't save journalism

For some time, I've been wanting to write more about online news organizations and the competitive environment they now find themselves in. I was finally prompted to do so after reading an essay about e-readers by the Columbia Journalism Review's Curtis Brainard (linked from Mediagazer). Judging by its length and its lack of a search engine-friendly title ("A Second Chance"), it's clearly intended for a print audience. And while Brainard brought up some valid points about the new e-reader technologies, he missed the boat in terms of the content that is best suited for mobile devices. Here's the comment I left on the CJR.org website:
No, e-readers won't save journalism -- at least not the kind that the author and the Columbia Journalism Review practice.

Consider the people reading this essay. What percentage of readers are consuming it on an e-reader, iPod, iPad, Android phone, or any other mobile device, relative to the percentage who are looking at it on a PC or laptop screen? I suspect the mobile:PC ratio is quite small -- maybe just a few percentage points, if that (perhaps the CJR can let us know?). I further believe that even among those who are looking at it on a mobile device or e-reader, very few are reading it from start to finish. Like many publishers, the Columbia Journalism Review is still oriented toward long prose pieces that are a poor fit for mobile devices or the people who own them. Who is going to read a 4,546-word analysis (the length of this essay) on a small screen, or even a 1,000-word news article? How many would be willing to shell out subscription fees for long-form Time, Wired, or WaPo print content on a Nook or iPhone?

Even short-form content may be a stretch, when there are so many other free and low-cost distractions available on mobile devices. Publishers no longer have a monopoly on information or entertainment, as they did a decade ago, when tabloids, metro newspapers, books, magazines, and CD walkmans were the staples on subway cars. Now when I look around at my fellow commuters, I see people playing games, listening to mp3s, texting, watching videos, checking Facebook for updates, and sometimes even looking at a newspaper or mobile news app. If people don't want to read a 2,000-word feature, or don't feel like paying for news (print or mobile), they have plenty of free or cheaper options to choose from -- options they didn't have before, because the technology wasn't widely available.
This last point about competition deserves a little additional commentary. I've been thinking a lot about the competitive environment for online news. Publishers assume people want news, when in actuality many of them just want information. Increasingly, it's not so important where that information comes from.

Consider product news. In the mid-1990s, when a new Apple product came out, the channels for information were far more restricted. People depended on the news media (and advertising) to find out about these products, because the only other channels were word-of-mouth and retail outlets. I don't need to pay to learn about the iPhone 4 from the New York Times' David Pogue (or, for that matter, any other news publisher), when I can get facts about the product from Apple.com, video from YouTube, and numerous opinions from blogs, message boards, and my online networks -- all for free. What extra value is Pogue delivering? Certainly, he's a great writer and is accurate and unbiased (well, most of the time), but is he worth $1 in print or a monthly subscription fee? I don't think so. Neither will most other people, when they are prompted to pay online or on their iPad.

Update, March 2016: Six years after writing this post, I frequently read news articles on my iPhone. I also own a small-screen e-reader (a Kindle Paperwhite), but I have found it difficult to read books or other long-form content on it. That said, millions of other people have adjusted to e-reader technology. I also run a digital publishing business that produces how-to guides; sales of ebooks have leveled off or declined in most marketplaces while paperback sales have continued to grow. I still don't think e-readers are the savior for long-form content.

Sources and research: Columbia Journalism Review, New York Times/Bits, EdibleApple.com, Mediagazer

Image: New York Times iPhone app on the iPod touch. Creative Commons Generic License 2.0 -- you are free to use it for commercial or noncommercial work, as long as you credit the source (Ian Lamont) and link back to this blog post.

Wednesday, May 19, 2010

Steve Jobs: Sorry kids, no Webkinz or Curious George games on the iPad for you!

My kids like Webkinz World, Ganz's kids-oriented virtual world based on the purchase of real-world stuffed toys. During a visit to my parents this morning, they said they wanted to play it on their older-model Mac mini, and I was concerned it might not work (Webkinz technical glitches have tripped us up in the past).

But then it occurred to me: If it were an iPad, there would be no hope of Webkinz working at all. It requires Flash, which Steve Jobs does not want to see on his mobile devices, including the iPad, iPhone, and iPod touch.

That's excluding a lot of people. One estimate that I found a few months back puts the number of Webkinz users in the millions. And that made me wonder: What about all of the other kids-oriented sites that are out there? Is the iPad excluding these audiences as well?

I did a quick check on the Web. Club Penguin: Requires Flash Player 9. Games on Nick.com require Flash, too. I put the question out to Twitter about any other cases. @Atul asked about PBSkids.org. I checked. Sure enough, Curious George and many other games indeed require Flash to play.

The irony of Steve Jobs' anti-Flash mantra is that kids really do love Apple's touch-enabled devices. My kids love Cro-Mag, Bounce On, and Paper Toss on my iPod touch. I read with much interest this account from an EMC CTO, who describes his teenagers practically abandoning laptops and desktops for the iPad. Some kids-oriented brands (such as the SpongeBob franchise, which has Flash-only games on Nick.com) have branched out to the iPhone/iPod touch/iPad platform with dedicated apps.

Is there any hope of Webkinz ever making it to the iPad? While the Apple-approved iPad app Cloud Browse allows a workaround, it is an imperfect and potentially risky solution.

It's pretty clear that Jobs wants all rich Web applications to retool themselves for a new world order, in which sites conform either to official Web standards or to the App Store rules dictated by Apple Inc. For companies like Ganz, which have invested years in building a complex suite of Flash-based games that play on most desktop PCs, neither option may be possible (does HTML5 even support the sort of artwork and UIs that Ganz has built for Webkinz?) or palatable, owing to the raw costs and rearchitecting that would be required. That's unfortunate for everyone involved -- Jobs, Ganz, and an audience of mostly young kids numbering in the tens of millions.

Or is it? This is exactly the sort of situation that could create an opportunity for a tablet maker or software partner that recognizes a compelling niche in an area where Apple doesn't want to play.