Just came back from the StartupLA conference, where there was some interesting storytelling from Jason Calacanis and Mark Jeffrey, both now of Mahalo (kudos to Mark and Jason for being very frank and entertaining speakers). Their talks really highlighted how the overwhelming amount of “free” content on the web has left us surrounded by mounds of advertising and commercially biased “crap,” and how the pendulum is swinging back toward using humans to organize information. Mahalo, I think, creates a great mass-market bridge by providing rich, edited content in key subject areas, but the true winners will be those who can mass-customize that information to be contextually relevant to the individual, in addition to better matching results to the keywords users enter (more on that later).
Jeremy Liew of Lightspeed wrote a great post on Meaning = Data + Structure, highlighting how user-generated content is often less helpful because it lacks structure. I would extend the argument by adding context in two ways: cutting through the clutter and improving contextual relevance. Both have great analogies in retail branding: retail brands let you immediately select something with an appropriately targeted value proposition, making the choices relevant to immediate needs and reducing them to a manageable subset. Extending the analogy, paid professional selection is an important filter in markets with virtually unlimited selection.
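To make the Meaning = Data + Structure point concrete, here is a minimal sketch (my own toy illustration, not from Jeremy's post; the field names and review data are hypothetical). The same user-generated content is shown twice: as raw text, where answering a question means fuzzy string-guessing, and as a structured record, where it becomes a trivial, reliable filter.

```python
from dataclasses import dataclass

# Unstructured: a pile of free-text reviews. Finding "hotels near LAX
# under $200 rated 4+" means guessing at string matches.
raw_reviews = [
    "Great hotel near LAX, clean rooms, about $120 a night. 4 stars from me.",
    "Way overpriced at $300, but the spa was nice. 3 stars.",
]

# Structured: the same data with explicit fields.
@dataclass
class Review:
    text: str
    price_per_night: int
    rating: int
    near_airport: str

structured_reviews = [
    Review("Great hotel, clean rooms.", 120, 4, "LAX"),
    Review("Overpriced, but the spa was nice.", 300, 3, "LAX"),
]

# Structure turns a vague search into an exact query.
matches = [r for r in structured_reviews
           if r.near_airport == "LAX" and r.price_per_night < 200 and r.rating >= 4]
print(matches)  # only the $120, 4-star review survives the filter
```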
Content used to be the limiting factor; now it is time and attention. The information product is increasingly becoming a service of providing relevant information, and the business models of information distribution will change to reflect the new currency.
It's interesting how the business model for search has changed as content has exploded across the web. As the video shows, the web itself has changed how we see information through its seemingly unlimited growth and distribution, riding the “free” reach of the web (again via Jeremy).
Early models in search relied on finding what little content was good, and the indexing structures of dmoz, Yahoo, and Craigslist created real value for users by building simple hierarchies and populating them with relevant links.
Google and the mechanized web crawlers rode the wave of exploding content, realizing that it was too much work to index and store it all in information hierarchies. They simplified categorization to a keyword match: information was everywhere, and this was now the easiest way to find it.
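For contrast, here is a toy inverted index, the core data structure behind keyword search (a minimal sketch of the general technique; the corpus, page names, and function are my own invention, not Google's actual system). Note what's missing: no editor ever decides where a page belongs in a category tree; the words themselves are the categories.

```python
from collections import defaultdict

# Toy corpus: a directory editor would have to file each page by hand;
# a crawler just tokenizes everything it fetches.
docs = {
    "page1": "cheap flights to los angeles",
    "page2": "los angeles restaurant reviews",
    "page3": "cheap restaurant deals",
}

# Inverted index: word -> set of pages containing that word.
index = defaultdict(set)
for doc_id, text in docs.items():
    for word in text.split():
        index[word].add(doc_id)

def search(query):
    """Return pages containing every query word (simple AND match)."""
    words = query.split()
    results = set.intersection(*(index[w] for w in words)) if words else set()
    return sorted(results)

print(search("cheap restaurant"))  # ['page3']
print(search("los angeles"))       # ['page1', 'page2']
```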
As the volume of information has risen further, the mechanized results are becoming increasingly polluted and commercially skewed. It would appear that increasing relevance now has less to do with breadth and more to do with filters: through trusted sources (social networks like LinkedIn and Facebook, Web 2.0 plays like Digg), through professional editorial work (Mahalo, blogs), and through micro-targeting of deep content (e.g., vertically specialized sites like HealthShoppr).
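One way to picture the trust filter is as a re-ranking layer on top of raw keyword relevance. The sketch below is hypothetical: the scores, endorsement sets, and boost parameter are invented for illustration, not drawn from any of the products named above. The point is simply that the same result set reorders once endorsements from people or editors you trust carry weight.

```python
base_results = {             # page -> raw keyword-relevance score
    "spam-farm.example":    0.95,  # keyword-stuffed, commercially skewed
    "friends-blog.example": 0.70,
    "mahalo-page.example":  0.65,
}

endorsements = {             # page -> trusted sources who vouched for it
    "friends-blog.example": {"alice", "bob"},   # social filter
    "mahalo-page.example":  {"editor"},          # editorial filter
}

def trust_rank(results, endorsements, boost=0.5):
    """Boost each page's score by how many trusted sources endorse it."""
    return sorted(
        results,
        key=lambda p: results[p] + boost * len(endorsements.get(p, set())),
        reverse=True,
    )

print(trust_rank(base_results, endorsements))
# ['friends-blog.example', 'mahalo-page.example', 'spam-farm.example']
# The spam farm wins on keywords alone but loses once trust is counted.
```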
It appears the next set of wars will be about relevance and context. It'll be interesting to see which players rise to serve consumer needs, or whether the fragmentation of properties will be used to drive context and the web becomes about being the big fish in an ever-widening array of pools. Even more interesting will be what the combination of cross-platform ad networks (with cookie data collected from across the web) and the existing search engines will create. Will Google be able to get past the privacy folks and use the terabytes of data collected on its user base to create automated, segmented, personalized search results? We shall see… the battle for relevant information is just getting started.