Much has been said about the idea of Content Shock by Mark Schaefer and the hordes of other bloggers and pundits who have rushed to weigh in on it. The skinny on content shock can be summed up in a paragraph from Schaefer’s original post:
“Like any good discussion on economics, this is rooted in the very simple concept of supply and demand. When supply exceeds demand, prices fall. But in the world of content marketing, the prices cannot fall because the “price” of the content is already zero — we give it away for free. So, to get people to consume our content, we actually have to pay them to do it, and as the supply of content explodes, we will have to pay our customers increasing amounts to the point where it is not feasible any more.”
Economics in marketing? Say it ain’t so. But anyhow, I digress. What I want to discuss here today is one potential implication of content shock, should it indeed prove to be true, as I believe it will. I’m assuming that you can use the linked article above to beef up your knowledge of the original concept, and you can further peruse this article, also by Schaefer, that addresses some of the primary objections to the theory. So I’m not going to spend a great deal of time conceptually mapping out why I think Schaefer is correct, except to say that he is.
Let’s talk about Google for a minute. More specifically, let’s talk about algorithms. In the beginning of time, back when Al Gore invented the internet, people created content for other people. Mostly people who knew that the content was being created and knew where to find it. But then came the influx of people to the internet who really didn’t know anything about anything, and with them the operative question of “How can I get my content in front of people who don’t know where to find it?” Enter Google (OK, so I truncated the story, deal with it).
Google created this fancy algorithm designed to approximate the level of interest of Person X in Webpage Y based upon such things as link structure, keyword density, meta information, and lots of other cool stuff. The algorithm would then spit out as many websites as it deemed relevant, using those factors as a proxy for how interested Person X might be in the content of a web page.
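To make the "factors as a proxy for interest" idea concrete, here’s a toy sketch of that kind of scoring. This is emphatically not Google’s actual algorithm; the factor names, field names, and weights are all invented for illustration, and real ranking involves far more signals (and far more secrecy).

```python
def keyword_density(text, keyword):
    """Fraction of the words in a page's body that match the keyword."""
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

def score_page(page, keyword):
    """Combine inbound links, keyword density, and meta info into one score.

    The weights (2.0, 100.0, 5.0) are arbitrary stand-ins for whatever
    a real search engine actually tunes.
    """
    density = keyword_density(page["body"], keyword)
    meta_hit = 1.0 if keyword.lower() in page["meta"].lower() else 0.0
    return 2.0 * page["inbound_links"] + 100.0 * density + 5.0 * meta_hit

# Two made-up pages to rank against the query "economics".
pages = [
    {"url": "a.com", "body": "economics of content supply and demand",
     "meta": "economics blog", "inbound_links": 3},
    {"url": "b.com", "body": "cat pictures all day",
     "meta": "cats", "inbound_links": 10},
]

ranked = sorted(pages, key=lambda p: score_page(p, "economics"), reverse=True)
```

Even in this cartoon version you can see the core bet: a page that matches the query in body and meta text can outrank a page with more inbound links, because the factors are only ever proxies for what Person X actually wants.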
The problem, of course, is that Person X is a hopelessly complicated person with interests and motivations which are impossible to predict with an algorithm, so the results of the search would be equally complicated. Person X would then be left to sort through the results and match his motivation to the contextual snippets provided by Google. It worked pretty great for a while because there wasn’t really that much content to deal with.
However, given Mr. Schaefer’s Content Shock prediction, with content production increasing over time against the finite consumption appetite of the population of the Earth, sifting through the results and finding what you’re actually looking for becomes more and more of a kick in the chode. Semantic search will help some, as algorithms catch up to the sophistication of how people actually talk when they’re looking for something. But I believe the real answer to the conundrum of content shock lies in allowing people to customize the algorithm itself, to make it more indicative of their thoughts and representative of their motivations.
How would that look? Dunno. I’m not a computer scientist. However, I can imagine an intense dialogue on the front end really questioning my preferences and allowing me to set filters or select levels of intensity for things like academic rigor (more rigorous if I’m looking for articles on Economics, and less rigorous if I want satirical Onion articles, for instance). I’d then be able to save those algorithmic modifications as presets, so that I can engage, based on my mood, with content that’s curated before I even search. That’s cool, let’s stick with that. I’ll call it pre-search curation.
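Since I said "dunno," here’s one hypothetical way the preset idea could work under the hood: the user saves named bundles of factor weights, and the active preset re-weights the ranking before any search runs. Every preset name, factor name, and weight below is a made-up assumption, not a real feature of any search engine.

```python
# Factors the user can dial up or down; 1.0 means "neutral".
DEFAULT_WEIGHTS = {"academic_rigor": 1.0, "recency": 1.0, "humor": 1.0}

# Saved presets: the "algorithmic modifications" from the paragraph above.
presets = {
    # Heavier weight on rigor when researching economics...
    "economics_research": {**DEFAULT_WEIGHTS, "academic_rigor": 3.0, "humor": 0.2},
    # ...and the opposite when browsing for satire.
    "satire_browsing": {**DEFAULT_WEIGHTS, "academic_rigor": 0.1, "humor": 3.0},
}

def score(page, weights):
    """Weighted sum of a page's factor values under the user's chosen preset."""
    return sum(weights[f] * page[f] for f in weights)

# Two made-up pages with per-factor scores between 0 and 1.
pages = [
    {"title": "Journal article", "academic_rigor": 0.9, "recency": 0.4, "humor": 0.0},
    {"title": "Onion piece", "academic_rigor": 0.1, "recency": 0.8, "humor": 0.95},
]

# The same pages rank differently depending on which preset is active.
rigorous = max(pages, key=lambda p: score(p, presets["economics_research"]))
funny = max(pages, key=lambda p: score(p, presets["satire_browsing"]))
```

The point of the sketch is just that "pre-search curation" is mechanically simple: same pages, same factors, different weights per mood, and the ranking flips accordingly.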