From Carmen Hermosillo’s (aka humdog) 1994 essay Pandora’s Vox
i have seen many people spill their guts on-line, and i did so myself until, at last, i began to see that i had commodified myself. commodification means that you turn something into a product which has a money-value. in the nineteenth century, commodities were made in factories, which karl marx called “the means of production.” capitalists were people who owned the means of production, and the commodities were made by workers who were mostly exploited. i created my interior thoughts as a means of production for the corporation that owned the board i was posting to, and that commodity was being sold to other commodity/consumer entities as entertainment. that means that i sold my soul like a tennis shoe and i derived no profit from the sale of my soul. people who post frequently on boards appear to know that they are factory equipment and tennis shoes, and sometimes trade sends and email about how their contributions are not appreciated by management.
Seventeen years later, it’s still the same, but in one sense it’s worse. Before, it was just selling ads based on traffic. Now we’re processing the text of your posts for sentiment. Processing your social connections to determine whether you or one of your friends is more of an “influencer.” We’re trying to peer into meaning. Typically the concerns about text-mining / social-network-analysis / big-data revolve around privacy, which I believe mostly clouds the issue.
When Google started serving ads on gmail, there was a big brouhaha over Google reading users’ email. In reality, though, almost no one at Google is actually reading any messages. Those that do can’t match any one message back to a real identity, nor do they have the desire to even if they could. Instead, a machine is “extracting value” from the thoughts expressed in your email. In Hermosillo’s words, Google is commodifying you, and that disturbing realization seems to be what motivates “privacy” concerns in gmail and social networks. Rarely are the concerns about how posted information is flowing to humans it wasn’t intended for. Instead, it’s all about information flowing to other machines, which in turn recommodify it for advertising. The latent societal concerns about automatically commodifying human thoughts and behavior that were never intended for commercial use are never expressed.
Why not? Well, probably because we realize that if we use services like gmail and Facebook, we’re going to have to give something in trade. Since most of us aren’t willing to pay a subscription fee, we accept advertising. Society accepts the coarse ad targeting of the traditional mass media, but is a bit squeamish when the targeting feels more personal. We know it’s not personal, but perhaps more importantly, coarse targeting is what we grew up with. Children are growing up in a world where commodified social behavior is de rigueur. I suspect they’ll look at our panics over “privacy” as what they are: rather quaint.
There are two stated assumptions behind ad targeting. The first is that individuals are more likely to act on messages that are relevant to them. This assumption seems fairly obvious. Since I’m a man, the probability that I will purchase a feminine hygiene product for myself is asymptotically zero, whereas the probability that I will purchase some brand of shaving foam is quite a bit higher. The second explicitly stated assumption is that people want relevant ads, because irrelevant ads are annoying. Being in Silicon Valley and IR, I’ve heard this assertion bandied about quite a bit. I’ve even heard people (mostly those working in computational advertising) claim that they like targeted ads. Of course, these are just anecdotes. Does this actually hold?
No. Not in the abstract, and especially not when the techniques are described. 66% don’t want ad targeting, and up to 86% don’t want targeting after they’re told that targeting is done by tracking what websites you visit and your offline behavior.
The best defense of the “people want relevant ads” hypothesis is that if you’re going to be shown advertising, you might as well have it be something you might be interested in. This still assumes that the recipient gets some marginal benefit from advertising (and one probably does), but it still seems a bit silly economically for the recipient. Does the tiny benefit of a highly targeted ad justify commodification of the recipient’s experience? Probably not. Instead the benefit stays with the advertiser and the ad server. The advertiser can spend less money on ads since the precision of the ad buy increases, and the ad server can charge a premium for precisely targeting the ads. Note that the recipient is never a party to this transaction, and so receives at most an indirect, exceedingly slight benefit. The real utility is where the money is changing hands, no place else. To put it another way: so what if recipients don’t care about, or want, targeted ads — we’re going to do it anyway because we make money at it. It’s fine to say that, but computational advertising brokers should say that, and not try to delude themselves into thinking that they’re doing recipients a favor.