Lessons on content curation from the shadow world of intelligence gathering
Content curation, reduced to its most basic functions, is a process of filtering and reporting. What we extract, from where, at what frequency, and to whom we deliver our results all depend on our objectives and abilities.
To really master something—anything—we need to study the techniques of established experts. If I’m learning chess, I might learn the basics by playing with some fellow beginners, but to truly grow into an advanced player, I’ll need to watch the old Russians in Palisades Park.
But who are the masters of the content curation craft? My own tracing of the history of content curation was intentionally limited to the Internet era, but in truth its roots go much deeper. Curation has long been a necessity in the realm of intelligence gathering for national security, and it’s there we’ll find our true experts.
Mind over machine
Echelon is an immensely powerful “listening” network composed of orbiting satellites and ground stations that monitor the world’s civilian telecommunications. Jointly operated by intelligence services in the U.S., the U.K., Australia, Canada and New Zealand, the system is used in the “sorting [of] captured signal traffic, rather than [as] a comprehensive analysis tool” (Wikipedia). Intelligence expert Gordon Thomas writes that it “sifts tens of billions of snippets of information, daily, matching them up,” and it has been estimated that “90 percent of all traffic that flows through the Internet” also flows through Echelon.
Here’s where curation comes into play:
“Suspects, names, key words, phone numbers, and e-mail addresses are all sucked up by NSA satellites—either circling or geopositioned around the earth—and downloaded to the computers. There the data are coded into ‘watch lists,’ then fed into the system that takes the lists on secure lines throughout the U.S. intelligence community.”
-Gordon Thomas, “Gideon’s Spies”
Human analysis of these lists is next, because even Echelon is far from perfect. For instance, it’s often stumped by “noisy or degraded signal[s]” and by the linguistic nuances of “dialects and patois.”
But even at the initial gathering stages, human sources are seen as highly valuable, working in parallel to the supercomputers that process the data fire hose. “A spy on the ground could judge a conversation in its setting, obtain finer details that are lost to even the most sophisticated electronic surveillance.”
Tools to empower, not replace
What does this mean for our version of content curation? Simple: Tools are no substitute for the human brain. We might be building amazing programs for sentiment analysis, for instance, but curation can never be fully automated. After all, our brains are still the most powerful and efficient computing devices on the planet. A certain amount of automated sifting may be necessary given the staggering (and rapidly increasing) amount of data we encounter in our daily lives, but this sifting should always be near our information “intakes” (feeds, email reports, Twitter streams, etc.), and nowhere near our “outputs” (what we share through our blogs, tweets, emails, etc.). There is a point in every content curation cycle at which the human touch is necessary. This point differs for every use case and goal, but it always exists and it always adds value.
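The intake-versus-output split described above can be sketched in a few lines of code. This is a minimal illustration, not a real curation tool; the feed items, keywords, and function names are all hypothetical. The key design point is that automation only narrows the field at the intake, while nothing reaches the output without an explicit human decision:

```python
# Sketch of a curation cycle: automation pre-filters the intake,
# but a human decision gates the output. All data here is made up.

WATCH_KEYWORDS = {"curation", "filtering", "signal"}

def sift_intake(items, keywords=WATCH_KEYWORDS):
    """Automated stage: keep only items mentioning a watched keyword."""
    return [item for item in items
            if any(kw in item.lower() for kw in keywords)]

def publish(items, human_approves):
    """Output stage: nothing is shared without a per-item human judgment."""
    return [item for item in items if human_approves(item)]

feed = [
    "New essay on content curation workflows",
    "Celebrity gossip roundup",
    "Improving signal-to-noise in your reading queue",
]

# The machine narrows the fire hose down to a shortlist...
shortlist = sift_intake(feed)

# ...but only a human (here, a stand-in callback) decides what goes out.
shared = publish(shortlist, human_approves=lambda item: "curation" in item)
```

In this sketch `human_approves` is a placeholder for the point where the human touch enters the cycle; in practice it would be a person reading the shortlist, not a callback.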
Avoid “shiny object syndrome,” and realize that looking for the one true curation tool that will take all the effort out of the process is ultimately self-defeating. Your time is much better spent learning and sharing, while developing personal processes that help you do this.
Isn’t that what your end users want, anyway? I suspect that most of us prefer to see this human footprint in the curation we consume. We want to know that the content we’re reading has been hand-selected by someone we trust as an expert, and not a lifeless set of algorithms—no matter how advanced.