Crowdsourcing of health data: a resource or a liability?

Is it a valid or reliable way to gather information?

EDITOR’S NOTE: Jason Henninger is editor and chief product officer at MedLearn Media.

In recent headlines, the Food and Drug Administration (FDA) is investigating reports of food poisoning linked to Lucky Charms cereal. General Mills’ own investigation found nothing wrong with the product, but that alone is, of course, not enough to satisfy the FDA.

A notable aspect of this story is the role played by crowdsourcing through the website iwaspoisoned.com, where about 3,000 people complained of stomach problems and blamed the cereal. The FDA itself received about 100 complaints through its own reporting channels.

While it is beyond the scope of MedLearn to comment on the veracity of these claims or the inner workings of the FDA or General Mills, the situation raises an interesting question about the nature of crowdsourcing. Is it, to put it bluntly, a valid or trustworthy way to gather information? In the pre-Internet era, the idea of getting reliable information from mass sources would have been dismissed as hearsay at worst and popular wisdom at best.

The term crowdsourcing was coined by the editors of Wired magazine in 2006, who used it to mean outsourcing a task to a crowd (closer to what we would now call the gig economy). The idea has since evolved to mean a sort of voluntary public database. Wikipedia, probably the most widely used participatory website (although it predates the term itself), is an incredible achievement in collaboration and a great place to start learning about a vast array of topics.

But no matter how much accurate material can be found there, Wikipedia should never be considered a definitive source. After all, academic, scientific, and journalistic research all have rigorous standards; crowdsourcing has at most publishing guidelines. That’s not to say crowdsourcing doesn’t have value. But it is not inherently reliable.

The complexity of the problem may affect the reliability of the information collected. A website called Does the Dog Die? crowdsources warnings about potentially triggering scenes in movies. Since the reports relate to something easily verifiable, it is a fairly uncontroversial website with little to complicate its conclusions. Compare that to the infamous example of Redditors taking it upon themselves to identify the Boston Marathon bombers, resulting in the harassment of several innocent people.

The website in question here, iwaspoisoned.com, requires no login and no input from doctors (although the site does employ doctors as advisors). In other words, there is no proof. After all, it’s crowdsourcing, and crowdsourcing doesn’t require proof. But one truism of research is that correlation does not imply causation, and another is that the plural of anecdote is not data. To the site’s credit, it never claims to provide definitive proof of food poisoning outbreaks; it simply provides a platform for reporting and offers alerts. It can and does pass its findings along to government agencies such as the FDA, and there is probably value in doing so.

And yet, you don’t have to think very long to spot some pitfalls here. First and foremost, people without medical training are performing what amounts to self-diagnosis. That can be a dangerous practice for sick people, and a misleading one when made public. Someone might get sick from the spaghetti they ate over the weekend but think: wait, I had Lucky Charms for breakfast. That must be it, because all these other people are saying the same thing.

It will be interesting to follow this story, especially as the FDA investigates the claims. Will iwaspoisoned.com emerge as an excellent source of real-time data at a scale the FDA cannot match, or as a platform for speculation? Either way, it might be time to switch to a less sugary breakfast.
