4 December 2018

WHY YOU NEED TO READ FULL TEXTS (PART 1)


INTRODUCTION
It’s a familiar experience: someone makes a claim on Facebook (“Is butter a carb? Yes, it is!”), someone else chimes in to challenge them, and the first person responds with a link to… a study abstract. To be sure, full texts are not always freely available, and thus can’t always be presented as evidence for one’s claims, but there is nonetheless something of a phenomenon of folks relying on study abstracts to justify their views.

Interestingly, for all the comments I’ve seen mocking the quintessential ‘abstract warrior’, I’ve seen far less in the way of substantive discussion of why it’s actually a problem to rely on abstracts in this way. After all, abstracts usually do succeed in presenting the most important parts of a study, so one almost can’t blame the aforementioned warrior for riding their abstracts into battle. Unsurprisingly, though, this approach is ultimately a highly problematic one. That’s why we’re here.

In this new series, I want to lay out a number of problems that arise from relying on study abstracts when trying to make evidence-based decisions. I really want to emphasize the practical aspect of this, as there is truth to the cliché that one can get overly technical and approach these issues in a way that is ultimately one-dimensional and detached from real-world practice. Thus, I’m going to do my best to come at this from the perspective of the practitioner: someone who wants to apply what they’re learning, rather than someone who merely wants to learn.

With that said, let’s get going.

PROBLEM #1: ABSTRACTS EXCLUDE ALMOST THE ENTIRETY OF A STUDY’S METHODOLOGY, LIMITING YOUR ABILITY TO ASSESS THE STUDY’S EXTERNAL VALIDITY.
In case this isn’t self-evident: this is a hugely important fact. Given their extreme brevity, abstracts understandably focus disproportionately on the results of the relevant study. As such, all but the most salient facts about the study’s methodology are simply left out. While this, again, is understandable, it’s highly problematic for the ‘abstract warrior’, as a study’s weaknesses are very often palpably built into its methods.

For example, I’ll never forget a study which compared the effects of a carb load to the effects of a fat load, finding that both resulted in the storage of fat. What the abstract failed to mention, however, is that the “fat” load consisted of a combination of heavy cream and carbohydrates, with a total carbohydrate content of 25g (if I recall correctly; precision isn’t essential for this example). As such, the study – rather than being a comparison of a carb load to a fat load – was a de facto comparison of a carb load versus a fat load combined with a modest amount of carbohydrates. While this may sound like an insignificant difference, it ends up being hugely misleading when the study is used to justify the claim that fat can be stored in the absence of insulin, as even the 25g of carbs contained in the “fat” load were likely sufficient to elicit a significant insulin response.

Ultimately, it’s essential to expose yourself to the entirety of a study’s methodology in order to accurately assess the study’s external validity – the degree to which its results can reasonably be applied to non-study, ‘real world’ conditions. Without reading the full text, you’re simply assuming methodological strength, and thus you run a real risk of drawing practical conclusions that fall outside the scope of what the study actually demonstrates.
