This seems pretty likely to be a placebo effect, at least in the human trials, which are the only part of the paper I read. The cited paper [0] was pre-registered on clinicaltrials.gov [1] -- which is great, and everyone in the field should do so. However, looking at what they pre-registered, we have:
- Two different diet treatment regimes: fasting-like for 3 months followed by Mediterranean diet (FMD), and ketogenic diet (KD).
- Control diet was simply telling people to eat the same way as usual. So there's no real accounting for how well a placebo would do here.
- Measurements included a 54-question survey, adverse event counts, and various lab measurements. These measurements were taken at start, 1-month, 3-month, and 6-month intervals.
The problem, then, is that what was reported was:
- Results of the first half of the first treatment (fasting-like for 3 months) for a subset of the measurements. What if things only worked in the second half? Or if things worked only for KD? So many implicit comparisons here.
- Comparison against the control group at 3 months, with reported p-values. Even though one of the reported measurements was the overall survey results, all of the values reported are p-values without any mention, that I can find, of multiple hypothesis testing across all measurements. This comes despite the fact that for all of the mouse results, they explicitly state they used Bonferroni correction.
- Baseline performance which involved no placebo. How many people would have improved if simply given some bullshit diet? Or a vegetarian diet, or anything that gave them the impression it was a treatment? Especially with surveys, the placebo effect is a huge thing to look out for. Their harder metrics, like the lab results, are a more mixed bag, with WBC dropping for fasting subjects. Sure, it recovers once the 3 months are over, but then the supposed quality-of-life scores drop; you can't have it both ways, though their writing makes it sound that way.
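For context on the Bonferroni point: the correction they applied to the mouse data is essentially a one-liner, which makes its absence from the human results more conspicuous. A minimal sketch in Python, with made-up p-values (the numbers below are illustrative, not taken from the paper):

```python
# Bonferroni correction across m measurements.
# These p-values are invented for illustration, not from the paper.
raw_p = [0.030, 0.012, 0.040, 0.200, 0.002]  # one p-value per measurement
m = len(raw_p)

# Bonferroni: multiply each raw p-value by the number of tests, cap at 1.0.
adjusted = [min(p * m, 1.0) for p in raw_p]

alpha = 0.05
significant = [p_adj < alpha for p_adj in adjusted]

print(adjusted)
print(significant)  # only the 0.002 result survives correction here
```

With five measurements, a raw p-value of 0.03 becomes 0.15 after correction; the more tests you report, the stronger each individual result has to be. That's exactly why reporting uncorrected p-values across a 54-question survey plus multiple lab measurements is suspect.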
I'm not saying it isn't a great result from a bio standpoint; I'm sure the Cell reviewers found the mouse-model results compelling. I just don't see any way to conclude the article's broad, sweeping title from the actual content of the paper, and it's unethical to do so without strong evidence.
There is a somewhat interesting philosophical question here. What does it mean for this result to be the 'placebo effect'?
That is, supposing it is in some sense caused by the psyche, and fasting reliably produces that effect...is there a sense in which fasting is not its cause?
Clearly, for sugar pill vs. drug, we can measure the component which is 'placebo'. But here, since it is immeasurable...is it possible to call it a placebo effect? Or should we refer to effects derived psychologically from unique subjective experiences in some other way?
First, one study doesn't tell us that fasting "reliably" produces this effect. Second, if it were a placebo effect, then fasting would not reliably produce it: you could likely eliminate the effect by telling patients that there will very likely be no effect whatsoever on their symptoms.
>But here, since it is immeasurable...is it possible to call it a placebo effect?
If it is a placebo effect, you would probably be able to measure it by testing various other non-fasting diets. The difficulty lies in the inability to have a sham fasting diet, but you could likely mitigate that by telling patients there will likely be no effect on their symptoms from fasting.
Ya, you're totally right that one study does not prove reliability.
I guess my point is more theoretical. Let's say it was reliable, and let's say that other non-fasting diets didn't produce the effect. Further, let's say that telling them it had no effect didn't prevent it either.
Now, given all that, it's still possible that there is, in some sense, a psychological component (i.e. how you 'feel' when fasting) that causes you to get better in some way. I have always thought this about Airborne and such, for instance. They are total bullshit products that absolutely, categorically do not work. But I drink them when I'm sick anyway, because the effervescence and the flavor feel to me like something that works, so I feel like their placebo effect is better than that of other things.
I suppose what I'm getting at is that some activities/experiences can engineer better placebo effects than others due to the subjective experience they produce. And I wonder, then, in what sense that is separable from the thing itself.
[0] http://www.cell.com/cell-reports/pdfExtended/S2211-1247(16)3...
[1] https://clinicaltrials.gov/ct2/show/NCT01538355?term=NCT0153...