This month’s edition of the American Journal of Clinical
Nutrition carries a very important review of the epidemiological data relating
food to cancer[1].
It is well written and measured in its conclusions but, reading between the
lines, it is quite simply damning of the quality of the epidemiological
evidence base linking food intake and cancer. The authors started off with The Boston Cooking-School Cook Book. Using a random number generator to select
page numbers, they searched the cook-book for recipes. All of the unique
ingredients in each recipe were identified and the process was repeated until 50
unique food ingredients had been identified. The next stage of the process was to
explore the scientific literature to examine the most recent studies, if any,
linking any one of the 50 ingredients to cancer. The 10 most recent studies
were selected and, if fewer than 10 studies were available, synonyms (e.g.
mutton for lamb) were used to explore further the availability of studies. In
addition to individual studies, the authors also searched for meta-analyses,
which combine data from several individual studies to increase
statistical power.
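As a rough sketch of that sampling step (the page numbers and recipe contents here are entirely hypothetical, and `sample_ingredients` is an invented name, not anything from the paper), drawing random pages until 50 unique ingredients accumulate might look like this:

```python
import random

def sample_ingredients(recipes_by_page, n_unique=50, seed=1):
    """Draw cookbook pages at random (with replacement) and collect the
    unique ingredients of the recipes on each drawn page, stopping once
    n_unique distinct ingredients have been found."""
    rng = random.Random(seed)
    pages = list(recipes_by_page)
    found = set()
    for _ in range(10000):  # safety cap so the loop always terminates
        page = rng.choice(pages)
        found.update(recipes_by_page[page])
        if len(found) >= n_unique:
            break
    return found
```

In practice the stopping rule matters: because whole recipes are drawn, the final set can slightly overshoot the target count.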
The next stage of the process was to extract data from each
individual study or meta-analysis. This involved an examination of the abstract
with an emphasis on the authors’ conclusions, an analysis of the statistical
methodology used, and an assessment of the exposure levels examined for each
ingredient. Of the 50 ingredients randomly chosen from the cookbook, 40 (80%)
were found to have been the subject of scientific investigation into their links with
cancer. The food ingredients included: veal, salt, pepper spice, flour, egg,
bread, pork, butter, tomato, lemon, duck, onion, celery, carrot, parsley, mace,
sherry, olive, mushroom, tripe, milk, cheese, coffee, bacon, sugar, lobster,
potato, beef, lamb, mustard, nuts, wine, peas, corn, cinnamon, cayenne, orange,
tea, rum and raisin. A total of 216 publications were found linking these
ingredients to cancer. Of the 40 ingredients, 36 were identified in at least
one study as either increasing or decreasing the risk of cancer. In their
examination of the statistical analysis used, the authors of the review
concluded that, of the studies that claimed an increased risk of cancer, 75%
were associated with “weak or non-nominally significant” effects; for those
that reported a decreased risk, the corresponding figure was 76%. In 65% of the studies, these effects were
based on comparisons of extremes of consumption, such as >43 drinks per week
versus none, or “often” compared with “never”.
The authors compared the calculated risk from individual studies with
the risk calculated from meta-analyses in which like studies were pooled. The latter
showed a narrow range of risk, whereas the former showed a huge range of
positive and negative risk.
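For readers unfamiliar with why pooling narrows the range: a standard fixed-effect inverse-variance meta-analysis (a generic textbook method, sketched here for illustration and not the review's exact procedure) averages log relative risks weighted by their precision, so noisy extreme estimates are down-weighted:

```python
import math

def pool_relative_risks(rrs, ses):
    """Fixed-effect inverse-variance pooling.

    rrs: relative risks from individual studies
    ses: standard errors of the corresponding log relative risks
    Returns the pooled relative risk and its standard error.
    """
    log_rrs = [math.log(rr) for rr in rrs]
    weights = [1.0 / se ** 2 for se in ses]  # precision weights
    pooled_log = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return math.exp(pooled_log), pooled_se
```

With two equally precise studies reporting RR = 0.5 and RR = 2.0, the pooled estimate lands at 1.0: exactly the "narrow range around no effect" pattern the review describes.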
The authors conclude that the vast majority of claims for
increased or decreased risk were based on “weak statistical evidence”.
Moreover, they show an appalling practice of relegating negative or weak
results to the fine detail of the text of a paper while excluding them from the
abstract, which is the part most likely to be read and almost certainly the basis
of any media interpretation of the study.
All in all, this is a damning analysis of the field of
nutritional epidemiology. It would be wrong to throw the baby out with the
bathwater, since nutritional epidemiology has been the basis for many
substantiated diet-disease links (folic acid and neural tube defects,
dietary lipids and atherosclerosis, calcium intake and osteoporosis, and so
on). The problem with diet and cancer is
that intervention studies to prove a cause-and-effect
relationship experimentally are simply not possible. Heart disease relates to one organ, the
heart, whereas cancer can relate to almost all organs. In the study of diet and
heart disease we can use biomarkers (plasma HDL and LDL cholesterol for
example) while no such biomarkers exist for cancer.
In the analysis of how extremes of exposure were used, the
authors found that the meta-analysis approach was almost exclusively
(92%) based on extremes of consumption of the particular food ingredient.
Epidemiological studies are, by definition, very large and, as such, the tools used to
measure diet must be relatively simple; this usually means a food
frequency questionnaire, which examines frequency of intake. However,
nutritional epidemiology absolutely ignores the well-known phenomenon of
under-reporting of food intake. Thus,
when extremes of food intake are compared, they are largely based on truly
flawed measures of food intake. This was not considered in the present study
and, more often than not, insufficient data are presented in papers to allow
readers to draw any conclusions about the extent of under-reporting from the
match between reported energy intake and estimated energy requirements. The
editorial boards of journals should insist that all studies involving food
intake have a section in which the authors explain the extent of energy
under-reporting and the specific implications for the study in question.
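One simple screen of the kind such an editorial policy might demand is a Goldberg-type cut-off, in which reported energy intake is divided by estimated basal metabolic rate and implausibly low ratios are flagged. The sketch below assumes daily kcal figures, and the 1.35 cut-off is an illustrative assumption rather than a value taken from the review:

```python
def flags_under_reporting(reported_kcal_per_day, bmr_kcal_per_day, cutoff=1.35):
    """Return True when the ratio of reported energy intake to basal
    metabolic rate falls below the cut-off, which suggests the subject
    under-reported their food intake. The 1.35 cut-off is illustrative;
    published cut-offs vary with study design and sample size."""
    ei_to_bmr = reported_kcal_per_day / bmr_kcal_per_day
    return ei_to_bmr < cutoff
```

A subject reporting 1,500 kcal/day against a 1,500 kcal/day BMR (ratio 1.0) would be flagged; one reporting 2,600 kcal/day would not.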
No doubt, this damning review of the epidemiology of diet and
cancer will be swept under the carpet by the field of nutritional epidemiology.
However, this blogger has always held the view that bad science will always be
found out.
[1] Schoenfeld JD & Ioannidis JPA (2013) “Is everything we eat associated with cancer? A systematic cookbook review” Am J Clin Nutr 97, 127.