Wednesday, November 20, 2013

Food allergy ~ a fiscal and factual view


According to a very recent paper published in the Journal of the American Medical Association (Pediatrics)[1], food allergy costs a staggering $25 billion annually. This estimate was based on a survey of 1,643 caregivers of children with food allergies. These costs are, by any measure, breathtaking. For comparison, in the case of obesity, for which we have very accurate prevalence data, it is estimated that by 2030 the annual costs to the US will range somewhere between $48 and $66 billion[2]. How on earth can childhood food allergy cost as much as obesity? The authors break the costs down as 17% due to the direct costs of clinical care and 83% due to hidden costs such as time taken off work by parents to look after sick children. However, the key figure in calculating the true national cost to the US is the prevalence of food allergy, which the authors cite as 8%. That figure was generated by the authors in an earlier study of US children, based on a large survey involving the carers of over 38,000 children. Therein lies a huge problem: the figure of 8% is self-reported.

Two recent studies have completed a systematic review and meta-analysis of the literature in this area. The first[3] focused on the prevalence of food allergy in Europe and found that the lifetime prevalence was 17.3%, meaning that at some point in their lives about 1 in 6 people had encountered an allergy to food. However, the point estimate, that is the percentage at a specific time, was only 5.9%. Both these values refer to self-reported food allergy. When food allergy diagnosis was made using a blood test (specific IgE), the point estimate was 10.1%, falling to 2.7% when the more specific skin-prick test was applied. Importantly, when the gold-standard test of food challenge was used, the value plummeted to just 0.9%. The second study[4] focused on allergies to plant foods and accessed data from 36 studies with over 250,000 children. The majority of studies used self-reported values for the prevalence of food allergy. Taking fruit allergies as an example, some 21 studies using self-reporting methods found the prevalence of fruit allergy to be as high as 6.6%, with most studies coming in at about 1%. However, with skin-prick tests, the prevalence fell to less than 0.1%. A similar pattern was found for vegetable and nut allergies. The authors conclude: “Prevalence estimate of plant food allergy based on self-reported symptoms should be treated with caution”.
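To see how sensitive the headline cost figure is to the prevalence estimate, a back-of-the-envelope rescaling can be sketched. The $25 billion and 8% figures come from the JAMA Pediatrics paper; the alternative prevalence values are the European point estimates cited above; the linear scaling of cost with prevalence, and the mixing of European prevalence data with a US cost estimate, are my own simplifying assumptions, purely for illustration.

```python
# Back-of-the-envelope: rescale the $25bn annual cost estimate, which rests
# on an 8% self-reported prevalence, to the prevalence found with stricter
# diagnostic methods. Assumes costs scale linearly with prevalence.

REPORTED_COST_BN = 25.0          # JAMA Pediatrics estimate, $ billions/year
SELF_REPORTED_PREVALENCE = 0.08  # the 8% figure used in that estimate

prevalence_by_method = {
    "self-reported (European point estimate)": 0.059,
    "specific IgE blood test": 0.101,
    "skin-prick test": 0.027,
    "food challenge (gold standard)": 0.009,
}

for method, prevalence in prevalence_by_method.items():
    rescaled = REPORTED_COST_BN * prevalence / SELF_REPORTED_PREVALENCE
    print(f"{method}: ${rescaled:.1f} bn/year")
```

On these assumptions, the food-challenge prevalence of 0.9% would shrink the $25 billion figure to under $3 billion per year.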

A final study explored the extreme event of fatal food-allergic reactions[5]. The authors searched the literature from 1946 to 2012 and identified 13 studies, involving 165 million food-allergic person-years, which recorded 240 fatal food-allergic reactions, and 14 studies which explored such fatal events but found none. The estimated fatality rate among allergy sufferers was 1.8 per million person-years, which is less than that associated with accidental death. Peanut allergy, which is often cited in the popular media as the most dramatic of food allergies, had a mortality rate of 2.3 per million person-years. Note that the term “person-years” refers specifically to the population who are in fact food allergic and excludes the many who are not.
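As a rough sanity check on these numbers, one can compute a crude rate from the totals quoted above. This naive division is my own; the paper's 1.8 figure comes from a formal meta-analysis, so the two need not match exactly, but they are of the same order.

```python
# Crude fatality rate among food-allergic individuals: deaths per million
# person-years, from the totals quoted above (240 deaths over 165 million
# food-allergic person-years across the identified studies).
deaths = 240
person_years_millions = 165

crude_rate = deaths / person_years_millions  # per million person-years
print(f"{crude_rate:.2f} fatal reactions per million person-years")
# The pooled meta-analytic estimate (1.8) is of the same order of magnitude.
```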

All of these data show that the popular estimates of food allergy and its consequences are grossly inflated, as are the putative economic costs of food allergy to carers. The true diagnosis of food allergy requires a slow, methodical approach, ultimately confirmed by an exclusion diet and the re-introduction of foods under clinical supervision. Skin-prick tests are the next level down, and after that the diagnostic tools range from low-reliability ones such as blood IgE to downright quackery. For the few who are truly allergic to foods based on a proper diagnosis, the task of avoiding the offending food is a lifelong struggle. For the vast majority, where the diagnosis is self-made or based on some popular quackology, the allergy is no more than a social accessory.




[1] Gupta et al (2013) JAMA Pediatrics 167, 1026-1031
[2] Wang YC et al (2011) Lancet; 378: 815–25
[3] Nwaru et al (2013) Allergy, DOI 10 11 11
[4] Zuidmeer et al (2013) J Allergy Clin Immunol, 121, 1210-1219
[5] Umasunthar T et al (2013) Clin Exp Allergy (October 5th e print)

Monday, November 4, 2013

The new crop revolution: it's red and blue, not green


Red is the colour of socialism representing the blood of the proletariat in their struggle against capitalism. Blue is the colour of conservatism after the concept of the blue ribbon, which signified high quality. When the ecological movement started as a political group in Germany in the late sixties, the obvious colour was green – the green of nature’s plains, forests and pastures. The reality is that plants are green because that is the fraction of white light (sunlight) that plants don’t want. They reflect back this unwanted light and that is the green of natural vegetation. In fact, plants only use the red and blue fractions of sunlight and thus are politically fully balanced! So, what would happen if instead of offering sunlight with an option to reflect green light, plants were given what they want, red and blue light? That is one of the pillars upon which a Dutch biotech company, Plantlab[1], is built. A second is their vision for plant agriculture in the future based on how global cities will dominate our planet.

According to a new report by the global consultancy company McKinsey[2], the growth of the world's population from its present level of 6-7 billion to 9 billion by 2050 will be dominated by urban growth. Every year, the world's population expands by 65 million people, equivalent to 10 cities the size of Chicago or 5 Londons. At present, the top 600 cities in the world account for 65% of GDP growth, and while that will remain so in 2025, the membership of this elite 600 will change, bringing in cities that today are simply not household names, including over 130 completely new cities, 100 from China alone.

Traditionally, cities were fed from farms in their hinterland. Today, that hinterland stretches across continents. For example, Spain is a major supplier of tomatoes to Muscovites: the tomatoes are picked in Spain 5 days before they are shipped and sold in Moscow. The dream of Plantlab is that the cities of the future will meet their vegetable and fruit needs through high-throughput indoor farming. This will involve exposing plants to only red and blue light in highly controlled climatic environments that can be managed on a minute-by-minute basis and adjusted remotely, with one control center managing dozens of these plant production units. In addition to light efficiency, water efficiency is the utmost priority in this new vision of plant production. Traditional agriculture is a great waster of water, and all the predictions of the future fragility of the food chain point to water as the weakest link. Irrigation of agricultural crops has laid waste to the great subterranean aquifers such as the Ogallala in the US, and to above-ground water bodies such as the Aral Sea in Central Asia, magnificently portrayed on Google Earth time-lapse maps[3]. In the vegetable farm of the future, water efficiency is almost 100%, with the only water loss being the water that exits the production unit inside the cells of the lettuce or tomato or whatever crop is grown.

From a consumer point of view, this system might have the added bonus that neither weed killers nor pesticides are needed, simply because the plants are incredibly healthy in this “plant-centric” environment. Plants grown in sunlight are weaker and need the intervention of the farmer, by tillage or chemistry, to protect the crops from weeds and pests. Not so for the plants that thrive on red and blue light and on just-in-time technology delivering the right nutrients for growth at the right temperature for every second of the day.

From a nutritional point of view, we need to look both at the potential of these new farms and back to the McKinsey Global Institute reports on cities. As regards the latter, the average value for any statistic hides some crucial data. For example, they point out that nationally, the number of children in China will fall over the next few decades. But in the new cities, there will be a growth of some 7 million new infants. Equally, the number of households will grow but the number of persons per household will fall. These demographic profiles will drive the nutritional needs of the cities of the future and, by definition, the future world population. Returning to the plant production facilities, a daily supply of 200 grams of fruit and vegetables per head requires 1 square meter of growing area per person. Thus for 100,000 persons we need 100,000 square meters, which could be met by ten floors of factory farming with 10,000 square meters of growing area per floor. By my calculations, the two main sports stadia in Dublin (the Aviva Stadium and Croke Park) together could cover these daily needs of half the population of the city. In this vision, we could return to the era of agriculture in our hinterland and, as Gertjan Meeuws, CEO of Plantlab, argues in his TEDx talk (available on their website), we could move from food miles to food steps. Of course any new technology poses new challenges, and who is to say that a new plant virus could not enter such a system? But such biological disasters also happen in field agriculture, as presently faced by the Californian citrus industry[4]. At least in the indoor plant system, any infection in one unit can be destroyed without any risk to another unit.
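The floor-space arithmetic above can be checked with a short sketch. The yield figure (1 square meter supplying 200 g of produce per person per day) is the one quoted above; the city size and floor count are just the worked example's assumptions.

```python
# Floor space needed for a city's daily fruit-and-vegetable supply,
# assuming 1 square meter of growing area supplies one person's
# 200 g/day of fruit and vegetables.
M2_PER_PERSON = 1       # growing area needed per person
population = 100_000    # worked example: a city district of 100,000 people
floors = 10             # stacked over ten floors of factory farming

total_area_m2 = population * M2_PER_PERSON
area_per_floor_m2 = total_area_m2 / floors

print(f"total growing area: {total_area_m2:,} m^2")
print(f"per floor ({floors} floors): {area_per_floor_m2:,.0f} m^2")
# -> 100,000 m^2 in total, i.e. 10,000 m^2 per floor
```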

Feeding the world is a truly absorbing technological challenge and all technologies will be needed. Plantlab looks not only at mega multi-storey production facilities; they are also thinking of this technology in supermarkets, in restaurants and even in a version for your own kitchen, growing crops such as herbs and condiments. The Dutch are ideally suited to lead man's struggle with nature and water: much of Holland lies below sea level and the country relies on the strength of its dykes to stay viable.


Wednesday, October 2, 2013

Food addiction: Myth or reality?

Animal models appear to show that certain foods (usually the so-called “high-fat, high-sugar and high-salt” foods) can be addictive. However, this experimental model of addiction bears no relationship to addiction in humans. In his book “The End of Overeating”, David Kessler comments on this animal model of food addiction, one he incidentally finds attractive as an explanation of human obesity. Citing work from Italian researchers, which showed that in the short term a cheese-flavoured snack food increased levels of dopamine in rat brains, he writes thus: “Over time, habituation set in, dopamine levels declined and food lost its capacity to activate their behaviour. But there’s more to the story. It turns out that if the stimulus is powerful enough, or administered intermittently enough, the brain may not curb its dopamine response after all. Desire remains high. We see this with cocaine use, which does not result in habituation”. Effectively, one can trick the rat and then make a quick jump to human cocaine addiction. Simple, isn't it?

Not so, according to a recent review by researchers at the Department of Psychiatry at the University of Cambridge[1],[2]. In their review, they begin by distinguishing between behavioural addictions such as gambling, which have no addictive agent (betting slips per se are not addictive), and substance addictions, which are agent-dependent. Alcohol and cocaine are examples of agents that can be addictive. Thus the first challenge to the food addiction model is to identify the agent. It can't be “fat”, since a totally fat-free diet is effectively lethal to populations – reproduction becomes impossible. It can't be “high-fat”, since olive oil, the elixir of all our ailments according to many, is pure, 100% undiluted fat ~ the real thing. The normal brain relies solely on glucose as a fuel, so if “sugars” are the agent, we have a problem. If it's a specific cocktail of fat, sugar and salt, then that needs to be articulated in terms of the human diet, and as of now no such norms exist. So the putative addictive agent in food is utterly ill-defined. The authors go on to point out that the so-called addictive hyperpalatable foods are widely available and widely consumed but, as yet, are not a widespread public health problem. Thus they argue that in addition to some vague and as yet undefined cut-off above which addiction may occur, researchers will have to find other factors to explain why some people might become addicted whereas others do not. It could be a genetic factor, or one dependent on alcohol intake, a sedentary lifestyle, stature, age or gender, or all of the above. The concept of food addiction, particularly in relation to obesity, might be popular with wannabe celeb scientists, but it is as imprecise a concept as one could possibly imagine.

The clinical management of addiction uses a standardised guideline to define substance dependence, based on the “Diagnostic and Statistical Manual of Mental Disorders, fourth edition (DSM-IV)”. There are 7 criteria to be considered within this tool, all of which help in forming a judgment on addictions. The first deals with “tolerance”, specifically the need for the user to seek ever-increasing amounts to reach the desired level of intoxication. This is impossible to apply to food addiction since we know neither the exact agent nor its dose, nor its physiological, genetic, social or lifestyle dependencies. The second relates to withdrawal symptoms, and no such data exist for humans and their food habits. The manual refers to symptoms such as shakes and sweats! The third is a persistent desire for, and unsuccessful attempts to cut, drug use. Overweight persons certainly wish to rid themselves of excess fat and try repeatedly to do so, but linking this to a specific and putative food addiction agent is not supported by scientific data. The fourth describes the taking of larger amounts of the drug than intended. This is impossible to assess for food since we don't know the agent or its intoxicating dose. The fifth recognises that a great deal of time is spent getting, using and recovering from the drug. Take a walk in Tesco or Wal-Mart! The sixth deals with the effect the drug has on the pursuit of important social, occupational or recreational activities. It's hard to think of work absenteeism arising from the pursuit of highly palatable and putatively “addictive” foods. The seventh and final criterion deals with continued use of the drug by a user well aware of its dire consequences for health and social well-being. Again, it is impossible to see how this can apply to food.

The authors do, however, point out that certain eating patterns come near to the DSM-IV criteria, most specifically Binge Eating Disorder (BED), which they say is characterized by: “…recurrent episodes (‘binges’) of uncontrolled, often rapid consumption of large amounts of food, usually in isolation, even in the absence of hunger. This eating behaviour persists despite physical discomfort and binges are often associated with feelings of guilt and disgust”. This is the closest that psychiatrists see food as approaching addiction, though researchers at Yale are working on an adaptation of DSM-IV to score and quantify food addictions[3]. This blogger's reading of that work is that it will be a quite significant adaptation, if not a total re-write. Like it or not, this will sustain the nutrition-psychiatry gulf in understanding and characterizing addiction.

Food addiction has now begun to attract the interest of other groups, most notably legal and ethical researchers[4]. If, and it is a big if, research were to point to a possible addiction among some people to a particular food or nutrient or cocktails thereof, then how do we deal with this legally? Do we expect to see certain foods removed from the supermarket shelves and driven into the underworld of dodgy dealing, such as the illegal cheesecake (high-fat, high-sugar, high-salt food par excellence) the Cambridge scientists refer to? Of course this is farcical, but where else would a regulatory and policy framework go to tackle this problem, if indeed such behaviour is deemed to be a problem in the first place? For those who want to explore this further, watch the YouTube video of a stand-up comedian (the lead author of references 1 & 2 above), who is also a board-certified psychiatrist, on the subject of food addiction[5].










[1] Ziauddeen H & Fletcher PC (2013) Obesity Reviews, 14, 19-28
[2] Ziauddeen H et al (2012) Nature Reviews Neuroscience, 13, 279-286
[3] Gearhardt A et al (2009) Appetite, 52, 430-436
[4] Gearhardt A et al (2013) J Law Med Ethics. 41 Suppl 1:46-9

Monday, July 29, 2013

Childhood IQ and maternal iodine status

About a year ago, I blogged on the subject of iodine and its increasing importance in public health nutrition in developed countries[1]. Of course, iodine deficiency is one of the three elements of global hidden hunger. According to a letter to the Lancet from the WHO, “urinary iodine has been collected for 92% of the world's population and globally, more than 1·9 billion individuals have inadequate iodine nutrition (defined as urinary iodine excretion <100 μg/L), of whom 285 million are school-aged children”[2]. In developing countries, iodine deficiency primarily affects energy metabolism and reduces the capacity for physical work. More recently, the spotlight has also been shone on the developed world, where the role of iodine in brain development is the main concern. Iodine is a component of the thyroid hormones, which play a central role in the brain development of the fetus. In a recent Lancet paper[3], the offspring of 1,040 women who had a spot urine sample analysed for iodine during their first trimester were studied. Only those children who had an IQ test at age 8 years and a reading-ability test at age 9 years were included. The study drew on a longitudinal health study of mothers and their offspring, the Avon Longitudinal Study of Parents and Children[4]. The iodine-to-creatinine ratio, a measure recommended by the WHO, was used to determine maternal iodine status, and these levels were classified above or below a WHO-recommended cut-off point (150 μg/g). The average urinary value was 91 μg/g, indicating that this population had a mild-to-moderate level of iodine deficiency (two-thirds of the mothers were below the WHO cut-off). The problem with cut-off values is that they are often, if not always, set by scientists working in the relevant field, who tend (in my view) always to go for the highest value so that their pet problem is seen to be a really important issue. This isn't skepticism on my part. It is cynicism.
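The dichotomous classification used in the study can be sketched as follows. The 150 μg/g cut-off and the cohort average of 91 μg/g are from the paper; the function name and structure are mine.

```python
# Classify maternal iodine status by the urinary iodine-to-creatinine ratio,
# against the WHO-recommended cut-off applied in the Lancet study
# (150 micrograms of iodine per gram of creatinine).
WHO_CUTOFF_UG_PER_G = 150

def iodine_status(iodine_to_creatinine_ratio: float) -> str:
    """Return 'sufficient' or 'deficient' relative to the WHO cut-off."""
    if iodine_to_creatinine_ratio >= WHO_CUTOFF_UG_PER_G:
        return "sufficient"
    return "deficient"

# The cohort's average value of 91 ug/g falls below the cut-off:
print(iodine_status(91))   # -> deficient
print(iodine_status(160))  # -> sufficient
```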

The research output of this paper would make me retract my cynicism, because what the authors found was a direct association between cognitive performance and maternal iodine status. Looking first at the mothers themselves, those below the 150 μg/g cut-off tended to be younger and to have had less education than mothers above the cut-off. For the children of the mothers below the cut-off, followed up at age 8 years, IQ values were significantly lower for total, verbal and performance function. At 9 years of age, their reading ability was also lower: words per minute, accuracy, comprehension and reading score. Clearly, a child's cognitive function could be influenced by a wide range of confounding factors, and the authors accounted for a total of 21 possible confounders such as maternal age, life events, breastfeeding, alcohol and tobacco intake, the use of fish-oil supplements in pregnancy, birth weight, maternal depression and so on. The inclusion of all these variables did not alter the conclusions.

Using a simple cut-off can sometimes be a bit too simple, because it won't show whether a trend exists, so the authors re-analysed the data using a continuous regression analysis rather than the dichotomous cut-off approach. In general, they found a positive linear relationship between maternal iodine status and subsequent childhood cognitive function, again after allowing for all manner of confounding variables.

These findings are really very important. Basically, soil levels determine dietary iodine levels, and there are many regions of Europe that were known in the past as “goitrogenic regions” by virtue of low soil iodine levels. All that changed when dairy farmers began using iodophors to clean milking byres, which boosted the iodine content of milk. That practice has now vanished, and we are back to reliance mainly on soil levels. As I wrote in my previous blog: “Ironically, the many pregnant women who shift to organic foods in the belief that this will help ensure as healthy a baby as possible will see a very significant fall in iodine intake. Organic animal production greatly restricts the use of mineral and vitamin supplements in animal feeds. A recent survey of the iodine content of milk from organic and conventional farms shows that organic milk is 42% lower in iodine than conventional milk, and milk accounts for almost half the UK iodine intake. In fact, pregnant women should be counselled to avoid organic milk”.
Now here is an interesting question. Which would you rather have, if you faced the utterly unthinkable choice: an overweight 8-year-old or a cognitively impaired 8-year-old? Childhood obesity is very important. But there are other equally important, if not more important, issues for childhood nutritional wellbeing. Methinks iodine is top of the future list.





[1] “Iodine, now a problem in developed countries”, July 23rd, 2012 - http://tinyurl.com/ox3dloj
[2] De Benoist et al (2003) Lancet, 362 (9398), 1859-1860
[3] Bath SC et al (2013) Lancet, May 21, e-pub ahead of print