3 Myths about Genetically Modified Crops

Genetically modified oilseed rape, one of the four main commercial GM crops. Photograph: David Levene


The debate about GM crops has reached a new level, with many countries deciding on their fate. Amid all the shrill cacophony, we have indeed been fed many myths about them. Scientific American published a nice article on the subject a few days ago, titled ‘3 Myths about Genetically Modified Crops’, which looked in some detail at the three most important of these myths.

Let's have a look, shall we?

Myth 1: GM crops have bred superweeds

Verdict: FALSE

This has been a contentious issue for more than a decade now.

US farmers had widely adopted GM cotton engineered to tolerate the herbicide glyphosate, which is marketed as Roundup by Monsanto in St Louis, Missouri. The herbicide–crop combination worked spectacularly well — until it didn’t. In 2004, herbicide-resistant amaranth was found in one county in Georgia; by 2011, it had spread to 76. 

Many scientists, and even some of my colleagues, have argued that the use of herbicide-resistant GM crops is responsible for the evolution of herbicide resistance in many weeds.

Twenty-four glyphosate-resistant weed species have been identified since Roundup-tolerant crops were introduced in 1996.

However, herbicide resistance has been a problem for farmers regardless of whether they plant GM crops or not. For more, see this chart on the rise of superweeds:

‘The rise of superweeds’. Source: Scientific American


So blaming just the increased use of GM crops won't solve the problem of these superweeds.

Myth 2: GM cotton has driven farmers to suicide

Verdict: FALSE

Now this has been big news in India recently, with the leading rights activist and environmental campaigner Vandana Shiva alleging that some 270,000 farmers have committed suicide since GM crops came into use. Bt cotton, which carries a gene from the bacterium Bacillus thuringiensis, has been planted in India and has been the major bone of contention there.

Seeds initially cost five times more than local hybrid varieties, spurring local traders to sell packets containing a mix of Bt and conventional cotton at lower prices. The sham seeds and misinformation about how to use the product resulted in crop and financial losses. This no doubt added to the strain on rural farmers, who had long been under the pressure of a tight credit system that forced them to borrow from local lenders.

This claim was, however, refuted by researchers at the International Food Policy Research Institute in Washington DC, who scoured government data, academic articles and media reports about Bt cotton and suicide in India. Their findings, published in 2008 and updated in 2011, show that the total number of suicides per year in the Indian population rose from just under 100,000 in 1997 to more than 120,000 in 2007, while the number of suicides among farmers hovered at around 20,000 per year over the same period.

Suicide rates and GM crops. Source: Scientific American


The important thing to note here is that the focus of the argument in India has shifted from a balanced discussion of the various ways the technology can benefit us to calls for outright bans on using it. That would not solve the issue but only aggravate it.

Myth 3: Transgenes spread to wild crops in Mexico

Verdict: UNKNOWN

We finally come to the issue of how transgenes may have spread to remote maize fields in Mexico. Here is what started it all:

In 2000, some rural farmers in the mountains of Oaxaca, Mexico, wanted to gain organic certification for the maize (corn) they grew and sold in the hope of generating extra income. David Quist, then a microbial ecologist at the University of California, Berkeley, agreed to help in exchange for access to their lands for a research project. But Quist’s genetic analyses uncovered a surprise: the locally produced maize contained a segment of the DNA used to spur expression of transgenes in Monsanto’s glyphosate-tolerant and insect-resistant maize.

Now, as GM crops are not approved in Mexico, the only possible source of such transgenes was GM maize imported from the United States for consumption and planted by local farmers who probably didn't know that the seeds were transgenic. When the results were published, they caused a furore in Mexico, with people arguing both for and against. Since then, few detailed studies have been done on the spread of transgenes via GM crops.

In 2003–04, Allison Snow, a plant ecologist at Ohio State University in Columbus, sampled 870 plants taken from 125 fields in Oaxaca and found no transgenic sequences in maize seeds.

But in 2009, a study led by Elena Alvarez-Buylla, a molecular ecologist at the National Autonomous University of Mexico in Mexico City, and Alma Piñeyro-Nelson, a plant molecular geneticist now at the University of California, Berkeley, found the same transgenes as Quist in three samples taken from 23 sites in Oaxaca in 2001, and in two samples taken from those sites in 2004.

In another study, Alvarez-Buylla and her co-authors found evidence of transgenes in a small percentage of seeds from 1,765 households across Mexico.

However, some scientists argue that transgene spread could have a neutral or even a positive effect on local crops.

In 2003, Snow and her colleagues showed that when Bt sunflowers (Helianthus annuus) were bred with their wild counterparts, the transgenic offspring still required the same kind of close care as their cultivated parent but were less vulnerable to insects and produced more seeds than non-transgenic plants.

In the end, I would quote something from the article here:

Tidy stories, in favor of or against GM crops, will always miss the bigger picture, which is nuanced, equivocal and undeniably messy. Transgenic crops will not solve all the agricultural challenges facing the developing or developed world, says Qaim: “It is not a silver bullet.” But vilification is not appropriate either. The truth is somewhere in the middle.

What would in fact help end food insufficiency would be to develop GM crops with higher protein content, or ones that provide essential animal proteins or produce other molecules our bodies require. These would benefit us in more ways than simply developing GM crops for resistance to insects or herbicides. The industry needs to take a more holistic view of GM crops, and instead of creating shrill noise, detractors should sit down with scientists from academia and industry, policymakers and industry leaders to put the technology to use for our benefit.

For further reading:

1). Bt Cotton and Farmer Suicides in India: An Evidence-based Assessment, Guillaume Gruère & Debdatta Sengupta, The Journal of Development Studies, Volume 47, Issue 2, 2011.

2). Field versus Farm in Warangal: Bt Cotton, Higher Yields, and Larger Questions, Glenn Davis Stone, World Development, Volume 39, Issue 3, March 2011.

3). Economic impacts and impact dynamics of Bt (Bacillus thuringiensis) cotton in India, Jonas Kathage & Matin Qaim, PNAS, 2012.

4). Are GM Seeds to Blame for Indian Farmer Suicides?, Adam Pugen, The International, February 2013.

The human brain simulation project (Blue Brain) wins a billion euros!!

Prof. Henry Markram

In one of the biggest funding exercises ever, the European Commission has selected Prof. Henry Markram's (pictured above) dream project, the Human Brain Simulation Project, for a mammoth grant of €1 billion over a period of ten years.

The Human Brain Simulation Project, or the Blue Brain Project, has been at the centre of quite some controversy ever since it started in 2005 at the École Polytechnique Fédérale de Lausanne (Switzerland). It aims to create a synthetic brain by reverse-engineering the human brain down to its molecular details. It uses the famed Blue Gene supercomputer and Michael Hines's NEURON software to recreate neural connections, not with abstract artificial neural networks but with a much closer approximation of real neurons.

What is it all about?

Neuroscientists have been trying to understand the inner workings of the human brain for centuries now. First came the detailed anatomical drawings by Rufus of Ephesus, Galen and Leonardo da Vinci. Then the English physician Thomas Willis published his Anatomy of the Brain, which assimilated all of its inner structures. The goal of understanding what the brain is and how it works began with these anatomical drawings and has continued on to constructing detailed mathematical models of how each of the cells within it works. Of course, I am talking about the famous Hodgkin-Huxley model, which for the first time described how action potentials in neurons are initiated and propagated.
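
For readers who have not seen it, the heart of the Hodgkin-Huxley model is a single membrane current-balance equation together with three gating variables m, h and n; the following is the standard textbook form (my own summary from general knowledge, not taken from the article):

```latex
% Hodgkin-Huxley membrane equation (standard textbook form)
C_m \frac{dV}{dt} = I_{\mathrm{ext}}
    - \bar{g}_{\mathrm{Na}}\, m^{3} h \,(V - E_{\mathrm{Na}})
    - \bar{g}_{\mathrm{K}}\, n^{4} \,(V - E_{\mathrm{K}})
    - \bar{g}_{L}\,(V - E_{L}),
\qquad
\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x,
\quad x \in \{m, h, n\}
```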

The quest to understand how billions of neurons come together in a complex network with millions of feedback loops, and yet function so harmoniously without any hint of chaos, is considered one of the Holy Grails of science. Into this picture comes Prof. Markram's Human Brain Simulation Project. With advanced supercomputers on one side and brilliant electrophysiologists on the other, the aim has been to model not just the neural circuits involved in, say, the sense of smell, but everything,

“from the genetic level, the molecular level, the neurons and synapses, how microcircuits are formed, macrocircuits, mesocircuits, brain areas — until we get to understand how to link these levels, all the way up to behaviour and cognition”

 

Progress until now?

Obviously, to even begin this mammoth task, one first has to demonstrate this so-called unified approach on a smaller scale, and that is indeed where he started. From 1995 to 2006 he collected data on the rat neocortical column, which can be considered the smallest functional unit of the neocortex (the part of the brain thought to be responsible for higher functions such as conscious thought). Such a column is about 2 mm tall, has a diameter of 0.5 mm and contains about 60,000 neurons in humans; rat neocortical columns are very similar in structure but contain only about 10,000 neurons (and roughly 10^8 synapses). By December 2006, Markram was able to map all the types of neurons and their connections in that column.

By 2008, the researchers had linked about 10,000 such neuron models into a simulation of a tube-shaped piece of cortex known as a cortical column. Now, using a more advanced version of Blue Gene, they have simulated 100 interconnected columns. This has shown that such unifying models can, as promised, serve as repositories for data on cortical structure and function.

Source: Human Brain Project

All of this has been possible only because of large-scale advances in supercomputing technology and data storage. The computing power required to run such a grand unified theory of the brain would be roughly an exaflop, or 10^18 operations per second, which was quite hopeless in the 1990s when Markram started the project. But available computing power doubles roughly every 18 months, so exascale computers might be available by the 2020s.
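
As a rough back-of-the-envelope check of that timeline, here is a small sketch (my own illustration; the petaflop-class baseline around 2012 is an assumption, not a figure from the article):

```python
import math

# Back-of-the-envelope estimate of when exascale machines arrive,
# assuming a ~1 petaflop baseline around 2012 and a doubling of
# available computing power every 18 months (both assumptions are
# for illustration only).
baseline_year = 2012
baseline_ops = 1e15            # assumed petaflop-class starting point
target_ops = 1e18              # one exaflop
doubling_time_years = 1.5      # "doubles roughly every 18 months"

doublings = math.log2(target_ops / baseline_ops)    # ~10 doublings
years_needed = doublings * doubling_time_years      # ~15 years

print(f"~{doublings:.0f} doublings, exascale around {baseline_year + years_needed:.0f}")
# -> ~10 doublings, exascale around 2027 (consistent with "by the 2020s")
```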


Source: Human Brain Project

There have been some criticisms of this project, mostly to do with the media hype generated by Markram. His critics argue that he has been making his case through talks, media interviews and well-placed ads rather than through the traditional means of publishing articles and reviews. The detractors also argue that Markram's bottom-up approach might yield a model so detailed that it is no easier to understand than the real brain. Nor has the progress so far been all that impressive: the simulated rat cortical column has no inputs from sensory organs or outputs to other parts of the brain, and produces almost no interesting behaviour.

But despite all the criticism, one hopes that this gargantuan project, with its lofty aim, will yield interesting results; even if not a complete replica of the human brain, at least a shadow simulacrum would be enough. For all the critics who are too afraid of Markram's bold new ideas, I would reiterate James Russell Lowell:

 “Creativity is not the finding of a thing, but the making something out of it after it is found.”

More on this:

  1. Turing at 100: Legacy of a universal mind, Nature News, 2012.
  2. European researchers chase billion-euro technology prize, Nature News, 2011.
  3. Bioinformatics: Industrializing neuroscience, Markram, Nature, 2007.
  4. The Blue Brain Project, Markram H, Nature, 2006.
  5. Human Brain Project, EU Initiative.
  6. The Blue Brain Project @ EPFL

Researchers find a hidden genetic code

Source: Stephanie Mitchell/Harvard Staff Photographer


A recent paper by Arvind Subramaniam (pictured above) and co-authors in the Proceedings of the National Academy of Sciences attempts to provide a solution to a decades-old problem in genetics.

Though the genetic code, the rules by which DNA is transcribed into RNA and then translated into proteins, is quite well understood, what has remained puzzling is the degeneracy of the code underlying protein synthesis. Before getting into what they did, let me give you a brief primer on the problem itself.

What is it all about?

Decoding of DNA is accomplished by the ribosome, which links amino acids in an order specified by mRNA (messenger RNA), using transfer RNA (tRNA) molecules to carry the amino acids and to read the mRNA three nucleotides at a time. So, mathematically, with four different nucleotides (in RNA), a three-nucleotide code could specify a maximum of 4^3 = 64 amino acids. However, translation (RNA to protein) uses only 20 standard amino acids, so many of these nucleotide triplets, called codons, produce the same amino acid, i.e. they code synonymously. For example, leucine is encoded by six different codons. The figure below shows all 64 codons and the amino acids they code for.

[Figure: the 64 codons of the standard genetic code and the amino acids they encode]

This apparent degeneracy has been a core puzzle in genetics: are these seemingly synonymous codons truly equivalent, or do they carry a second, hidden layer of information? Harvard researchers have now published a possible solution to this problem, and they hope it will lead to new methods of fighting resistant bacteria.
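
To make the degeneracy concrete, here is a minimal sketch (my own illustration, not from the paper) that tallies how many codons map to each amino acid in the standard genetic code:

```python
from collections import Counter
from itertools import product

# The standard genetic code as a 64-character string, with codons ordered
# TTT, TTC, TTA, TTG, TCT, ... (bases cycling through T, C, A, G);
# '*' marks the three stop codons.
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in product("TCAG", repeat=3)]  # 4^3 = 64 codons

codon_table = dict(zip(CODONS, AMINO_ACIDS))
degeneracy = Counter(aa for aa in codon_table.values() if aa != "*")

print(len(CODONS), "codons encode", len(degeneracy), "amino acids")
# -> 64 codons encode 20 amino acids
print(sorted(degeneracy.items(), key=lambda kv: -kv[1]))
# Leucine (L), serine (S) and arginine (R) each have six synonymous codons;
# the rest have four, three, two or just one (methionine, tryptophan).
```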

How did they solve it?

To try to decipher this synonymous coding problem, they decided to use the simple bacterium Escherichia coli. First they considered the synonymous codons for seven amino acids: leucine, arginine, serine, proline, isoleucine, glutamine and phenylalanine. This set of seven amino acids is representative of the degeneracy of the genetic code, as it includes six-, four-, three- and twofold degenerate codon families. They then constructed a library of 29 yellow fluorescent protein (YFP) gene variants that differ only in which synonymous codons they use for the amino acid in question, so that all of them encode the same protein. All these genes were then inserted into E. coli. To test whether the synonymous codons function identically, they applied an environmental perturbation, namely amino acid availability, and monitored growth and YFP synthesis in these strains both during amino-acid-rich growth and during limitation for each of the seven amino acids.

What they found was quite startling: under different environmental conditions (amino acid availability), synonymous codons produced protein at different rates. If the bacteria are in an environment where they can grow and thrive (amino-acid rich), each synonymous codon yields the same amount of protein, but if they are starved of an amino acid, some codons yield up to a hundredfold more protein than others.

The reason for such differences in protein production lies in the nature of tRNA, the transfer RNA that ferries amino acids to the cellular machinery that manufactures proteins. The authors managed to rule out the usual explanations based on tRNA abundance and codon usage. Rather, it was competition among tRNA isoacceptors for aminoacylation that underlay the robustness of protein synthesis. In plain speak, this means that different tRNA molecules differ in how efficiently they are charged with, and deliver, their amino acid. If some tRNA molecules cannot deliver the amino acid to where it needs to be, the cell cannot manufacture the proteins it needs, and in an environment where amino acids are in short supply, the ability to hold onto them becomes very important. While the system helps cells make certain proteins efficiently under stressful conditions, it also acts as a biological fail-safe, allowing a near-complete shutdown in the production of other proteins as a way to preserve limited resources.

What now?

Given the universality of the genetic code, it would be very interesting to explore what role (if any) differences in the seemingly synonymous portions of the genetic code play in other organisms. Also, in diseases like cancer, the cancerous cells deplete amino acids faster than normal cells do; given that such environmental perturbations lead to different protein production efficiencies, would it be possible to devise interventions or treatments to combat them?

More on this:

  1. Degeneracy and complexity in biological systems, Edelman GM, Gally JA, Proceedings of the National Academy of Sciences, 2001.
  2. Cooperation between translating ribosomes and RNA polymerase in transcription elongation, Proshkin S, Rahmouni AR, Mironov A, Nudler E, Science, 2010.
  3. The GCN2-ATF4 pathway is critical for tumour cell survival and proliferation in response to nutrient deprivation, Ye J, et al., EMBO J, 2010.
  4. High levels of tRNA abundance and alteration of tRNA charging by bortezomib in multiple myeloma, Zhou Y, Goodenbour JM, Godley LA, Wickrema A, Pan T, Biochem Biophys Res Commun, 2009.

Goofy Monday: Louisiana senator asks whether E. coli can evolve into people!!


In a painful yet laughable story, a Louisiana senator asks a high school teacher whether E. coli can evolve into people!! This happened during the Senate Education Committee's hearing on Senator Karen Carter Peterson's bill to repeal the odious Louisiana Science Education Act (the “LSEA”). The repeal bill was voted down by only a narrow margin, and this kind of ignorance (in the face of reason) is quite scary.

So, let's come back to the senator's strange quip. When asked whether there was any evidence of evolution actually happening in the modern world, the biology teacher put forth one of the best evolution experiments ever: Richard Lenski's decades-long experiment on E. coli adaptation. At this point the senator quipped: “They evolve into a person?”

Now this kind of question points to the main weapon creationists use in their never-ending, pointless battle against evolutionary biology: how do tiny changes in the gene pool of a population translate into giant morphological changes over time?

Microevolution, which is basically a change in gene frequencies within a population, is understandable, they say (though not all of them agree even on that), but what they vehemently deny is macroevolution. These “creationist skeptics” refuse to be convinced unless, as the senator's question shows, they see an example of a bacterium directly transforming into a human being.

For creationists like the senator in question, this remains a sort of trump card to be waved in every debate about evolution. What they don't realise is that if they studied a bit more evolutionary biology, they could answer it themselves. The parsimonious explanation of how mutation, genetic drift, migration and natural selection bring about evolution at both levels, micro and macro, is something these creationists still need to understand.

More on this:

  1. How 19-year-old activist Zack Kopplin is making life hell for Louisiana's creationists, io9.com, 2013
  2. Senator Asks For Proof Of Evolution, Discovers He Doesn’t Actually Understand What It Is

Nutrigenomics – The new pseudoscience in personalised medicine?

Source: Netherlands Nutrigenomics Centre (NNC)

I recently came across many advertisements dealing with nutrigenomics and how it is going to revolutionize personalised medicine. So I decided to dig further into them, and found that what they basically contain is a jungle of half-baked claims and untested presumptions, and sometimes dangerous prescriptions doled out to unsuspecting patients on the basis of those claims.

Here is what one clinic prescribing nutrigenomics says:

Nutrigenomics seeks to unravel these medical mysteries by providing personalized genetics-based treatment. Even so, it will take decades to confirm what we already understand; that replacing specific nutrients and/or chemicals in existing pathways allows more efficient gene expression, particularly with genetic vulnerabilities and mutations.

The key phrase here is “it will take decades to confirm what we already understand”. This is the main essence of pseudoscience: using science to confirm what one already “knows”. This is backwards, of course. Science does not proceed by “confirming” but by testing whether a hypothesis is true or not.

Before going any further, let me add a disclaimer: I am not against the study of metabolomics or of the effects of foods and food constituents on gene expression. What I am against are the claims put forth by various pharma clinics that, by analyzing one's genes, a personalized regimen of specific nutrients can be developed to help one's genes function at optimal efficiency!

So, let's start by understanding what nutrigenomics is. It is the study of one's genes in order to personalize therapy, in this case nutrition. Scientifically speaking, this area of research is quite legitimate, just as research-based stem cell therapy is. We are already using genetic analysis to diagnose various diseases and to target chemotherapy, and recent research is beginning to identify specific genes that affect how different individuals metabolize and respond to specific drugs. Though our genes exert a powerful influence over our health, the environment in which we live and grow up also has a considerable influence. Gene-based therapies are already becoming an important part of science-based medicine.

As genetic analysis via high-throughput sequencing becomes more rapid and cost-effective, there is ever-increasing potential for it to be used as part of routine screening and health evaluation: to identify disease susceptibilities, target preventive treatments, adjust behaviours to manage risks, and guide therapy. However, as with stem cell treatments, our current knowledge of genetic predispositions is still in its infancy. What is well established is already incorporated into mainstream medical practice; the rest is a matter for research, not current practice.

This creates an opportunity for exploitation: using current cutting-edge research to make clinical claims that are years or decades premature, by pretending to have knowledge that simply does not exist. This type of medical pseudoscience is becoming an increasingly common practice among various quacks and is a manifestation of one common tactic, basing clinical claims on pre-clinical scientific research. It is especially insidious and difficult for the non-expert to evaluate properly (which makes for effective pseudoscientific marketing).

There is a great deal of basic science research going on, asking how the body works and how it is affected by all kinds of factors, genetic or environmental. There is also a lot of translational and clinical research that seeks to apply this knowledge to specific medical interventions. Biology is highly complex, so it is extremely difficult to extrapolate from basic science to tangible clinical effects; most initial results from basic research turn out to be wrong in a clinical setting. The only way to make progress is to conduct careful, rigorous clinical research that measures the actual effects of a specific intervention in a specific population. This last point is especially important, since different interventions may have different effects in different populations.

As basic science research is vast, it is quite possible to find studies that seem to support almost any conceivable intervention you wish. To the unsuspecting public, this can make any medical intervention seem science-based and hence legitimate, even when the treatment is nothing but deception.

One nutrigenomics website, for example, claims to treat the following conditions:

Welcome to Genetics Based Integrative Medicine (GBIM), a telemedicine practice dedicated to the education, treatment, and recovery of those with autism spectrum disorders, ADD/ADHD, and PANDAS as well as highly complex & disabling disorders affecting adults such as CFS/ME/FM, Multiple Sclerosis, ALS, Parkinson’s and mitochondrial dysfunction.

Now, to start with, there is no compelling evidence for any nutritional treatment for the above diseases, let alone for personalized nutritional treatment based on specific genetic types. How the practitioners of GBIM came by the knowledge they claim to have is quite a mystery. As with the stem cell treatments I discussed previously, such clinical claims, if legitimate, would have a paper trail of hundreds of published studies behind them. Further, if such studies existed, the practice would be the standard of care, not isolated to one or a few special clinics.

Thus it is quite safe to add “nutrigenomics” to the list of red flags for dangerous quackery. Personally, though, I find this a shame because, like stem cells, it is a legitimate field of research, and the current level of quackery is likely to taint the reputation of what might in future be a promising approach.

In the end, I would like to finish by quoting a line on scientists versus quacks:

Scientists are still working out the ‘syntax’, ‘morphology’, ‘phonology’ and ‘semantics’, while the pseudoscientists pretend that they not only have the dictionary, but also encyclopedias and novels written in those as-yet undeciphered languages.

 

More on this:

  1. Center of Excellence for Nutritional Genomics (CENG) at the University of California, Davis 
  2. Consumers Not Ready For Tailor-Made Nutrition?, Science Daily, 2008.
  3. Personalised nutrition: ready for practice?, Proceedings of the Nutrition Society, 2012.
  4. Personalised nutrition: status and perspectives, British Journal of Nutrition, 2007.