My Dominant Hemisphere

The Official Weblog of 'The Basilic Insula'

Archive for the ‘Ask the Right Questions’ Category

Seeking Profundity In The Mundane

leave a comment »

seeking a new vision

Seeking A New Vision (via Jared Rodriguez/Truthout CC BY-NC-SA license)

The astronomer Carl Sagan once said:

It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another, and to preserve and cherish the pale blue dot, the only home we’ve ever known.

– in Pale Blue Dot

And likewise Frank Borman, astronaut and Commander of Apollo 8, the first mission to fly around the Moon, said:

When you’re finally up on the moon, looking back at the earth, all these differences and nationalistic traits are pretty well going to blend and you’re going to get a concept that maybe this is really one world and why the hell can’t we learn to live together like decent people?

Why is it, I wonder, that we, the human race, have the tendency to reach such profound truths only when placed in an extraordinary environment? Do we have to train to become astronomers or cosmonauts to appreciate our place in the universe? To find respect for and to cherish what we've been bestowed with? To care about each other, our environment, and this place that we are loath to remember is the one home for all of life as we know it?

There is much to be learned by reflecting upon this idea. Our capacity to gain wisdom and feel impressed really does depend on the level to which our experiences deviate from the banal, doesn’t it? Ask what a grain of food means to somebody who has never had the luxury of a mediocre middle-class life. Ask a lost child what it must be like to have finally found his mother. Or question the rejoicing farmer who has just felt rain-drops on his cheeks, bringing hope after a painful drought.

I’m sure you can think of other examples that speak volumes about the way we, consciously or not, program ourselves to look at things.

The other day, I was re-reading an old article about the work of the biomathematician Steven Strogatz. He mentioned how, as a high-school student studying science, he was asked to drop down on his knees and measure the dimensions of floors, graph the time periods of pendulums, figure out the speed of sound from resonating air columns in hollow tubes partly filled with water, and so on. Each time, his initial reaction was one of dreariness and insipidity. But he would soon realize how these mundane experiments in reality acted as windows to profound discoveries, such as the idea that without resonance atoms wouldn't come together to form material objects, or that a pendulum's time period, when graphed, reflects a specific mathematical equation.
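That "specific mathematical equation" hiding in the classroom exercise is worth seeing concretely. For small swings, a simple pendulum's period follows T = 2π√(L/g), so the ratio T²/L is constant across pendulums of every length. A minimal sketch (the lengths and the gravity value are illustrative, not Strogatz's actual data):

```python
import math

G = 9.81  # gravitational acceleration in m/s^2 (illustrative value)

def pendulum_period(length_m: float) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / G)

# Graphing T^2 against L yields a straight line: T^2/L = 4*pi^2/g, a constant.
for L in [0.25, 0.5, 1.0, 2.0]:
    T = pendulum_period(L)
    print(f"L = {L:.2f} m   T = {T:.3f} s   T^2/L = {T**2 / L:.3f}")
```

Plot those points and the hidden law jumps out of the mundane measurement, which is exactly the experience Strogatz describes.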

There he was – peering into the abstruse and finding elegance in the mundane. The phenomenon reminded me of a favorite quote:

The real voyage of discovery consists not in seeking new landscapes, but in having new eyes.

– Marcel Proust

For that’s what Strogatz, like Sagan and Borman was essentially experiencing. A new vision about things. But with an important difference – he was doing it by looking at the ordinary. Not by gazing at extra-ordinary galaxies and stars through a telescope. Commonplace stuff, that when examined closely, suddenly was ordinary no more. Something that had just as much potential to change man’s perspective of himself and his place in the universe.

I think it’s important to realize this. The universe doesn’t just exist out there among the celestial bodies that lie beyond normal reach. It exists everywhere. Here; on this earth. Within yourself and your environment and much closer to home.

Perhaps, that’s why we’ve made much scientific progress by this kind of exploration. By looking at ordinary stuff using ordinary means. But with extra-ordinary vision. And successful scientists have proven again and again, the value of doing things this way.

The concept of hand-washing to prevent the spread of disease, for instance, wasn't born out of a sophisticated randomized clinical trial, but from a modest accounting of mortality rates in a far less developed kind of epidemiologic study. The obstetrician who stumbled upon this profound discovery, long before Pasteur postulated the germ theory of disease, was Ignaz Semmelweis, later to be known as the "savior of mothers". His new vision led to a discovery so radical that the medical community of his day rejected it, and his results were never seriously looked at during his lifetime (so much for peer review, eh?). The doctor struggled with this till his last breath, suffering in an insane asylum and ultimately dying at the young age of 47.

That smoking is tied to lung cancer was first conclusively established by an important prospective cohort study, largely done by mailing a series of questionnaires to smoking and non-smoking physicians over a period of time, asking how they were doing. Yes, even questionnaires, when used intelligently, can be more than just unremarkable pieces of paper; they can be gateways that open our eyes to our magnificent universe!
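The arithmetic behind such a cohort study is strikingly simple: compare the incidence of disease in the exposed group with that in the unexposed group. A hedged sketch with made-up counts (these are illustrative numbers, not the actual figures from the physicians' study):

```python
def risk_ratio(cases_exposed: int, n_exposed: int,
               cases_unexposed: int, n_unexposed: int) -> float:
    """Relative risk: incidence among the exposed divided by incidence among the unexposed."""
    risk_exposed = cases_exposed / n_exposed
    risk_unexposed = cases_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical questionnaire follow-up: 90 lung-cancer deaths among 20,000
# smokers versus 6 among 15,000 non-smokers (invented for illustration).
rr = risk_ratio(90, 20_000, 6, 15_000)
print(f"Relative risk of lung cancer, smokers vs non-smokers: {rr:.1f}")
```

A ratio far above 1 from nothing more than mailed questionnaires and patient tallying is precisely the "science on the cheap" the post is celebrating.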

From the polymath and physician Copernicus's seemingly pointless calculations on the positions of planets, to the physician Koch's dreary routine of looking at microbial growth in petri dishes, to the physicist and polymath Young's proposal of a working theory for color vision, to the physician John Snow's phenomenal work on preventing cholera by studying water wells long before the microbe was even identified, time and time again we have learned about the enormous implications of science on the cheap, and science of the mundane. There's wisdom in applying the KISS (Keep It Simple, Stupid) principle to science after all, even in the more advanced, technologically replete scientific studies!

More on the topic of finding extraordinary ideas in ordinary things: I was reminded recently of a couple of enchanting papers and lectures. One was about finding musical patterns in the sequence of our DNA. And the second was an old but interesting paper [1] that proposes a radical model for the biology of the cell, seeking to reconcile the paradoxes that we observe in biological experiments. That there could be some deep logical underpinning to the maxim, "biology is a science of exceptions", is really quite an exciting idea:

Surprise is a sign of failed expectations. Expectations are always derived from some basic assumptions. Therefore, any surprising or paradoxical data challenges either the logical chain leading from assumptions to a failed expectation or the very assumptions on which failed expectations are based. When surprises are sporadic, it is more likely that a particular logical chain is faulty, rather than basic assumptions. However, when surprises and paradoxes in experimental data become systematic and overwhelming, and remain unresolved for decades despite intense research efforts, it is time to reconsider basic assumptions.

One of the basic assumptions that make proteomics data appear surprising is the conventional deterministic image of the cell. The cell is commonly perceived and traditionally presented in textbooks and research publications as a pre-defined molecular system organized and functioning in accord with the mechanisms and programs perfected by billions of years of biological evolution, where every part has its role, structure, and localization, which are specified by the evolutionary design that researchers aim to crack by reverse engineering. When considered alone, surprising findings of proteomics studies are not, of course, convincing enough to challenge this image. What makes such a deterministic perception of the cell untenable today is the massive onslaught of paradoxical observations and surprising discoveries being generated with the help of advanced technologies in practically every specialized field of molecular and cell biology [12-17].

One of the aims of this article is to show that, when reconsidered within an alternative framework of new basic assumptions, virtually all recent surprising discoveries as well as old unresolved paradoxes fit together neatly, like pieces of a jigsaw puzzle, revealing a new image of the cell–and of biological organization in general–that is drastically different from the conventional one. Magically, what appears as paradoxical and surprising within the old image becomes natural and expected within the new one. Conceptually, the transition from the old image of biological organization to a new one resembles a gestalt switch in visual perception, meaning that the vast majority of existing data is not challenged or discarded but rather reinterpreted and rearranged into an alternative systemic perception of reality.

– (CC BY license)

Inveigled yet :-) ? Well then, go ahead and give it a look!

And as mentioned earlier in the post, one could extend this concept of seeking out phenomenal truths in everyday things to many other fields. As a photography buff, I can tell you that ordinary and boring objects can really start to get interesting when viewed up close and magnified. A traveler who takes the time to immerse himself in the communities he's exploring, much like Xuanzang, Wilfred Thesiger, or Ibn Battuta, suddenly finds that what there is to be learned is vast and all the more enjoyable.

The potential to find and learn things with this new way to envision our universe can be truly revolutionary. If you’re good at it, it soon becomes hard to ever get bored!


  1. Kurakin, A. (2009). Scale-free flow of life: on the biology, economics, and physics of the cell. Theoretical Biology and Medical Modelling, 6(1), 6. doi:10.1186/1742-4682-6-6

Copyright Firas MR. All Rights Reserved.

“A mote of dust, suspended in a sunbeam.”


Written by Firas MR

November 13, 2010 at 10:48 am

Contrasts In Nerdity & What We Gain By Interdisciplinary Thinking

leave a comment »

scientific fields and purity

Where Do You Fit In This Paradigm? (via xkcd CC BY-NC license)

I’ve always been struck by how nerds can act differently in different fields.

An art nerd is very different from a tech nerd. Whereas the former could go on and on about brush strokes, lighting patterns, mixtures of paint, which drawing belongs to which artist, etc., the latter can engage in ad infinitum discussions about the architecture of the internet, how operating systems work, whose grip on Assembly is better, why their code works better, etc.

And what about math and physics nerds? They tend to show off their feathers by displaying their understanding of chaos theory, why imaginary numbers matter, how we are all governed by "laws of nature", etc.

How about physicians and med students? Well, like most biologists, they'll compete with each other by showing off how much anatomy, physiology, biochemistry, or drug properties they can remember, who's up to date on the most recent clinical trial statistics (sort of like a fan of cricket/baseball statistics), why their technique of proctoscopy is better than somebody else's, the latest morbidity/mortality rates following a given procedure, etc.

And you could actually go on about nerds in other fields too – historians (who remembers what date or event), political analysts (who understands the Thai royal family better), farmers (who knows the latest in pesticides), etc.

Each type has its own traits that reflect the predominant mindset (at the highest of intellectual levels) when it comes to approaching their respective subject matter. And nerds, being who they are, can let it all go to their heads and think they've found that place — of ultimate truth, peace and solace. That they are, at last, "masters" of their subjects.

I’ve always found this phenomenon to be rather intriguing. Because in reality, things are rarely that simple – at least when it comes to “mastery”.

In medicine, for instance, the nerdiest of nerds out there will be proud and rather content with the vast statistics, nomenclature, and learn-by-rote information that he has finally been able to contain within his head. Agreed, being able to keep such information at the tip of one's tongue is an achievement, considering the bounds of average human memory. But what about the fact that he has no clue as to what fundamentally drives those statistics, or why one drug works for a condition whereas another drug with the same properties (i.e. properties that medical science knows of) fails or has lower success rates? A physicist nerd would approach this matter as something that lies at the crux of an issue, so much so that he would get sleepless nights until he could find some model or theory that explains it mathematically, in a way that seems logical. But a medical nerd? He's very different. His geekiness just refuses to go there, because of the discomforting feeling that he has no idea whatsoever! More stats and names to rote-learn please, thank you!

I think one of the biggest lessons we learn from the really great stalwarts in human history is that they refused to let such stuff get to their heads. The constant struggle to find and maintain humility in knowledge was central to how they saw themselves.

… I can live with doubt and uncertainty and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I'm not absolutely sure of anything and there are many things I don't know anything about, such as whether it means anything to ask why we're here, and what the question might mean. I might think about it a little bit and if I can't figure it out, then I go on to something else, but I don't have to know an answer, I don't feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is so far as I can tell. It doesn't frighten me.

Richard Feynman speaking with Horizon, BBC (1981)

The scientist has a lot of experience with ignorance and doubt and uncertainty, and this experience is of great importance, I think. When a scientist doesn’t know the answer to a problem, he is ignorant. When he has a hunch as to what the result is, he is uncertain. And when he is pretty darn sure of what the result is going to be, he is in some doubt. We have found it of paramount importance that in order to progress we must recognize the ignorance and leave room for doubt. Scientific knowledge is a body of statements of varying degrees of certainty – some most unsure, some nearly sure, none absolutely certain.

Now, we scientists are used to this, and we take it for granted that it is perfectly consistent to be unsure – that it is possible to live and not know. But I don't know whether everybody realizes that this is true. Our freedom to doubt was born of a struggle against authority in the early days of science. It was a very deep and very strong struggle. Permit us to question – to doubt, that's all – not to be sure. And I think it is important that we do not forget the importance of this struggle and thus perhaps lose what we have gained.

What Do You Care What Other People Think?: Further Adventures of a Curious Character by Richard Feynman as told to Ralph Leighton

an interdisciplinary web of a universe

An Interdisciplinary Web of a Universe (via Clint Hamada @ Flickr; CC BY-NC-SA license)

Besides being an important aspect for high-school students to consider when deciding what career path to pursue, I think these nerd personality traits also illustrate the role that interdisciplinary thinking can play in our lives and the tremendous value it can add to the way we think. The more one diversifies, the more his or her thinking expands — for the better, usually.

Just imagine a nerd who’s cool about art, physics, math or medicine, etc. — all put together, in varying degrees. What would his perspective of his subject matter and of himself be like? Would he make the ultimate translational research nerd? It’s not just the knowledge one could potentially piece together, but the mindset that one would begin to gradually develop. After all, we live in an enchanting web of a universe, where everything intersects everything!



Written by Firas MR

November 12, 2010 at 12:00 am

Revitalizing Science Education

with one comment

Richard Feynman: “… But you’ve gotta stop and think about it. About the complexity to really get the pleasure. And it’s all really there … the inconceivable nature of nature! …”

And when I read Feynman’s description of a rose — in which he explained how he could experience the fragrance and beauty of the flower as fully as anyone, but how his knowledge of physics enriched the experience enormously because he could also take in the wonder and magnificence of the underlying molecular, atomic, and subatomic processes — I was hooked for good. I wanted what Feynman described: to assess life and to experience the universe on all possible levels, not just those that happened to be accessible to our frail human senses. The search for the deepest understanding of the cosmos became my lifeblood [...] Progress can be slow. Promising ideas, more often than not, lead nowhere. That’s the nature of scientific research. Yet, even during periods of minimal progress, I’ve found that the effort spent puzzling and calculating has only made me feel a closer connection to the cosmos. I’ve found that you can come to know the universe not only by resolving its mysteries, but also by immersing yourself within them. Answers are great. Answers confirmed by experiment are greater still. But even answers that are ultimately proven wrong represent the result of a deep engagement with the cosmos — an engagement that sheds intense illumination on the questions, and hence on the universe itself. Even when the rock associated with a particular scientific exploration happens to roll back to square one, we nevertheless learn something and our experience of the cosmos is enriched.

– Brian Greene, in The Fabric of the Cosmos

When people think of "science education", they usually think of it in the context of high school or college, when in reality it should be thought of as encompassing education life-long, for if we analyze it deeply, we all realize that we never cease to educate ourselves, no matter what our trade. We understand that what life demands of us is the capacity to function efficiently in a complex society. As we gain or lose knowledge, our capacities keep fluctuating, and we always desire, and often strive, for them to be right at the very top of that graph.

When it comes to shaping attitudes towards science, which is what I'm concerned with in this post, I've noticed that this begins quite strongly during high school, but as students get to college and then university, it gradually begins to fade away, even in some of the more scientific career paths. By then, I guess, some of these things are assumed (at times, you could say, wrongly). We aren't reminded of them as frequently, and they melt into the background as we begin coping with the vagaries of grad life. By the time we are out of university, for a lot of us the home projects, high-school science fests, etc. that we once did as a means of understanding scientific attitude have become a fuzzy, distant dream.

I’ve observed this phenomenon as a student in my own life. As med students, we are seldom reminded by professors of what it is that constitutes scientific endeavor or ethic. Can you recall when was the last time you had didactic discussions on the topic?

I came to realize this vacuum early on in med school. And a lot of the time, this status quo doesn't serve us well. Take Evidence-Based Medicine (EBM), for example. One of the reasons people make errors in interpreting and applying EBM, in my humble opinion, is precisely the naivete that such a vacuum allows to fester. What ultimately happens is that students remain weak in EBM principles, go on to become professors who cannot teach EBM to the extent that they ought to, and a vicious cycle ensues whereby the full impact of man's progress in medicine is never realized. The same applies to how individuals, departments, and institutions implement auditing, quality assurance, and so on.

A random post that I recently came across in the blogosphere touched upon the interesting idea that, when you really think about it, most practicing physicians are ultimately technicians whose job is to fix and maintain patients (much as a mechanic oils and fixes cars). The writer starts out with a provocative beginning,

Is There A Doctor In The House?


Medical doctors often like to characterize themselves as scientists, and many others in the public are happy to join them in this.

I submit, however, that such a characterization is an error.


and divides science professionals into,


SCIENTIST: One whose inquiries are directed toward the discovery of new facts.

ENGINEER: One whose inquiries are directed toward the new applications of established facts.

TECHNICIAN: One whose inquiries are directed toward the maintenance of established facts.


and then segues into why even if that’s the case, being a technician in the end has profound value.

Regardless of where you find yourself in that spectrum, I think it's obvious that gaining skills in one area helps you perform better in others. So as technicians, I'm sure that practicing physicians will find that their appraisal and implementation of EBM improves if they delve into how discoverers work and learn about the pitfalls of their trade. The same could be said of learning how inventors translate this knowledge from the bench to the bedside as new therapies are developed, and the caveats involved in the process.

Yet it is precisely in these aspects that I find that medical education requires urgent reform. Somehow, as if by magic, we are expected to do the work of a technician and to get a grip on EBM practices without a solid foundation for how discoverers and inventors work.

I think it’s about time that we re-kindled the spirit of understanding scientific attitude at our higher educational institutions and in our lives (for those of us who are already out of university).

From self-study and introspection, here are some points and questions that I've noted so far, as I strive to re-invigorate the scientific spirit within me, in my own way. As you reflect on them, I hope they prove useful to you in working to become a better science professional as well:

  1. Understand the three types of science professionals and their roles. Ask where in the spectrum you lie. What can you learn about the work professionals in the other categories do to improve how you yourself function?
  2. Learning about how discoverers work helps us get an idea of the pitfalls of science. Ultimately, questions are far more profound than the answers we keep coming up with. Do we actually know the answer to a question? Or is it more correct to say that we think we know the answer? What we think we know changes all the time. And this is perfectly acceptable, as long as you're engaged as a discoverer.
  3. What are the caveats of using language such as the phrase “laws of nature”? Are they “laws”, really? Or abstractions of even deeper rules and/or non-rules that we cannot yet touch?
  4. Doesn’t the language we use influence how we think?
  5. Will we ever know if we have finally moved beyond abstractions to deeper rules and/or non-rules? Abstractions keep shifting, sometimes in diametrically opposite directions (e.g. from Newton's concepts of absolute space-time to Einstein's concepts of relative space-time, the quirky and nutty ideas of quantum mechanics such as the dual nature of matter and the uncertainty principle, or concepts of disease causation progressing from the four humours to microbes and DNA and ultimately a multifactorial model for etiopathogenesis). Is it a bad idea to pursue abstractions in your career? Just look at string theorists; they have been doing this for a long time!
  6. Develop humility in approach and knowledge. Despite all the grand claims we make about our scientific "progress", we're just a tiny speck among the billions and billions of specks in the universe, limited by our senses and the biology of which we are made. The centuries-old debate among philosophers over whether man can ever claim to one day have found the "ultimate truth" still rages on. And we now think we know, from Kurt Gödel's work, that there are truths out there that man can never arrive at by formal proof. In other words, truths that we may never ever know of! Our understanding of the universe and its things keeps shifting continuously, evolving as we ourselves as a species improve (or regress, depending on your point of view). Understanding that all of this is how science works is paramount. And there's nothing wrong with that. It's just the way it is! :-)
  7. Understand the overwhelming bureaucracy in science these days, but don't get side-tracked: it's far too big a boatload to handle on one's own, and this mountain of bureaucracy carries the danger of driving people to leave science altogether.
  8. Science for career’s sake is how many people get into it. Getting a paper out can be a good career move. But it’s far more fun and interesting to do science for science’s own sake, and the satisfaction you get by roaming free, untamed, and out there to do your own thing will be ever more lasting.
  9. Understand the peer-review process in science and its benefits and short-comings.
  10. Realize the extremely high failure rate in terms of the results you obtain. Over 90% by most anecdotal accounts – be that in terms of experimental results or publications. But it's important to inculcate curiosity and to keep the propensity to question alive. To discover. And to have fun in the process. In short, the right attitude; despite knowing that you're probably never going to earn a Fields medal or Nobel prize! Scientists like Carl Friedrich Gauss were known to dislike publishing innumerable papers, favoring quality over quantity. Quite contrary to the trends that Citation Metrics seem to have a hand in driving these days. It might be perfectly reasonable to not get published sometimes. Look at the lawyer-mathematician Pierre de Fermat, of Fermat's Last Theorem fame. He kept notes and wrote letters but rarely if ever published in journals. And he never did publish the proof of Fermat's Last Theorem, claiming that it was too large to fit in the margin of a copy of a book he was reading as the thought occurred to him. He never got around to it before he passed away, whereupon it became one of the most profound math mysteries ever to be tackled, only to be solved about 358 years later by Andrew Wiles. But the important thing to realize is that Fermat loved what he did, and did not judge himself by how many gazillion papers he could or could not have had to his name.
  11. Getting published does have a sensible purpose though. The general principle is that the more peer-review the better. But what form this peer-review takes does not necessarily have to be in the form of hundreds of thousands of journal papers. There’s freedom in how you go about getting it, if you get creative. And yes, sometimes, peer-review fails to serve its purpose. Due to egos and politics. The famous mathematician, Evariste Galois was so fed-up by it that he chose to publish a lot of his work privately. And the rest, as they say, is history.
  12. Making rigorous strides depends crucially on a solid grounding in math, probability, and logic. What are the pitfalls of hypothesis testing? What is randomness, and what does it mean? When do we know that something is truly random as opposed to pseudo-random? If we conclude that something is truly random, how can we ever be sure of it? What can we learn from how randomness is interpreted in inflationary cosmology, where there's "jitter" over quantum distances that fades over larger ones (cf. Inhomogeneities in Space)? Are there caveats involved when you build models or conceptions of things on one or another definition of randomness? How important is mathematics to biology, and vice versa? There's value in gaining these skills for biologists. Check out this great paper [1] and my own posts here and here. Also see the following lecture, which stresses the importance of teaching probability concepts for today's world and its problems:


  13. Developing collaborative skills helps. Lateral reading, attending seminars and discussions at various departments can help spark new ideas and perspectives. In Surely You’re Joking Mr. Feynman!, the famous scientist mentions how he always loved to dabble in other fields, attending random conferences, even once working on ribosomes! It was the pleasure of finding things out that mattered! :-)
  14. Reading habits are particularly important in this respect. Diversify what you read. Focus on the science rather than the dreary politics of science. It’s a lot more fun! Learn the value of learning-by-self and taking interest in new things.
  15. Like it or not, it’s true that unchecked capitalism can ruin balanced lives, often rewarding workaholic self-destructive behavior. Learning to diversify interests helps take off the pressure and keeps you grounded in reality and connected to the majestic nature of the stuff that’s out there to explore.
  16. The rush that comes from all of this exploration has the potential to lead to unethical behavior. It's important not to lose sight of the sanctity of life and the sanctity of our surroundings. Remember all the gory examples that WW2 gave rise to (from the Nazi doctors to all of those scientists whose work ultimately gave way to the loss of life that we've come to remember in the shorthand, "Hiroshima and Nagasaki"). Here's where diversifying interests also helps. Think how a nuclear scientist's perspective of his work could change if he spent a little time taking an interest in wildlife and the environment. Also, check this.
  17. As you diversify, try seeing science in everything – eg: When you think about photography think not just about the art, but about the nature of the stuff you’re shooting, the wonders of the human eye and the consequences of the arrangement of rods and cones and the consequences of the eyeball being round, its tonal range compared to spectral cameras, the math of perspective, and the math of symmetry, etc.
  18. Just like setting photography assignments helps to ignite the creative spark in you, set projects and goals in every avenue that you diversify into. There’s no hurry. Take it one step at a time. And enjoy the process of discovery!
  19. How we study the scientific process/method should be integral to the way we think about education. A good analogy, although a sad one, is conservation and how biology is taught in schools. Very few teachers and schools will go out of their way to emphasize and interweave solutions for sustainable living and conserving wildlife within the material they cover, even though they will more than easily get into the nitty-gritty of taxonomy, morphology, etc. You'll find paragraphs and paragraphs of verbiage on the latter but not the former. This isn't the model to replicate, IMHO! There has to be a balance. We should be constantly reminded of what constitutes proper scientific ethic throughout our education, and it should not be allowed to fade away into the background.
  20. The current corporate-driven, public-interest research model is a mixed bag. Science shouldn’t in essence be something for the privileged, or be monopolized in the hands of a few. Good ideas have the potential to get dropped if they don’t make business sense. Understand public and private funding models and their respective benefits and shortcomings. In the end, realize that there are so many scientific questions out there to explore that there’s enough to fill everybody’s plate! It’s not going to be the end of the world if your ideas or projects don’t receive the kind of funding you desire. It’s ultimately pretty arbitrary :-) ! Find creative solutions to modify your project, or set more achievable goals. The other danger in monetizing scientific progress is the potential to inculcate an attitude of science for money. Doing science for the joy of it is much more satisfying than doing it for material gain, IMHO. But different people have different preferences. It’s striking a balance that counts.
  21. The business model of science leads us into this whole concept of patent wars and Intellectual Property issues. IMHO there’s much value in having a free-culture attitude to knowledge, such as that of the open-access and open-source movements. Imagine what the world would be like if Gandhi (see top-right) had patented Satyagraha, demanding arbitrary licensing fees or other forms of bondage! :-)
  22. It’s important to pursue science projects and conduct fairs and workshops even at the university level (just as much as it is emphasized in high school; I would say to an even greater degree actually). Just to keep the process of discovery and scientific spirit vibrant and alive, if for no other reason. Also, the more these activities reflect the inter-relationship between the three categories of science professionals and their work, the better. Institutions should recognize the need to encourage these activities for curricular credit, even if that means cutting down on other academic burdens. IMHO, on balance, the small sacrifice is worth it.
  23. Peer-review mechanisms currently reward originality. But at the same time, I think it’s important to reward repeatability/reproducibility, and to publish statistically insignificant (negative) findings. This not only helps remove bias in published research, but also helps keep the science community motivated in the face of a high failure rate in experiments.
  24. Students should learn the art of benchmarking progress on a smaller scale, i.e. in the experiments, projects, etc. that they do. In the grand scheme of things however, we should realize that we may never be able to see humongous shifts in how we are doing in our lifetimes! :-)

    Srinivasa Ramanujan


  25. A lot of what happens at Ivy League universities can be classified as voodoo and marketing. So it’s important not to fret if you can’t get into your dream university. The ability to learn lies within and, if appropriately tapped and channeled, can be used to accomplish great things regardless of where you end up studying. People who graduate from Ivy League institutes form a wide spectrum, including a surprising number who could easily be regarded as brain-dead. IMHO, what can be achieved depends far more on the person than on the institution he or she attends. Where there’s a will, there’s a way! :-) Remember that some of science’s most famous stalwarts, like Michael Faraday and Srinivasa Ramanujan, were largely self-taught!
  26. Understand the value of computing in science. Not only has this aspect been neglected at institutes (especially in Biology and Medicine), but it’s fast becoming indispensable because of the volume of data one has to sift and process these days. I’ve recently written about bioinformatics and computer programming here and here.
  27. It’s important to develop a level of honesty and integrity that can withstand the onward thrust of cargo-cult science.
  28. Learn to choose wisely who your mentors are. Factor in student-friendliness, the time they can spend with you, and what motivates them to pursue science.
  29. I usually find myself repelled by hero worship. But if you must, choose wisely who your scientific heroes are. Are they friendly to other beings and the environment? You’d be surprised how many evil scientists there are out there! :-)

I’m sure there are many, many points that I have missed and questions that I’ve left untouched. I’ll stop here though, and add new stuff as and when it occurs to me later. Send me your comments, corrections and feedback and I’ll put them up here!

I have academic commitments headed my way and will be cutting down on my blogular activity for a while. But don’t worry, not for long! :-)

I’d like to end now, by quoting one of my favorite photographers, George Steinmetz:

George Steinmetz: “… I find that there is always more to explore, to question and, ultimately, to understand …”


  1. Bialek, W., & Botstein, D. (2004). Introductory Science and Mathematics Education for 21st-Century Biologists. Science, 303(5659), 788-790. doi:10.1126/science.1095480

Copyright Firas MR. All Rights Reserved.

“A mote of dust, suspended in a sunbeam.”


Written by Firas MR

November 6, 2010 at 5:21 am

Reflect A Little On Your Standing As A Human Being

with 2 comments

How truly small man is

(An important note: to view this article in its proper form, readers will need to download this font and install it on their system. It is a font specially designed for easy reading on computer screens.)

Greetings, friends,

A thought came to my mind in the last few moments. We humans think of ourselves as ‘Ashraf-ul-Makhluqat’, the noblest of all creation, and indeed we are taught as much religiously too. Often we become quite proud of this. We forget that this title was not bestowed upon us for nothing. Are we truly worthy of it?

Truth be told, we will deserve to be called that only when our thinking and our deeds bear clear proof of it. But it seems to me that most of us neither realize this nor take any interest in it. We are so absorbed in living our day-to-day lives that we forget to play the role of ‘Ashraf-ul-Makhluqat’, and forget to pay attention to the things that, given some time, would at least let us attempt to play that role.

Yesterday I watched a talk by the famous photographer Joel Sartore. In it he explained that the pace at which we are destroying the natural environment far outstrips the pace at which we are discovering new species of living animals and plants. What an astonishing thing this is. Who knows how many natural wonders man will go on living to the very end without ever seeing or reflecting upon. He also mentioned that within the next ten years, fifty percent of all the world’s amphibian species may vanish into extinction. This is no trivial matter!

In the medical world too, we have only recently begun to understand genetic codes, and new ways of thinking about our diseases are emerging only now. Who knows what further progress lies ahead and what new things we will yet discover.

Do we realize how little we know about the living world?

Famous photographer Joel Sartore’s superb talk on protecting the natural environment.

This magnificent film by the renowned photographer Yann Arthus-Bertrand shows us what we are putting at stake. Watch the full film here.

The world of physics, too, teaches us about our surroundings. Consider this: whatever the event or object, it can be thought of in terms of a probability wave. This is a most curious idea, because the human mind does not work that way. It refuses to accept that a thing could exist in more than one place at the same instant. Yet experiments have found this to be possible (as in Wheeler’s Experiment), and scholars still debate the matter. And yes, experiments have also shown that two separate objects, however great the distance between them, can sometimes share a kind of connection called quantum entanglement. The mind refuses to believe it, and yet it is true. There are many other fascinating ideas besides, such as multiple-space-dimensions, time dilation, wormholes, and so on. Better still, we can ponder these things sitting at home. On some dark night, cast your gaze toward the stars in the sky and consider how long their light must have taken to reach your eyes. Are the stars you see still there after all that time, or have they passed away? Scanning the sky might seem a difficult task, but distant objects like the Orion Nebula are right there in front of you! Do take a look!

Do we realize how little we know about the physical universe?

The Elegant Universe - PBS NOVA

An excellent film on physics from PBS NOVA

In the world of mathematics we have recently learned that infinity comes in types; indeed, there are infinitely many types of infinity! One infinity can be larger or smaller than another. Consider two circles, A and B, one large and one small, so that one circumference is bigger than the other. Each circumference contains infinitely many points, so you might think the bigger circle’s infinity of points must outnumber the smaller circle’s; right? Surprisingly, Cantor showed that the two sets of points can actually be matched one-to-one, so those two infinities are equal, while the real numbers genuinely do form a larger infinity than the whole numbers. How fascinating. And there are countless such things we still have no inkling of today.

Do we realize how little we know about mathematics?
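One way to see the matching between the two circles’ points is a sketch like this, assuming for simplicity that A and B are drawn concentric about a common center O (the general case just adds a shift):

```latex
% Pair each point P of the small circle A with a point of the large
% circle B by following the ray from the shared center O outward:
f : A \to B, \qquad f(P) = \text{the intersection of the ray } OP \text{ with } B
% Every point of B is hit exactly once, so |A| = |B| even though the
% circumferences differ. The genuinely different sizes of infinity lie
% elsewhere: Cantor's diagonal argument gives |\mathbb{N}| < |\mathbb{R}|.
```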

The Story of Maths - BBC

The BBC’s magnificent film, The Story of Maths

Marcus du Sautoy explains the importance of symmetry

The day before yesterday I watched another talk, in which the speaker raised the point that every two weeks a person dies who takes their language and its literature with them out of this world. Such a person is the sole remaining speaker of their language, and after they are gone, their descendants lose their bond with that language forever, and with them so does everyone else in the world. The wisdom gathered over centuries and bound up in that language then vanishes, as it were, for good.

We sit idle while centuries of knowledge slip through our fingers; do we realize this?

Every two weeks a language passes away from this world

Truth be told, the answer to this question is usually no.

And rather than reflecting on such things, man is forever ready to waste his time and energy on things like quarrelling and waging war with one another, or, drunk on unscrupulous economic supremacy, crushing one another or tripping each other up, and so on and so forth.

Is this the mark of a wise creature? By behaving this way, are we not turning our backs on our role as ‘Ashraf-ul-Makhluqat’? At times it seems this will be the state of affairs until the very end; so long as we do not realize the need to set our life’s priorities right, so it will remain.

From this point another thought came to mind. Who knows how much of human intellect depends on past experience. That is, how much does the output of intelligence depend on the input of the past?

If a man were unaware of his past, of his surroundings, and of his own state and standing, would it be at all surprising if he wasted his time on petty, disagreeable things and forgot to play his true role? If we look closely, we can find this phenomenon in artificial intelligence programming as well. My conjecture is that many AI systems run this way. They must contain code that helps a robot gain a sense of the outside world; the moment it senses a change in its environment, it changes its behavior. Perhaps we too are just like this. Our consciousness and senses simply need to be awakened further.
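As a toy illustration of that sense-and-act idea (the names and readings here are purely hypothetical, not from any real robotics API):

```python
def choose_behavior(previous_reading, current_reading):
    """A reactive agent: change behavior the moment the environment changes."""
    if current_reading != previous_reading:
        return "adapt"       # sensed a change -> switch behavior
    return "continue"        # environment stable -> carry on as before

# Hypothetical temperature readings sensed over time:
readings = [20, 20, 25, 25, 18]
behaviors = [choose_behavior(a, b) for a, b in zip(readings, readings[1:])]
```

Past input drives present output: the agent’s next move depends entirely on what it has just sensed, which is the same dependence of intelligence on experience asked about above.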

I never imagined I would write such a long philosophical essay. How the time flew as I wrote! Well then, friends, that’s all for today. Until next time. Every so often we must take time out of this fast-paced life to think about fulfilling our role: do we truly deserve the title ‘Ashraf-ul-Makhluqat’ or not? Do we understand our Lord and His universe or not? And how do we wish to spend our time?

Before you go, here is a delightful English song on the same theme:

… The sky calls to us,
If we do not destroy ourselves,
We will one day venture to the stars …

Copyright Firas MR. All rights reserved.

On Literature Search Tools And Translational Medicine

with 2 comments

Courtesy danmachold@flickr (by-nc-sa license)

Howdy all!

Apologies for the lack of recent blogular activity. As usual, I’ve been swamped with academia.

A couple of interesting pieces on literature search strategies & tools caught my eye recently, some of them quite new to me. Do check them out:

  • Matos, S., Arrais, J., Maia-Rodrigues, J., & Oliveira, J. (2010). Concept-based query expansion for retrieving gene related publications from MEDLINE. BMC Bioinformatics, 11(1), 212. doi:10.1186/1471-2105-11-212


The most popular biomedical information retrieval system, PubMed, gives researchers access to over 17 million citations from a broad collection of scientific journals, indexed by the MEDLINE literature database. PubMed facilitates access to the biomedical literature by combining the Medical Subject Headings (MeSH) based indexing from MEDLINE, with Boolean and vector space models for document retrieval, offering a single interface from which these journals can be searched [5]. However, and despite these strong points, there are some limitations in using PubMed or other similar tools. A first limitation comes from the fact that keyword-based searches usually lead to underspecified queries, which is a main problem in any information retrieval (IR) system [6]. This usually means that users will have to perform various iterations and modifications to their queries in order to satisfy their information needs. This process is well described in [7] in the context of information-seeking behaviour patterns in biomedical information retrieval. Another drawback is that PubMed does not sort the retrieved documents in terms of how relevant they are for the user query. Instead, the documents satisfying the query are retrieved and presented in reverse date order. This approach is suitable for such cases in which the user is familiar with a particular field and wants to find the most recent publications. However, if the user is looking for articles associated with several query terms and possibly describing relations between those terms, the most relevant documents may appear too far down the result list to be easily retrieved by the user.

To address the issues mentioned above, several tools have been developed in the past years that combine information extraction, text mining and natural language processing techniques to help retrieve relevant articles from the biomedical literature [8]. Most of these tools are based on the MEDLINE literature database and take advantage of the domain knowledge available in databases and resources like the Entrez Gene, UniProt, GO or UMLS to process the titles and abstracts of texts and present the extracted information in different forms: relevant sentences describing a biological process or linking two or more biological entities, networks of interrelations, or in terms of co-occurrence statistics between domain terms. One such example is the GoPubMed tool [9], which retrieves MEDLINE abstracts and categorizes them according to the Gene Ontology (GO) and MeSH terms. Another tool, iHOP [10], uses genes and proteins as links between sentences, allowing the navigation through sentences and abstracts. The AliBaba system [11] uses pattern matching and co-occurrence statistics to find associations between biological entities such as genes, proteins or diseases identified in MEDLINE abstracts, and presents the search results in the form of a graph. EBIMed [12] finds protein/gene names, GO annotations, drugs and species in PubMed abstracts showing the results in a table with links to the sentences and abstracts that support the corresponding associations. FACTA [13] retrieves abstracts from PubMed and identifies biomedical concepts (e.g. genes/proteins, diseases, enzymes and chemical compounds) co-occurring with the terms in the user’s query. The concepts are presented to the user in a tabular format and are ranked based on the co-occurrence statistics or on pointwise mutual information. More recently, there has been some focus on applying more detailed linguistic processing in order to improve information retrieval and extraction. 
Chilibot [14] retrieves sentences from MEDLINE abstracts relating to a pair (or a list) of proteins, genes, or keywords, and applies shallow parsing to classify these sentences as interactive, non-interactive or simple abstract co-occurrence. The identified relationships between entities or keywords are then displayed as a graph. Another tool, MEDIE [15], uses a deep-parser and a term recognizer to index abstracts based on pre-computed semantic annotations, allowing for real-time retrieval of sentences containing biological concepts that are related to the user query terms.

Despite the availability of several specific tools, such as the ones presented above, we feel that the demand for finding references relevant for a large set of genes is still not fully addressed. This constitutes an important query type, as it is a typical outcome of many experimental techniques. An example is a gene expression study, in which, after measuring the relative mRNA expression levels of thousands of genes, one usually obtains a subset of differentially expressed genes that are then considered for further analysis [16,17]. The ability to rapidly identify the literature describing relations between these differentially expressed genes is crucial for the success of data analysis. In such cases, the problem of obtaining the documents which are more relevant for the user becomes even more critical because of the large number of genes being studied, the high degree of synonymy and term variability, and the ambiguity in gene names.

While it is possible to perform a composite query in PubMed, or use a list of genes as input to some of the IR tools described above, these systems do not offer a retrieval and ranking strategy which ensures that the obtained results are sorted according to the relevance for the entire input list. A tool more oriented to analysing a set of genes is microGENIE [18], which accepts a set of genes as input and combines information from the UniGene and SwissProt databases to create an expanded query string that is submitted to PubMed. A more recently proposed tool, GeneE [19], follows a similar approach. In this tool, gene names in the user input are expanded to include known synonyms, which are obtained from four reference databases and filtered to eliminate ambiguous terms. The expanded query can then be submitted to different search engines, including PubMed. In this paper, we propose QuExT (Query Expansion Tool), a document indexing and retrieval application that obtains, from the MEDLINE database, a ranked list of publications that are most significant to a particular set of genes. Document retrieval and ranking are based on a concept-based methodology that broadens the resulting set of documents to include documents focusing on these gene-related concepts. Each gene in the input list is expanded to its various synonyms and to a network of biologically associated terms, namely proteins, metabolic pathways and diseases. Furthermore, the retrieved documents are ranked according to user-defined weights for each of these concept classes. By simply changing these weights, users can alter the order of the documents, allowing them to obtain for example, documents that are more focused on the metabolic pathways in which the initial genes are involved.


(Quoted under the Creative Commons Attribution License.)

  • Kim, J., & Rebholz-Schuhmann, D. (2008). Categorization of services for seeking information in biomedical literature: a typology for improvement of practice. Brief Bioinform, 9(6), 452-465. doi:10.1093/bib/bbn032
  • Weeber, M., Kors, J. A., & Mons, B. (2005). Online tools to support literature-based discovery in the life sciences. Brief Bioinform, 6(3), 277-286. doi:10.1093/bib/6.3.277

I’m sure there are many other nice ones out there. Don’t forget to also check out the NCBI Handbook, another great resource …


On a separate note, a couple of NIH-affiliated authors have written some thought-provoking stuff about Translational Medicine:

  • Nussenblatt, R., Marincola, F., & Schechter, A. (2010). Translational Medicine – doing it backwards. Journal of Translational Medicine, 8(1), 12. doi:10.1186/1479-5876-8-12


The present paradigm of hypothesis-driven research poorly suits the needs of biomedical research unless efforts are spent in identifying clinically relevant hypotheses. The dominant funding system favors hypotheses born from model systems and not humans, bypassing the Baconian principle of relevant observations and experimentation before hypotheses. Here, we argue that this attitude has borne two unfortunate results: lack of sufficient rigor in selecting hypotheses relevant to human disease and limitations of most clinical studies to certain outcome parameters rather than expanding knowledge of human pathophysiology; an illogical approach to translational medicine.


A recent candidate for a post-doctoral fellowship position came to the laboratory for an interview and spoke of the wish to leave in vitro work and enter into meaningful in vivo work. He spoke of an in vitro observation with mouse cells and said that it could be readily applied to treating human disease. Indeed his present mentor had told him that was the rationale for doing the studies. When asked if he knew whether the mechanisms he outlined in the mouse existed in humans, he said that he was unaware of such information and upon reflection wasn’t sure in any event how his approach could be used with patients. This is a scenario that is repeated again and again in the halls of great institutions dedicated to medical research. Any self respecting investigator (and those they mentor) knows that one of the most important new key words today is “translational”. However, in reality this clarion call for medical research, often termed “Bench to Bedside” is far more often ignored than followed. Indeed the paucity of real translational work can make one argue that we are not meeting our collective responsibility as stewards of advancing the health of the public. We see this failure in all areas of biomedical research, but as a community we do not wish to acknowledge it, perhaps in part because the system, as it is, supports superb science. Looking this from another perspective, Young et al [2] suggest that the peer-review of journal articles is one subtle way this concept is perpetuated. Their article suggests that the incentive structure built around impact and citations favors reiteration of popular work, i.e., more and more detailed mouse experiments, and that it can be difficult and dangerous for a career to move into a new arena, especially when human study is expensive of time and money.


(Quoted under the Creative Commons Attribution License.)

Well, I guess that does it for now. Hope those articles pique your interest as much as they did mine. Until we meet again, adios :-) !

Copyright © Firas MR. All rights reserved.

Written by Firas MR

June 29, 2010 at 4:33 pm

Decision Tree Questions In Genetics And The USMLE

with 2 comments

Courtesy cayusa@flickr. (creative commons by-nc license)


Just a quick thought. It just occurred to me that some of the questions on the USMLE involving pedigree analysis in genetics are actually typical decision tree questions. The probability that a certain individual, A, has a given disease (e.g. Huntington’s disease) purely by random chance is simply the disease’s prevalence in the general population. But what if you considered the following questions:

  • How much genetic code do A and B share if they are third cousins?
  • If you suddenly knew that B has Huntington’s disease, what is the new probability for A?
  • What is the disease probability for A‘s children, given how much genetic code they share with B?

When I’d initially written about decision trees, it hadn’t occurred to me at the time how familiar all this stuff already was!

Apply a little Bayesian strategy to these questions and your mind is suddenly filled with all kinds of probability questions ripe for decision tree analysis:

  • If the genetic test I utilize to detect Huntington’s disease has a false-positive rate x and a false-negative rate y, now what is the probability for A?
  • If the pre-test likelihood is m and the post-test likelihood is n, now what is the probability for A?
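The first question above boils down to one application of Bayes’ theorem. A minimal sketch, with purely illustrative numbers (not from any real test for Huntington’s disease):

```python
def post_test_probability(pre_test_p, false_positive_rate, false_negative_rate):
    """P(disease | positive test result), via Bayes' theorem."""
    sensitivity = 1 - false_negative_rate               # P(test+ | disease)
    true_positives = pre_test_p * sensitivity
    false_positives = (1 - pre_test_p) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Suppose A's pre-test probability from the pedigree is 25%, and the
# test has a false-positive rate x = 2% and a false-negative rate y = 5%.
# A positive result then updates A's probability to about 94%.
p = post_test_probability(0.25, 0.02, 0.05)
```

Each branch of the decision tree is one term in the numerator or denominator, which is exactly why pedigree questions map onto tree diagrams so naturally.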

I find it truly amazing how so many geneticists and genetic counselors accomplish such complex calculations using decision trees without even realizing it! Don’t you :-) ?

Copyright © Firas MR. All rights reserved.

A Force Weaker Than Gravity?

with 2 comments

Courtesy laurenatclemson @ Flickr (attribution license)

Courtesy laurenatclemson @ Flickr (attribution license)

Just thinking aloud a question that’s been ringing in my head recently. Gravity is the weakest force that we know of. In flapping its tiny wings, a fly easily overcomes the gravitational pull of this gigantic earth that we inhabit. A massive airplane can carry hundreds of people on board as it cruises the skies.

But what is it that makes gravity so weak? I think the secret lies in the gravitational constant. What if there’s a force out there whose constant(s) make it so weak that we just haven’t experienced its direct effects yet? A force weaker than gravity?
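To put a number on “weak”, here’s a back-of-the-envelope comparison of the gravitational and electrostatic forces between two electrons, using standard SI constants. Since both forces fall off as 1/r², the separation cancels out of the ratio:

```python
G = 6.674e-11    # gravitational constant, N m^2 kg^-2
k = 8.988e9      # Coulomb constant, N m^2 C^-2
m_e = 9.109e-31  # electron mass, kg
q_e = 1.602e-19  # elementary charge, C

# F_grav = G m^2 / r^2 and F_elec = k q^2 / r^2, so r cancels:
ratio = (k * q_e**2) / (G * m_e**2)  # electric force is ~10^42 times stronger
```

That factor of roughly 10⁴² is why a fly’s wing muscles, powered ultimately by electromagnetic chemistry, beat the gravity of an entire planet.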

Could the Higgs field be a candidate for what I’m thinking about?

Copyright © Firas MR. All rights reserved.

Written by Firas MR

September 1, 2009 at 2:50 pm

Elegance In Inelegance

with 2 comments

Courtesy Lydia Elle @ Flickr (by-nc license)

Courtesy Lydia Elle @ Flickr (by-nc license)

I just finished a great lecture series on the history of mathematics by Dr. David Bressoud recently1. Remember how I once spoke about elegance in inelegance? How some people have argued (eg: Lee Smolin) that the universe just might be complex by nature? How mankind might just be wrong about looking for simple and thus elegant solutions to explain physical phenomena?

Well, I was pretty intrigued by some of the stuff I learned about Henri Poincaré‘s work in this regard. Poincaré is famous for a number of things, his Poincaré conjecture being the most obvious of them. A Russian math guru, Grigori Perelman, apparently proved this conjecture some years back and, among other peculiar things, declined not only the Fields medal but also a million-dollar prize for solving one of the toughest math problems ever known.

But I was particularly piqued by how Poincaré was fascinated by this idea of finding elegance and hidden patterns even where one might expect junk. Here are a couple of crude examples of what might be interesting questions:

Take a random set of 100 beads. Throw them on the floor and they scatter randomly. How many throws would you need before at least three beads on the floor form an (approximately) equilateral triangle when connected? How many throws would you need to find a cluster of beads of a certain shape or size?

That there is some sense of order even in randomness and chaos, is truly an enchanting concept.

Have any thoughts of your own? Do send in your feedback :-)!

1. Queen Of The Sciences (Lectures by David Bressoud)

Copyright © Firas MR. All rights reserved.

Written by Firas MR

August 31, 2009 at 11:49 pm

Why Equivalence Studies Are So Fascinating

with 4 comments

Bronze balance pans and lead weights from the Vapheio tholos tomb, circa 15th century BC. National Museum, Athens. Shot courtesy dandiffendale@Flickr. by-nc-sa license.

Objectives and talking points:

  • To recap basic concepts of hypothesis testing in scientific experiments. Readers should read-up on hypothesis testing in reference works.
  • To contrast drug vs. placebo and drug vs. standard drug study designs.
  • To contrast non-equivalence and equivalence studies.
  • To understand implications of these study designs, in terms of interpreting study results.


Howdy readers! Today I’m going to share with you some very interesting concepts from a fabulous book that I finished recently – “Designing Clinical Research – An Epidemiologic Approach” by Stephen Hulley et al. Fairly early on, the book discusses what are called “equivalence studies”. Equivalence studies are truly fascinating. Let’s see how.

When a new drug is tested for efficacy, there are multiple ways for us to do so.

A Non-equivalence Study Of Drug vs. Placebo

A drug can be compared to something that doesn’t have any treatment effect whatsoever – a ‘placebo’. Examples of placebos include sugar tablets, distilled water, inert substances, etc. Because pharmaceutical companies try hard to make drugs that have a treatment effect and that are thus different from placebos, the objective of such a comparison is to answer the following question:

Is the new drug any different from the placebo?

Note the emphasis on ‘any different’. As is usually the case, a study of this kind is designed to test for differences between drug and placebo effects in both directions1. That is:

Is the new drug better than the placebo?


Is the new drug worse than the placebo?

The boolean operator ‘OR’ is key here.

Since we cannot conduct such an experiment on all people in the target ‘population’ (e.g. all people with diabetes from the whole country), we conduct it on a random and representative ‘sample’ of this population (e.g. randomly selected diabetes patients from the whole country). Because of this, we cannot directly extrapolate our findings to the target population without first doing some fancy roundabout thinking and a lot of voodoo – a.k.a. ‘hypothesis testing’. Hypothesis testing is crucial to take into account random chance (error) effects that might have crept into the experiment.

In this experiment:

  • The null hypothesis is that the drug and the placebo DO NOT differ in the real world2.
  • The alternative hypothesis is that the drug and the placebo DO differ in the real world.

So off we go, with our experiment with an understanding that our results might be influenced by random chance (error) effects. Say that, before we start, we take the following error rates to be acceptable:

  1. Even if the null hypothesis is true in the real world, we would find that the drug and the placebo DO NOT differ only 95% of the time, purely by random chance. [Although this rate doesn't have a name, it is equal to (1 - Type 1 error)].
  2. Even if the null hypothesis is true in the real world, we would find that the drug and the placebo DO differ 5% of the time, purely by random chance. [This rate is also called our Type 1 error, or critical level of significance, or critical α level, or critical 'p' value].
  3. Even if the alternative hypothesis is true in the real world, we would find that the drug and the placebo DO differ only 80% of the time, purely by random chance. [This rate is also called the 'Power' of the experiment. It is equal to (1 - Type 2 error)].
  4. Even if the alternative hypothesis is true in the real world, we would find that the drug and the placebo DO NOT differ 20% of the time, purely by random chance. [This rate is also called our Type 2 error].

The strategy of the experiment is this:

If we are able to accept these error rates and show in our experiment that the null hypothesis is false (that is, ‘reject’ it), the only other hypothesis left on the table is the alternative hypothesis. It has then GOT to be true, and we thus ‘accept’ the alternative hypothesis.

Q: With what degree of uncertainty?

A: With the uncertainty that we might arrive at such a conclusion 5% of the time, even if the null hypothesis is true in the real world.

Q: In English please!

A: With the uncertainty that we might arrive at a conclusion that the drug DOES differ from the placebo 5% of the time, even if the drug DOES NOT differ from the placebo in the real world.

Our next question would be:

Q: How do we reject the null hypothesis?

A: We proceed by initially assuming that the null hypothesis is true in the real world (i.e. the drug’s effect DOES NOT differ from the placebo’s effect in the real world). We then use a ‘test of statistical significance‘ to calculate the probability, under this assumption, of observing a difference in treatment effect as large as or larger than that actually observed in the experiment. If this probability is <5%, we reject the null hypothesis. We do this with the belief that such a conclusion is within our pre-selected margin of error. Our pre-selected margin of error, as mentioned previously, is that we would be wrong about rejecting the null hypothesis 5% of the time (our Type 1 error rate)3.

If we fail to show that this calculated probability is <5%, we ‘fail to reject‘ the null hypothesis and conclude that a difference in effect has not been proven4.
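As a sketch of one such test of statistical significance, here is a simple permutation test in Python on entirely hypothetical outcome data (made-up changes in blood sugar). It directly computes the probability, assuming the null hypothesis is true, of seeing a difference as large as or larger than the one observed:

```python
import random

# Hypothetical data: change in blood sugar (mg/dL) per patient.
drug    = [-38, -45, -29, -41, -50, -33, -47, -36, -44, -39]
placebo = [-12, -20,  -8, -15, -22, -10, -18, -14, -16, -11]

observed = abs(sum(drug) / len(drug) - sum(placebo) / len(placebo))

# Permutation test: if the null hypothesis is true, the group labels
# are interchangeable, so we shuffle them many times and ask how often
# a difference at least as large as the observed one arises by chance.
rng = random.Random(42)
pooled = drug + placebo
trials, extreme = 10_000, 0
for _ in range(trials):
    rng.shuffle(pooled)
    a, b = pooled[:len(drug)], pooled[len(drug):]
    if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
        extreme += 1

p_value = extreme / trials
print(f"p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject the null hypothesis")
else:
    print("Fail to reject the null hypothesis")
```

With this made-up data the two groups barely overlap, so we would expect to reject the null hypothesis; with murkier data, the same procedure can just as easily end in “fail to reject”.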

A lot of the scientific literature out there is made up of drug vs. placebo studies. This kind of thing is fine if we do not already have an effective drug for our needs. Usually though, we already have a standard drug that we know works well. It is then of more interest to see how a new drug compares to our standard drug.

A Non-equivalence Study Of Drug vs. Standard Drug

These studies are conceptually the same as drug vs. placebo studies and the same reasoning for inference is applied. These studies ask the following question:

Is the new drug any different than the standard drug?

Note the emphasis on ‘any different’. As is often the case, a study of this kind is designed to test the difference between the two drugs in both directions1. That is:

Is the new drug better than the standard drug?

OR

Is the new drug worse than the standard drug?

Again, the boolean operator ‘OR’ is key here.

In this kind of experiment:

  • The null hypothesis is that the new drug and the standard drug DO NOT differ in the real world2.
  • The alternative hypothesis is that the new drug and the standard drug DO differ in the real world.

Exactly like we discussed before, we initially assume that the null hypothesis is true in the real world (i.e. the new drug’s effect DOES NOT differ from the standard drug’s effect in the real world). We then use a ‘test of statistical significance‘ to calculate the probability, under this assumption, of observing a difference in treatment effect as large as or larger than that actually observed in the experiment. If this probability is <5%, we reject the null hypothesis – with the belief that such a conclusion is within our pre-selected margin of error. Just to repeat ourselves here, our pre-selected margin of error is that we would be wrong about rejecting the null hypothesis 5% of the time (our Type 1 error rate)3.

If we fail to show that this calculated probability is <5%, we ‘fail to reject’ the null hypothesis and conclude that a difference in effect has not been proven4.

An Equivalence Study Of Drug vs. Standard Drug

Sometimes all you want is a drug that is as good as the standard drug. This can be for various reasons: the standard drug may be just too expensive, just too difficult to manufacture, just too difficult to administer, … and so on, whereas the new drug might not have these undesirable qualities yet retain the same treatment effect.

In an equivalence study, the incentive is to prove that the two drugs are the same. Like we did before, let’s explicitly formulate our two hypotheses:

  • The null hypothesis is that the new drug and the standard drug DO NOT differ in the real world2.
  • The alternative hypothesis is that the new drug and the standard drug DO differ in the real world.

We are mainly interested in proving the null hypothesis. Since this can’t be done4, we’ll be content with ‘failing to reject’ the null hypothesis. Our strategy is to design a study powerful enough to detect a difference close to 0 and then ‘fail to reject’ the null hypothesis. In doing so, although we can’t ‘prove’ for sure that the null hypothesis is true, we can nevertheless be more comfortable saying that it in fact is true.

In order to detect a difference close to 0, we have to increase the Power of the study from the usual 80% to something like 95% or higher. We want to maximize power to detect the smallest difference possible. Usually though, it’s enough if we are able to detect the largest difference that doesn’t have clinical meaning (eg: a difference of 4 mmHg on a BP measurement). This way we can compromise a little on Power and choose a less extreme figure, say 88% or something.

And then just as in our previous examples, we proceed with the assumption that the null hypothesis is true in the real world. We then use a ‘test of statistical significance‘ to calculate the probability, under this assumption, of observing a difference in treatment effect as large as or larger than that actually observed in the experiment. If this probability is <5%, we reject the null hypothesis – with the belief that such a conclusion is within our pre-selected margin of error. And to repeat ourselves yet again (boy, do we like doing this :-P ), our pre-selected margin of error is that we would be wrong about rejecting the null hypothesis 5% of the time (our Type 1 error rate)3.

If we fail to show that this calculated probability is <5%, we ‘fail to reject‘ the null hypothesis and conclude that although a difference in effect has not been proven, we can be reasonably comfortable saying that there is in fact no difference in effect.
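To get a feel for what raising Power costs, here is a back-of-the-envelope sketch in Python using the standard normal-approximation sample-size formula for a two-sided z-test. The numbers are assumptions for illustration only: a standard deviation of 10 mmHg and a smallest clinically meaningful difference of 4 mmHg:

```python
import math

# Per-group sample size for a two-sided z-test (normal approximation):
#   n = 2 * (z_alpha/2 + z_power)^2 * (sigma / delta)^2
# Assumed (hypothetical) numbers: sigma = 10 mmHg, smallest clinically
# meaningful difference delta = 4 mmHg, critical alpha = 0.05.
Z_ALPHA = 1.96  # two-sided 5% critical value of the standard normal

def n_per_group(z_power, sigma=10.0, delta=4.0):
    return math.ceil(2 * (Z_ALPHA + z_power) ** 2 * (sigma / delta) ** 2)

print("80% power:", n_per_group(0.8416), "patients per group")
print("95% power:", n_per_group(1.6449), "patients per group")
```

Pushing Power from 80% (z ≈ 0.84) to 95% (z ≈ 1.64) roughly doubles the required sample size under these assumptions, which is why equivalence studies often settle for an intermediate figure.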

So Where Are The Gotchas?

If your study isn’t designed or conducted properly (eg: without enough power, inadequate  sample size, improper randomization, loss of subjects to followup, inaccurate measurements, etc.)  you might end up ‘failing to reject’ the null hypothesis whereas if you had taken the necessary precautions, this might not have happened and you would have come to the opposite conclusion. Purely because of random chance (error) effects. Such improper study designs usually dampen any obvious differences in treatment effect in the experiment.

In a non-equivalence study, researchers, whose incentive it is to reject the null hypothesis, are thus forced to make sure that their designs are rigorous.

In an equivalence study, this isn’t the case. Since researchers are motivated to ‘fail to reject’ the null hypothesis from the get go, it becomes an easy trap to conduct a study with all kinds of design flaws and very conveniently come to the conclusion that one has ‘failed to reject’ the null hypothesis!

Hence, it is extremely important, more so in equivalence studies than in non-equivalence studies, to have a critical and alert mind during all phases of the experiment. Interpreting an equivalence study published in a journal is hard, because one needs to know the very guts of everything the research team did!

Even though we have discussed these concepts with drugs as an example, you could apply the same reasoning to many other forms of treatment interventions.

Hope you’ve found this post interesting :-) . Do send in your suggestions, corrections and comments!

Adios for now!

Copyright © Firas MR. All rights reserved.

Readability grades for this post:

Flesch reading ease score: 71.4
Automated readability index: 8.1
Flesch-Kincaid grade level: 7.4
Coleman-Liau index: 9
Gunning fog index: 11.8
SMOG index: 11

1. An alternative hypothesis for such a study is called a ‘two-tailed alternative hypothesis‘. A study that tests for differences in only one direction has an alternative hypothesis that is called a ‘one-tailed alternative hypothesis‘.
2. This situation is a good example of a ‘null’ hypothesis also being a ‘nil’ hypothesis. A null hypothesis is usually a nil hypothesis, but it’s important to realize that this isn’t always the case.
4. Note that we never use the term, ‘accept the null hypothesis’.

Does Changing Your Answer In The Exam Help?

with 8 comments

monty hall paradox

The Monty Hall Paradox

One of the 3 doors hides a car. The other two hide a goat each. In search of a new car, the player picks a door, say 1. The game host then opens one of the other doors, say 3, to reveal a goat and offers to let the player pick door 2 instead of door 1. Is there an advantage if the player decides to switch? (Courtesy: Wikipedia)

Hola amigos! Yes, I’m back! It’s been eons and I’m sure many of you may have been wondering why I was MIA. Let’s just say it was academia as usual.

This post is unique as it’s probably the first where I’ve actually learned something from contributors and feedback. A very critical audience and pure awesome discussion. The main thrust was going to be an analysis of the question, “If you had to pick an answer in an MCQ randomly, does changing your answer alter the probabilities to success?” and it was my hope to use decision trees to attack the question. I first learned about decision trees and decision analysis in Dr. Harvey Motulsky’s great book, “Intuitive Biostatistics“. I do highly recommend his book. As I pondered over the question, I drew a decision tree that I extrapolated from his book. Thanks to initial feedback from BrownSandokan (my venerable computer scientist friend from yore :P) and Dr. Motulsky himself, who was so kind as to write back to just a random reader, it turned out that my diagram was wrong and so was the original analysis. The problem with the original tree (that I’m going to maintain for other readers to see and reflect on here) was that the tree in the book is specifically for a math (or rather logic) problem called the Monty Hall Paradox. You can read more about it here. As you can see, the Monty Hall Paradox is a special kind of unequal conditional probability problem, in which knowing something for sure, influences the probabilities of your guesstimates. It’s a very interesting problem, and has bewildered thousands of people, me included. When it was originally circulated in a popular magazine,  “nearly 1000 PhDs” (cf. Wikipedia) wrote back to say that the solution put forth was wrong, prompting numerous psychoanalytical studies to understand human behavior. A decision tree for such a problem is conceptually different from a decision tree for our question and so my original analysis was incorrect.
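Since the Monty Hall Paradox has bewildered so many, a quick simulation makes the answer hard to argue with. This is a sketch in Python (seeded for reproducibility): staying wins about 1/3 of the time, switching about 2/3:

```python
import random

def monty_trial(switch, rng):
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a goat door that is neither the player's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
stay = sum(monty_trial(False, rng) for _ in range(n)) / n
swap = sum(monty_trial(True, rng) for _ in range(n)) / n
print(f"stay ≈ {stay:.3f}, switch ≈ {swap:.3f}")
```

The key is that the host’s choice is constrained by what he knows, so opening a door transfers information to the player; that is exactly what makes this an unequal conditional probability problem.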

So what the heck are decision trees anyway? They are basically conceptual tools that help you make the right decisions given a couple of known probabilities. You draw a line to represent a decision, and explicitly label it with a corresponding probability. To find the final probability for a number of decisions (or lines) in sequence, you multiply the probabilities along a path and add the probabilities across alternative paths. It takes skill and a critical mind to build a correct tree, as I learned. But once you have a tree in front of you, it’s easier to see the whole picture.

Let’s just ignore decision trees completely for the moment and think in the usual sense. How good an idea is it to change an answer on an MCQ exam such as the USMLE? The Kaplan lecture notes will tell you that your chances of being correct are better off if you don’t. Let’s analyze this. If every question has 1 correct option and 4 incorrect options (the total number of options being 5), then any single random choice gives you a probability of 20% for the correct choice and 80% for an incorrect choice. The odds are higher that on any given attempt, you’ll get the answer wrong. If your choice was correct the first time, it still doesn’t change these basic odds. You are still likely to pick an incorrect choice 80% of the time. Borrowing from the concept of “regression towards the mean” (repeated measurements of something yield values closer to said thing’s mean), we can apply the same reasoning to this problem. Since the outcomes in question are categorical (binomial to be exact), the measure of central tendency used is the Mode (defined as the most frequently occurring item in a series). In a categorical series – cat, dog, dog, dog, cat – the mode is ‘dog’. Since the Mode in this case happens to be the category “incorrect”, if you pick a random answer and repeat this multiple times, you are more likely to pick an incorrect answer! See, it all makes sense :) ! It’s not voodoo after all :D !
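The blind-guessing argument above can be checked with a short simulation (a sketch with a hypothetical 5-option question; which option is "correct" is arbitrary). Both the first random pick and a blind switch to a different random option land on the right answer about 20% of the time, so with no new information, switching neither helps nor hurts:

```python
import random

rng = random.Random(1)
options = ["A", "B", "C", "D", "E"]   # one correct option, four incorrect
correct = "C"                          # arbitrary choice for the simulation

n = 50_000
first = [rng.choice(options) for _ in range(n)]
hits_first = sum(g == correct for g in first) / n
print(f"first random guess correct ≈ {hits_first:.3f}")    # ≈ 0.20

# Blindly switch each guess to a different, randomly chosen option.
changed = [rng.choice([o for o in options if o != g]) for g in first]
hits_changed = sum(g == correct for g in changed) / n
print(f"after a blind switch   ≈ {hits_changed:.3f}")       # ≈ 0.20
```

Contrast this with Monty Hall: there, the host’s reveal injects information, which is what tilts the odds in favor of switching.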

Coming back to decision analysis, just as there’s a way to prove the solution to the Monty Hall Paradox using decision trees, there’s also a way to prove our point on the MCQ problem using decision trees. While I study to polish my understanding of decision trees, building them for either of these problems will be a work in progress. And when I’ve figured it all out, I’ll put them up here. A decision tree for the Monty Hall Paradox can be accessed here.

To end this post, I’m going to complicate our main question a little bit and leave it out in the void. What if on your initial attempt you have no idea which of the answers is correct or incorrect, but on your second attempt your mind suddenly focuses on a structural flaw in one or more of the options? Assuming that an option with a structural flaw can’t be correct, wouldn’t this be akin to Monty showing the goat? One possible structural flaw could be an option that doesn’t make grammatical sense when combined with the stem of the question. Does that mean you should switch? Leave your comments below!

Hope you’ve found this post interesting. Adios for now!

Copyright © Firas MR. All rights reserved.

Readability grades for this post:

Flesch reading ease score:  72.4
Automated readability index: 7.8
Flesch-Kincaid grade level: 7.3
Coleman-Liau index: 8.5
Gunning fog index: 11.4
SMOG index: 10.7


Intuitive Biostatistics, by Harvey Motulsky

The Monty Hall Problem: The Remarkable Story Of Math’s Most Contentious Brain Teaser, by Jason Rosenhouse



