My Dominant Hemisphere

The Official Weblog of 'The Basilic Insula'

Posts Tagged ‘Health’

Infusions Redux, DNS And Cerebral Edema



There’s a book on fluid and electrolyte management that I’ve been reading recently. Called “Practical Guideline on Fluid Therapy” and authored (as is probably evident from the English used in the title) by a very Indian Sanjay Pandya, the book contains many interesting nuggets for day-to-day practice. Although, like most Indian books, it places little emphasis on applying one’s brain, it is nevertheless worth the time to peruse. Today I will be discussing two equations from the book and a question that came up in my mind about the usage of a specific fluid.

Calculating ECF volume deficit (in dehydration, etc.)

  1. If the patient’s previous body weight is known, all you gotta do to obtain the ECF deficit is find the difference between his past and present weight (1 kg of acute weight loss corresponds to roughly 1 liter of fluid lost).
  2. Another technique uses changes in the Hematocrit to discern ECF volume deficit. This method is applicable only if there is no hemorrhage, hemolysis or other situations involving loss of blood cells, the idea being that any change in blood volume is caused by plasma volume change. So if there’s dehydration and loss of ECF volume, plasma volume shrinks and causes the hematocrit to rise.

ECF Volume Deficit in liters = 0.2 * lean body weight * [(Current hematocrit/Desired hematocrit) - 1]

Can someone figure out the proof for the above equation and post it here? Like most other stuff, I absolutely hate rote-learning math formulas and prefer remembering their derivations. This equation is taking me some time to prove.

To help get started, here are a couple of possible pointers I’m currently exploring:

Total body water (TBW), when expressed as a percentage of total body weight (TBwt), varies by gender and age. In young adult men, for example:

TBW = 60% of TBwt

(TBW in liters, TBwt in kg)

Interestingly enough, TBW when expressed as a percentage of lean body weight (LBwt) is a constant and isn’t conditioned upon gender or age.

TBW = 70% LBwt

LBwt = (100/70) * TBW

= (100/70) * [(x/100) * TBwt]

= (x/70) * TBwt

x is the percentage of TBwt that is TBW

Plasma volume is related to blood volume as follows

Plasma volume = Blood volume * [(100 - Hematocrit)/100]

Plasma volume is also 1/4 of ECF volume. ECF is 1/3 of TBW. So plasma volume is 1/12 of TBW.
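For anyone who wants to play with the numbers while working on the proof, here’s a minimal Python sketch of the equation itself (the patient values are hypothetical; the 0.2 factor reflects normal ECF being roughly 20% of lean body weight):

```python
def ecf_deficit_liters(lean_wt_kg, current_hct, desired_hct):
    """ECF volume deficit estimated from the hematocrit rise.

    Assumes no hemorrhage/hemolysis, so red cell mass is constant and
    the hematocrit rises only because plasma (and ECF) volume shrank.
    """
    # Normal ECF volume is roughly 0.2 * lean body weight
    return 0.2 * lean_wt_kg * (current_hct / desired_hct - 1)

# Hypothetical patient: 60 kg lean weight, hematocrit 54% vs. a desired 45%
print(ecf_deficit_liters(60, 54, 45))  # ≈ 2.4 liters
```

A 20% rise in hematocrit (45 → 54) thus corresponds to losing about a fifth of the normal ECF volume.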

Calculating Electrolyte Infusion Rates

Change in plasma electrolyte concentration in mEq/L when 1 liter of infusate is given

= [Infusate electrolyte concentration in mEq/L - Actual electrolyte concentration in mEq/L] / (TBW + 1)

This one’s easy to derive. Taking Na+ as our electrolyte example,

Initial Na+ content = x * TBW

Initial Na+ concentration = (x * TBW)/TBW

Final Na+ content after infusing 1L infusate = (x * TBW) + {y * 1}

Final Na+ concentration = [(x * TBW) + {y}]/(TBW + 1)

Change in Na+ concentration due to infusion = [{(x * TBW) + y}/(TBW + 1)] – [(x * TBW)/TBW]

= (y – x)/(TBW + 1)

x = mEq/L of Na+ initially in the body

y = mEq/L of Na+ in the infusate

And voila! There you have it!
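The derived rule takes only a couple of lines of Python (the patient values below are hypothetical; normal saline’s 154 mEq/L of Na+ is the standard figure):

```python
def delta_na_per_liter(infusate_na, serum_na, tbw_liters):
    """Change in plasma Na+ (mEq/L) after infusing 1 L of infusate.

    (y - x) / (TBW + 1): the denominator is TBW + 1 because the
    infused liter itself adds to the distribution volume.
    """
    return (infusate_na - serum_na) / (tbw_liters + 1)

# Hypothetical: 1 L of normal saline (154 mEq/L Na+) given to a
# hyponatremic patient with serum Na+ 120 mEq/L and a TBW of 42 L
print(round(delta_na_per_liter(154, 120, 42), 2))  # 0.79 mEq/L rise
```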

And now for that promised question:

Given the fact that DNS (Dextrose Normal Saline) only stays in the ECF, would it be right to assume that it’s contraindicated in cerebral edema?

The interesting thing is that on exploring the scientific literature, I found that recent research shows that it isn’t just the shifting of fluid into the brain parenchyma that should be avoided when infusing fluid; hyperglycemia is a real danger as well. How hyperglycemia contributes to cerebral edema and especially in situations of cerebral ischemia is a topic of ongoing research and multiple plausible hypotheses are being investigated.

As per Pandya’s book, by the way, it is best to restrict glucose infusion to ≤ 0.5 grams/kg/hour when infusing any glucose-containing fluid, to avoid complications of hyperglycemia.
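To put that limit in practical terms, here’s a small sketch assuming DNS carries the usual 5% dextrose (50 g/L); the 70 kg patient is hypothetical:

```python
def max_infusion_rate_ml_per_hr(weight_kg, dextrose_percent=5.0,
                                limit_g_per_kg_hr=0.5):
    """Fastest rate of a glucose-containing fluid that stays within
    the 0.5 g/kg/hour glucose ceiling."""
    grams_per_ml = dextrose_percent / 100  # 5% dextrose = 0.05 g/ml
    return limit_g_per_kg_hr * weight_kg / grams_per_ml

# Hypothetical 70 kg adult on 5% DNS
print(round(max_infusion_rate_ml_per_hr(70)))  # 700 ml/hour
```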

Readability grades for this post:

Kincaid: 11.4
ARI: 12.4
Coleman-Liau: 11.2
Flesch Index: 57.0/100
Fog Index: 14.6
Lix: 46.9 = school year 8
SMOG-Grading: 12.4

Copyright © Firas MR. All rights reserved.

Infusion Confusion – How To Calculate Drug Infusion Rates


The erosion of math and analytical skills that occurs in medics is truly astounding. Not surprising, some might argue, what with it being such a memory-oriented field. One area that many medics struggle with is drug dosage calculations. In the ER, one often doesn’t have the luxury of time and instant thinking is absolutely critical. Numbers need to be played out in seconds and optimal drug regimens have to be formulated. I was helping a colleague understand calculations for dopamine infusion the other day and thought I’d share with you folks some of the things we talked about.

Dopamine is used especially in ER settings to increase perfusion/blood pressure by means of its vasopressor, inotropic and chronotropic effects. When re-establishing blood pressure in a patient, attention needs to be paid not only to the drugs that might be used but also to fluid replacement for any fluid lost from the body. Two questions need to be asked before starting a dopamine infusion:

  1. How much dopamine?
  2. How much fluid and how fast?

The usual dosage of dopamine is somewhere between 5-10 μg/kg/min. For the following example I’ll use 10 μg/kg/min.

1μg = 0.001mg.

For a patient weighing x kg, the dosage is therefore 0.01x mg/min. Now that you’ve established how much dopamine you need to infuse per minute, here comes the second part.

Suppose you intend to infuse y ml of fluid (as part of the dopamine infusion, i.e. aside from any other fluid infusions already in place). Say also that you’ve added z mg of dopamine to form the infusate. Dopamine is supplied in liquid form, so any amount of dopamine occupies a certain volume in ml, which in most situations is negligible.

y ml of infusate = volume of Normal Saline, etc. + volume of dopamine

If z mg of dopamine is contained in y ml of infusate,

0.01x mg dopamine is contained in [0.01x/z] * y ml of infusate.

Thus you’re interested in giving [0.01x/z] * y ml of infusate every minute and a simple formula is derived where:

rate of dopamine infusion in ml/min = [0.01x/z] * y

and therefore, z = [0.01x/(rate of infusion in ml/min)] * y

x = body weight in kg

z = amount of dopamine added in mg

y = total volume of infusate in ml

For any drug infusion:

rate of infusion in ml/min = [(total drug dose in mg/min)/(amount of drug added in infusate in mg)] * volume of infusate in ml

This infusate is typically given via an infusion set that specifies a unique drops per ml ratio. At our pediatrics ER for example, infusion sets come in two forms – microdrip infusion sets (1 ml = 60 drops) and macrodrip infusion sets (1 ml = 20 drops). Simply multiply the rate of infusion in ml/min with 60 or 20 to get the infusion rate in drops/min for micro and macro IV sets respectively.
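Putting the formula and the drop factors together in a short sketch (the 70 kg patient is hypothetical; 200 mg of dopamine in 500 ml is one commonly used dilution):

```python
def infusion_rate_ml_per_min(dose_mg_per_min, drug_mg_in_infusate,
                             infusate_volume_ml):
    """rate (ml/min) = (dose per minute / total drug in bag) * bag volume."""
    return dose_mg_per_min / drug_mg_in_infusate * infusate_volume_ml

# Hypothetical: 70 kg patient at 10 ug/kg/min -> 0.7 mg/min of dopamine,
# with 200 mg dopamine diluted in 500 ml of infusate
rate = infusion_rate_ml_per_min(0.7, 200, 500)
print(round(rate, 2))    # 1.75 ml/min
print(round(rate * 60))  # 105 drops/min on a microdrip (60 drops/ml) set
print(round(rate * 20))  # 35 drops/min on a macrodrip (20 drops/ml) set
```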

As seen from the formula above, when deciding to add a given amount of drug to form the infusate, three things need to be fixed first:-

  1. Dose of drug in the mg/min format (should be appropriate to the clinical condition of the patient).
  2. Total volume of infusate in ml (again, this depends on the clinical condition and hemodynamic stability of the patient).
  3. Speed or rate of fluid replacement in ml/min (this is important as sudden fluid-volume changes in the body can be problematic in certain cases and you want to go for a rate that is optimal, neither too slow nor too fast.)

And with that I end this post. Hope readers find this useful. Comments and corrections are welcome!

Readability grades for this post:

Kincaid: 8.4
ARI: 7.9
Coleman-Liau: 10.2
Flesch Index: 65.7/100 (plain English)
Fog Index: 12.7
Lix: 39.4 = school year 6
SMOG-Grading: 11.6


Written by Firas MR

June 13, 2008 at 1:41 pm

Tech bytes: Konqueror and Java – Opera on Kubuntu 8.04


Today’s tech bytes:

Some very nice people over at Kubuntu’s tech support IRC channel brought my attention to the fact that Kubuntu 8.04 doesn’t have an LTS version. Apparently, both the KDE 3.5.9 and KDE4 versions have not been given that status as KDE development has been in flux lately. So people, all those Powered by Kubuntu 8.04 LTS post-scripts in my previous posts stand corrected as …Kubuntu 8.04 (KDE 3.5.9).

Ever noticed that on a fresh install of Kubuntu 8.04, Konqueror’s Java behavior is a tad odd? No matter what you do, when you enable Tools>HTML Settings>Java, the Java setting never sticks. It stays on for the website you’re visiting but that’s it. As soon as you go to some other website, the Java setting resets back to disabled. Furthermore, when you restart Konqueror, it’s the same deal again.

One nice person over at Kubuntu’s IRC channel was kind enough to share his solution. Go to Settings>Configure Konqueror>Java & Javascript>Java Runtime Settings. Uncheck/disable the option ‘Use Security Manager’. Click ‘Apply’>’OK’. Now enable Java under Tools>HTML Settings>Java. Restart Konqueror. Yay! It sticks! Now, go back to Settings>Configure Konqueror>Java & Javascript>Java Runtime Settings. Check/enable the option ‘Use Security Manager’. Click ‘Apply’>’OK’. It’s a little weird, but doing so doesn’t cause the funny Java behavior to turn back on again, and having any sort of security on a web browser is good :) .

Tried the latest Opera 9.5 Beta 2/weekly snapshot on Kubuntu/Ubuntu? If you live outside of the US, there’s a good chance that your system locale settings are set to use something other than English US by default. It so happens that this causes Opera 9.5b2 to crash with a segmentation fault. In order to enjoy Opera 9.5b2, make sure you have Sun Java set as the default Java version (use this howto) and set your locale to English US (en_US). On Kubuntu 8.04 (KDE 3.5.9) do the following as discussed here :-

  1. Go to System Settings>Regional & Language>Country/Region & Language
  2. Click on the ‘Locale’ tab
  3. Click on ‘Select System Language’ and choose ‘English US’
  4. Click ‘Apply’
  5. Restart KDE (log out and then log in) for settings to take effect

I have found the flash support to be a little flaky, at least with Opera 9.5b2. Opera, for me, often suffers from this grey box phenomenon. One moment a flash video works perfectly, other times I’d find grey boxes with audio but no video. This becomes particularly the case when I’d have two or more tabs with flash video open in them and keep switching between them.

Quick user tip: To set your middle-click options, press the Shift key and then middle-click.

Is it just me or does Firefox 3 RC1 seem faster on Windows XP than on Ubuntu/Linux? For me, FF3RC1 on Kubuntu 8.04 still seems to take a lot more memory than on Windows. I guess their Linux development is slow or something.


Google announced their Google Health service recently. Privacy concerns abound.

That’s it for today folks. See ya!


Written by Firas MR

May 25, 2008 at 1:21 pm

USMLE Scores – Debunking Common Myths


Lots of people have misguided notions as to the true nature of USMLE scores and what exactly they represent. In my opinion, this occurs in part due to a lack of interest in understanding the logistic considerations of the exam. Another contributing factor could be the borderline-brainless, mentally zeroed scientific culture most exam goers happen to be cultivated in. Many if not most of these candidates, in their naive wisdom, got into Medicine hoping to rid themselves of numerical burdens forever!

The following, I hope, will help debunk some of these common myths.

Percentile? Uh…what percentile?

This myth is without doubt the king of all :-) . It isn’t uncommon to find a candidate basking in the self-righteous glory of having scored a ’99 percent’ or worse, a ’99 percentile’. The USMLE at one point used to provide percentile scores. That stopped sometime in the mid to late ’90s. Why? Well, the USMLE organization believed that scores were being given more weightage than they ought to in medics’ careers. This test is a licensure exam, period. That has always been the motto. Among other things, when residency programs started using the exam as a yardstick to differentiate and rank students, the USMLE saw this as contrary to its primary purpose and said enough is enough. To make such rankings difficult, the USMLE no longer provides percentile scores to exam takers.

The USMLE does have an extremely detailed FAQ on what the 2-digit (which people confuse as a percentage or percentile) and 3-digit scores mean. I strongly urge all test-takers to take a hard look at it and ponder about some of the stuff said therein.

Simply put, the way the exam is designed, it measures a candidate’s level of knowledge and provides a 3-digit score with an important property. This 3-digit score is an unfiltered indication of an individual’s USMLE know-how, one that in theory shouldn’t be influenced by variations in the content of the exam, be it across space (another exam center and/or questions from a different content pool) or time (exam content from the future or past). This means that provided a person’s knowledge remains constant, he or she should in theory achieve the same 3-digit score regardless of where and when he or she took the test. Or so it is supposed. The minimum 3-digit score required to ‘pass’ the exam is revised on an annual basis to preserve this space-time independent nature of the score. For the last couple of years, the passing score has hovered around 185. A ‘pass’ score makes you eligible to apply for a license.

What then is the 2-digit score? For god knows what reason, the Federation of State Medical Boards (the body that grants US medics licenses based on their USMLE scores) has a 2-digit format for a ‘pass’ score on the USMLE exam. Unlike the 3-digit score, this passing score is fixed at 75 and isn’t revised every year.

How does one convert a 3-digit score to a 2-digit score? The exact conversion algorithm hasn’t been disclosed (among lots of other things). But for matters of simplicity, I’m going to use a very crude approach to illustrate:

Equate the passing 3-digit score to 75. So if the passing 3-digit score is 180, then 180 = 75. 185 = 80, 190 = 85 … and so on.

I’m sure the relationship isn’t linear as shown above. For one, by definition, a 2-digit score ends at 99. 100 is a 3-digit number! So let’s see what happens with our example above:

190 = 85, 195 = 90, 200 = 95, 204 = 99. We’ve reached the 2-digit limit at this point. Any score higher than 204 will also be equated to 99. It doesn’t matter if you scored a 240 or 260 on the 3-digit scale. You immediately fall under the 99 bracket along with the lesser folk!

These distortions and constraints make the 2-digit score an unjust system to rank test-takers and today, most residency programs use the 3-digit score to compare people. Because the 3-digit to 2-digit scale conversion changes every year, it makes sense to stick to the 3-digit scale which makes comparisons between old-timers and new-timers possible, besides the obvious advantage in helping comparisons between candidates who deal/dealt with different exam content.
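The crude mapping above can be written out as a toy Python function. Remember this is purely illustrative: the real conversion algorithm is undisclosed, and the 180 = 75 anchor is our made-up example.

```python
def crude_two_digit(three_digit, passing_three_digit=180):
    """Toy 3-digit -> 2-digit conversion: the passing score maps to 75
    and the scale is capped at 99. NOT the real USMLE algorithm."""
    return min(75 + (three_digit - passing_three_digit), 99)

print(crude_two_digit(185))  # 80
print(crude_two_digit(204))  # 99 -- the cap is reached here
print(crude_two_digit(240))  # 99 -- indistinguishable from everyone above 204
```

The cap is precisely why a 2-digit 99 lumps together a huge range of 3-digit performances.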

Making Assumptions And Approximate Guesses

The USMLE does provide Means and Standard Deviations on students’ score cards. But these statistics don’t strictly apply to them because they are derived from different test populations. The score card specifically mentions that these statistics are “for recent” instances of the test.

Each instance of an exam is directed at a group of people which form its test population. Each population has its own characteristics such as whether or not it’s governed by Gaussian statistics, whether there is skew or kurtosis in its distribution, etc. The summary statistics such as the mean and standard deviation will also vary between different test populations. So unless you know the exact summary statistics and the nature of the distribution that describes the test population from which a candidate comes, you can’t possibly assign him/her a percentile rank. And because Joe and Jane can be from two entirely different test populations, percentiles in the end don’t carry much meaning. It’s that simple folks.

You could however make assumptions and arbitrary conclusions about percentile ranks though. Say for argument sake, all populations have a mean equal to 220 and a standard deviation equal to 20 and conform to Gaussian statistics. Then a 3-digit score of:

220 = 50th percentile

220 + 20 = 84th percentile

220 + 20 + 20 = 97th percentile

[Going back to our '99 percentile' myth and with the specific example we used, don't you see how a score equal to 260 (with its 2-digit 99 equivalent) still doesn't reach the 99 percentile? It's amazing how severely people can delude themselves. A 99 percentile rank is no joke and I find it particularly fascinating to observe how hundreds of thousands of people ludicrously claim to have reached this magic rank with a 2-digit 99 score. I mean, doesn't the sheer commonality hint that something in their thinking is off?]

This calculator makes it easy to calculate a percentile based on known Mean and Standard Deviations for Gaussian distributions. Just enter the values for Mean and Standard Deviation on the left, and in the ‘Probability’ field enter a percentile value in decimal form (97th percentile corresponds to 0.97 and so forth). Hit the ‘Compute x’ button and you will be given the corresponding value of ‘x’.
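If you’d rather skip the online calculator, Python’s standard library can do the same arithmetic (the mean of 220 and SD of 20 are the same arbitrary assumptions as above):

```python
from statistics import NormalDist

scores = NormalDist(mu=220, sigma=20)  # assumed Gaussian test population

# Percentile rank for a given 3-digit score
print(round(scores.cdf(240) * 100, 1))  # 84.1 -- i.e. the 84th percentile

# Score needed for a given percentile (the 'Compute x' step)
print(round(scores.inv_cdf(0.97), 1))   # 257.6 -- score at the 97th percentile
```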

99th Percentile Ain’t Cake

Another point of note about a Gaussian distribution:

The distance from the 0th percentile to the 25th percentile is also equal to the distance between the 75th and 100th percentile. Let’s say this distance is x. The distance between the 25th percentile and the 50th percentile is also equal to the distance between the 50th percentile and the 75th percentile. Let’s say this distance is y.

It so happens that x>>>y. In a crude sense, this means that it is disproportionately tougher for you to score extreme values than to stay closer to the mean. Going from a 50th percentile baseline, scoring a 99th percentile is disproportionately tougher than scoring a 75th percentile. If you aim to score a 99 percentile, you’re gonna have to seriously sweat it out!
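You can see the asymmetry numerically with the standard normal distribution. (A sketch; for a true Gaussian the 0th and 100th percentiles sit at infinity, so I compare the 50th→75th step against the 75th→99th step instead.)

```python
from statistics import NormalDist

z = NormalDist()  # standard normal: mean 0, SD 1

step_50_to_75 = z.inv_cdf(0.75) - z.inv_cdf(0.50)
step_75_to_99 = z.inv_cdf(0.99) - z.inv_cdf(0.75)

print(round(step_50_to_75, 2))  # 0.67 standard deviations
print(round(step_75_to_99, 2))  # 1.65 -- well over twice as far
```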

It’s the interval, stupid

Say there are infinite clones of you in this world and you’re all like the Borg. Each of you is mentally indistinguishable from the other – possessing ditto copies of USMLE know-how. Say that each of you took the USMLE and then we plot the frequencies of these scores on a graph. We’re going to end up with a Gaussian curve depicting this sample of clones, with its own mean score and standard deviation. This process is called ‘parametric sampling’ and the distribution obtained is called a ‘sampling distribution’.

The idea behind what we just did is to determine the variation that we would expect in scores even if knowhow remained constant – either due to a flaw in the test or by random chance.

The standard deviation of a sampling distribution is also called ‘standard error’. As you’ll probably learn during your USMLE preparation, knowing the standard error helps calculate what are called ‘confidence intervals’.

A confidence interval for a given score can be calculated as follows (using the Z-statistic):-

True score = Measured score +/- 1.96 (standard error of measurement) … for 95% confidence

True score = Measured score +/- 2.58 (standard error of measurement) … for 99% confidence

For many recent tests, the standard error for the 3-digit scale has been 6 [every score card quotes a certain SEM (Standard Error of Measurement) for the 3-digit scale]. This means that given a measured score of 240, we can be 95% certain that the true value of your performance lies between a low of 240 – 1.96 (6) and a high of 240 + 1.96 (6). Similarly we can say with 99% confidence that the true score lies between 240 – 2.58 (6) and 240 + 2.58 (6). These score intervals are probabilistically flat when graphed – each true score value within the intervals calculated has an equal chance of being the right one.
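A quick Python sketch of those interval calculations, using the SEM of 6 quoted on recent score cards (the overlap rule here follows the Kaplan-notes interpretation, which, as noted, not all statisticians share):

```python
def confidence_interval(score, sem=6.0, z=1.96):
    """True-score interval for a measured score; z=1.96 -> 95%, z=2.58 -> 99%."""
    return (score - z * sem, score + z * sem)

def intervals_overlap(score_a, score_b, sem=6.0, z=1.96):
    """True if the two scores' CIs overlap, i.e. the test can't tell them apart."""
    a_lo, a_hi = confidence_interval(score_a, sem, z)
    b_lo, b_hi = confidence_interval(score_b, sem, z)
    return a_lo <= b_hi and b_lo <= a_hi

lo, hi = confidence_interval(240)
print(round(lo, 2), round(hi, 2))   # 228.24 251.76
print(intervals_overlap(240, 250))  # True  -- statistically indistinguishable
print(intervals_overlap(240, 264))  # False -- a detectable difference
```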

What this means is that, when you compare two individuals and see their scores side by side, you ought to consider what’s going on with their respective confidence intervals. Do they overlap? Even a nanometer of overlapping between CIs makes the two, statistically speaking, indistinguishable, even if in reality there is a difference. As far as the test is concerned, when two CIs overlap, the test failed to detect any difference between these two individuals (some statisticians disagree. How to interpret statistical significance when two or more CIs overlap is still a matter of debate! I’ve used the view of the authors of the Kaplan lecture notes here). Capiche?

Beating competitors by intervals rather than pinpoint scores is a good idea to make sure you really did do better than them. The wider the distance separating two CIs, the larger is the difference between them.

There’s a special scenario that we need to think about here. What about the poor fellow who just missed the passing mark? For a passing mark of 180, what of the guy who scored, say 175? Given a standard error of 6, his 95% CI definitely does include 180 and there is no statistically significant (using a 5% margin of doubt) difference between him and another guy who scored just above 180. Yet this guy failed while the other passed! How do we account for this? I’ve been wondering about it and I think that perhaps, the pinpoint cutoffs for passing used by the USMLE exist as a matter of practicality. Using intervals to decide passing/failing results might be tedious, and maybe scientific endeavor ends at this point. Anyhow, I leave this question out in the void with the hope that it sparks discussions and clarifications.

If you care to give it a thought, the graphical subject-wise profile bands on the score card are actually confidence intervals (95%, 99% ?? I don’t know). This is why the score card clearly states that if any two subject-wise profile bands overlap, performance in these subjects should be deemed equal.

I hope you’ve found this post interesting if not useful. Please feel free to leave behind your valuable suggestions, corrections, remarks or comments. Anything :-) !

Readability grades for this post:

Kincaid: 8.8
ARI: 9.4
Coleman-Liau: 11.4
Flesch Index: 64.3/100 (plain English)
Fog Index: 12.0
Lix: 40.3 = school year 6
SMOG-Grading: 11.1

Powered by Kubuntu Linux 8.04


Calling For A Common Worldwide Medical Licensure Pathway


Medicine – Realm Of The Unknown

For ages, the medical sphere has been shrouded in mystery – for people outside of medicine, that is. And this hasn’t been too good for the medical profession, because many policy makers on matters of healthcare/medicine aren’t sufficiently acquainted with its many nuances to yield considered judgements. Sometimes you just can’t help getting the feeling that doctors have a language of their own, with a community so tightly knit that it borders on some sort of Illuminati-like cult.

Earlier, most of this mystery was limited to the knowledge base of medicine. Doctors were treated like gods walking on earth and people had no qualms whatsoever in having blind faith in them. With the rapid rise of web technologies however, doctors find themselves facing tough and pointed questions by their patients and policy makers about the decisions they make.

Some aspects, for the large part, still remain hidden away however. Stuff that affects policy decisions and how medical communities across the world interact with each other. Issues concerning licensure and taxonomy immediately come to mind.

An aspect of medicine that to this day remains an enigma for many ‘outsiders’ is the entire academic hierarchy that applies to medical systems across the globe. Many ‘insiders’ end up at their wits’ end too. The taxonomy is definitely confusing. What the heck is a Senior Registrar? Or for that matter, what in god’s name is the difference between house surgeons/officers, resident medical officers, civil surgeons, residents, interns, attendings, senior house officers and all that jargon? The world could definitely use a universal taxonomic architecture for medical systems, akin to the WHO’s International Classification of Diseases (ICD), to streamline stuff and make interactions between communities easier.

Licensure – One Too Many Exams For A Globalised Age

When medical students step into the medical world, being relatively new ‘insiders’ at this stage, very few are cognizant of the fact that their careers depend on having to satisfy licensure requirements before even thinking about pursuing higher education. Getting through medical school is one step. After that, students are required to go through long-winded licensure pathways before even beginning to gain higher training. Licensure serves as a quality control measure to ensure the safety of patients and is, arguably, a necessary evil.

Modern society depends on the exchange of ideas and talent between countries. The same applies to medicine as well. Unfortunately, due to the myriads of medical licensure exams across different countries, this kind of exchange and collaboration can become extremely tedious and at times impractical. Getting into higher training for the international trainee becomes a daunting task. Take the following hypothetical scenario:-

Dr. Underdog went to medical school in a country bordering Angola and got his local medical license after graduating and passing local licensure exams. He now intends to gain higher training in colorectal surgery (… of all things :-) ) in the US. Before getting into a higher training program he needs an American license. He proceeds to sit for the United States Medical Licensure Exam (USMLE) and passes all 4 component exams in this process with flying colors. Good for him, Dr. Underdog’s thirst for knowledge is relentless. After gaining qualifications as a colorectal surgeon, he is now interested in learning a highly advanced and experimental procedure involving cosmic radiation and bizarre tumor polyps :-P , only available in Australia. He is now required to pass the Australian Medical Council licensure exams before he begins. He goes ahead with that and gains the skills he’s always dreamed about :-) . By now, Dr. Underdog has been through at least a dozen different licensure exams. The exams he gave in the US and Australia weren’t directly related to the subjects he studied at those places. Seeing great potential in this emerging pioneer, a group of people from a country near Chile invite Dr. Underdog over. They’d like him to impart some of the training he received to a couple of their fortunate students. Unfortunately, he needs to clear their local licensure exams before he can begin. He candidly goes through that as well. In this new land, Dr. Underdog meets a fellow international doc who’s been through twice the number of licensure exams as he has, to get to a position as senior faculty member while also dealing with some mind blowing research – literally involving blowing stuff :-P , partly as an outlet for his bottled up frustrations over licensure systems. … See how tedious it can get?

If I’m interested in gaining specialized skills and/or knowledge available in only certain parts of the world, I need to get straight down to business without having to worry about sitting for multiple licensure exams. Sitting for multiple licensure exams is not only wasteful of time and money, it is also redundant. Most of these exams test the same content anyway. Most importantly, as an aspiring international trainee, my focus has to be on the exams directly related to the training I intend to pursue rather than random licensure tests.

Solution? A universal licensure pathway ratified by an international body such as the WHO that should be acceptable to all countries.

At the moment, a few agencies such the Medical Council of Canada and the Australian Medical Council are conducting joint licensure tests. Their efforts in this direction are laudable and should be wholeheartedly welcomed. Hopefully other countries will follow suit and some day a universal licensure pathway will become a reality. Until then, international trainees can only follow in Dr. Underdog’s tortuous footsteps!

Readability grades for this post:

Kincaid: 10.0
ARI: 11.2
Coleman-Liau: 14.4
Flesch Index: 53.2/100
Fog Index: 13.1
Lix: 48.9 = school year 9
SMOG-Grading: 12.0

Powered by Kubuntu Linux 8.04

Our Backyard


Over 80% of healthcare privately owned. Roughly 13% of the populace insured. That’s incredible, India!

Written by Firas MR

April 27, 2008 at 12:28 pm

Quantifying Medicine – A Tricky Road


I have been really enjoying Feinstein’s “Principles of Medical Statistics” the past couple of days, and today I felt like sharing a nifty and pragmatic lesson from the book. Now I’d love to put up an entire chunk from the book right here, but I’m not sure that would do justice to the copyright. So I’ll stick to as little of the excerpt as possible. But to honestly enjoy it, I recommend reading the entire section. So grab yourself a copy at a local library or whatever and dive in. The chapter of interest is Chapter 6 in Unit 1.

Towards the end, there’s a section that goes into interesting detail as to the merits and possible demerits of quantifying medicine. To demonstrate the delicate interplay of qualitative and quantitative descriptions in modern medicine, the author quotes a number of research studies that investigated how qualitative terms like “more”, “a lot more”, “a great deal”, “often”, etc. meant different things to different people. They were able to do this using clever research designs that allowed them to correlate a given qualitative term with its corresponding quantitative estimate, and they did this for different groups of people – doctors, clerks, etc. Frustrated at the lack of a consensus on the exact amount or probability or percentile/percentage and so on, of mundane terms like the above, one scientist even thought of a universal coding mechanism for day-to-day use. What frustrations, you ask? One example is where an ulcer deemed “large” on one visit to a doctor at the clinic could actually be deemed “small” on a subsequent visit to a different doctor, even though the ulcer might really have grown larger during this time.

It is quite clear, then, that qualitativeness in medicine often seems like a roadblock of some sort. Not to worry, however, as Dr. Feinstein ends this chapter with a subsection called “virtues of imprecision”. I found this part to be the most worth savoring. He describes some of the advantages of using qualitative terms and why on some occasions they might in fact be better for communication:-

  1. Qualitative terms allow you to convey a message without resorting to painstaking detail. Detail that you might not have the ability to perceive or compute.
  2. Patients find qualitative terms more intuitive and so do doctors.
  3. Defining or maybe replacing qualitative terms with quantitative ones, potentially could lead to endless debates on where cut-offs would lie (why should 1001 come under ‘large’ and 1000 under ‘small’…hope you get the drift).
  4. Many statistical estimates like survival rates, etc. come out of potentially biased studies and it may be wrong to say that “good” survival is say 90% in 5 years and “better” is 99% in 5 years. Which is to say, that it may be wrong to give an impression of precision when in fact it isn’t present.
  5. Perhaps the most important and pragmatic lesson he gave, was about the false sense of security/insecurity numbers could give to either patients or doctors. Naivety plays devil here. He demonstrated this using the cancer staging system. Each cancer stage has some sort of survival statistic attached to it, right? So for example (the numbers here are solely arbitrary), for Stage I cancer, the 5-year survival is 90%. Stage III cancer in contrast is given a 5-year survival probability of 40%. A patient with Stage III cancer, will be given this information by his or her physician and management plans will be made. What the physician might not realize is that if Stage III is split into further sub-stages, say from Stage III-substage 1 to Stage III-substage 10, the survival probabilities range from 75% to 5%. The 40% statistic is the ‘average’ and may not be sufficiently relevant to this particular patient, who for all we know could belong to Stage III-substage 1. So, broad statistical numbers are not necessarily pertinent to individual cases.

Oh and did I mention excerpt? Ah, never mind. I’ve covered most of the juice paraphrasing anyway :-) .

Hope you’ve found this post interesting. And if you have, do send in your comments :-) .

Readability grades for this post:

Kincaid: 8.8
ARI: 9.1
Coleman-Liau: 11.8
Flesch Index: 62.3/100 (plain English)
Fog Index: 12.2
Lix: 40.4 = school year 6
SMOG-Grading: 11.3

Powered by Kubuntu Linux 7.10

Written by Firas MR

April 24, 2008 at 1:50 pm
