My Dominant Hemisphere

The Official Weblog of 'The Basilic Insula'

Posts Tagged ‘Medical Statistics’

The Doctor’s Apparent Ineptitude


‘ineptitude’ – via Steve Kay @ Flickr (BY-NC-ND license)

As a fun project, I’ve decided to frame this post as an abstract.

AIMS/OBJECTIVES:

To elucidate factors influencing perceived incompetence on the part of the doctor by the layman/patient/patient’s caregiver.

MATERIALS & METHODS:

Arm-chair pontification and a little gedankenexperiment based on prior experience with patients as a medical trainee.

RESULTS:

Preliminary analyses indicate widespread suspicion among patients about the ineptitude of doctors, no matter what the level of training. This is amply demonstrated in the following figure:

As one can see, perceived ineptitude forms a wide spectrum – from most severe (med student) to least severe (attending). The underlying perceptions of incompetence do not seem to abate at any level, however, and eyewitness testimonies include phrases such as ‘all doctors are inept; some more so than others’. At the med student level, exhausted patients find their anxious questions being greeted with a variety of responses, ranging from the dumb ‘I don’t know’, to the dumber ‘well, I’m not the attending’, to the dumbest ‘uhh…mmmm..hmmm <eyes glazed over, pupils dilated>’. Escape routes will be meticulously planned in advance, both by patients and, more importantly, by med students, to avert catastrophe.

As for more senior medics such as attendings, evasion seems to be just a matter of hiding behind statistics. A gedankenexperiment was conducted to demonstrate this. The setting: two patients, A and B, undergoing a certain surgical procedure, and their respective caregivers, C-A and C-B.

Patient A

Consent & Pre-op

C-A: (anxious) Hey doc, ya think he’s gonna make it?

Doc: It’s difficult to say and I don’t know that at the moment. There are studies indicating that 95% live and 5% die during the procedure though.

C-A: ohhh kay (slightly confused) (murmuring)…’All this stuff about knowing medicine. What does he know? One simple question and he gives me this? What the heck has this guy spent all these years studying for?!’

Post-op & Recovery

C-A: Ah, I just heard! He made it! Thank you doctor!

Doc: You’re welcome (smug, god-complex)! See, I told ya 95% live. There was no reason for you to worry!

C-A: (sarcastic murmur) ‘Yeah, right. Let him go through the pain of not knowing and he’ll see. Look at him, so full of himself – as if he did something special; luck was on our side anyway. Heights of incompetence!’

Patient B

Consent & Pre-op

C-B: (anxious) Hey doc, ya think he’s gonna make it?

Doc: It’s difficult to say and I don’t know that at the moment. There are studies indicating that 95% live and 5% die during the procedure though.

C-B: ohhh kay (slightly confused) (murmuring)…’All this stuff about knowing medicine. What does he know? One simple question and he gives me this? What the heck has this guy spent all these years studying for?!’

Post-op & Recovery

C-B: (angry, shouting numerous expletives) What?! He died on the table?!

Doc: Well, I did mention that there was a 5% death rate.

C-B: (angry, shouting numerous expletives) You (more expletives) incompetent quack! (murmuring) “How convenient! A lawsuit should fix him for good!”

The Doctor’s Coping Strategy

Although numerous psychology models can be applied to understand physician behavior, the Freudian model reveals some interesting material. Common defense strategies that help doctors include:

Isolation of affect: e.g. Resident tells Fellow, “you know that patient with the …well, she had a massive MI and went into VFib..died despite ACLS..poor soul…so hey, I hear they’re serving pizza today at the conference…(the conference about commercializing healthcare and increasing physician pay-grades for ‘a better and healthier tomorrow’)”

Intellectualization: e.g. Attending tells Fellow, “so you understand why that particular patient bled to death? Yeah, it was DIC in the setting of septic shock…plus he had a prior MI with an Ejection Fraction of 33%, so there was that component as well..but we couldn’t really figure out why the antibiotics didn’t work as expected…ID gave clearance…(ad infinitum)…so let’s present this at our M&M conference this week..”

Displacement: e.g. Caregiver yells at Fellow, “<expletives>”. Fellow yells at intern, “You knew that this was a case I had a special interest in, and yet you didn’t bother to page me? Unacceptable!…” Intern then yells at med student, “Go <expletives> disimpact Mr. X’s bowels…if I don’t see that done within the next 15 minutes, you’re in for a class! Go go go…clock’s ticking…tck tck tck!”

We believe there are other coping mechanisms that are important too, but in our observations these appear to be the most common. Of the uncommon ones, we think med students, as a group, are particularly vulnerable to Regression & Dissociation, after duly accounting for confounding factors.

All of these form a systematic, ego-syntonic pattern of behavior which, for reasons we are still exploring, is not included in the DSM-IV manual’s section on Personality Disorders.

CONCLUSIONS:

Patients and their caregivers seem to think that ALL doctors are fundamentally inept, period. Ineptitude follows a wide spectrum however – ranging from the bizarre to the mundane. Further studies (including but not limited to arm-chair pontification) need to be carried out to corroborate these startling results and the factors that we have reported. Other studies need to elucidate remedial measures that can be employed to save the doctor-patient relationship.

NOTE: I wrote this piece as a reminder of how the doctor-patient relationship is experienced from the patient’s side. In our business-as-usual frenzy, we as medics often don’t think about these things. And these things often DO matter a LOT to our patients!

Copyright © Firas MR. All rights reserved.

USMLE – Designing The Ultimate Questions


‘Question’ – shot courtesy crystaljingsr @ Flickr (Creative Commons Attribution, Non-Commercial license)

There are strategies that examiners employ to frame questions designed to stump you on an exam such as the USMLE. Many of these strategies are listed in the Kaplan Qbook, and I’m sure this stuff will be familiar to many. My favorite techniques are the ‘multi-step’ and the ‘bait-and-switch’.

The Multi-Step

Drawing on principles of probability theory, examiners will often frame questions that require you to know multiple facts and concepts to get the answer right. As a crude example:

“This inherited disease exclusive to females is associated with acquired microcephaly and the medical management includes __________________.”

Such a question would be re-framed as a clinical scenario (an outpatient visit) with other relevant clinical data such as a pedigree chart. To get the answer right, you would need:

  1. Knowledge of how to interpret pedigree charts and identify that the disease manifests exclusively in females.
  2. Knowledge of Mendelian inheritance patterns of genetic diseases.
  3. Knowledge of conditions that might be associated with acquired microcephaly.
  4. Knowledge of medical management options for such patients.

Now, taken individually, each of these steps – 1, 2, 3 and 4 – has, say, a 50% probability that you could get it right purely by random guessing. Combined, however, which is what’s needed to get the answer, the probability drops to 50% * 50% * 50% * 50% = 6.25% [the combined probability of independent events]. So now you know why they actually prefer multi-step questions over one- or two-liners! 🙂 Notice that this doesn’t necessarily have anything to do with testing your intelligence, as some might think. It’s just being able to recollect hard facts and then being able to put them together. They aren’t asking you to prove a math theorem or calculate the trajectory of a space satellite 😛 !
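(If you like to see the arithmetic spelled out, here’s a tiny Python sketch of that multiplication. The 50%-per-step guessing odds are the same crude assumption as above, nothing official.)

    # Chance of guessing a multi-step question right, assuming each step
    # is an independent 50/50 guess (a crude illustrative assumption).
    def combined_probability(p_per_step, n_steps):
        return p_per_step ** n_steps

    print(combined_probability(0.5, 1))   # one-step question: 0.5 (50%)
    print(combined_probability(0.5, 4))   # four-step question: 0.0625 (6.25%)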

The Bait-and-Switch

Another strategy is to stuff the question chock-full of irrelevant data. You could have paragraph after paragraph describing demographic characteristics, anthropometric data, and ‘bait’ data that’s planted there to persuade you to think along certain lines. As you grind yourself to ponder over these things, you are suddenly presented with an entirely unrelated sentence at the very end, asking a completely unrelated question! Imagine being presented with the multi-step question above with one added fly in the ointment. As you finally finish the half-page-length question, it ends with ‘<insert-similar-disease> is associated with the loss of this enzyme and/or body part: _______________’. Very tricky!

Questions like these give flashbacks and déjà vu of days from 2nd year med school, when that patient with a neck lump begins by giving you his demographic and occupational history. As an inexperienced med student you immediately begin thinking: ‘hmmm..okay, could the lump be related to his occupation? …hmm…’. But wait! You haven’t even finished the physical exam yet, let alone the investigations. As medics progress along their careers they tend to phase out this kind of analysis in favor of more refined ‘heuristics’, as Harrison’s puts it. A senior medic will often wait to formulate opinions until the investigations are done, and will focus on triaging problems and asking whether management options are going to change them. The keyword here is ‘triage’.

Just as a patient’s clinical information in a real office visit is filled with much irrelevant data, so too are many USMLE questions. That’s not to say that demographic data, etc. are irrelevant under all conditions. Certainly, an occupational history of being employed at an asbestos factory would be relevant in a case that looks like a respiratory disorder. But if the case looks like a respiratory disorder and the question mentions an occupational history of being employed as an office clerk, then this is less likely to be relevant to the case. Similarly, if it’s a case that overwhelmingly looks like an acute abdomen, then a stray symptom of foot pain is less likely to be relevant. Get my point? That is why many recommend reading the last sentence or two of a USMLE question before reading the entire thing. It helps you establish what exactly is the main problem that needs to be addressed.

Hope readers have found the above discussion interesting :). Adios for now!

Copyright © Firas MR. All rights reserved.

USMLE Scores – Debunking Common Myths


Source, Author and License

Lots of people have misguided notions as to the true nature of USMLE scores and what exactly they represent. In my opinion, this occurs in part due to a lack of interest in understanding the logistics of the exam. Another contributing factor could be the borderline-brainless, mathematically zeroed-out scientific culture most exam-goers happen to be cultivated in. Many, if not most, of these candidates, in their naive wisdom, got into Medicine hoping to rid themselves of numerical burdens forever!

The following, I hope, will help debunk some of these common myths.

Percentile? Uh…what percentile?

This myth is, without doubt, the king of them all 🙂 . It isn’t uncommon to find a candidate basking in the self-righteous glory of having scored a ’99 percent’ or, worse, a ’99 percentile’. The USMLE at one point used to provide percentile scores. That stopped sometime in the mid-to-late ’90s. Why? Well, the USMLE organization believed that scores were being given more weight in medics’ careers than they ought to have. This test is a licensure exam, period. That has always been the motto. Among other things, when residency programs started using the exam as a yardstick to differentiate and rank students, the USMLE saw this as contrary to its primary purpose and said enough is enough. To make such rankings difficult, the USMLE no longer provides percentile scores to exam takers.

The USMLE does have an extremely detailed FAQ on what the 2-digit (which people confuse as a percentage or percentile) and 3-digit scores mean. I strongly urge all test-takers to take a hard look at it and ponder about some of the stuff said therein.

Simply put, the way the exam is designed, it measures a candidate’s level of knowledge and provides a 3-digit score with an important property. This 3-digit score is an unfiltered indication of an individual’s USMLE know-how that, in theory, shouldn’t be influenced by variations in the content of the exam, be it across space (another exam center and/or questions from a different content pool) or time (exam content from the future or past). This means that, provided a person’s knowledge remains constant, he or she should in theory achieve the same 3-digit score regardless of where and when he or she took the test. Or so it’s supposed to work. The minimum 3-digit score required to ‘pass’ the exam is revised on an annual basis to preserve this space-time-independent nature of the score. For the last couple of years, the passing score has hovered around 185. A ‘pass’ score makes you eligible to apply for a license.

What, then, is the 2-digit score? For god knows what reason, the Federation of State Medical Boards (the folks who license medics in the US based on their USMLE scores) has a 2-digit format for a ‘pass’ score on the USMLE exam. Unlike the 3-digit score, this passing score is fixed at 75 and isn’t revised every year.

How does one convert a 3-digit score to a 2-digit score? The exact conversion algorithm hasn’t been disclosed (among lots of other things). But for matters of simplicity, I’m going to use a very crude approach to illustrate:

Equate the passing 3-digit score to 75. So if the passing 3-digit score is 180, then 180 = 75. 185 = 80, 190 = 85 … and so on.

I’m sure the actual relationship isn’t linear as shown above. For one, by definition, a 2-digit score ends at 99; 100 is a 3-digit number! So let’s see what happens with our example above:

190 = 85, 195 = 90, 200 = 95, 204 = 99. We’ve reached the 2-digit limit at this point. Any score higher than 204 will also be equated to 99. It doesn’t matter if you scored a 240 or a 260 on the 3-digit scale; you immediately fall into the 99 bracket along with the lesser folk!
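(For the programmatically inclined, here’s a toy Python sketch of this crude mapping. To be clear, this is my own illustration with an assumed passing score of 180, not the USMLE’s undisclosed algorithm.)

    # Toy 3-digit -> 2-digit conversion: the assumed passing score of 180 maps
    # to 75, rises linearly, and saturates at 99. Purely illustrative; the real
    # USMLE conversion algorithm has never been disclosed.
    def two_digit(three_digit, passing_three_digit=180):
        return min(99, 75 + (three_digit - passing_three_digit))

    for s in (180, 185, 190, 195, 204, 240, 260):
        print(s, "->", two_digit(s))
    # 204, 240 and 260 all collapse into the same 99 bracket.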

These distortions and constraints make the 2-digit score an unjust system for ranking test-takers, and today most residency programs use the 3-digit score to compare people. Because the 3-digit to 2-digit conversion changes every year, it makes sense to stick to the 3-digit scale, which makes comparisons between old-timers and new-timers possible, besides the obvious advantage of allowing comparisons between candidates who dealt with different exam content.

Making Assumptions And Approximate Guesses

The USMLE does provide Means and Standard Deviations on students’ score cards. But these statistics don’t strictly apply to them because they are derived from different test populations. The score card specifically mentions that these statistics are “for recent” instances of the test.

Each instance of an exam is directed at a group of people which form its test population. Each population has its own characteristics such as whether or not it’s governed by Gaussian statistics, whether there is skew or kurtosis in its distribution, etc. The summary statistics such as the mean and standard deviation will also vary between different test populations. So unless you know the exact summary statistics and the nature of the distribution that describes the test population from which a candidate comes, you can’t possibly assign him/her a percentile rank. And because Joe and Jane can be from two entirely different test populations, percentiles in the end don’t carry much meaning. It’s that simple folks.

You could, however, make assumptions and draw approximate conclusions about percentile ranks. Say, for argument’s sake, that all test populations have a mean of 220 and a standard deviation of 20 and conform to Gaussian statistics. Then a 3-digit score of:

220 = 50th percentile

220 + 20 = 84th percentile

220 + 20 + 20 = 97th percentile

[Going back to our ’99 percentile’ myth, and with the specific example we used, don’t you see how a score of 260 (with its 2-digit 99 equivalent) still doesn’t reach the 99th percentile? It’s amazing how severely people can delude themselves. A 99th percentile rank is no joke, and I find it particularly fascinating to observe how hundreds of thousands of people ludicrously claim to have reached this magic rank with a 2-digit 99 score. I mean, doesn’t the sheer commonality hint that something in their thinking is off?]

This calculator makes it easy to calculate a percentile based on known Mean and Standard Deviations for Gaussian distributions. Just enter the values for Mean and Standard Deviation on the left, and in the ‘Probability’ field enter a percentile value in decimal form (97th percentile corresponds to 0.97 and so forth). Hit the ‘Compute x’ button and you will be given the corresponding value of ‘x’.
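(If you’d rather skip the web calculator, the same arithmetic can be done in a couple of lines of Python with SciPy. This is just a sketch under the same made-up assumptions of a mean of 220 and a standard deviation of 20.)

    from scipy.stats import norm

    mean, sd = 220, 20  # made-up population parameters, as assumed above

    # Percentile rank for a given 3-digit score (the Gaussian CDF):
    for score in (220, 240, 260):
        print(f"{score} -> {norm.cdf(score, mean, sd) * 100:.1f}th percentile")

    # Score corresponding to a given percentile (the inverse CDF), which is
    # what the calculator's 'Compute x' button does:
    print(f"97th percentile score: {norm.ppf(0.97, mean, sd):.1f}")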

99th Percentile Ain’t Cake

Another point of note about a Gaussian distribution:

The distance from the 0th percentile to the 25th percentile is equal to the distance between the 75th and the 100th percentile; let’s say this distance is x. The distance between the 25th percentile and the 50th percentile is equal to the distance between the 50th and the 75th percentile; let’s say this distance is y. (Strictly speaking, the 0th and 100th percentiles of a Gaussian sit out at infinity, so think of x as the distance out to some extreme percentile such as the 99th.)

It so happens that x>>>y. In a crude sense, this means that it is disproportionately tougher for you to score extreme values than to stay closer to the mean. Going from a 50th percentile baseline, scoring a 99th percentile is disproportionately tougher than scoring a 75th percentile. If you aim to score a 99 percentile, you’re gonna have to seriously sweat it out!
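(You can see this directly from the z-scores. A quick sketch; since the true 0th and 100th percentiles of a Gaussian sit at infinity, I’ve used the 99th percentile as a stand-in for the extreme.)

    from scipy.stats import norm

    # Distance, in standard deviations, needed to move between percentiles
    # of a Gaussian distribution.
    y = norm.ppf(0.75) - norm.ppf(0.50)   # 50th -> 75th percentile
    x = norm.ppf(0.99) - norm.ppf(0.75)   # 75th -> 99th percentile

    print(round(y, 2))   # ~0.67 SD
    print(round(x, 2))   # ~1.65 SD: far more ground to cover out in the tail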

It’s the interval, stupid

Say there are infinite clones of you in this world and you’re all like the Borg. Each of you is mentally indistinguishable from the others, possessing ditto copies of USMLE know-how. Say each of you took the USMLE, and we then plot the frequencies of these scores on a graph. We’re going to end up with a Gaussian curve depicting this sample of clones, with its own mean score and standard deviation. This process is called ‘parametric sampling’ and the distribution obtained is called a ‘sampling distribution’.

The idea behind what we just did is to determine the variation that we would expect in scores even if knowhow remained constant – either due to a flaw in the test or by random chance.

The standard deviation of a sampling distribution is also called ‘standard error’. As you’ll probably learn during your USMLE preparation, knowing the standard error helps calculate what are called ‘confidence intervals’.
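(Here’s a little simulation sketch of the clone thought-experiment. All the numbers are invented for illustration: each clone has the same underlying know-how, and only chance wobbles the measured score.)

    import numpy as np

    rng = np.random.default_rng(0)

    true_score = 240        # the clones' identical underlying know-how (invented)
    chance_wobble = 6       # per-sitting variation from chance/test flaws (invented)

    # Each clone sits the exam once; knowledge is constant, only chance varies.
    clone_scores = rng.normal(true_score, chance_wobble, size=100_000)

    # The standard deviation of this sampling distribution is the standard error.
    print(round(clone_scores.mean(), 1))   # ~240.0
    print(round(clone_scores.std(), 2))    # ~6, i.e. the standard error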

A confidence interval for a given score can be calculated as follows (using the Z-statistic):-

True score = Measured score +/- 1.96 (standard error of measurement) … for 95% confidence

True score = Measured score +/- 2.58 (standard error of measurement) … for 99% confidence

For many recent tests, the standard error for the 3-digit scale has been 6 [every score card quotes a certain SEM (Standard Error of Measurement) for the 3-digit scale]. This means that, given a measured score of 240, we can be 95% certain that the true value of your performance lies between a low of 240 – 1.96(6) and a high of 240 + 1.96(6). Similarly, we can say with 99% confidence that the true score lies between 240 – 2.58(6) and 240 + 2.58(6). Note that this doesn’t mean every value inside the interval is equally likely to be the true score; values closer to the measured score are more plausible. The interval simply brackets the true score with the stated level of confidence.
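(A quick Python sketch of those two formulas, using the SEM of 6 quoted on recent score cards.)

    # Confidence interval for a measured 3-digit score using the Z-statistic
    # formulas above. SEM = 6 as quoted on recent score cards.
    def confidence_interval(measured, sem=6.0, z=1.96):
        return measured - z * sem, measured + z * sem

    print(confidence_interval(240))            # 95% CI: roughly (228.2, 251.8)
    print(confidence_interval(240, z=2.58))    # 99% CI: roughly (224.5, 255.5)
    print(confidence_interval(175))            # 95% CI: roughly (163.2, 186.8) -- straddles 180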

What this means is that, when you compare two individuals and see their scores side by side, you ought to consider what’s going on with their respective confidence intervals. Do they overlap? Even a nanometer of overlapping between CIs makes the two, statistically speaking, indistinguishable, even if in reality there is a difference. As far as the test is concerned, when two CIs overlap, the test failed to detect any difference between these two individuals (some statisticians disagree. How to interpret statistical significance when two or more CIs overlap is still a matter of debate! I’ve used the view of the authors of the Kaplan lecture notes here). Capiche?

Beating competitors by intervals rather than pinpoint scores is a good idea to make sure you really did do better than them. The wider the distance separating two CIs, the larger is the difference between them.

There’s a special scenario that we need to think about here. What about the poor fellow who just missed the passing mark? For a passing mark of 180, what of the guy who scored, say, 175? Given a standard error of 6, his 95% CI definitely does include 180, and there is no statistically significant (using a 5% margin of doubt) difference between him and another guy who scored just above 180. Yet this guy failed while the other passed! How do we account for this? I’ve been wondering about it, and I think that perhaps the pinpoint cutoffs used by the USMLE for passing exist as a matter of practicality. Using intervals to decide pass/fail results might be tedious, and maybe this is where scientific rigor gives way to pragmatism. Anyhow, I leave this question out in the void with the hope that it sparks discussions and clarifications.

If you care to give it a thought, the graphical subject-wise profile bands on the score card are actually confidence intervals (95%, 99% ?? I don’t know). This is why the score card clearly states that if any two subject-wise profile bands overlap, performance in these subjects should be deemed equal.

I hope you’ve found this post interesting if not useful. Please feel free to leave behind your valuable suggestions, corrections, remarks or comments. Anything 🙂 !

Readability grades for this post:

Kincaid: 8.8
ARI: 9.4
Coleman-Liau: 11.4
Flesch Index: 64.3/100 (plain English)
Fog Index: 12.0
Lix: 40.3 = school year 6
SMOG-Grading: 11.1

Powered by Kubuntu Linux 8.04

Copyright © 2006 – 2008 Firas MR. All rights reserved.

Quantifying Medicine – A Tricky Road


Source, Author and License

I have been really enjoying Feinstein’s “Principles of Medical Statistics” over the past couple of days, and today I felt like sharing a nifty and pragmatic lesson from the book. Now, I’d love to put up an entire chunk from the book right here, but I’m not sure that would do justice to the copyright, so I’ll stick to as little excerpting as possible. To honestly enjoy it though, I recommend reading the entire section. So grab yourself a copy at a local library or wherever and dive in. The chapter of interest is Chapter 6 in Unit 1.

Towards the end, there’s a section that goes into interesting detail about the merits and possible demerits of quantifying medicine. To demonstrate the delicate interplay of qualitative and quantitative descriptions in modern medicine, the author quotes a number of research studies that investigated how qualitative terms like “more”, “a lot more”, “a great deal”, “often”, etc. meant different things to different people. The researchers managed this using clever designs that let them correlate a given qualitative term with its corresponding quantitative estimate, and they did this for different groups of people – doctors, clerks, etc. Frustrated at the lack of consensus on the exact amount, probability, or percentage/percentile conveyed by mundane terms like these, one scientist even thought up a universal coding mechanism for day-to-day use. What frustrations, you ask? One example is where an ulcer deemed “large” on one visit to a doctor at the clinic could be deemed “small” on a subsequent visit to a different doctor, even though the ulcer might really have grown larger during this time.

It is quite clear, then, that qualitativeness in medicine often seems like a roadblock of some sort. No need for dismay, however, as Dr. Feinstein ends this chapter with a subsection called “virtues of imprecision”. I found this part to be the most worth savoring. He describes some of the advantages of using qualitative terms and why, on some occasions, they might in fact communicate better:

  1. Qualitative terms allow you to convey a message without resorting to painstaking detail. Detail that you might not have the ability to perceive or compute.
  2. Patients find qualitative terms more intuitive and so do doctors.
  3. Defining or maybe replacing qualitative terms with quantitative ones, potentially could lead to endless debates on where cut-offs would lie (why should 1001 come under ‘large’ and 1000 under ‘small’…hope you get the drift).
  4. Many statistical estimates like survival rates, etc. come out of potentially biased studies and it may be wrong to say that “good” survival is say 90% in 5 years and “better” is 99% in 5 years. Which is to say, that it may be wrong to give an impression of precision when in fact it isn’t present.
  5. Perhaps the most important and pragmatic lesson he gave, was about the false sense of security/insecurity numbers could give to either patients or doctors. Naivety plays devil here. He demonstrated this using the cancer staging system. Each cancer stage has some sort of survival statistic attached to it, right? So for example (the numbers here are solely arbitrary), for Stage I cancer, the 5-year survival is 90%. Stage III cancer in contrast is given a 5-year survival probability of 40%. A patient with Stage III cancer, will be given this information by his or her physician and management plans will be made. What the physician might not realize is that if Stage III is split into further sub-stages, say from Stage III-substage 1 to Stage III-substage 10, the survival probabilities range from 75% to 5%. The 40% statistic is the ‘average’ and may not be sufficiently relevant to this particular patient, who for all we know could belong to Stage III-substage 1. So, broad statistical numbers are not necessarily pertinent to individual cases.

Oh and did I mention excerpt? Ah, never mind. I’ve covered most of the juice paraphrasing anyway 🙂 .

Hope you’ve found this post interesting. And if you have, do send in your comments 🙂 .

Readability grades for this post:

Kincaid: 8.8
ARI: 9.1
Coleman-Liau: 11.8
Flesch Index: 62.3/100 (plain English)
Fog Index: 12.2
Lix: 40.4 = school year 6
SMOG-Grading: 11.3

Powered by Kubuntu Linux 7.10

Copyright © 2006 – 2008 Firas MR. All rights reserved.

Written by Firas MR

April 24, 2008 at 1:50 pm

Know Thy Numbers!


Source, Author and License

Being face to face with writer’s block, I suppose there isn’t anything particularly exciting I feel like writing about for today. I will therefore talk about a couple of things that I’ve been learning from biostatistics and that I feel many of my fellow medics would benefit from.

We all make comparisons between numbers. If ‘A’ weighs 100 kg and ‘B’ weighs 50 kg, we often say A is twice as heavy as B (wt. of A / wt. of B). We can also say A is 50 kg heavier than B (weight of A – weight of B). Is the same true for temperature in Fahrenheit? Is 100F twice as hot as 50F? Well interestingly, no! A temperature of 100F is 50F hotter than a temperature of 50F but not twice as hot. Therein lies a fundamental difference between two different kinds of ‘Dimensional‘ (otherwise called ‘Continuous‘) data:

  1. Interval data: a dimensional data set whose values are separated by equal differences. So if temperatures in Fahrenheit are listed as 1, 2, 3, 4, … we clearly know that as we progress from 1 to 2 and then to 3, every subsequent number in that set is separated from its predecessor by an equal interval.
  2. Ratio data: a dimensional data set that has the properties of an Interval data set and, in addition, has an absolute zero. Kelvin vs. Fahrenheit is a classic example: Kelvin has an absolute zero while Fahrenheit does not. Weight in kg, too, belongs to the class of Ratio data.

The implications of the above dictate how we can manipulate and handle our data. In making comparisons between interval data such as Fahrenheit, we don’t have a universal reference against which to compare two different values – in our example, 100F and 50F. The 0F standard is purely arbitrary. If, in a fit of mad-hatter rage, we suddenly said that from now on 0F is no longer 0F but 10F, our original values of 100F and 50F now become 110F and 60F. The difference (110-60) remains the same as before (100-50) but the ratio (110/60) changes from the original (100/50). All of this occurs because there isn’t anything stopping you from making a change to your arbitrary 0F standard.

Ratio data sets on the other hand have an absolute standard – the absolute zero. By definition, you can’t change it! This standard is not subject to arbitrary whims and fancies. Taking our Kelvin example, 100K is 50K hotter than a temperature of 50K (100-50). Not only that, it is absolutely fine for you to say 100K is twice as hot as 50K (100/50). Similarly for weight in kilograms, 0kg is absolute. And thus 100kg is 50kg heavier than a weight measurement of 50kg (100-50) and it is also twice as heavy as 50kg (100/50).
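(A tiny Python sketch of that idea, using the same numbers: shifting an arbitrary zero leaves differences intact but scrambles ratios, which is exactly why ratios only make sense for data with an absolute zero.)

    # Fahrenheit has an arbitrary zero: shift it and see what survives.
    a, b = 100, 50          # two temperatures in F
    shift = 10              # redefine the old 0F as 10F

    a2, b2 = a + shift, b + shift

    print(a - b, a2 - b2)   # 50 50 -> differences are preserved
    print(a / b, a2 / b2)   # 2.0 1.8333... -> ratios change with the arbitrary zero

    # Kelvin (or weight in kg) has an absolute zero that can't be shifted,
    # so both differences AND ratios are meaningful there.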

The crude analogy is that of a sailor out at sea. In order to navigate, he could use objects in the ocean, such as rocks, that could very well change position due to climatic conditions (~interval data). Or he could use the Pole Star to help him navigate (~ratio data).

Lessons Learned

You can compare interval data by calculating their difference. No matter what you set as your arbitrary standard, the difference will not change. For ratio data, in addition to calculating differences you also have the luxury of calculating ratios.

A Comedy of Errors

Most people don’t realize this, but the IQ score is an example of interval data. A guy scoring 200 on the test did not do twice as well as another who scored 100; he did 100 points better. Standards for a given IQ testing method are set arbitrarily. Not only that, different testing methods can have different arbitrary standards: the WAIS has a different standard from the Stanford-Binet. Remember that.

[In real life, the IQ score isn’t truly interval in nature. How is one to assume that there’s an equal interval of ‘intelligence’ between subsequent scores of 100, 101, 102, … ? It’s analogous to cancer staging actually. Stage IV disease is no doubt worse than Stage III disease which in turn is worse than Stage II disease, … You don’t necessarily progress by equal intervals of ‘disease-ness’ with each subsequent stage from I to IV. Similar to numbers for cancer staging, numbers for IQ scores are actually ‘Ordinal‘ data in disguise.]

Notes:

All data can be divided into the following types (from least informative to most informative):

  1. Categorical – Nominal : Distinct categories of data that you assign names to and that you can’t rank. E.g. smoker and non-smoker; Asian, African, American, Australian, etc.
  2. Categorical – Ordinal : Distinct categories of data that you can not only assign names to but can also rank. Intervals between ranks aren’t equal. E.g. gold, silver and bronze medals; class rank and cancer staging are also examples of ordinal data, the only difference being that they are disguised as numbers.
  3. Dimensional – Interval : Numerical data with ranks. Ranks have equal intervals between them. There is no absolute zero.
  4. Dimensional – Ratio : Interval data with an absolute zero.

References

  1. Biostatistics: The Bare Essentials – Geoffrey R. Norman and David L. Streiner
  2. Principles of Medical Statistics – Alvan R. Feinstein

Powered by Kubuntu Linux 7.10

Readability grades for this post:-

Kincaid: 6.3
ARI: 5.3
Coleman-Liau: 10.3
Flesch Index: 70.6/100
Fog Index: 9.8
Lix: 33.9 = below school year 5
SMOG-Grading: 9.7

Copyright © 2006 – 2008 Firas MR. All rights reserved.

Written by Firas MR

April 14, 2008 at 3:51 am