Friday, November 23, 2012

Is the Medical Journal of Australia article flawed?

Contents:
Part one: Who actually had higher complaint rates, IMGs or Australian-trained doctors?
Part two: A retrospective study cannot reach a cause-and-effect conclusion
Part three: The consequences of the flawed article
Part four: Discussions on the Doc2Doc forums of the British Medical Journal (links)
Part five: How to present an unbiased comparison
(an example from the Australian Bureau of Statistics)


Part one: Who actually had higher complaint rates, IMGs or Australian-trained doctors?

In October 2012, the Medical Journal of Australia published an article titled "Risks of complaints and adverse disciplinary findings against international medical graduates in Victoria and Western Australia" (Katie Elkin, Matthew J Spittal, David M Studdert. MJA 2012; 197: 448–452). Web link to the article. The results presented in the article are flawed.

In the introduction of the article, the authors say: "Previous international studies have compared IMGs to their domestically trained counterparts in relation to patient outcomes of care and risks of complaints and disciplinary action. Results from these studies run the gamut: most have found no association, but IMG status has also been found to be associated with higher and lower complaint-related risks. No research of this kind has been conducted in Australia." As a result, the authors (1) "calculated counts and proportions to describe the registered doctors and the complaints," and (2) "fitted three logistic regression analyses using the doctor year level dataset" in order to "calculate [the risk] of complaints". The article's focus is the calculation of complaint risk based on doctor-years, but I am not sure whether the "proportions" were also tested statistically.
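To make the method concrete for readers, here is a minimal sketch, in Python, of what a logistic regression fitted to a doctor-year level dataset might look like. The variable names, counts and outcome values are entirely hypothetical placeholders; this is not the authors' code or data, which were not published.

```python
# A minimal, hypothetical sketch of a doctor-year level logistic regression.
# Each row is one doctor observed for one year; "complaint" is 1 if that
# doctor attracted at least one complaint in that year. All values below
# are invented for illustration and are NOT the MJA dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

doctor_years = pd.DataFrame({
    "img":       [0, 0, 0, 0, 0, 1, 1, 1, 1, 1] * 100,  # 1 = international medical graduate
    "complaint": [0, 0, 0, 0, 1, 0, 0, 0, 1, 1] * 100,  # 1 = complaint in that doctor-year
})

# Logistic regression of complaint occurrence on IMG status.
model = smf.logit("complaint ~ img", data=doctor_years).fit(disp=False)
print(model.summary())
print("Estimated odds ratio for IMG status:", np.exp(model.params["img"]))
```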

The article concludes: "Overall, IMGs are more likely than Australian-trained doctors to attract complaints to medical boards and adverse disciplinary findings, but the risks differ markedly by country of training." In particular, IMGs from Nigeria, Egypt, Poland, Russia, Pakistan, the Philippines and India attracted the highest complaint rates.

At the same time, the authors also asked a question: "Why is medical training in certain countries associated with higher risks of complaints? Unfortunately, our findings provide no clear answer." The article states that the "at risk" countries identified by the authors share some similar features: "English is not the primary language, and all have medical education and health systems that are quite different to Australia's. However, this explanation is incomplete because the same can be said of several other countries (eg, Bangladesh, China) whose trainees did not exhibit higher risks of attracting complaints. ... More research is needed to elucidate the reasons for the intercountry differences." It doesn't make sense, does it? This is a very interesting question, and I will return to it later.

After reading the article, I tested the differences between proportions using two different statistical methods: a two-sample t-test for proportions/percentages (note that this is not Student's t-test!) and Pearson's chi-squared test. Both tests gave the same results (see Tables 1 and 2). In contrast to the article, my results paint a different picture of whether International Medical Graduates (IMGs) had higher complaint rates than Australian-trained doctors, or vice versa.
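For readers who want to repeat this kind of check, the sketch below shows how such a comparison of proportions can be run in Python. The counts are placeholders rather than the published figures, and the z-test from statsmodels stands in for the two-sample proportion test described above; the real counts from Table 1 of the article would have to be substituted to repeat the analysis.

```python
# Hypothetical complaint counts for illustration only (NOT the published
# figures) - substitute the real counts from Table 1 to repeat the analysis.
from scipy.stats import chi2_contingency
from statsmodels.stats.proportion import proportions_ztest

img_complaints, img_doctors = 380, 11_000    # placeholder values
aus_complaints, aus_doctors = 1_450, 26_000  # placeholder values

# Two-sample test of proportions. proportions_ztest is a z-test; with
# large samples it is effectively equivalent to a two-sample t-test
# for proportions.
z_stat, p_z = proportions_ztest([img_complaints, aus_complaints],
                                [img_doctors, aus_doctors])

# Pearson's chi-squared test on the corresponding 2x2 table
# (complaints vs doctors without complaints, in each group).
table = [[img_complaints, img_doctors - img_complaints],
         [aus_complaints, aus_doctors - aus_complaints]]
chi2, p_chi2, dof, expected = chi2_contingency(table)

print(f"z = {z_stat:.3f}, p = {p_z:.4g}")
print(f"chi-squared = {chi2:.3f}, p = {p_chi2:.4g}")
```

For a 2x2 table the squared z statistic and the chi-squared statistic agree closely, which is why the two tests reach the same conclusions in Tables 1 and 2.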

As shown in Tables 1 and 2 and Figure 1, IMGs overall had a significantly lower complaint rate than Australian-trained doctors (t = 11.952, p < 0.0001; chi-squared = 142.5264, p < 0.0001).

The same statistical methods were also used to compare complaint rates between Australian-trained doctors and IMGs from individual countries. As shown in Tables 1 and 2 and Figure 1, IMGs from the following countries had significantly lower complaint rates than Australian-trained doctors (p < 0.05): the UK/Ireland, New Zealand, South Africa, Germany, China, Malaysia, Bangladesh, Hong Kong, the Netherlands and Iran. There was no statistical difference between Australian-trained doctors and IMGs from the following countries (p > 0.05): India, Sri Lanka, Iraq, Singapore, Pakistan, the Philippines, Nigeria and other unnamed countries.

My results also confirmed that IMGs from Egypt, Russia and Poland faced significantly higher complaint rates (p < 0.05).

As I mentioned above, the authors of the article said they could not explain why IMGs from several other countries (e.g., Bangladesh, China) did not exhibit higher risks of attracting complaints. To answer this question, I compared the data for Pakistan (higher risk according to the article) and Bangladesh (lower risk). The chi-squared results in Table 3 show no significant difference between the two countries (p > 0.05). Moreover, doctors from Pakistan actually had a slightly lower complaint rate than Australian-trained doctors (not higher, as in the article), although the difference was not statistically significant (Tables 1 and 2, Figure 1; p > 0.05). This may explain why the results in the article do not make sense. Importantly, I do not think there is a vast difference between the two countries in terms of culture and education, so I would expect their complaint rates to be similar.

In summary, IMGs in VIC and WA actually had significantly lower complaint rates than Australian-trained doctors, although IMGs from three countries had higher complaint rates.



Note:
# There was no statistical difference between Australian-trained doctors and IMGs from these countries.
* IMGs from these countries had significantly higher complaint rates than Australian-trained doctors.

The rest of the groups: complaint rates for IMGs from these countries were significantly lower than for the Australian-trained doctor group.
 
Questions to the Authors:

(1) How can you say that international medical graduates are more likely to be the subject of complaints to medical boards when the complaint rate against them was in fact significantly lower than that against Australian-trained doctors?

(2) Why didn't you publish the detailed data used for the doctor-year statistical analysis so that readers could scrutinise them?

(3) Why did you publish the results when you had great difficulty interpreting them?

(4) Why did you choose to publish the article at a time when new Australian medical graduates are facing unemployment?


Part two: A retrospective study cannot reach a cause-and-effect conclusion.

A senior statistician in Australia wrote to Medical Journal of Australia: "It is easy to come up with a simple example. We are interested in comparing the mortality due to airplane crashes of hobby pilots compared to commercial pilots. Using census data, we see 1,234 out of 123,456 hobby pilots died from plane crashes while 543,210 out of 6,543,210 commercial airline pilots died. These data show that the proportion of deaths is higher in the commercial pilots compared to the hobby pilots (1.0% versus 8.3%). Commercial pilots should be grounded!" Confused??
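The arithmetic in this made-up example is easy to check, and a formal significance test on the invented counts would come out overwhelmingly "significant". That is exactly the statistician's point: statistical significance does not rescue a comparison between groups that are not comparable. A quick sketch using the letter's own numbers:

```python
# Checking the made-up figures in the letter quoted above.
from scipy.stats import chi2_contingency

hobby_deaths, hobby_pilots = 1_234, 123_456
commercial_deaths, commercial_pilots = 543_210, 6_543_210

print(f"hobby pilots:      {hobby_deaths / hobby_pilots:.1%}")            # ~1.0%
print(f"commercial pilots: {commercial_deaths / commercial_pilots:.1%}")  # ~8.3%

# A chi-squared test on these counts is overwhelmingly "significant",
# yet the comparison remains meaningless: the two groups fly different
# aircraft, for different hours, in different conditions.
table = [[hobby_deaths, hobby_pilots - hobby_deaths],
         [commercial_deaths, commercial_pilots - commercial_deaths]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.0f}, p = {p:.3g}")
```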

The authors should have understood the problems with retrospective studies. Unlike prospective studies, retrospective studies are unable to reach cause-and-effect conclusions because the data are often collected for purposes other than research, such as administrative data. Data of that kind contain confounding factors, the unforeseen and unaccounted-for variables that may affect the results. Good examples are the comparison of hobby pilots with commercial pilots above and the comparison of IMGs with Australian-trained doctors. This is why errors due to confounding and bias are more common in retrospective studies, and why retrospective studies are so often criticised.

When the result of a statistical analysis doesn't make sense, it's important to take a good look at the design of the analysis. Comparing commercial pilots with hobby pilots is much like comparing apples and pears, because commercial pilots and hobby pilots operate different aircraft and fly in different conditions. The main defect of the comparison is not whether you use a difference of proportions or person-years, but that it inappropriately compares two groups of people who do different jobs involving different risk factors. Similarly, it is illogical to compare the mortality and morbidity of a 1000-bed teaching hospital with those of a 10-bed community hospital, or the mortality of a paediatric ward with that of a cancer palliative care ward. The same applies to the article published by the MJA. Although the authors noticed that something was unusual, they did not rethink the reliability of the method they used.


In the article, the authors say: "There was insufficient information on about three-quarters of complaints to specify the type of complaint. Clinical specialty was missing from the registers for 75% of registered doctors in the sample". This means that the proportions of general practitioners, general surgeons, orthopaedic surgeons, obstetricians, general physicians, cardiologists, cosmetic physicians, hospital interns, residents and registrars on the Australian-trained side and the IMG side of the analysis were unknown. This is critical information that directly affects the credibility of the statistical analysis. Suppose the proportion of surgeons among Australian-trained doctors is higher than among IMGs, while the proportion of GPs in the IMG group is higher than in the Australian-trained group. I would not be surprised if the complaint rate of the Australian-trained group were higher, because surgeons have higher mortality and morbidity rates. A result that effectively compares surgeons with GPs would be misleading and unhelpful.
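To see how badly an unknown case mix can distort a crude comparison, consider the toy calculation below. The specialty proportions and complaint rates are invented purely for illustration; nothing here comes from the MJA data.

```python
# A hypothetical illustration of case-mix confounding (all numbers invented).
# Within each specialty the two groups have IDENTICAL complaint rates, yet
# the crude (aggregate) rates differ purely because the specialty mix differs.
groups = {
    "Australian-trained": {"GP": (4_000, 0.02), "Surgeon": (6_000, 0.08)},
    "IMG":                {"GP": (8_000, 0.02), "Surgeon": (2_000, 0.08)},
}

for name, specialties in groups.items():
    complaints = sum(n * rate for n, rate in specialties.values())
    doctors = sum(n for n, _ in specialties.values())
    print(f"{name}: crude complaint rate = {complaints / doctors:.1%}")

# Output: Australian-trained 5.6% vs IMG 3.2%, even though a GP is a GP
# and a surgeon is a surgeon in both groups. The crude comparison reflects
# who does which job, not who practises better medicine.
```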

Moreover, the authors themselves were confused as to how to interpret the results. The discussion section of the article offers no solid material, only baseless assumptions. Unfortunately, they still chose to have the article published.


Conclusion:  
The statistical results in the article published by the Medical Journal of Australia cannot be trusted because of defective data collection and analysis. The article contributes nothing except to mislead the public.

Part three: The consequences of the flawed article


This article is causing serious adverse consequences. Politicians and administrators do not read the article itself, only the newspaper headlines. They believe that IMGs have higher risks of complaints and are therefore bad doctors, even though they have no idea what kind of complaints they are talking about. The authors of the article made it clear that they do not know the nature of the complaints, which could be anything. Many of the complaints could be nuisance complaints. For example:

A patient lodged a complaint against an IMG who was unable to tell him whether he had been bitten by a fire ant two days earlier and was therefore unable to help him get the Primary Industry Department to locate fire-ant nests in his backyard.

A patient complained about waiting too long in the waiting room because the doctor had been tied up for a prolonged period with cases such as chest pain or severe depression with suicidal thoughts.

An elderly man complained to a local newspaper because he was unhappy with his GP (an IMG), who had transferred him to a tertiary hospital where he was admitted to the ICU. He said the hospital was too far from home (in fact it was only a 30-minute drive) and that the GP could have treated him without sending him to hospital. After reading his complaint in the newspaper, I wrote a letter to the editor explaining that the GP had actually saved the elderly man's life. The newspaper published my letter.

Should IMGs be held responsible for that kind of nuisance complaint?

The danger is that politicians and administrators act on what has been reported in the media rather than on the facts. The publication of the article has caused a widespread media frenzy, which has severely damaged IMGs' reputation.


Part four: Discussions on the Doc2Doc forums of the British Medical Journal (links):




Part five: How to present an unbiased comparison: an example from the Australian Bureau of Statistics (ABS)

 




The question of who is more likely to attract complaints is much like the question of who is more likely to have a mental disorder, men or women: there is no simple answer, because the issue is complex. The above example from the ABS clearly shows how the authors should have presented unbiased research results. Sadly, the authors of Risks of complaints and adverse disciplinary findings against international medical graduates in Victoria and Western Australia failed to meet the same standard, for unknown reasons, while the Medical Journal of Australia, a professional journal, put itself in the shoes of the general media, such as newspapers and gossip magazines, neglecting its responsibility to safeguard the professional and ethical standards of its publications.