I found an interesting article while surfing the internet this morning.
Yamamoto Swan P, Nighswonger B, Boswell GL, & Stratton SJ. (2009). Factors Associated With False-Positive Emergency Medical Services Triage for Percutaneous Coronary Intervention. Western Journal of Emergency Medicine, 10(4).
This is a retrospective analysis of 12-lead cases from Orange County, California, Emergency Medical Services between February 2006 and June 2007.
For those of you who are not aware, in Southern California they use computerized interpretive algorithms to diagnose STEMI in the field. They’ve taken a lot of flak about this from the EMS intelligentsia, who interpret it (wrongly) as evidence that fire-based EMS is somehow inferior.
The truth is far more complicated than that.
In the system studied they used three different types of 12-lead monitors.
There were 548 patients who were triaged from the field for primary PCI at a STEMI Receiving Center.
19 cases were excluded from the study for various reasons.
393 patients (74.3%) had PCI with significant coronary lesions found.
The remaining 136 (25.7%) were considered false positives, which included 121 patients (22.9%) who were determined by the ED physician to have no need for PCI, and 15 patients (2.8%) with no culprit artery.
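For readers who want to see how those percentages fit together, here’s a quick back-of-the-envelope check. All of the raw counts come straight from the article; the only thing this sketch adds is the observation that the percentages are computed against the 529 cases remaining after exclusions:

```python
# Sanity-check of the triage numbers reported in the study.
triaged = 548
excluded = 19
analyzed = triaged - excluded  # 529 cases actually analyzed

true_positive = 393   # PCI performed, significant coronary lesions found
no_pci_needed = 121   # ED physician determined no need for PCI
no_culprit = 15       # cathed, but no culprit artery found
false_positive = no_pci_needed + no_culprit  # 136

def pct(n):
    """Percentage of analyzed cases, rounded to one decimal."""
    return round(100 * n / analyzed, 1)

print(pct(true_positive))   # 74.3
print(pct(false_positive))  # 25.7
print(pct(no_pci_needed))   # 22.9
print(pct(no_culprit))      # 2.8
```

The numbers check out against the article exactly, which tells you the denominators for all four percentages are the 529 post-exclusion cases, not the original 548.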
False positive cases were associated with the following variables:
- A specific brand of one of three monitors used in the system
- Sinus tachycardia
- Missing lead recording on 12-lead printout
- Atrial fibrillation
- Female gender
- Poor ECG baseline
A discussion ensues during which the authors make this important statement:
“Poor ECG baseline and failure to record all 12 leads for machine algorithm interpretation are false-positive associated variables that can be addressed by improved quality in field acquisition of 12-leads.”
It can’t be said often enough! That’s why I’m always harping on achieving excellent data quality!
The authors continue:
“Variables more difficult to address are sinus tachycardia and atrial fibrillation, which had a tendency to be wrongly interpreted by machine algorithm as acute MI.”
It would be interesting to know whether they lumped atrial flutter in with atrial fibrillation. Either way, the message is clear: the specificity of the computerized interpretive algorithms is highest when a tachycardia is not present.
Then the authors make this interesting statement:
“An unexpected finding was the association of one type of 12-lead machine with false-positive triage. Once this was re-validated by repeat data analysis, we advised the device manufacturer of the findings. Adjustments and changes to the algorithm for the device have been made and follow-up study is in progress. The type of monitor associated with false-positive 12-leads is not identified in this paper because the oversight Institutional Review Committee for the study requires that a written release from the manufacturer be obtained and such a release was declined.”
A few points here.
First, why in the world would the Institutional Review Committee for the study require a written release from the manufacturer? Research is research and outcomes are outcomes. It’s difficult to escape the conclusion that the IRC was afraid of getting sued.
Second, shame on the device manufacturer for not giving permission for the results to be published. They should just be happy that valuable feedback was given back to the company by the researchers so they can make improvements to their algorithm.
Third, it doesn’t take a rocket scientist to figure out which manufacturer’s 12-lead monitor was associated with a higher rate of false positives!
Let’s think about it. Two of the three manufacturers (ZOLL and Physio-Control) use the GE-Marquette 12SL interpretive algorithm. The third uses its own proprietary algorithm. Does it really take a college-level Introduction to Logic class to connect the dots?
The authors of course admit to some limitations, including this one which I found interesting:
“A more subtle limitation is that our definition of false-positive triage does not take into account patients who were determined by the receiving physicians to lack evidence for an acute STEMI MI, when in fact such an MI was present and PCI could have been a benefit.”
To be honest, I was just amazed that so many activations were canceled by the ED physicians! They acted as gatekeepers, which is extremely important considering the high number of false positive activations triggered by the paramedics in the system.
The fact that only 2.8% of patients who were cathed had no culprit artery is extremely impressive to me. I’m not even convinced that a canceled STEMI Alert (or whatever they call it in Southern California) should be called a “false positive”.
They also state:
“While left bundle branch block was analyzed within the study population 12-leads, there was not an association of this finding with false-positive triage; on the other hand the study was limited in that we did not test for false-positive association with left ventricular hypertrophy, pericarditis, left ventricular aneurysm, and early repolarization.”
This is in startling contrast to the study by Larson et al. that showed almost half of patients with LBBB had no culprit artery! Who knows, maybe the ED physicians in Southern California use Sgarbossa’s Criteria. On the other hand, the authors admit they didn’t study false negatives, so it’s entirely possible they just aren’t cathing the LBBBs the way they used to in Minnesota.
I say “used to” because it was Dr. Smith et al. that came up with excessive discordance as a marker of acute STEMI in LBBB.
Overall, a very interesting and worthwhile article. This is exactly the type of research that needs to be happening right now!