FROM CHEST

Automated scoring data from two type-3 home sleep test monitors misclassified the severity of obstructive sleep apnea, especially in patients with mild to moderate disease, researchers reported in the March issue of Chest.

“Agreement between automated and manual scoring of home sleep tests varies as a function of the portable device and definition of disordered breathing event used,” said Dr. Nisha Aurora and her associates at the Johns Hopkins University in Baltimore. “Although modest agreement exists between automated and manual scoring, input by a sleep specialist or a certified polysomnologist in reviewing home sleep study results may help to improve diagnostic accuracy and classification of OSA [obstructive sleep apnea], particularly if there is mild disease.”

About 10%-15% of the general population has OSA, and home sleep testing has become an important part of ambulatory care for the condition. The American Academy of Sleep Medicine recommends that raw data from home sleep test devices be reviewed by either a board-certified sleep specialist or a clinician who fulfills eligibility criteria for the sleep medicine certification examination. Few studies, however, have actually compared automated and manual scoring data from these devices, the researchers said (Chest 2015;147:719-27).

For their analysis, Dr. Aurora and her associates evaluated automated and manual scoring data for 200 patients without a previous OSA diagnosis who used one of two home sleep test monitors: the ApneaLink Plus monitor from ResMed or the Embletta device from Embla Systems. To assess the apnea-hypopnea index (AHI), the researchers defined disordered-breathing events based on both the >3% and >4% thresholds for oxygen desaturation.
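To make the AHI definitions concrete, the sketch below shows one way an apnea-hypopnea index could be computed from scored events under a chosen desaturation threshold. It is an illustrative assumption for readers unfamiliar with the metric; the function name, inputs, and example numbers are not the devices' or the authors' actual scoring algorithms.

```python
# Illustrative sketch (not the monitors' proprietary scoring): an AHI counts
# apneas plus hypopneas per hour of recording, where a candidate hypopnea
# qualifies only if its oxygen desaturation exceeds the chosen threshold.

def apnea_hypopnea_index(apnea_count, hypopnea_desaturations, recording_hours,
                         desat_threshold=3.0):
    """Return disordered-breathing events per hour of recording.

    apnea_count: number of scored apneas
    hypopnea_desaturations: desaturation drops (%) for candidate hypopneas
    recording_hours: monitoring time in hours
    desat_threshold: 3.0 or 4.0, mirroring the >3% / >4% definitions in the study
    """
    qualifying_hypopneas = sum(1 for drop in hypopnea_desaturations
                               if drop > desat_threshold)
    return (apnea_count + qualifying_hypopneas) / recording_hours

# Hypothetical example: 12 apneas and 6 candidate hypopneas over 7 hours
drops = [2.5, 3.2, 4.1, 5.0, 3.8, 2.9]
print(apnea_hypopnea_index(12, drops, 7.0, desat_threshold=3.0))  # 4 hypopneas qualify
print(apnea_hypopnea_index(12, drops, 7.0, desat_threshold=4.0))  # 2 hypopneas qualify
```

As the example shows, the same recording yields a lower AHI under the stricter 4% threshold, which is why results are reported separately for each definition.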

Automated scoring consistently undercounted disordered-breathing events relative to manual scoring, regardless of which threshold was used to define AHI, the researchers found. For the ApneaLink Plus monitor, the average difference in AHI between manual and automated scoring was 6.1 events per hour (95% confidence interval, 4.9-7.3) for the 3% threshold and 4.6 events per hour (95% CI, 3.5-5.6) for the 4% threshold, they reported. For the Embletta monitor, manual and automated scoring differed by an average of 5.3 events per hour (95% CI, 3.2-7.3) and 8.4 events per hour (95% CI, 7.2-9.6) for the 3% and 4% thresholds, respectively.
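The agreement figures above are paired mean differences with confidence intervals. The study's exact statistical methods are not detailed here; the sketch below shows one generic way such a mean manual-minus-automated difference and a normal-approximation 95% CI could be computed, using made-up paired values.

```python
# Illustrative sketch of a paired mean difference (manual minus automated AHI)
# with a normal-approximation 95% confidence interval; the data are hypothetical.
import numpy as np

def mean_difference_ci(manual_ahi, automated_ahi, z=1.96):
    """Return the mean paired difference and its approximate 95% CI."""
    diffs = np.asarray(manual_ahi, dtype=float) - np.asarray(automated_ahi, dtype=float)
    mean = diffs.mean()
    se = diffs.std(ddof=1) / np.sqrt(len(diffs))  # standard error of the mean difference
    return mean, (mean - z * se, mean + z * se)

# Hypothetical paired AHI values (events/hour) for a handful of patients
manual = [14.2, 8.5, 22.1, 31.0, 6.3]
automated = [9.0, 5.1, 17.4, 26.2, 3.9]
print(mean_difference_ci(manual, automated))
```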

The difference between automated and manual scoring narrowed substantially when considering oxygen desaturation index instead of AHI, the researchers reported.

The investigators then used AHI to assess OSA severity. For the ApneaLink Plus monitor, relying on automated readings led to a <10% rate of clinically significant misclassification of OSA severity, regardless of which oxygen desaturation threshold was used. For the Embletta device, rates of clinically significant misclassification were 29% and 41% for the 3% and 4% thresholds, respectively. "As expected, disease misclassification was most notable for subjects with mild disease irrespective of the device used," they wrote.
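Severity misclassification here means that automated and manual scoring place the same patient in different OSA severity bands. The sketch below applies the conventional AHI cutoffs and counts category disagreements; treating any category change as misclassification is an assumption for illustration, and the study's definition of "clinically significant" misclassification may differ.

```python
# Illustrative sketch: map AHI to conventional OSA severity categories and count
# how often automated and manual scoring land in different categories.
# The disagreement rule and the data below are assumptions, not the study's method.

def osa_severity(ahi):
    """Conventional cutoffs: <5 none, 5 to <15 mild, 15 to <30 moderate, >=30 severe."""
    if ahi < 5:
        return "none"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

def misclassification_rate(manual_ahi, automated_ahi):
    pairs = list(zip(manual_ahi, automated_ahi))
    mismatches = sum(1 for m, a in pairs if osa_severity(m) != osa_severity(a))
    return mismatches / len(pairs)

# Hypothetical paired AHI values
manual = [14.2, 8.5, 22.1, 31.0, 6.3]
automated = [9.0, 5.1, 17.4, 26.2, 3.9]
print(misclassification_rate(manual, automated))  # 2 of 5 pairs change category
```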

The National Institutes of Health funded the work. One coauthor reported relationships with ResMed and Koninklijke Philips NV (Respironics). The other authors declared no relevant conflicts of interest.

imnews@frontlinemedcom.com
