Study offers mixed diagnosis for health apps
Brigham researchers find no consistent, reliable rating system
Dr. Adam Landman of Brigham and Women’s Hospital said the study found “significant variation’’ in rating systems for health care apps. (NICOLA/FLICKR)
By Nidhi Subbaraman
GLOBE STAFF

Health care apps are crowding the Google and Apple app stores in rising numbers, but neither customers nor physicians have access to a reliable rating method to tell the best apps from the worst, according to a new report from researchers who study mobile health tools at Brigham and Women’s Hospital.

“There’s not a lot of guidance on which apps are recommended,’’ said Dr. Adam Landman, chief medical information officer for health information innovation and integration at Brigham and Women’s Hospital and one of the lead authors of a study published Wednesday in the journal JMIR mHealth and uHealth.

The goal was to determine whether there was a gold-standard rating system that would tell consumers which apps provide reliable, safe information and which don’t. But none of the rating techniques the team tried held up.

“What we saw was that there was significant variation in how they were rating these apps,’’ Landman said.

Landman and his team selected review criteria that had previously been identified by other companies or researchers as indicators of app quality.

Then, they asked a team of six reviewers, all specialists in mobile health technology, to rate the apps using those guidelines. It turned out that the reviewers gave the apps vastly different scores even when they used the same tool, which led the research group to an ominous conclusion: experts can’t even agree on a reliable way to rate apps.

A reported 165,000 mobile health apps are already available today. The market for health apps is expected to grow from $10 billion in 2015 to $31 billion by 2020, according to Rock Health, a San Francisco venture firm that funds health technology startups. 

Four physicians, a nurse practitioner, and one health economist were in the focus group. Of the 20 apps they reviewed, 10 claimed to help users quit smoking and 10 claimed to ease symptoms of depression. Those conditions were chosen because they cut across demographics.

The only score the group agreed on was how interactive the apps were. On most other characteristics, including the apps’ privacy policies, errors and performance issues, and availability of software support, the reviewers’ scores differed.

“I think one thing that’s concerning is that people are giving up a lot of health information to these apps, and they don’t realize what happens to their personal health information when they give it up to the app,’’ said John Torous, a clinical fellow in psychiatry at Harvard Medical School and senior resident in the Harvard Longwood Psychiatry Residency Training Program. He was a member of the study team.

The study doesn’t offer easy solutions for consumers yet; rather, it raises a red flag for the medical community. In studies that build on this one, the group intends to incorporate patient feedback into its evaluation process.

“Our advice to the general public is to interpret app reviews cautiously and to talk to their health care provider about apps that they are thinking of using and apps that they are already using,’’ Landman said.

Nidhi Subbaraman writes about science and research. E-mail her at nidhi.subbaraman@globe.com.