Summary of Digital Health App Research & Approval Concerns
This article details a critical review of research used to approve digital health apps, highlighting significant concerns about research bias and the reliability of reported results. Here’s a breakdown of the key findings:
Key Concerns & Findings:
High Risk of Bias: The overall risk of bias (ROB) was consistently high across all 23 reviewed studies.
Major Bias Areas: The most prominent areas of bias were:
Result Measurement (21/23 studies): Results were influenced by patient self-reporting and patients’ perception of the intervention.
Missing Data (15/23 studies): Lack of sufficient data and analysis regarding dropout rates.
Randomization Process: 10/23 studies showed low ROB in randomization, but 11/23 raised some concerns due to issues such as unclear reporting of random assignment or inadequate concealment of allocation.
Deviation from Intended Intervention: Most studies (19/23) had some concerns regarding adherence to the intended intervention, with 3 studies having a high ROB due to insufficient reporting on intervention effects and balance between research groups.
Data Defects/Dropout Rates: 14/23 studies had a high ROB due to the absence of analyses demonstrating that dropout rates exceeding 5% did not introduce bias.
High Dropout Rates & Real-World Use: The intervention group consistently had higher dropout rates than the control group. This raises concerns about compliance and utilization in real-world medical settings, potentially leading to lower overall effectiveness.
Inconsistent Measurement: Significant differences in reported results across studies were attributed not only to varying target indications but also to the use of different outcome measures and tools for the same indication.
Difficulty in Comparison: The authors state it’s difficult to compare results from approval studies to previous research due to the novelty of the technology and lack of international standardization.
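To put the study fractions above in perspective, the following minimal sketch tallies the domain-level counts reported in this summary and expresses each as a percentage of the 23 approval studies. The domain labels are shorthand chosen here for illustration, not terminology from the review itself.

```python
# Domain-level ROB counts as reported in the summary above, out of 23 studies.
# Label names are illustrative shorthand, not the review's own wording.
TOTAL_STUDIES = 23

domain_counts = {
    "result measurement (high ROB)": 21,
    "missing data (high ROB)": 15,
    "randomization (low ROB)": 10,
    "randomization (some concerns)": 11,
    "deviation from intervention (some concerns)": 19,
}

for domain, n in domain_counts.items():
    # n / TOTAL_STUDIES rendered as a whole-number percentage
    print(f"{domain}: {n}/{TOTAL_STUDIES} studies ({n / TOTAL_STUDIES:.0%})")
```

Seen this way, the scale of the problem is clearer: roughly nine in ten studies carried a high ROB in result measurement alone.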
The Authors’ Recommendations:
The team emphasizes the need for careful ROB evaluation when approving digital health apps. They suggest:
Adjusting Approval Criteria: Making approval criteria more rigorous.
Real-World Evidence: Including real-world evidence as a mandatory component of the approval process.
Transparency: Increasing transparency in the regulatory agency’s approval process.
Mandating Core Outcomes: Requiring the use of standardized, “core” outcome measures for approval studies.
In essence, the article argues that current approval processes for digital health apps are flawed due to significant research biases, and that a more rigorous and standardized approach is needed to ensure the validity and reliability of reported benefits.