Abstract

Purpose- The integration of artificial intelligence (AI) in healthcare has transformed the way users interact with health applications, offering personalized recommendations and decision-making support. However, building trust in AI-driven systems remains a significant challenge, particularly in high-stakes environments like healthcare, where user concerns about fairness, control, and privacy are paramount. This study investigates how AI transparency influences trust in healthcare applications, focusing on the mediating roles of perceived fairness and perceived control, and the moderating role of privacy concerns.

Design/methodology/approach- A quantitative research design was employed, using survey data collected from healthcare application users. Structural equation modeling (SEM) and moderation analysis were used to test the proposed conceptual framework and explore the interrelationships among the variables.

Findings- The results revealed that AI transparency significantly influences trust in healthcare applications indirectly through perceived fairness, while perceived control had only a limited mediating effect. Privacy concerns amplified the relationship between fairness and trust but did not significantly moderate the effects of transparency or control on trust. These findings emphasize the central roles of fairness and privacy in building trust, highlighting the nuanced interplay between ethical perceptions and user concerns in high-stakes contexts.

Originality- This study contributes to the literature by integrating fairness, control, and privacy concerns into a unified framework for understanding trust in AI healthcare applications. By demonstrating how transparency operates indirectly and how privacy concerns shape user perceptions, the research offers novel insights for designing ethically robust, user-centric AI systems tailored to sensitive domains like healthcare.