Abstract

This research investigates the ethical dimensions of AI chatbot usage, with a specific focus on ChatGPT. Through an exploration of user perspectives (N=20), the study examines the ethical challenges users encounter and identifies actionable strategies for responsible AI usage. Grounded in Rights-Based Theory, the study employs a qualitative research design to explore the lived experiences of users. Findings from semi-structured interviews reveal ten key themes, with privacy and security, bias and misinformation, integrity, accountability and transparency, and over-reliance on AI emerging as the primary ethical considerations. To address these concerns, users and developers adopt five key strategies: data protection, fact-checking and verification, citation and attribution, developer accountability and user engagement, and responsible AI usage. This research contributes to Rights-Based Theory by reinforcing users' fundamental rights to privacy, unbiased information, integrity, and accountability, while also emphasising the ethical responsibilities of both users and developers in AI governance.