Abstract

With the rapid development of technology, artificial intelligence (AI) has become a pervasive trend. AI helps people complete complex calculations and accomplish goals with far less effort. Digital platforms actively adopt AI to enhance and refine their algorithms, gaining increased influence in both commercial and political realms. Because digital platforms are important sources and distributors of messages and information, they are vital to well-functioning democracies; we must therefore consider a normative framework for evaluating their performance and for discussing and implementing their governance. As platforms adopt AI, their algorithms have grown far more powerful. With greater power, however, comes greater potential for damage: bias, polarisation, loss of creativity, and loss of autonomy. Algorithms frequently produce unequal access to information, discriminating against specific groups of platform users. In other words, AI-enhanced algorithms moderate and shape the promotion of information according to consumers' interests and opinions. Users repeatedly encounter similar material and receive highly uniform messages, which fosters discrimination and the formation of echo chambers. Meanwhile, people may be confined to their existing preferences instead of exploring new ones. Algorithms are undeniably central to media experiences and can greatly improve how users navigate online news, targeted advertising, streaming services, and personalised media. Many people would agree that algorithms can dramatically improve their media experience. From the perspective of the political economy of communication, however, internet users are treated as audience commodities. We believe users have the right to receive information actively rather than passively. In this paper, we analyse the harms of AI-enhanced algorithms from the perspective of the political economy of communication, review related policies, and offer suggestions. We highlight the need to protect user autonomy and sovereignty through regulation and media literacy as AI develops.