Algorithmic curation of (political) information and its biases


Dr. Mykola Makhortykh & Dr. Aleksandra Urman


In our project, we investigate how the world's major search engines (e.g., Google, Baidu, and Yahoo) filter, rank, and personalize information, and whether they can also reinforce biases that limit their users' right to receive information. To do so, we refine and apply a novel virtual agent-based methodology for algorithmic auditing developed by our team (Dr. Mykola Makhortykh (IKMB), Dr. Aleksandra Urman (IKMB), and Dr. Roberto Ulloa (GESIS)) to analyze how search algorithms construct hierarchies of knowledge in different domains (e.g., conflict reporting, political elections, and science and technology studies). Specifically, we address the following research questions:

RQ1: How different are the search results provided by various search engines for the same queries, and does the scope of these differences vary between particular domains (e.g., current politics or historical conflicts) and between the languages (e.g., English and German) in which the queries are made?
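As an illustration, the divergence addressed in RQ1 could be quantified by comparing the top results returned by different engines for the same query. The following minimal sketch (with hypothetical URLs; the project itself does not prescribe this exact metric) uses Jaccard similarity over the sets of returned domains:

```python
def jaccard_overlap(results_a, results_b):
    """Jaccard similarity between two sets of result URLs (0 = disjoint, 1 = identical)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 1.0

# Hypothetical top-5 results for the same query on two search engines.
engine_a = ["site1.com", "site2.com", "site3.com", "site4.com", "site5.com"]
engine_b = ["site2.com", "site6.com", "site3.com", "site7.com", "site8.com"]

print(jaccard_overlap(engine_a, engine_b))  # 2 shared of 8 distinct domains -> 0.25
```

Rank-sensitive measures (e.g., rank-biased overlap) would additionally account for the position of shared results, which matters given that users rarely look beyond the first few entries.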

RQ2: How do specific search factors (e.g., user gender, age, or location) affect the personalization of search results, and do they reinforce certain biases (e.g., do certain factors lead to higher visibility of specific stereotypes)?
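One way such bias reinforcement could be examined is by aggregating audit observations per agent profile and comparing how often flagged (e.g., stereotype-conveying) results surface for each profile. This is a simplified sketch with invented profiles and records, not the project's actual pipeline:

```python
from collections import defaultdict

# Hypothetical audit records: (agent_profile, result_url, conveys_stereotype)
observations = [
    ("female_25_berlin", "example-a.com", True),
    ("female_25_berlin", "example-b.com", False),
    ("male_55_munich", "example-a.com", True),
    ("male_55_munich", "example-c.com", True),
]

def stereotype_share(records):
    """Share of flagged results shown to each agent profile."""
    counts = defaultdict(lambda: [0, 0])  # profile -> [flagged, total]
    for profile, _url, flagged in records:
        counts[profile][0] += int(flagged)
        counts[profile][1] += 1
    return {profile: flagged / total for profile, (flagged, total) in counts.items()}

print(stereotype_share(observations))
# {'female_25_berlin': 0.5, 'male_55_munich': 1.0}
```

Systematic differences in these shares across otherwise identical agents would point to personalization factors amplifying the visibility of specific stereotypes.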

RQ3: How do previous interactions with search engine algorithms influence subsequent interactions, and can such influences be detrimental to users' ability to receive information (e.g., by enabling the formation of so-called "filter bubbles")?


2020 – 2021


web search, bias, information rights, search engines, algorithmic auditing, virtual agents