“Being critical towards the information you are consuming is a core literacy to develop these days.”

7 October 2024

Google, chatbots, news, and social media are a constitutive part of our lives and serve as sources of information from which we form ideas and opinions. But what is at stake when information is filtered and prioritized by algorithms? In our interview with Mykola Makhortykh, we got an insight into the seemingly abstract world of AI and algorithmic systems, and into why the way they work requires us to be critical towards the information we consume.

About the person:

Mykola Makhortykh works at the intersection of communication science, the humanities, and data and computer science. His research focuses on politics- and history-centered information behavior in online environments and how it is affected by information retrieval systems such as search engines and recommender systems. He has worked on a project on populist radical-right attitudes and political information behavior. His current research interests include bias in AI and algorithmic systems, and specifically the impact of AI and algorithmic systems on the transmission of Holocaust memory.

(See: Dr. Mykola Makhortykh, Institute of Communication and Media Studies (icmb), unibe.ch)

We can see various examples of the rise of populist radical-right parties in Europe, the most recent example being the win of the populist radical-right AfD in Germany. As a media and communication scientist who has worked on populist radical-right attitudes, how do you assess the role of the media in these developments?

I must say that I would not necessarily call myself a real expert on the populist right, because I was just one part of the project, which was led by Prof. Dr. Adam from the Institute of Media and Communication Studies. However, I can share some thoughts from a less-of-an-expert point of view. I think the media definitely plays a certain role in the rise of populist radical-right parties such as the AfD in Germany, and we unfortunately have other examples in the EU as well. I would say that today's media technology provides significantly more opportunities for these parties to reach out to voters and mobilize their support. And we know that populist right parties quite often don't rely on high-quality information to mobilize their voters. Often, they rely on very dubious interpretations of things, for instance concerning the Russian invasion of Ukraine or climate change. As you can imagine, this kind of mobilization is not easy to do via the quality media, which follow journalistic standards; this kind of media would not necessarily push the agenda of populist right parties. The standards and moderation principles of social media platforms are quite different from those of traditional journalistic outlets. So, the media plays an important role, but at the same time I wouldn't say it is the only factor. If we focused only on media literacy, we wouldn't be able to comprehensively understand all the reasons why the populist right is on the rise. Unfortunately, we know that the AfD, for example, is quite effective at addressing aspects of German society which are not necessarily addressed by the mainstream parties. For instance, they are quite effective at capitalizing on the topic of security, on the economy, which is not in the best shape, or on German involvement in countering the Russian aggression.

How do political attitudes influence the way we consume news media?

I would not say that we can make this connection very easily. We cannot say that, for instance, AfD or SVP voters and SP voters consume media in a very specific way. However, what I think is important to recognize is that we do find a relationship when we look at other variables. An important one is trust in institutions and politics. If you don't really trust the mainstream political institutions, then you often don't trust the journalistic media either, because they are part of the mainstream. Quite naturally, you are pushed towards the fringes of the online environment, including the different social media platforms. And there you can quite often find a correlation between the lack of trust in the mainstream and the intention to vote for fringe parties such as the AfD in Germany. Of course, in Switzerland the situation is quite different; I wouldn't say the SVP is a fringe party at all. Even in Germany, I would be interested in redoing this analysis right now, because I have a feeling that the AfD is becoming less of a fringe party and more of a mainstream party, which is unfortunate, but it is an important change.

A lot of your research focuses on bias in algorithmic systems, can you give us an insight into your findings concerning this topic?

I am quite biased towards discussing this topic, as you can guess (laughs), so I am very happy to share some insights. It is a complicated question, because it really depends on how we define bias. I know that we often use the term algorithmic bias in our academic discussions, but when we start thinking about what we actually mean by the term, it becomes more complicated. Usually, we understand bias as the unequal treatment of a certain aspect of social reality, or of a certain group, by an algorithmic system. There is a lot of very interesting and very good research focusing on gender and race bias in the performance of artificial intelligence and algorithmic systems; to learn more about this, I recommend the book by Safiya Noble called “Algorithms of Oppression”. My great colleague Aleksandra Urman and I have found in our research that there is still quite substantial bias when we look at ethnic and gender groups such as Eastern European women. I would say they are still treated badly by the search engines and primarily presented as mail-order brides, which is really ridiculous. This is one example of how problematic algorithmic bias can be, but it is also a relatively easy example, let's say, because it is very clearly wrong.

There is another interesting example which I always like to discuss with students when teaching about algorithmic biases. I work a lot on the Holocaust and Holocaust memory in the context of algorithmic systems and AI. For one of our projects, we are looking at how search engines perform in retrieving visual information about the Holocaust. As you can imagine, it is not the most relaxing type of information; sometimes the images are quite shocking and graphic. But we still analyze it and try to understand which aspects of the Holocaust the search engines prioritize. If we consider classic concepts of communication science such as “Framing”, we know that the selection of visual information has direct implications for how we understand social phenomena, including historical events. What we find is a really massive inequality concerning which camps and Holocaust sites are prioritized by the search engines: around 60-80% of the images come from a single site, which is Auschwitz. It is the most well-known site, appearing in various movies, and is also a tourist destination. And if all these results come from a single site, we run into a situation where the suffering of people who died in other camps is really downplayed. So it raises the question: what would be a good solution? When trying to create balance in the presentation of Holocaust sites, the problem is that there were thousands of different camps, and it is simply impossible to represent them equally in a few top search results. So should we focus even more on Auschwitz, because it is what people know about, which would create more interest and stimulate people to search more about the Holocaust? This is one of the cases where the problem of bias becomes more difficult, but also, I would say, very interesting. I think bias is a big topic that we will probably talk about a lot in the upcoming years, and in my view, media literacy around bias is something we should really look into more, both in university curricula and, hopefully, in students' lifelong learning.
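
To give a concrete, if simplified, picture of the kind of measurement behind such a finding, here is a minimal sketch in Python. The result data below are invented for illustration; in the actual research, such labels come from coding real search engine outputs:

```python
from collections import Counter

# Invented example data: which Holocaust site each of the top image
# search results was coded as depicting. In a real audit these labels
# come from annotating actual search engine outputs.
top_results = [
    "Auschwitz", "Auschwitz", "Auschwitz", "Dachau", "Auschwitz",
    "Auschwitz", "Buchenwald", "Auschwitz", "Auschwitz", "Treblinka",
]

# Share of results per site: a simple way to quantify how strongly
# the output is concentrated on a single site.
counts = Counter(top_results)
for site, n in counts.most_common():
    print(f"{site}: {n / len(top_results):.0%} of the results")
```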

Algorithms can sometimes seem abstract. Can you explain how they work?

You definitely shouldn't think about algorithms as something highly abstract. They are often discussed in a very vague way, which I honestly don't like. You can think of an algorithm as a simple thing: it is essentially a sequence of actions that work together to achieve a certain task. Of course, the tricky thing is that a lot of the algorithms we work with these days, such as image search, are indeed becoming very complex sequences of actions. In the case of Google search, for example, the algorithms take a lot of different signals into account in order to decide what the output should be: your location, the language you are using, and the relevance of the source according to Google's internal calculations. We know that back in the day, the algorithms were relatively simple: they primarily considered how often a page was mentioned by other webpages and prioritized it accordingly.
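
To make this less abstract, here is a minimal sketch of the two approaches described here. All pages, signals, and weights are invented for the example; Google's actual signals and their weighting are not public and are far more complex:

```python
# Toy illustration of early link-counting ranking versus a modern
# multi-signal ranker. Everything here is invented for illustration.

pages = [
    {"url": "linkfarm.example/page", "inbound_links": 200,
     "authority": 0.1, "matches_language": True, "near_user": False},
    {"url": "museum.example/exhibit", "inbound_links": 40,
     "authority": 0.9, "matches_language": True, "near_user": False},
    {"url": "blog.example/post", "inbound_links": 10,
     "authority": 0.3, "matches_language": True, "near_user": True},
]

def early_rank(page):
    # The early approach: a page matters as much as the number of
    # other pages that mention (link to) it.
    return page["inbound_links"]

def modern_rank(page):
    # A modern ranker combines many weighted signals: source
    # authority, the user's language and location, and so on.
    return (0.5 * page["authority"]
            + 0.3 * page["inbound_links"] / 200   # normalized link count
            + 0.1 * page["matches_language"]
            + 0.1 * page["near_user"])

print("Early-style top result: ", max(pages, key=early_rank)["url"])
print("Modern-style top result:", max(pages, key=modern_rank)["url"])
```

The two rankers disagree: counting links alone puts the heavily linked page first, while weighting authority pushes the museum site to the top, which is the kind of prioritization of authoritative sources discussed below.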

However, these days it is becoming more and more complicated. We know that, unlike other search engines, Google thinks about hierarchies of sources: it prioritizes the more authoritative ones, such as established journalistic media, educational websites, or the sites of museums. But then the question is how those authoritative sources use search engine optimization to push themselves to the top of the search results. In general, journalists are the most skillful at doing this, which results in a situation where a lot of journalistic sources become prioritized for a topic such as the Holocaust. This partly explains why we have such a dominance of the Auschwitz camp: if I am a journalist and have the choice to put one or two images in my article, I will probably choose what I am most familiar with as a non-expert and what my readership will probably be most interested in, and then we have a bit of a self-reinforcing mechanism. Generally, this doesn't work badly. But there are other examples where it doesn't work that well, for instance search results related to Russia. Especially in the Russian language, we run into a problem: Google wants to push forward the most authoritative sources on the topic, but the Russian media is essentially controlled by the government and serves as a tool of Kremlin propaganda. This results in a situation where Google quite often prioritizes Russian propaganda; not necessarily in English search results, but in Russian search results it is common. Of course, a search engine company is a big fan of universal criteria for every context, which are much easier to implement, but the example of Russia shows that they do not always work well.

So it is difficult to build a better algorithmic system when we are dealing with highly complex algorithms. We need to think about how we can do things better so that AI and algorithmic systems benefit society. For instance, we are thinking about working with a broader group of stakeholders to collect ideas for how to improve AI. But then we get back to the problem of which groups of society we actually want to help: do you equally want to help people who vote for the AfD and people who vote for the Green party? Of course, they are all citizens and you want to help them equally, but they might have very different requirements and preferences for how AI and algorithmic systems should tell them about the topics they are interested in. For the Green voters, for instance, you do not necessarily want ChatGPT to say: “Okay, climate change is really a debated issue, it might not necessarily be happening.” Whereas for the AfD voters, this kind of uncertainty is something some of them would want to hear; if ChatGPT tells them that climate change is happening and that people who do not believe so are stupid, they would not trust AI at all.

What do you think are the most important factors to consider as consumers of media when trying to stay informed about political conflicts and issues?

I would say that being critical towards the information you are consuming is a core literacy to develop these days. Practically speaking, I believe checking information, especially about highly contested topics such as wars, pandemics, violence, or migration, is helpful. Relying on quality journalistic media is a good start because, again, journalistic media have standards and follow certain quality criteria before actually picking up and publishing a story. If you are interested in doing this in a very comprehensive way, then following a few media outlets with different ideological leanings might be a good strategy, but it depends on how realistic that is. Sometimes I get really annoyed when I see media literacy recommendations saying that you need to verify all the information you are uncertain about. In theory, it is excellent advice: yes, of course you need to verify information! But realistically, if people work from 9 to 5 and come home tired, I am not sure they would love to spend two hours checking information. So I think that selecting a few, or at least one, quality media outlet and following it is a good strategy that helps you stay informed.

Secondly, I usually recommend taking the time to actually verify information when you see something that really triggers you, especially if you plan to share it with colleagues and friends. And the third point is that you should not trust chatbots like ChatGPT or Gemini for quality information. This does not mean that people should not use them, for example as a writing assistant or to generate ideas. But it is important to understand that chatbots do not understand the content they are producing. They are algorithmic systems which simply calculate the probabilities of the next words; they do not understand whether those words are good or bad or what they mean. There are many cases where the information is not of high quality or where the chatbot invents non-existent references. We also know that the quality of the information you receive can vary widely. Recently, we did a project on the quality of information coming from chatbots about the Russian aggression, and we found that the chatbots' answers were extremely inconsistent and heavily randomized: you would get one answer to a question, and two minutes later you would get a completely different answer. So I would not use them for factual information, at least for now. We will need to see how AI develops, which it does quickly.
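
The mechanism behind both the next-word probabilities and the randomized answers can be sketched in a few lines. This is purely illustrative: the vocabulary and probabilities below are invented, and real models condition on huge vocabularies and much longer contexts. The point is that the model samples from a probability distribution, which is also why two runs of the same prompt can diverge:

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# Note there is no notion of truth here, only of how likely a word
# is to follow the previous ones.
next_word_probs = {
    ("the", "war"): {"began": 0.4, "ended": 0.35, "escalated": 0.25},
    ("war", "began"): {"in": 0.7, "when": 0.3},
    ("war", "ended"): {"in": 0.8, "after": 0.2},
    ("war", "escalated"): {"quickly": 1.0},
}

def generate(prompt, steps=3):
    words = prompt.split()
    for _ in range(steps):
        dist = next_word_probs.get(tuple(words[-2:]))
        if dist is None:
            break
        # Sampling from the distribution is what makes two runs of
        # the same prompt produce different answers.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the war"))
print(generate("the war"))  # may well differ from the first run
```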

Finally, try to stay interested in the development of digital technology and its relationship to media literacy. I think this interest is very important and useful, as technology develops quickly and will affect every one of us in the next few years. As far as it is realistic – given that everyone has very little time – it is good to keep an eye on what the discussions on AI and algorithmic bias are about.
