#108: Fake News: A Call for Critical Thinking in the Age of AI-Generated Media & Confirmation Bias

With rapid information flows and lightning-fast AI tools being used to create content, we must take personal responsibility to question everything! An emotional trigger, coupled with confirmation bias and a lack of research, is something to take very seriously.

The European Union (EU) is proposing new rules that would require social media companies to label images and videos that have been manipulated using artificial intelligence (AI). This is part of a broader global effort to combat the spread of misinformation online.

The widely shared AI-generated picture of the Pope in a white puffa jacket is one example many people will have seen.

In my opinion, whilst it is important to at least try to minimise the spread of misinformation, the rules do not cover personal one-to-one or one-to-many sharing, which occurs privately via messaging applications; policing those channels would destroy privacy, so there we must take responsibility ourselves.

We Are Encouraging Narratives That Are A Threat To The Very Fabric Of Society

Overwhelmed by the current deluge of news, opinions, and, unfortunately, falsehoods, we are encouraging narratives that are a threat to the very fabric of the society we live in. Visit the X platform, as one example, and try to decipher the news that appeals to you and your ingrained beliefs: you will notice that your emotions are triggered more than ever before, as the quality of the content has increased in many cases.

We are personally responsible for how we are triggered and for what we encourage in others! That is why researching before sharing content and opinions is more important than ever, as it will take a while longer before the proposed new legislation has an effect.

The relentless stream of information, coupled with the rapid-fire nature of social media and AI content tools, has created a fertile breeding ground for fake news: fabricated or misleading information that masquerades as genuine news.

The rise of artificial intelligence (AI) has further expanded the landscape of fake news and the triggering of human behaviour. AI-powered tools now generate highly realistic and convincing fake images and videos. These AI-generated fakes, often dubbed “deepfakes,” are used to manipulate public opinion, spread disinformation, and undermine trust in institutions, inflaming already tense situations.

Consider a recent deepfake video that depicted a prominent politician delivering a speech that he never gave.  The video was so realistic that it initially fooled even some of the politician’s close associates.  This incident highlights the growing threat of AI-generated fakes and underscores the urgent need for effective countermeasures.

Deepfake videos and phone calls are also used by scammers to extract money from businesses and to trigger behaviours.

One such countermeasure is the labelling of AI-generated images and videos. Labelling these media will provide a clear indication to viewers that the content has been manipulated, enabling them to make informed judgments about its authenticity.

Labelling also makes it easier for fact-checkers and researchers to identify and debunk AI-generated fakes, reducing their potential harm. Additionally, labelling could discourage the creation of such fakes, as the stigma associated with labelled content could deter potential creators.

The European Union (EU) Is Proposing New Rules To Combat Misinformation

As noted above, the EU’s proposed rules would require social media companies to label images and videos that have been manipulated using artificial intelligence (AI), as part of a broader effort to combat the spread of misinformation online.

AI-generated images and videos, also known as deepfakes, can be used to create realistic but fake content that is difficult to distinguish from reality. This type of content has been used to spread misinformation about politicians, celebrities, and other public figures.

The EU’s proposed rules would require social media companies to label deepfakes with a warning that the content has been manipulated. The companies would also be required to provide users with information about how the content was created.
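The proposal does not prescribe a technical format for these labels. Purely as an illustrative sketch (the field names and values below are my own assumptions, not taken from the EU text), a platform’s label record for a manipulated image might bundle the viewer warning together with the “how it was created” details like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class ManipulationLabel:
    """Hypothetical label a platform could attach to AI-manipulated media.

    Field names are illustrative only; the EU proposal does not define a schema.
    """
    media_id: str
    warning: str                     # text shown to the viewer with the content
    generator: Optional[str] = None  # tool claimed or detected to have made it
    method: Optional[str] = None     # e.g. "image synthesis", "face swap"
    labelled_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def viewer_notice(self) -> str:
        """Combine the warning and the provenance details into one plain notice."""
        details = ", ".join(x for x in (self.generator, self.method) if x)
        return self.warning + (f" (created with: {details})" if details else "")


# Example: the widely shared image of the Pope in a puffa jacket,
# which was reportedly made with Midjourney.
label = ManipulationLabel(
    media_id="pope-puffa-jacket-001",
    warning="This image has been manipulated or generated using AI.",
    generator="Midjourney",
    method="image synthesis",
)
print(label.viewer_notice())
```

Whatever format is eventually mandated, the point is the same: the warning and the provenance information travel with the content, so the viewer sees both.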

It is too early to say whether the EU’s proposed rules will be effective in combating the spread of misinformation. However, the proposal is a step in the right direction. By requiring social media companies to label deepfakes, the EU is making it easier for users to identify and avoid this type of content.

In addition to labelling deepfakes, the EU is also proposing other measures to combat misinformation. These measures include:

  • Increasing funding for fact-checking organizations.
  • Investing in research into AI-powered tools that can detect and remove misinformation.
  • Educating the public about how to spot misinformation.

The EU’s proposed rules are part of a global effort to combat misinformation. The United States, the United Kingdom, and other countries are also developing policies to address this issue.

The fight against misinformation is complex and there is no easy solution. However, the EU’s proposed rules are a positive step in the right direction. By taking action to combat misinformation, the EU is helping to protect its citizens and democracy.

The implementation of labelling systems requires collaboration between tech companies, governments, and civil society organizations. Tech companies will need to develop robust labelling technologies, while governments will need to establish clear regulations regarding the labelling of AI-generated media. Civil society organizations could play a role in educating the public about the importance of labelling and in promoting responsible AI practices.
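To illustrate why “robust” matters, here is a deliberately naive sketch of a metadata-only check, written with the Pillow imaging library; the list of generator names is my own assumption. Because embedded metadata can be stripped or forged in seconds, a check like this is trivial to defeat, which is why platforms are likely to need stronger, tamper-evident provenance signals rather than simple tag inspection.

```python
from PIL import Image  # pip install Pillow

# Assumed, illustrative list of generator names; real detection cannot rely on this.
KNOWN_AI_GENERATORS = {"midjourney", "dall-e", "stable diffusion"}

SOFTWARE_TAG = 0x0131  # standard EXIF "Software" tag


def naive_provenance_check(path: str) -> str:
    """Rough heuristic: look for an AI generator name in the EXIF Software tag.

    Most manipulated media carries no such marker, and metadata is easily
    stripped or forged, which is exactly why labelling needs stronger signals.
    """
    exif = Image.open(path).getexif()
    software = str(exif.get(SOFTWARE_TAG, "")).lower()
    if any(name in software for name in KNOWN_AI_GENERATORS):
        return f"Metadata suggests AI generation (Software tag: {software!r})"
    if not exif:
        return "No metadata present: provenance unknown"
    return "No AI marker in metadata: provenance still unverified"


if __name__ == "__main__":
    print(naive_provenance_check("example.jpg"))  # placeholder path
```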

While labelling AI-generated media is not a foolproof solution, it is a crucial step in combating the growing threat of AI-generated fakes.  By empowering viewers, fact-checkers, and researchers, labelling can help us navigate the labyrinth of fake news and safeguard the integrity of information in the digital age.

What is confirmation bias and why is it important?

Confirmation bias is a type of cognitive bias in which people tend to favour information that confirms their existing beliefs or hypotheses.  This bias can lead people to disregard information that contradicts their beliefs, even if the contradictory information is more accurate.

Confirmation bias is a common problem because it is natural for people to seek out information that supports their existing beliefs.  We are all more likely to remember and pay attention to information that confirms our beliefs, while we tend to ignore or discount information that contradicts them.  This can lead to a distorted view of the world, and it can make it difficult to make objective decisions.

There are several factors that can contribute to confirmation bias, including:

  • The need for cognitive closure: People have a natural desire to have a clear and consistent understanding of the world. This can lead them to seek out information that confirms their existing beliefs, and to reject information that contradicts them.
  • The desire to be right: People want to be right, and this can lead them to selectively remember and interpret information in a way that supports their beliefs.
  • The tendency to be overconfident: People often overestimate their own knowledge and judgment, and this can lead them to be more resistant to information that contradicts their beliefs.

Confirmation bias can have a significant impact on our lives. It can lead to:

  • Misinformed decisions: Confirmation bias can lead us to make decisions based on inaccurate or incomplete information.
  • Polarization: Confirmation bias can contribute to polarization, as people become more entrenched in their existing beliefs and less open to considering alternative viewpoints.
  • Misunderstanding: Confirmation bias can lead to misunderstandings, as people may interpret information in a way that is different from how it was intended.

There are several things that you can do to reduce the impact of confirmation bias:

  • Be aware of your own biases: The first step to overcoming confirmation bias is to be aware of your own biases. This can be difficult, but it is an important step.
  • Seek out information from a variety of sources: Don’t just rely on information that confirms your existing beliefs. Seek out information from a variety of sources, including sources that you disagree with.
  • Be critical of information: Don’t just accept information at face value. Be critical of the information you encounter and consider whether it is credible and unbiased.
  • Be open to changing your mind: It is okay to change your mind if you are presented with new information. Don’t be afraid to admit that you were wrong.

Confirmation bias is a powerful force, but it is not insurmountable. By being aware of your own biases and by seeking out information from a variety of sources, you can reduce the impact of confirmation bias and make more informed decisions.
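To make the effect concrete, here is a toy simulation (my own illustration, not taken from the referenced study): two readers see the same evenly mixed stream of evidence, but the biased reader largely discounts the items that contradict the claim and ends up far more confident than the evidence warrants.

```python
import random

random.seed(42)


def updated_belief(prior: float, evidence: list, discount: float) -> float:
    """Nudge belief up for supporting items, down for contradicting ones.

    `discount` scales how much contradicting evidence counts
    (1.0 = weighed fully, 0.2 = mostly ignored). Belief is clamped to [0, 1].
    """
    belief = prior
    for supports_claim in evidence:
        step = 0.05 if supports_claim else -0.05 * discount
        belief = min(1.0, max(0.0, belief + step))
    return belief


# Evenly mixed evidence: the claim is, at best, uncertain.
evidence = [True, False] * 50
random.shuffle(evidence)

print("Unbiased reader:", round(updated_belief(0.6, evidence, discount=1.0), 2))
print("Biased reader:  ", round(updated_belief(0.6, evidence, discount=0.2), 2))
# The biased reader ends up far more confident despite seeing identical evidence.
```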

Here is a reference link to a study on confirmation bias:

  • Nickerson, R. S. (1998). Confirmation bias: A ubiquitous phenomenon in many guises. Review of General Psychology, 2(2), 175–220.

The study provides a comprehensive overview of confirmation bias, including its definition, causes, and consequences. It also discusses several strategies for overcoming confirmation bias.

In conclusion, as AI continues to evolve, so too must our strategies for combating fake news and our own confirmation bias.  Labelling AI-generated images and videos is a critical step in this ongoing battle, and it is a responsibility that we must collectively embrace to protect our society from the insidious influence of manipulated media.

I am interested in your thoughts; please either message me or comment below.
