Disinformation: Your Story Is No Longer Yours

Written by: Norlena Piseth | September 1, 2022

Since the Industrial Revolution in the 18th century, the world has seen numerous technological advances, and communication has never been easier. Humanity has moved far beyond newspapers, radio broadcasting, and television as means of interaction. A conversation with someone across the globe now takes mere seconds to begin, regardless of time zones.

These days, just about anyone with access to the internet and a device can start a blog or a YouTube channel, join Twitter or Instagram, or create a Facebook page. Sharing content takes only a few minutes.

As the network of human interaction grows ever tighter, so does the mingling and mixing of media. This opens up an opportunity for a malicious few to spread disinformation, manipulating and influencing public opinion to suit their will, whatever the truth. Misinformation and disinformation should not be confused: misinformation is the spread of false details regardless of intent, while disinformation is inaccurate media deliberately intended to cause harm or misinterpretation.

For example, if someone posts about a political scandal that turns out to be wrong, yet they believed it to be true, that is misinformation. When a politician deliberately spreads false rumours to get ahead of a rival, that is disinformation. A small, yet vital, distinction.

A notorious incident of disinformation is the spread of the "Pizzagate" conspiracy during the 2016 presidential election in the USA, in which multiple political figures were accused of child sex trafficking. It ultimately led to a shooting at an innocent pizza restaurant. While there were no casualties, the episode shows that fallacious media can have deadly consequences.

It remains difficult to maintain a personal narrative while this world of technology keeps expanding. Many stories, and their truths, are buried under a multitude of different opinions, different takes, and third-party agendas.

Malala Yousafzai’s story is one worth examining. Despite her advocacy for education, after she was shot by the Taliban the media, especially western media, reduced her to simply the “Girl Shot by the Taliban”. This makes it difficult for her to push forward as an advocate of women’s rights, buried under all the narratives the media has built for her. Not only does it affect her story, it also paints a stereotype and furthers a western agenda of alienating foreign countries, in this case the South Asian region. Anyone who doesn’t know any better, or doesn’t care to, would mindlessly believe in this crafted picture.

According to a report on disinformation from the Paul G. Allen School of Computer Science and Engineering, human biases tend to enable disinformation in the media. The cognitive biases with which humans treat the media they consume are just as harmful as any false media itself. People tend to give more attention and credibility to information that supports what they already believe, and when confronted with the truth, they become defensive and defend their beliefs all the more strongly, rather than staying open-minded.

This aligns with the work of Project Implicit, a non-profit organisation that investigates the effects of stereotypes and subconscious biases on media consumption. Its goal is to understand the factors that influence judgement, and it hopes to apply this research to bettering personal and community values. One of the better-known ways the organisation gathers data is the IAT (Implicit Association Test), a good way to test one’s biases and to learn about the defects that can arise from the human subconscious.

With the rise of AI and deepfakes (AI-generated, realistic audio and/or visual hoaxes), it has become very easy to manipulate or fabricate a video or a voice recording. Caution should be exercised while browsing social media. Imagine a world where publicity is essential to getting a message anywhere, yet neither audio nor video can be trusted.

In spite of the constant abuse of technological advancements and the easily malleable opinion of the public, there are still ways to combat disinformation in the modern world. Browser extensions such as SurfSafe and InVID help users identify fake news. For example, SurfSafe can detect a possibly fake image by cross-referencing it with the news sites it has appeared on; if it has not appeared in any mainstream media, the user can then evaluate the credibility of the image.
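The cross-referencing idea can be illustrated with a small sketch: fingerprint an image with a perceptual hash, then compare that fingerprint against hashes of images from trusted sources. This is only an illustrative approximation of what such tools do, not SurfSafe’s actual mechanism; the images, URL, and distance threshold below are all hypothetical.

```python
def average_hash(pixels):
    """Hash an image (grid of grayscale values 0-255) to a bit string:
    each bit records whether that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

def match(image, index, max_distance=2):
    """Return trusted-source URLs whose image hash is close to this image's."""
    h = average_hash(image)
    return [url for url, known_hash in index.items() if hamming(h, known_hash) <= max_distance]

# Hypothetical 4x4 "images": the suspect is a near-copy of a known photo,
# so their hashes are nearly identical; the unrelated image is not.
known     = [[200, 200, 10, 10], [200, 200, 10, 10], [10, 10, 200, 200], [10, 10, 200, 200]]
suspect   = [[198, 201, 12, 9],  [199, 202, 11, 10], [9, 12, 198, 203],  [11, 10, 201, 199]]
unrelated = [[50, 180, 50, 180]] * 4

trusted_index = {"news-site.example/photo.jpg": average_hash(known)}

print(match(suspect, trusted_index))    # the suspect image matches a trusted source
print(match(unrelated, trusted_index))  # no match: the image warrants extra scrutiny
```

Real tools use far more robust fingerprints, but the principle is the same: a near-duplicate of a verified image hashes close to it, while an unknown image matches nothing and deserves scepticism.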

In 2019, Fabula AI — a company known for fake-news detection — developed “GoodNews”, a project that uses AI to flag fraudulent information. According to Michael Bronstein, the leader of the project, fake news tends to have a higher shares-to-likes ratio than normal posts, so the project aims to attach a credibility mark to such articles.
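Bronstein’s observation suggests a simple heuristic, sketched below: flag posts whose shares-to-likes ratio exceeds a threshold. This is only an illustration of the underlying idea; the threshold and post data are invented, and GoodNews itself relies on far more sophisticated models.

```python
def shares_to_likes(post):
    # Guard against division by zero for posts with no likes at all.
    return post["shares"] / max(post["likes"], 1)

def flag_suspicious(posts, threshold=1.5):
    """Return ids of posts shared far more often than they are liked."""
    return [p["id"] for p in posts if shares_to_likes(p) > threshold]

posts = [
    {"id": "a", "likes": 400, "shares": 120},  # typical post: far more likes
    {"id": "b", "likes": 30,  "shares": 90},   # shared 3x more than liked
    {"id": "c", "likes": 0,   "shares": 10},   # spreading with no likes at all
]

print(flag_suspicious(posts))  # flags "b" and "c" for a credibility mark
```

In practice the flag would trigger a credibility warning on the article rather than removal, leaving readers to judge for themselves.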

To put into perspective how helpful this could be, one can look back at the Pizzagate theories. Had such tools existed while Pizzagate spread, there would have been some level of prevention and more caution about believing the conspiracies. Fewer people would have blindly swallowed the details, and perhaps even the restaurant shooting could have been avoided.

Twitter is a platform that frequently hosts fact-checking tools such as Hoaxy and TwitterTrails. These bots fight inaccurate media on Twitter: Hoaxy tracks and collects how claims spread, while TwitterTrails traces a story back to its origin. Both are valuable contributions to the fight against disinformation. While these bots and AIs may not solve the root of the problem, and perhaps never will, they are a step towards a world with better control over the truth, especially in the media.
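Conceptually, tracing a story to its origin works like this sketch: record who reshared a claim from whom, then walk the chain of reshares back to the original poster. This is a simplification of what tools like TwitterTrails actually do, and every account name below is hypothetical.

```python
# Map each account to the account it reshared the claim from
# (the original poster appears only as a parent, never as a child).
reshared_from = {
    "user_d": "user_b",
    "user_c": "user_b",
    "user_b": "user_a",
}

def trace_origin(user, edges):
    """Follow the reshare chain upward until reaching the original poster."""
    path = [user]
    while path[-1] in edges:
        path.append(edges[path[-1]])
    return path

print(trace_origin("user_d", reshared_from))  # chain ends at the originator, user_a
```

Once the chain is reconstructed, investigators can ask the key question: is the account at the root a credible source, or one created specifically to seed a false story?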

It is also imperative to refrain from oversharing personal information, especially on social media. As the years go by and technology progresses, new methods of spreading disinformation will keep appearing. Given the anonymity of social media, it is easy to impersonate someone and create the illusion of them doing things they never did.

As the digital world becomes more normalised with each passing generation, it would do as much good as any fact-checking, fake-news-detecting bot if everyone consumed media with reason and an open mind towards other takes and opinions. That would make it far harder for malicious intent to seep through, no matter how hard it tries.

Check out our social media for more resources: 

Facebook: https://www.facebook.com/necessarybehavior  

Instagram: https://www.instagram.com/necessarybehavior/  

Twitter: https://twitter.com/necessarybehavi  

Tumblr: https://necessary-behavior.tumblr.com/  

Sources:

https://www.researchgate.net/publication/329946214_Technology-Enabled_Disinformation_Summary_Lessons_and_Recommendations

https://ec.europa.eu/research-and-innovation/en/horizon-magazine/can-artificial-intelligence-help-end-fake-news

https://datasociety.net/library/media-manipulation-and-disinfo-online/

Photo by Prateek Katyal on Unsplash

Misinformation, Subconscious Bias, AI

