English, asked by athulyaalbert9, 4 months ago

Why are the chances of misinformation being circulated high on social media?
How can it be controlled? (I need the answer because my exam is tomorrow.)

Answers

Answered by poojase18

If you get your news from social media, as most Americans do, you are exposed to a daily dose of hoaxes, rumors, conspiracy theories and misleading news. When it’s all mixed in with reliable information from honest sources, the truth can be very hard to discern.

In fact, my research team’s analysis of data from Columbia University’s Emergent rumor tracker suggests that this misinformation is just as likely to go viral as reliable information.

Many are asking whether this onslaught of digital misinformation affected the outcome of the 2016 U.S. election. The truth is we do not know, although there are reasons to believe it is entirely possible, based on past analysis and accounts from other countries. Each piece of misinformation contributes to the shaping of our opinions. Overall, the harm can be very real: If people can be conned into jeopardizing our children’s lives, as they do when they opt out of immunizations, why not our democracy?

As a researcher on the spread of misinformation through social media, I know that limiting news fakers’ ability to sell ads, as recently announced by Google and Facebook, is a step in the right direction. But it will not curb abuses driven by political motives.

Exploiting social media

About 10 years ago, my colleagues and I ran an experiment in which we learned that 72 percent of college students trusted links that appeared to originate from friends – even to the point of entering personal login information on phishing sites. This widespread vulnerability suggested another form of malicious manipulation: People might also believe misinformation they receive when clicking on a link from a social contact.

To explore that idea, I created a fake web page with random, computer-generated gossip news – things like “Celebrity X caught in bed with Celebrity Y!” Visitors to the site who searched for a name would trigger the script to automatically fabricate a story about that person. I included a disclaimer on the site saying that it contained only meaningless text and made-up “facts.” I also placed ads on the page. At the end of the month, I got a check in the mail with earnings from the ads. That was my proof: Fake news could make money by polluting the internet with falsehoods.

Sadly, I was not the only one with this idea. Ten years later, we have an industry of fake news and digital misinformation. Clickbait sites manufacture hoaxes to make money from ads, while so-called hyperpartisan sites publish and spread rumors and conspiracy theories to influence public opinion.

This industry is bolstered by how easy it is to create social bots: fake accounts controlled by software that pose as real people and can therefore have real influence. Research in my lab uncovered many examples of fake grassroots campaigns, also called political astroturfing.

In response, we developed the BotOrNot tool to detect social bots. It’s not perfect, but it is accurate enough to uncover persuasion campaigns in the Brexit and anti-vax movements. Using BotOrNot, our colleagues found that a large portion of online chatter about the 2016 elections was generated by bots.

In the year since the 2016 election, the question of how to counteract the damage done by “fake news” has become a pressing issue both for technology companies and for governments across the globe.

Yet as widespread as the problem is, opportunities to glimpse misinformation in action are fairly rare. Most users who generate misinformation do not also share accurate information, so it can be difficult to tease out the effect of misinformation itself. For example, when President Trump shares misinformation on Twitter, his tweets tend to go viral. But they may not be going viral because of the misinformation: All those retweets may instead be due to the popularity of Trump’s account, or to the fact that he writes about politically charged subjects. Without a corresponding set of accurate tweets from Trump, there’s no way of knowing what role misinformation is playing.

For researchers, isolating the effect of misinformation is thus extremely challenging. It’s not often that a user will share both accurate and inaccurate information about the same event, and at nearly the same time.

Yet shortly after the recent attack in Toronto, that is exactly what a CBC journalist did. In the chaotic aftermath of the attack, Natasha Fatah published two competing eyewitness accounts: one (wrongly, as it turned out) identifying the attacker as “angry” and “Middle Eastern,” and another correctly identifying him as “white.”

Fatah’s tweets are by no means definitive, but they do represent a natural experiment of sorts. And the results show just how fast misinformation can travel: in the roughly five hours after the attack, the initial tweet, which wrongly identified the attacker as Middle Eastern, received far more engagement than the accurate one.
