Microsoft won’t label fake news, hoping to avoid “censorship” accusations, as part of its effort to combat online misinformation


Microsoft suggests that labeling fake news could be construed as censorship, and it is reluctant to flag individual stories for that reason. Instead of tracking down content and marking it as intentionally false and misleading, Microsoft would rather expose the people and agencies that create and distribute the propaganda. This approach differs from that taken by other companies in the industry, and many remain skeptical about its effectiveness.

As the US midterm elections approach, Microsoft is taking a fundamentally different approach to combating fake news. Most tech giants operating on American soil are treading carefully as they manage the expected onslaught of disinformation campaigns. Companies such as Facebook and Twitter have already faced strong backlash over their methods of tagging and removing misleading content, and the practice has become a hotly debated political issue.

Given the risk of a negative reaction, Microsoft does not want to go down the same path as its rivals. Microsoft President Brad Smith said in a recent interview with Bloomberg News that the company would not label social media posts that appear fake, to avoid giving the impression that it is trying to censor online speech. “I don’t think people want governments to tell them what’s right or wrong,” Smith said when asked about Microsoft’s role in defining online misinformation.

He added: “And I don’t think people are really interested in what tech companies are telling them either.” Smith’s comments are the strongest indication yet that Microsoft is taking a unique path to track and disrupt digital propaganda efforts. At this time, Microsoft is focused on tracking disinformation campaigns that target its private and public sector customers and on publicizing their existence. Incidentally, the Redmond-based company does not use the term “fake news”; it prefers to speak of “influence operations”.

Commenting on Microsoft’s approach, Tom Burt, the company’s vice president of customer security and trust, explained: “We’re going to look at how we can do this in the context of influence operations. It turns out that if you tell people what’s going on, that knowledge inspires both action and conversation about what world governments need to do to address these issues.” Microsoft’s policy team will soon be sharing its findings on propaganda and online disinformation with governments around the world.

The company hopes to pressure political leaders to agree on a set of rules governing the conduct of nation states in cyberspace. Microsoft has invested in information operations analysts and tools to track propaganda campaigns. Working alongside the company’s internal cybersecurity teams, these groups claim to have disrupted suspected Russian, Iranian, Chinese, and North Korean state-sponsored hackers. Microsoft could take a similar approach to combat the onslaught of fake news and disinformation campaigns.

This year, Microsoft published a report on Russian cyber espionage against targets in Ukraine, alleging that the intruders carried out hacks in tandem with disinformation operations and military attacks. For example, hackers allegedly stole data from nuclear industry organizations to help the military and state media spread claims that Ukraine was manufacturing chemical and biological weapons, thereby justifying the seizure of nuclear power plants by soldiers. In the report, the company said it would take a number of actions against certain Russian entities.

Microsoft said it would reduce the visibility of Russian state-sponsored media by removing the RT app from its Windows app store and only returning links to RT and Sputnik “when a user clearly intends to navigate to those pages”. Microsoft also announced in June that it was buying Miburo, a disinformation and cyber-threat analysis firm headed by former FBI agent and counterterrorism expert Clint Watts. The Redmond-based firm said the acquisition would help it understand how threat actors use influence operations in tandem with hacking.

Companies such as Facebook and Twitter have faced an outcry over their attempts to flag and remove inaccurate and misleading posts on their websites and apps. The debate over truth online has become politicized, with US lawmakers claiming that social media companies are stifling right-wing voices. The US Department of Homeland Security, meanwhile, closed its own disinformation bureau earlier this year after an outcry. Critics say it is not for corporations and governments to tell people what is right or wrong.

Smith said Microsoft, which operates the Bing search engine and the LinkedIn social network, wants to provide the public with more information about who is speaking and what they are saying, so that people can judge the truth of the content for themselves. “We have to be very thoughtful and careful, because – and this is true of any democratic government – basically people rightly want to make up their own minds, and they should. The goal is to give people more information, not less, and we can’t stumble and use a tactic that can be seen as censorship,” he said.

Microsoft’s approach is welcomed by some but criticized by many others, who consider that fake news should be flagged and deleted, as Facebook and Twitter do. “I certainly don’t know where to draw the line between promoting lies and censoring things, but given the recent past, we cannot allow lies to proliferate in our society in the name of ‘free speech’. In the United States, we have probably had hundreds of thousands more deaths from Covid-19 because of nonsense spread on social media,” said one critic.

“There’s a hell of a problem when companies like Facebook have about 10 people in a boardroom making decisions that impact billions of people. We had the January 6 insurrection, stoked by the former president’s tweets and social media posts for months after the election,” he added. That said, those who support Microsoft’s approach believe that one of the most startling developments in recent years is that progressives want three or four big, evil corporations to determine what is allowed to be said.

Recently, Google published a study suggesting that psychological “inoculation” can improve resistance to online misinformation. The research team found that psychologically “inoculating” internet users against lies and conspiracy theories – by preemptively showing them videos explaining the tactics behind misinformation – made them more skeptical of falsehoods thereafter. Based on the study’s results, Google plans to soon launch a campaign to counter online misinformation about Ukrainian refugees.

And you?

What is your opinion on the subject?
What do you think of Microsoft’s approach to fighting online misinformation?
Do you think this is an effective solution to fight against online disinformation?
What do you think of Google’s approach of psychologically “inoculating” Internet users against misinformation?

See also

A Google study reveals that a psychological “inoculation” can improve resistance to disinformation, it would allow an Internet user to recognize disinformation techniques

A bug in Facebook’s algorithm caused misinformation to be highlighted on news feeds, it wasn’t fixed until six months later

Facebook and Instagram delete accounts of influential US anti-vax organization for spreading misinformation
