What you need to know
- In an interview with Bloomberg, Microsoft President Brad Smith joined CVP for Customer Security and Trust Tom Burt to discuss misinformation and censorship.
- Smith said that users want to “make up their own minds” about fake news, and that Microsoft opts to give users “more information, not less.”
- Microsoft has been collaborating with Twitter and research firms to reduce harmful outcomes from algorithmic news content.
In an interview with Bloomberg, Microsoft President Brad Smith discussed disinformation across the company’s information products, such as Bing, LinkedIn, MSN, and beyond.
Social media companies such as Twitter and Facebook have been increasingly blamed for the erosion of a shared truth in society, as bad-faith hot takes and outrage-baiting sensationalism take newsfeed priority, a consequence of human nature intersecting with algorithmic content delivery. Twitter and Facebook both came under fire during the early days of the pandemic for failing to prevent the spread of false information about the efficacy of COVID-19 vaccines, for example, while Facebook has also been criticized for its role in promoting hate groups and has been cited in court proceedings connected to murder and even genocide.
Given Microsoft’s global footprint, the company often shares its stances on major political issues via President Brad Smith on the Microsoft Blog. Earlier today, the firm shared how it is approaching violent and extremist content on its services. Microsoft has partnered with various research firms to study how “algorithmic outcomes” can lead to real-world violence, with the Christchurch massacre in New Zealand as a focal point.
Despite this, Brad Smith told Bloomberg in an interview today that he doesn’t think it’s the role of tech companies to tell people what’s true or false.
“I don’t think that people want governments to tell them what’s true or false, and I don’t think they’re really interested in having tech companies tell them either.”
Microsoft discussed its cybersecurity teams, which work with the U.S. Department of Homeland Security to track and isolate propaganda campaigns and cyberwarfare attacks from hostile regimes such as Russia, North Korea, and Iran. Smith said the company aims to be “transparent” in its approach to tracking disinformation, with a goal of lobbying governments to agree on national rules. Microsoft CVP for Customer Security and Trust Tom Burt emphasized the importance of conversation and transparency in building a consensus for action.
“It turns out that if you tell people what’s going on, then that knowledge inspires both action and conversation about the steps that global governments need to take to address these issues.”
Microsoft has been an active participant in defending Ukrainian internet infrastructure and exposing Russian cyberattacks, and it told Bloomberg that it will reduce the visibility of Russian state-sponsored media such as Sputnik and RT across its services unless a user specifically seeks out that content. However, Smith indicated that Microsoft would stop short of outright banning Russian propaganda outlets such as RT from its products, opting instead to let users decide for themselves.
“We have to be very thoughtful and careful because—and this is also true of every democratic government—fundamentally, people quite rightly want to make up their own mind and they should. Our whole approach needs to be to provide people with more information, not less and we cannot trip over and use what others might consider censorship as a tactic.”