Wednesday, February 24, 2021

"Information fiduciaries" again
Wolff is a professor of economics at UMass. The post is here. I was blocked from sharing the post, but not the Guardian article itself. Via Doug Henwood on Twitter.

The Grayzone, Blumenthal, and Reuters

What's the definition of a "platform"?

In that same update about group recommendations, the product manager also explained how leaders decided against making changes to a feature called In Feed Recommendations (IFR) due to potential political worries. Designed to insert posts into people’s feeds from accounts they don’t follow, IFR was intended to foster new connections or interests. For example, if a person followed the Facebook page for a football team like the Kansas City Chiefs, IFR might add a post from the NFL to their feed, even if that person didn’t follow the NFL.

One thing IFR was not supposed to do was recommend political content. But earlier that spring, Facebook users began complaining that they were seeing posts from conservative personalities including Ben Shapiro in their News Feeds even though they had never engaged with that type of content.

When the issue was flagged internally, Facebook’s content policy team warned that removing such suggestions for political content could reduce those pages’ engagement and traffic, and possibly inspire complaints from publishers. A News Feed product manager and a policy team member reiterated this argument in an August post to Facebook’s internal message board.

“A noticeable drop in distribution for these producers (via traffic insights for recommendations) is likely to result in high-profile escalations that could include accusations of shadow-banning and/or FB bias against certain political entities during the US 2020 election cycle,” they explained. Shadow-banning, or the limiting of a page’s circulation without informing its owners, is a common accusation leveled by right-wing personalities against social media platforms.

Throughout 2020, the “fear of antagonizing powerful political actors,” as the former core data scientist put it in their memo, became a key public policy team rationalization for forgoing action on potentially violative content or rolling out product changes ahead of the US presidential election. They also said they had seen “a dozen proposals to measure the objective quality of content on News Feed diluted or killed because … they have a disproportionate impact across the US political spectrum, typically harming conservative content more.”

The data scientist, who spent more than five years at the company before leaving late last year, noted that while strides had been made since 2016, the state of political content on News Feed was “still generally agreed to be bad.” According to Facebook data, they added, 1 of every 100 views on content about US politics was for some type of hoax, while the majority of views for political materials were on partisan posts. Yet the company continued to give known spreaders of false and misleading information a pass if they were deemed “‘sensitive’ or likely to retaliate,” the data scientist said.

“In the US it appears that interventions have been almost exclusively on behalf of conservative publishers,” they wrote, attributing this to political pressure or a reluctance to upset sensitive publishers and high-profile users.

As BuzzFeed News reported last summer, members of Facebook’s policy team — including Kaplan — intervened on behalf of right-wing figures and publications such as Charlie Kirk, Breitbart, and Prager University, in some cases pushing for the removal of misinformation strikes against their pages or accounts. Strikes, which are applied at the recommendation of Facebook’s third-party fact-checkers, can result in a range of penalties, from a decrease in how far their posts are distributed to the removal of the page or account.

Kaplan’s other interventions are well documented. In 2018, the Wall Street Journal revealed that he helped kill a project to connect Americans who have political differences. The paper said Kaplan had objected “when briefed on internal Facebook research that found right-leaning users tended to be more polarized, or less exposed to different points of view, than those on the left.” Last year, the New York Times reported that policy executives declined to expand a feature called “correct the record” — which notified users when they interacted with content that was later labeled false by Facebook’s fact-checking partners — out of fear that it would “disproportionately show notifications to people who shared false news from right-wing websites.”
