A group of independent researchers at AlgorithmWatch recently had to shut down a research project on Instagram after pushback from Facebook. The group had been studying how Instagram's algorithm works. However, they claim that Facebook threatened them with a lawsuit, forcing them to bring the research to a halt.
AlgorithmWatch published a post revealing its side of the story. According to the post, the research was volunteer-based: volunteers were asked to install a browser add-on that scraped their Instagram newsfeeds. The organization notes that volunteers shared only content from their own feeds, presumably meaning that the add-on could not access the feeds of other users.
“Over the last 14 months, about 1,500 volunteers installed the add-on,” AlgorithmWatch explains. “Data was sent to a database we used to study how Instagram prioritizes pictures and videos in a user’s timeline.”
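AlgorithmWatch doesn't describe the add-on's internals, but the underlying idea, recording which posts appear in one's own timeline and in what order, can be sketched with a toy parser. This is purely illustrative: the markup, class names, and data attributes below are invented, and Instagram's real pages look nothing like this.

```python
from html.parser import HTMLParser

# Hypothetical feed markup, invented for illustration only.
MOCK_FEED = """
<div class="post" data-id="p1" data-type="photo"></div>
<div class="post" data-id="p2" data-type="video"></div>
<div class="post" data-id="p3" data-type="photo"></div>
"""

class FeedParser(HTMLParser):
    """Records (position, id, type) for each post in the user's own feed.

    Position in the timeline is the key signal for studying how an
    algorithm ranks photos versus videos.
    """
    def __init__(self):
        super().__init__()
        self.posts = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "div" and a.get("class") == "post":
            self.posts.append({
                "position": len(self.posts),  # rank in the timeline
                "id": a.get("data-id"),
                "type": a.get("data-type"),
            })

parser = FeedParser()
parser.feed(MOCK_FEED)
print(parser.posts)
```

Aggregated across many volunteers and many sessions, records like these would let researchers compare how often a given content type appears near the top of the feed, which matches the kind of claims AlgorithmWatch reported.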
“With their data, we were able to show that Instagram likely encouraged content creators to post pictures that fit specific representations of their body, and that politicians were likely to reach a larger audience if they abstained from using text in their publications (Facebook denied both claims).”
The organization writes that several parties supported the project, including the European Data Journalism Network and the Dutch foundation SIDN, alongside a number of project partners. But when AlgorithmWatch asked Facebook to comment on the research results, there was allegedly no answer.
However, on 28 May 2020, Facebook responded. AlgorithmWatch claims that the company called the research “flawed in a number of ways”.
“On 2 March 2021, they claimed that they “found a number of issues with [our] methodology”. They did not list the flaws and issues in question, but we assumed that they carried out a thorough review of our methods and tools prior to issuing such strong statements.”
Then, in May 2021, Facebook reportedly asked AlgorithmWatch for a meeting. Facebook said that the research "breached their Terms of Service, which prohibit the automated collection of data." According to the researchers, Facebook said it would have to "mov[e] to more formal engagement" if they didn't "resolve" the issue on Facebook's terms. The researchers called this "a thinly veiled threat."
Speaking with Engadget, Facebook commented on AlgorithmWatch's claims. While the company confirmed that the research didn't comply with its Terms of Service, it denied threatening a lawsuit:
“We believe in independent research into our platform and have worked hard to allow many groups to do it, including AlgorithmWatch — but just not at the expense of anyone’s privacy. We had concerns with their practices, which is why we contacted them multiple times so they could come into compliance with our terms and continue their research, as we routinely do with other research groups when we identify similar concerns. We did not threaten to sue them. The signatories of this letter believe in transparency — and so do we. We collaborate with hundreds of research groups to enable the study of important topics, including by providing data sets and access to APIs, and recently published information explaining how our systems work and why you see what you see on our platform. We intend to keep working with independent researchers, but in ways that don’t put people’s data or privacy at risk.”
AlgorithmWatch still defends its methods and maintains that there was nothing wrong with the research. Nevertheless, the organization decided to shut down the project and delete the collected data. "Ultimately, an organization the size of AlgorithmWatch cannot risk going to court against a company valued at one trillion dollars."