Inside the hate factory: how Facebook fuels far-right profit

A detailed Guardian investigation into how Facebook makes money for itself and for those who spread Islamophobia. The more one reads of Facebook’s toxicity, the more one is struck by how profit drives the whole thing (as in, say, dumping mercury in a river drives profit). As TechCrunch notes:

A network of scammers used a ring of established right-wing Facebook pages to stoke Islamophobia and make a quick buck in the process, a new report from the Guardian reveals. But it’s less a vast international conspiracy and more simply that Facebook is unable to police its platform to prevent even the most elementary scams — with serious consequences.

The Guardian’s multi-part report depicts the events as a scheme of grand proportions executed for the express purpose of harassing Representatives Ilhan Omar (D-MN), Rashida Tlaib (D-MI) and other prominent Muslims. But the facts it uncovered point towards this being a run-of-the-mill money-making operation that used tawdry, hateful clickbait and evaded Facebook’s apparently negligible protections against this kind of thing.

The scam basically went like this: an administrator of a popular right-wing Facebook page would get a message from a person claiming to share their values, who asked to be made an editor. Once granted access, this person would publish clickbait stories — frequently targeting Muslims, and often Rep. Omar, since they reliably led to high engagement. The stories appeared on a handful of ad-saturated websites that were presumably owned by the scammers.

That appears to be the extent of the vast conspiracy, or at least its operations — duping credulous conservatives into clicking through to an ad farm.

Despite the scale of its effect on Rep. Omar and other targets, it’s possible and even likely that this entire thing was carried out by a handful of people. The operation was based in Israel, the report repeatedly mentions, but it isn’t a room of state-sponsored hackers feverishly tapping their keyboards — the guy they tracked down is a jewelry retailer and amateur SEO hustler living in a suburb of Tel Aviv who answered the door in sweatpants and nonchalantly denied all involvement.

The funny thing is that, in a way, this does amount to a vast international conspiracy. On one hand, it’s a guy in sweatpants worming his way into some trashy Facebook pages and mass-posting links to his bunk news sites. But on the other, it’s a coordinated effort to promote Islamophobic, right-wing content that produced millions of interactions and doubtless further fanned the flames of hatred.


5 Reasons Why People Love Cancel Culture

Psychology Today (via Charles Arthur):

“Cancel culture” describes how large groups of people, often on social media, target those who have committed some kind of moral violation. Those targeted are often cast out of their social and professional circles. Both the term “cancel culture” and the activity itself are becoming more popular, especially among young people.

Here are 5 reasons why cancel culture is so effective.

Grim reading, but just another byproduct of (anti-)social media.

How to fight lies, tricks, and chaos online is a useful roundup of what to look out for.

AI Copernicus ‘discovers’ that Earth orbits the Sun

Nature:

Astronomers took centuries to figure it out. But now, a machine-learning algorithm inspired by the brain has worked out that it should place the Sun at the centre of the Solar System, based on how movements of the Sun and Mars appear from Earth. The feat is one of the first tests of a technique that researchers hope they can use to discover new laws of physics, and perhaps to reformulate quantum mechanics, by finding patterns in large data sets. The results are due to appear in Physical Review Letters.

Physicist Renato Renner at the Swiss Federal Institute of Technology (ETH) in Zurich and his collaborators wanted to design an algorithm that could distill large data sets down into a few basic formulae, mimicking the way that physicists come up with concise equations like E = mc². To do this, the researchers had to design a new type of neural network, a machine-learning system inspired by the structure of the brain.

Conventional neural networks learn to recognize objects — such as images or sounds — by training on huge data sets. They discover general features — for example, ‘four legs’ and ‘pointy ears’ might be used to identify cats. They then encode those features in mathematical ‘nodes’, the artificial equivalent of neurons. But rather than distilling that information into a few, easily interpretable rules, as physicists do, neural networks are something of a black box, spreading their acquired knowledge across thousands or even millions of nodes in ways that are unpredictable and difficult to interpret.

So Renner’s team designed a kind of ‘lobotomized’ neural network: two sub-networks that were connected to each other through only a handful of links. The first sub-network would learn from the data, as in a typical neural network, and the second would use that ‘experience’ to make and test new predictions. Because few links connected the two sides, the first network was forced to pass information to the other in a condensed format. Renner likens it to how an adviser might pass on their acquired knowledge to a student.

First they came for the astronomers, and I did nothing.
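Architecturally, the design Renner describes is an encoder–decoder pair joined by a deliberately narrow latent layer. Here is a minimal PyTorch sketch of that bottleneck idea (the class, dimensions and names are illustrative assumptions, not the authors' code):

```python
import torch
import torch.nn as nn

class BottleneckNet(nn.Module):
    """Two sub-networks joined by a deliberately narrow latent layer.

    The encoder plays the role of the first sub-network (learning from raw
    observations); the decoder plays the second (answering questions).
    Because only `latent_dim` numbers pass between them, the encoder must
    compress what it has learned -- the 'condensed format' described above.
    """
    def __init__(self, obs_dim: int, question_dim: int, latent_dim: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, latent_dim),          # the 'handful of links'
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + question_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),                   # predicted observation
        )

    def forward(self, observations, question):
        latent = self.encoder(observations)
        return self.decoder(torch.cat([latent, question], dim=-1))
```

Train it to predict, say, where Mars will appear at a future time from past Earth-based angles of the Sun and Mars, then inspect the two latent nodes: if the bottleneck has done its job, they should encode something like heliocentric coordinates.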

Rock climbing and the economics of innovation

Richard Jones:

The rock climber Alex Honnold’s free, solo ascent of El Capitan is inspirational in many ways. For economist John Cochrane, watching the film of the ascent has prompted a blogpost: “What the success of rock climbing tells us about economic growth”. He concludes that “Free Solo is a great example of the expansion of ability, driven purely by advances in knowledge, untethered from machines.” As an amateur in both rock climbing and innovation theory, I can’t resist some comments of my own. I think it’s all a bit more complicated than Cochrane thinks. In particular his argument that Honnold’s success tells us that knowledge – and the widespread communication of knowledge – is more important than new technology in driving economic growth doesn’t really stand up.

The film “Free Solo” shows Honnold’s 2017 ascent of the 3000 ft cliff El Capitan, in the Yosemite Valley, California. The climb was done free (i.e. without the use of artificial aids like pegs to make progress), and solo – without ropes or any other aids to safety. How come, Cochrane asks, rock climbers have got so much better at climbing since El Cap’s first ascent in 1958, which took 47 days, done with “siege tactics” and every artificial aid available at the time? “There is essentially no technology involved. OK, Honnold wears modern climbing boots, which have very sticky rubber. But that’s about it. And reasonably sticky rubber has been around for a hundred years or so too.”

Hold on a moment here – no technology? I don’t think the history of climbing really bears this out. Even the exception that Cochrane allows, sticky rubber boots, is more complicated than he thinks.

Maybe It’s Not YouTube’s Algorithm That Radicalizes People

Wired:

YouTube is the biggest social media platform in the country, and, perhaps, the most misunderstood. Over the past few years, the Google-owned platform has become a media powerhouse where political discussion is dominated by right-wing channels offering an ideological alternative to established news outlets. And, according to new research from Penn State University, these channels are far from fringe—they’re the new mainstream, and recently surpassed the big three US cable news networks in terms of viewership.

The paper, written by Penn State political scientists Kevin Munger and Joseph Phillips, tracks the explosive growth of alternative political content on YouTube, and calls into question many of the field’s established narratives. It challenges the popular school of thought that YouTube’s recommendation algorithm is the central factor responsible for radicalizing users and pushing them into a far-right rabbit hole.

The authors say that thesis largely grew out of media reports, and hasn’t been rigorously analyzed. The best prior studies, they say, haven’t been able to prove that YouTube’s algorithm has any noticeable effect. “We think this theory is incomplete, and potentially misleading,” Munger and Phillips argue in the paper. “And we think that it has rapidly gained a place in the center of the study of media and politics on YouTube because it implies an obvious policy solution—one which is flattering to the journalists and academics studying the phenomenon.”

Instead, the paper suggests that radicalization on YouTube stems from the same factors that persuade people to change their minds in real life—injecting new information—but at scale. The authors say the quantity and popularity of alternative (mostly right-wing) political media on YouTube is driven by both supply and demand. The supply has grown because YouTube appeals to right-wing content creators, with its low barrier to entry, easy way to make money, and reliance on video, which is easier to create and more impactful than text.

‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy

Daniel J. Solove in the San Diego Law Review:

In this short essay, written for a symposium in the San Diego Law Review, Professor Daniel Solove examines the nothing to hide argument. When asked about government surveillance and data mining, many people respond by declaring: “I’ve got nothing to hide.” According to the nothing to hide argument, there is no threat to privacy unless the government uncovers unlawful activity, in which case a person has no legitimate justification to claim that it remain private. The nothing to hide argument and its variants are quite prevalent, and thus are worth addressing. In this essay, Solove critiques the nothing to hide argument and exposes its faulty underpinnings.

First published in 2007. Still relevant.

Human speech may have a universal transmission rate: 39 bits per second

Science:

Italians are some of the fastest speakers on the planet, chattering at up to nine syllables per second. Many Germans, on the other hand, are slow enunciators, delivering five to six syllables in the same amount of time. Yet in any given minute, Italians and Germans convey roughly the same amount of information, according to a new study. Indeed, no matter how fast or slowly languages are spoken, they tend to transmit information at about the same rate: 39 bits per second, about twice the speed of Morse code.

“This is pretty solid stuff,” says Bart de Boer, an evolutionary linguist who studies speech production at the Free University of Brussels, but was not involved in the work. Language lovers have long suspected that information-heavy languages—those that pack more information about tense, gender, and speaker into smaller units, for example—move slowly to make up for their density of information, he says, whereas information-light languages such as Italian can gallop along at a much faster pace. But until now, no one had the data to prove it.

Scientists started with written texts from 17 languages, including English, Italian, Japanese, and Vietnamese. They calculated the information density of each language in bits—the same unit that describes how quickly your cellphone, laptop, or computer modem transmits information. They found that Japanese, which has only 643 syllables, had an information density of about 5 bits per syllable, whereas English, with its 6949 syllables, had a density of just over 7 bits per syllable. Vietnamese, with its complex system of six tones (each of which can further differentiate a syllable), topped the charts at 8 bits per syllable.

Next, the researchers spent 3 years recruiting and recording 10 speakers—five men and five women—from 14 of their 17 languages. (They used previous recordings for the other three languages.) Each participant read aloud 15 identical passages that had been translated into their mother tongue. After noting how long the speakers took to get through their readings, the researchers calculated an average speech rate per language, measured in syllables/second.

Some languages were clearly faster than others: no surprise there. But when the researchers took their final step—multiplying this rate by the bit rate to find out how much information moved per second—they were shocked by the consistency of their results. No matter how fast or slow, how simple or complex, each language gravitated toward an average rate of 39.15 bits per second, they report today in Science Advances. In comparison, the world’s first computer modem (which came out in 1959) had a transfer rate of 110 bits per second, and the average home internet connection today has a transfer rate of 100 megabits per second (or 100 million bits).
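The arithmetic behind the headline figure is simply information rate = density (bits per syllable) × speech rate (syllables per second). A quick sketch using the per-syllable densities quoted above, with illustrative speech rates (the excerpt doesn't give the paper's measured per-language values):

```python
# Information rate = density (bits/syllable) x speech rate (syllables/second).
# Densities are the figures quoted above; the speech rates are illustrative
# assumptions, not the paper's measured values.
languages = {
    #              bits/syllable  syllables/second
    "Japanese":   (5.0,           8.0),
    "English":    (7.1,           5.5),
    "Vietnamese": (8.0,           4.9),
}

for name, (density, rate) in languages.items():
    print(f"{name:>10}: {density * rate:5.1f} bits/second")
# Each product lands near the reported cross-language average of ~39 bits/s:
# dense languages are spoken more slowly, sparse ones more quickly.
```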

How social networks can be used to bias votes

Via Charles Arthur, Nature editorial board:

Politicians’ efforts to gerrymander — redraw electoral-constituency boundaries to favour one party — often hit the news. But, as a paper published in Nature this week shows, gerrymandering comes in other forms, too.

The work reveals how connections in a social network can also be gerrymandered — or manipulated — in such a way that a small number of strategically placed bots can influence a larger majority to change its mind, especially if the larger group is undecided about its voting intentions (A. J. Stewart et al. Nature 573, 117–118; 2019).

The researchers, led by mathematical biologist Alexander Stewart of the University of Houston, Texas, have joined those who are showing how it can be possible to give one party a disproportionate influence in a vote.

It is a finding that should concern us all.

A masterful understatement.
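The mechanism is easy to caricature in code. Below is a toy majority-rule opinion model with a few immovable bots wired into every voter's feed; it illustrates the general idea, not the Stewart et al. model, and all the numbers are made up:

```python
import random

def simulate(n_voters=100, degree=8, bots_per_feed=2, steps=3000, seed=1):
    """Toy 'information gerrymandering': each voter follows `degree` accounts,
    of which `bots_per_feed` are zealot bots that always push opinion +1 and
    never change.  Genuine voters repeatedly adopt their feed's majority view.
    (Illustrative only -- not the model from Stewart et al.)"""
    random.seed(seed)
    opinions = [random.choice([-1, 1]) for _ in range(n_voters)]  # ~50/50 start
    feeds = []
    for i in range(n_voters):
        others = [j for j in range(n_voters) if j != i]
        feeds.append(random.sample(others, degree - bots_per_feed))
    for _ in range(steps):
        i = random.randrange(n_voters)
        # Feed majority = genuine neighbours' votes plus the bots' fixed +1s.
        tally = sum(opinions[j] for j in feeds[i]) + bots_per_feed
        if tally != 0:
            opinions[i] = 1 if tally > 0 else -1
    return sum(o == 1 for o in opinions) / n_voters

print(f"voters ending up on the bots' side: {simulate():.0%}")
```

With two bots in every feed of eight, an electorate that starts split roughly 50/50 drifts almost entirely to the bots' position: the bots never waver, so every close call breaks their way.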

Auditing Radicalization Pathways on YouTube

Manoel Horta Ribeiro, Raphael Ottoni, Robert West, Virgílio A. F. Almeida and Wagner Meira on arXiv:

Non-profits and the media claim there is a radicalization pipeline on YouTube. Its content creators would sponsor fringe ideas, and its recommender system would steer users towards edgier content. Yet, the supporting evidence for this claim is mostly anecdotal, and there are no proper measurements of the influence of YouTube’s recommender system. In this work, we conduct a large-scale audit of user radicalization on YouTube. We analyze 331,849 videos from 360 channels, which we broadly classify into: control, the Alt-lite, the Intellectual Dark Web (I.D.W.), and the Alt-right — channels in the I.D.W. and the Alt-lite would be gateways to fringe far-right ideology, here represented by Alt-right channels. Processing more than 79M comments, we show that the three communities increasingly share the same user base; that users consistently migrate from milder to more extreme content; and that a large percentage of users who consume Alt-right content now consumed Alt-lite and I.D.W. content in the past. We also probe YouTube’s recommendation algorithm, looking at more than 2M recommendations for videos and channels between May and July 2019. We find that Alt-lite content is easily reachable from I.D.W. channels via recommendations and that Alt-right channels may be reached from both I.D.W. and Alt-lite channels. Overall, we paint a comprehensive picture of user radicalization on YouTube and provide methods to transparently audit the platform and its recommender system.

Google knows this but does nothing.
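The migration measurement at the heart of the abstract is easy to sketch: from comment records of the form (user, community, year), compute, for each year, what share of Alt-right commenters had already commented on the "gateway" communities in an earlier year. A rough illustration follows; the data layout and names are assumptions, not the authors' code:

```python
from collections import defaultdict

def migration_share(comments, target="alt-right", gateways=("alt-lite", "idw")):
    """For each year, the share of users commenting on `target` channels who
    had already commented on a gateway community in a previous year.
    `comments` is an iterable of (user, community, year) tuples, e.g. parsed
    from a comment dump.  (Illustrative sketch, not the authors' code.)"""
    users_by_year = defaultdict(lambda: defaultdict(set))
    for user, community, year in comments:
        users_by_year[year][community].add(user)

    shares, seen_on_gateway = {}, set()
    for year in sorted(users_by_year):
        target_users = users_by_year[year][target]
        if target_users:
            shares[year] = len(target_users & seen_on_gateway) / len(target_users)
        for g in gateways:  # added after, so only *earlier* years count as exposure
            seen_on_gateway |= users_by_year[year][g]
    return shares
```

A rising curve of `shares[year]` is exactly the "users consistently migrate from milder to more extreme content" signal the paper reports.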