Saturday, August 26, 2017

And now for a few notes on a single, sophisticated Facebook bot scam. Or what, exactly, is going on here?


Here are the names of persons of interest in what may be a major and quite sophisticated Facebook scam. Or, if nothing else, a real curiosity in the way Facebook does business. I have identified three names, or FB avatars (profiles), that are either completely bogus or some kind of perverse bots at work in a social media universe that, at one time, drew heavy investment from a major Russian oligarch.

Now, the clincher for me was this. They were offering $500,000. However, for whatever reason, they could not simply deliver the money. They were sending it FedEx, according to the "claim agents" at work here. The names (and I must say "perhaps," because who really knows with a place that has so few business phone contacts) are the following: Garrett Jordan, Debra Washington and, really curious, an avatar named Mark Z u k e r b e r g. Yes, Zuckerberg, spelled just that way. Is this a matter of misspelling? But why?

Now, web squatters have been doing this for years, misdirecting people to similar-looking but false sites. Why the diffusion? Aren't there infinite possibilities for the correctly spelled Zuckerberg? And then, of course, there's the consistent pattern among gullible people: THEY DO NOT KNOW HOW TO SPELL.

SO, let's just bring this to light. What is going on here? There is an almost alien, A.I. intelligence about the whole process, but it cannot answer some very basic questions, such as, in the case of Debra Washington of Louisiana: what's going on right now in the Gulf of Mexico? Do they know what Katrina is? So on. So forth.

Bring it to light. That's all I'm saying. Why would Zuckerberg, when false identities are so out of control, create Zukerberg bots?

Anybody got any good guesses? Because for me, quite frankly, I've had quite enough of the whole fucking thing. Namaste.

RAW DATA MINE ...

Business Insider, July 27

Russia's troll factories were, at one point, likely being paid by the Kremlin to spread pro-Trump propaganda on social media.

That is what freelance journalist Adrian Chen, now a staff writer at The New Yorker, discovered as he was researching Russia's "army of well-paid trolls" for an explosive New York Times Magazine exposé published in June 2015.

"A very interesting thing happened," Chen told Longform's Max Linsky in a podcast in December.

"I created this list of Russian trolls when I was researching. And I check on it once in a while, still. And a lot of them have turned into conservative accounts, like fake conservatives. I don't know what's going on, but they're all tweeting about Donald Trump and stuff," he said.

Linsky then asked Chen who he thought "was paying for that."

"I don't know," Chen replied. "I feel like it's some kind of really opaque strategy of electing Donald Trump to undermine the US or something. Like false-flag kind of thing. You know, that's how I started thinking about all this stuff after being in Russia."

---

AOL News

According to two members of the Senate Intelligence Committee, Sen. Mark Warner (D-VA) and committee chairman Richard Burr (R-NC), hundreds of Russian trolls were paid in 2016 to generate fake news stories about Clinton and target them at voters in key states in an effort to swing the election for Trump.

"There were upwards of a thousand paid internet trolls working out of a facility in Russia, in effect taking over a series of computers which are then called a botnet, that can then generate news down to specific areas," Warner said.

---

TechCrunch, 2009

Facebook is taking that rumored $200 million investment from Digital Sky Technologies, a Russian investment group. DST will take a 1.96 percent stake in the company, giving Facebook a $10 billion valuation. Facebook ultimately did not have to give up a board seat to DST in return for the cash. But DST is getting preferred shares for its $200 million.

---

The Guardian 2013

The London-listed company controlled by Russia's richest man, Alisher Usmanov, has taken advantage of the recent rise in Facebook's share price to sell its remaining stake in the social network.

Mail.ru said on Thursday it had sold 14.2m Facebook shares for $528m (£338m). The firm first bought into Mark Zuckerberg's digital venture in 2009, spending $200m for a small stake when Facebook was valued at just $10bn. Facebook today has a stock market value of $102bn.

---

In the United States in the run-up to the 2016 presidential election, fake news was particularly prevalent and spread rapidly over social media "bots", according to researchers at the Oxford Internet Institute.[97][98] Germany's Chancellor Angela Merkel became a target for fake news in the run-up to the 2017 German federal election.[99]


60 Minutes producers said President Trump uses the phrase "fake news" to mean something else: "I take offense with what you said."[7]

In the early weeks of his presidency, U.S. President Donald Trump frequently used the term "fake news" to refer to traditional news media, singling out CNN.[100] Linguist George Lakoff says this creates confusion about the phrase's meaning.[101] According to CBS 60 Minutes, President Trump may use the term fake news to describe any news, however legitimate or responsible, with which he may disagree.[80]

President Trump also used the social media site Twitter to express that "there is popular support for his executive order temporarily prohibiting the entry of all refugees as well as travellers from seven majority-Muslim nations", and that any surveys appearing to show a significantly higher number of people opposing the ban "are fake news, just like the CNN, ABC, NBC polls in the election".[102][103]

After Republican Colorado State Senator Ray Scott used the term as a reference to a column in the Grand Junction Daily Sentinel, the newspaper's publisher threatened a defamation lawsuit.[104][105]

In December 2016, an armed North Carolina man traveled to Washington, D.C., and opened fire at Comet Ping Pong pizzeria, driven by a fake online news story accusing the pizzeria of hosting a pedophile ring run by Democratic Party leaders.[106] These stories tend to go viral quickly. Social media systems, such as Facebook, play a large role in the broadcasting of fake news. These systems show users content that matches their interests and history, which helps fake and misleading news find receptive audiences.

A case study by The New York Times shows how a tweet from a user with no more than 40 followers was shared 16,000 times on Twitter.[107] The user had posted a photograph of two buses outside a building, claiming they had carried paid anti-Trump protesters to a demonstration. The claim immediately went viral on both Twitter and Facebook. Fake news spreads easily because the technology is so fast and accessible to everyone.

President Donald Trump uses the term "fake news" to discredit news coverage he dislikes. A CNN investigation shows exactly how fake news can start to trend.[108] Fake news publishers use "bots" to make their articles appear more popular than they are, which makes the articles more likely to be seen and shared. "Bots are fake social media accounts that are programmed to automatically 'like' or retweet a particular message."

~~~

The Register

April 2017

Analysis Last November at the Techonomy Conference in Half Moon Bay, California, Facebook CEO Mark Zuckerberg dismissed the notion that disinformation had affected the US presidential election as lunacy.

"The idea that fake news on Facebook, which is a very small amount of the content, influenced the election in any way, I think, is a pretty crazy idea," said Zuckerberg.

Five months later, after a report [PDF] from the Office of the US Director of National Intelligence provided an overview of Russia's campaign to influence the election – via social media among other means – the social media giant has published a plan for "making Facebook safe for authentic information."

Penned by Facebook chief security officer Alex Stamos and security colleagues Jen Weedon and William Nuland, "Information Operations and Facebook" [PDF] describes an expansion of the company's security focus from "traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people."




This despite Zuckerberg's insistence that "of all the content on Facebook, more than 99 per cent of what people see is authentic."




Facebook's paper says information operations to exploit the personal data goldmine revolve around targeted data collection from account holders, content creation to seed stories to the press, and false amplification to spread misinformation. It focuses on defenses against data collection and the distribution of misleading content.




To combat targeted data collection, Facebook says it is:




Promoting and providing support for security and privacy features, such as two-factor authentication.

Presenting notifications to specific people targeted by sophisticated attackers, with security recommendations tailored to the threat model.

Sending notifications to people not yet targeted but likely to be at risk based on the behavior of known threats.

Working with government bodies overseeing election integrity to notify and educate those at risk.
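The first item on that list, two-factor authentication, most commonly means a time-based one-time password (TOTP). As a concrete illustration, here is a minimal, self-contained sketch of how such a six-digit code is derived per RFC 6238; the secret shown is the RFC's published test key, not anything Facebook-specific.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238 over RFC 4226 HOTP)."""
    key = base64.b32decode(secret_b32)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((at if at is not None else time.time()) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low nibble of the last byte picks an offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret ("12345678901234567890" in base32) at T=59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59))  # prints 287082
```

The server and the user's phone share the secret, so both can compute the same short-lived code independently; an attacker who phishes only the password still can't log in.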

False amplification – efforts to spread misinformation to hurt a cause, sow mistrust in political institutions, or foment civil strife – is recognized in the report as a possible threat to Facebook's continuing vitality.




"The inauthentic nature of these social interactions obscures and impairs the space Facebook and other platforms aim to create for people to connect and communicate with one another," the report says. "In the long term, these inauthentic networks and accounts may drown out valid stories and even deter some people from engaging at all."




As can be seen from Twitter's half-hearted efforts to subdue trolls, sock puppets, and the like, such interaction can be toxic to social networks.




Stamos, Weedon and Nuland note that Facebook is building on its investment in fake account detection with more protections against manually created fake accounts and with additional analytic techniques involving machine learning.




Facebook's security team might want to have a word with computer scientists from University of California Santa Cruz, Catholic University of the Sacred Heart in Italy, the Swiss Federal Institute of Technology Lausanne, and elsewhere who have made some progress in spotting disinformation.




'Some like it hoax'

In a paper published earlier this week, "Some Like it Hoax: Automated Fake News Detection in Social Networks" [PDF], assorted code boffins report that they can identify hoaxes more than 99 per cent of the time, based on an analysis of the individuals who respond to such posts.




"Hoaxes can be identified with great accuracy on the basis of the users that interact with them," the research paper claims.
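The core idea, judging a post by who interacts with it rather than by its text, can be sketched in a few lines. This toy version is not the authors' actual pipeline (the paper uses logistic regression and a harmonic label-propagation method); the user sets and the Jaccard-similarity rule here are invented for illustration.

```python
# Toy illustration: label a post by comparing its likers to the
# audiences of known hoax and known genuine posts.

def jaccard(a, b):
    """Jaccard similarity of two sets (0.0 when both are empty)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def classify_by_likers(likers, hoax_audience, genuine_audience):
    """Return 'hoax' if the post's likers overlap more with the hoax
    audience than with the genuine one (ties count as hoax)."""
    if jaccard(likers, hoax_audience) >= jaccard(likers, genuine_audience):
        return "hoax"
    return "genuine"

# Invented training data: users previously seen liking labeled posts.
hoax_audience = {"u1", "u2", "u3"}
genuine_audience = {"u4", "u5", "u6"}

print(classify_by_likers({"u1", "u2", "u9"}, hoax_audience, genuine_audience))  # hoax
print(classify_by_likers({"u4", "u6"}, hoax_audience, genuine_audience))        # genuine
```

The appeal of the approach is that hoax publishers can rewrite their text endlessly, but the audience that reliably engages with hoaxes is much harder to disguise.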




Asked about Zuckerberg's claim that only about 1 per cent of Facebook content is inauthentic, Luca de Alfaro, computer science professor at UC Santa Cruz and one of the hoax paper's co-authors, said he had no information on the general distribution of misinformation on Facebook.




"I would trust Mark on this," de Alfaro said in an email to The Register. "I know that on Wikipedia, on which I worked in the past, explicit vandalism is about 6 or 7 per cent (or it was some time ago)."




More significant than the percentage of fake news, de Alfaro suggested, is the impact of hoaxes on people.




"For instance, suppose I read and believe 10 run-of-the-mill pieces of news, and one outrageous hoax: which one of these 11 news [stories] will have the greatest impact on me?" he said. "Hoaxes are frequently harmful due to the particular nature of their crafted content. You can eat 99 meatballs and 1 poison pill, and you still die."




Machine learning techniques are proving to be effective, de Alfaro suggested, but people still need to be involved in the process.




"In our work, we were able to show that we can get very good automated results even when the oversight is limited to 0.5 per cent of the news we classify: thus, human oversight on a very small portion of news helps classify most of them."
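That small-seed approach can be illustrated as score propagation over the user-post "like" graph: labels on a handful of posts flow through shared audiences to everything else. This is a toy sketch of the general technique, not the paper's exact harmonic algorithm, and the data is invented.

```python
def propagate(likes, seed, iters=20):
    """likes: {post: set of users who liked it};
    seed: {post: 1.0 for known hoax, 0.0 for known genuine}.
    Returns a hoax score in [0, 1] for every post."""
    post_score = {p: seed.get(p, 0.5) for p in likes}
    for _ in range(iters):
        # Each user's score is the mean score of the posts they liked.
        liked_scores = {}
        for post, users in likes.items():
            for u in users:
                liked_scores.setdefault(u, []).append(post_score[post])
        user_score = {u: sum(s) / len(s) for u, s in liked_scores.items()}
        # Each unlabeled post's score is the mean score of its likers;
        # human-labeled seeds stay fixed.
        for post, users in likes.items():
            if post in seed:
                post_score[post] = seed[post]
            elif users:
                post_score[post] = sum(user_score[u] for u in users) / len(users)
    return post_score

likes = {
    "p1": {"a", "b"},       # seeded hoax
    "p2": {"a", "b", "c"},  # unlabeled, shares the hoax audience
    "p3": {"d", "e"},       # seeded genuine
    "p4": {"d"},            # unlabeled, shares the genuine audience
}
scores = propagate(likes, seed={"p1": 1.0, "p3": 0.0})
print(scores["p2"] > 0.5, scores["p4"] < 0.5)  # True True
```

With only two of the four posts labeled by a human, the other two inherit sensible scores from their audiences, which is the spirit of de Alfaro's 0.5 per cent oversight figure.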




Asked whether human oversight is always necessary for such systems, de Alfaro said that was a difficult question.




"To some level, I believe the answer is yes, because even if you use machine learning in other ways, you need to train the machine learning on data that has been, in the end, selected by some kind of human process," he said. "We are developing in my group at UCSC, and together with the other collaborators, a series of tools and apps that will enable people to access our classifiers, and we hope this might have an impact."




For Facebook, and the depressingly large number of people who rely on it, such tools can't come soon enough.




---

In 2017, the inventor of the World Wide Web, Tim Berners-Lee, said that fake news was one of the three most significant disturbing new Internet trends that must be resolved if the Internet is to be capable of truly "serving humanity." The other two trends Berners-Lee described as threatening the Internet were the recent surge in governments' use of the Internet for citizen surveillance and for cyber warfare.

---

Time




Russia plays in every social media space. The intelligence officials have found that Moscow's agents bought ads on Facebook to target specific populations with propaganda. "They buy the ads, where it says sponsored by--they do that just as much as anybody else does," says the senior intelligence official. (A Facebook official says the company has no evidence of that occurring.) The ranking Democrat on the Senate Intelligence Committee, Mark Warner of Virginia, has said he is looking into why, for example, four of the top five Google search results the day the U.S. released a report on the 2016 operation were links to Russia's TV propaganda arm, RT. (Google says it saw no meddling in this case.) Researchers at the University of Southern California, meanwhile, found that nearly 20% of political tweets in 2016 between Sept. 16 and Oct. 21 were generated by bots of unknown origin; investigators are trying to figure out how many were Russian.




As they dig into the viralizing of such stories, congressional investigations are probing not just Russia's role but whether Moscow had help from the Trump campaign. Sources familiar with the investigations say they are probing two Trump-linked organizations: Cambridge Analytica, a data-analytics company hired by the campaign that is partly owned by deep-pocketed Trump backer Robert Mercer; and Breitbart News, the right-wing website formerly run by Trump's top political adviser Stephen Bannon.




The congressional investigators are looking at ties between those companies and right-wing web personalities based in Eastern Europe who the U.S. believes are Russian fronts, a source familiar with the investigations tells TIME. "Nobody can prove it yet," the source says. In March, McClatchy newspapers reported that FBI counterintelligence investigators were probing whether far-right sites like Breitbart News and Infowars had coordinated with Russian botnets to blitz social media with anti-Clinton stories, mixing fact and fiction when Trump was doing poorly in the campaign.







There are plenty of people who are skeptical of such a conspiracy, if one existed. Cambridge Analytica touts its ability to use algorithms to microtarget voters, but veteran political operatives have found them ineffective political influencers. Ted Cruz first used their methods during the primary, and his staff ended up concluding they had wasted their money. Mercer, Bannon, Breitbart News and the White House did not answer questions about the congressional probes. A spokesperson for Cambridge Analytica says the company has no ties to Russia or individuals acting as fronts for Moscow and that it is unaware of the probe.




~ Mythville



