New UK tech regulator set to limit power of Google and Facebook


A new tech regulator will work to limit the power of Google, Facebook and other tech platforms, the government has announced, in an effort to ensure a level playing field for smaller competitors and a fair market for consumers.

Under the plans, the Competition and Markets Authority (CMA) will gain a dedicated Digital Markets Unit, empowered to write and enforce a new code of practice on technology companies which will set out the limits of acceptable behaviour.

The code will only affect those companies deemed to have “strategic market status”, though it has not yet been decided what that means, nor what restrictions will be imposed.

The business secretary, Alok Sharma, said: “Digital platforms like Google and Facebook make a significant contribution to our economy and play a massive role in our day-to-day lives – whether it’s helping us stay in touch with our loved ones, share creative content or access the latest news.

“But the dominance of just a few big tech companies is leading to less innovation, higher advertising prices and less choice and control for consumers. Our new, pro-competition regime for digital markets will ensure consumers have choice, and mean smaller firms aren’t pushed out.”

The government’s plans come in response to an investigation from the CMA which began as a narrow look at the digital advertising industry, but was later broadened out to cover Google and Facebook’s dominance of the market. The code will seek to mediate between platforms and news publishers, for instance, to try to ensure publishers are able to monetise their content; it may also require platforms to give consumers a choice over whether to receive personalised advertising, or force them to improve how they interoperate with rival platforms.

Andrea Coscelli, the chief executive of the CMA, welcomed the move. “Only through a new pro-competition regulatory regime can we tackle the market power of tech giants like Facebook and Google and ensure that businesses and consumers are protected.

“We will soon be providing advice to government on how this new regime should work, as requested earlier this year, and stand ready to support the setup of the Digital Markets Unit.”

Oliver Dowden, the digital secretary, said: “There is growing consensus in the UK and abroad that the concentration of power among a small number of tech companies is curtailing growth of the sector, reducing innovation and having negative impacts on the people and businesses that rely on them. It’s time to address that and unleash a new age of tech growth.”

But in trying to impose strict terms on multinational companies, the UK may have an uphill battle on its hands. In France, for instance, digital tax payments levied on big tech have been seen by the US government as unfair discrimination, leading to threats of retaliatory tariffs on French goods such as handbags and cosmetics.






Senators attack Facebook and Twitter over labeling election misinformation



Senators hammered the CEOs of Facebook and Twitter on Tuesday over how their services handled election misinformation.

Republicans on the Senate Judiciary Committee complained that the warnings the companies affixed to posts, such as those in which President Trump falsely claimed to have been reelected, were unfair. Democrats, in turn, said the labels didn’t go far enough and worried that leaving the posts up would cause the public to doubt the democratic process.

“As we speak, Donald Trump is waging an all-out war on the truth…and one of his weapons of choice in this disinformation war is social media,” Sen. Cory Booker, the New Jersey Democrat, told the CEOs, who attended via a video call. “You have the tools to prevent him from weaponizing these platforms.”

Twitter and Facebook both recently introduced the labels to combat the expected onslaught of election-related lies. The companies also included links in those warnings to more credible sources, such as official results and news articles.

Almost as soon as the labels appeared during the lead-up to the election, lawmakers went on the offensive against them. In fact, the labels are a rare point of agreement between the two parties: both dislike them, though for different reasons.

Tuesday’s hearing came three weeks after Facebook CEO Mark Zuckerberg and Twitter CEO Jack Dorsey testified before the Senate about Section 230, a law that protects Internet companies from being held liable for what their users post.

In their repeat performance, the CEOs defended their companies’ various election efforts, including labels that they said provided context to conversations. They insisted that their actions helped limit the spread of election misinformation. 

“We believe the labels point to a broader conversation so that people can see what’s happening with the election and with the results,” Twitter’s Dorsey responded under fire. 

Dorsey said Twitter applied more than 300,000 labels to tweets between Oct. 27 and Nov. 11, or 2.2% of all U.S. election tweets. Zuckerberg didn’t disclose how many labels Facebook added to election-related posts.

Sen. Ted Cruz, the Texas Republican, suggested that by adding labels, the two companies are choosing what is fact and what is fiction, and therefore are picking sides. He also criticized the companies for reducing the sharing of a New York Post story that claimed to connect Joe Biden to corruption in Ukraine.

In a rapid-fire exchange with Dorsey, Cruz questioned Twitter’s decision to attach a label to posts claiming voter fraud, one that says voter fraud is “exceedingly rare” in the U.S. “That’s not linking to a broader conversation, that’s taking a disputed policy decision,” he said.

Republican Sen. Ben Sasse of Nebraska argued that the companies are taking sides by labeling posts by conservatives as misinformation while not doing the same for Democrats. He claimed that the bias exists because employees of both California-based companies are mostly liberals.

“You’re applying content moderation policies in seemingly a way that’s not objective,” Sasse said. 

Sen. Dianne Feinstein, a California Democrat, questioned a number of instances in which Twitter labeled Trump’s tweets for spreading election misinformation. She complained that the tweets weren’t labeled quickly enough and that their language was too weak.

Zuckerberg, in response to Feinstein’s attack on Twitter, pointed out that Facebook had added voter information atop users’ news feeds to steer them to credible sources, regardless of what their friends had posted on the service. 

“All taken together, we went really quite far to distribute reliable and accurate information,” Zuckerberg said. 

Facebook and Twitter plan to continue their election labeling policies through the Georgia runoff election in January, when the Senate majority will be determined.  

More must-read tech coverage from Fortune:

  • Hackers are trying to disrupt and steal COVID-19 vaccine research
  • Here’s how President-elect Biden plans to tackle online abuse
  • What’s in a name? For Tesla’s Full Self Driving, it may be danger
  • What my day on conservative social network Parler was like
  • He’s worried A.I. may destroy humanity. Just don’t confuse him with Elon Musk




A Ukrainian Neo-Nazi Group Is Organizing Violence On Facebook


Despite attempts to drive it off the platform, a violent Ukrainian far-right group with ties to American white supremacists is using Facebook to recruit new members, organize violence, and spread its far-right ideology across the world.

Although it banned the Azov movement and its leaders more than a year ago, Facebook continues to profit from ads placed by the far-right organization as recently as Monday.

Since July, Azov, which sprang up during the Russian invasion in 2014, has opened at least a dozen new Facebook pages. Alla Zasyadko, a 25-year-old member, has used one to place 82 ads on the social network, paying Facebook at least $3,726, according to the platform’s ad library. Many of the ads called for street protests against the Ukrainian government. One of the ads encourages children to sign up for a patriotic youth training course. Similar courses have included firearms training.

Zasyadko did not respond to requests for comment.

A Facebook spokesperson told BuzzFeed News, “The Azov Battalion is banned from our platforms and we remove content that represents, praises or supports them when we’re made aware of it.”

At the time this story was published, the Azov movement’s main Facebook page, listed as Ukrainian Corps — a name that resembles that of the movement’s political arm, National Corps — was still active.

Facebook has come under heavy criticism for allowing US right-wing militant organizations to organize and run ads on the platform. Some of those groups have committed violence during Black Lives Matter protests, advocated for civil war, and allegedly conspired to kidnap and kill elected political officials. Facebook said last month that it had deleted thousands of pages and groups tied to “militarized social movements.” Many of those pages and groups were taken down after BuzzFeed News brought them to Facebook’s attention.

But driving right-wing extremists from the social network has proven difficult, with many of them popping up again days or weeks after removal.

Facebook banned the Azov movement, which has many members who espouse neo-Nazi beliefs, in April 2019. The company removed several pages associated with the group, including those operated by its senior members and the various branches they lead.

But since July 16, the group has been operating the new Ukrainian Corps page. The page does not try to hide that it belongs to the Azov National Corps — it openly discusses National Corps activities and leaders, links to Azov’s websites and email, and posts photos of members in uniforms at rallies and torchlight marches.

Facebook has no reason not to know that the Azov movement is dangerous. In the wake of a series of violent attacks on Roma and LGBTQ people across Ukraine by members of the National Corps and its paramilitary street wing, the National Militia, the US State Department named Azov’s National Corps a “nationalist hate group.”

Matthew Schaaf, who leads the Ukraine office of the human rights group Freedom House and has closely observed the group, said the Azov movement’s ability to mobilize people through social media poses a threat to society.

“In the last couple of years, participants of Azov-affiliated groups have used violence against vulnerable groups in Ukrainian society and threatened public officials, with social media serving as an important tool to organize these actions and share their results,” Schaaf told BuzzFeed News. “Many of these assaults are accompanied by before-and-after propagandistic posts on social media.”

Azov began in 2014 as a volunteer military battalion that helped Ukraine defend itself against an invasion by Russia and its separatist proxy forces. The battalion’s symbol is similar to that of the Wolfsangel, the insignia widely used by the German military during World War II. Although human rights groups accused the battalion of torture and war crimes during the early months of the Ukrainian-Russian conflict, in late 2014, Ukraine’s National Guard incorporated the Azov battalion into its official fold, where it was renamed the Azov regiment.

The military unit has been a favorite bogeyman of the Kremlin, with Russian President Vladimir Putin using the group to justify his attacks against Ukraine as fighting against fascism. Although the group is not broadly popular in Ukraine, its neo-Nazi links are clear. In 2010, the battalion’s founder, Andriy Biletsky, said that Ukraine ought to “lead the white races of the world in a final crusade … against Semite-led Untermenschen [subhumans].”

Biletsky could not be reached for comment.

While the regiment still looks to Biletsky for inspiration, he has moved into politics; he served as a member of the Ukrainian parliament from 2014 to 2019 but lost reelection. He now heads the National Corps political party, which has been largely unsuccessful at getting members into elected positions but is using social media to try to grow support. He is also one of the founders of the movement’s Intermarium project, which builds bridges to white nationalists and neo-Nazis in Western Europe and the US.

Although Facebook previously took down Intermarium pages, a new Intermarium page was created on Sept. 9. Run by the National Corps’ international secretary, Olena Semenyaka, it has been sharing news and information about far-right and neo-Nazi figures in Europe and promoting “cultural” events at its Kyiv office.

After a ban, Semenyaka too has reopened Facebook and Instagram accounts under a pseudonym.

Semenyaka did not respond to a request for comment.

Thanks in part to social media, the National Corps has made inroads with white nationalist groups in the US, including the California-based Rise Above Movement, whose members participated in 2017’s Unite the Right rally in Charlottesville, Virginia, but saw charges over their actions later dropped. In April 2018, RAM founder Robert Rundo visited Kyiv and took part in an Azov-organized fight club. That October, the FBI wrote that it believed Azov was involved in “training and radicalizing United States-based white supremacy organizations.”

Last month, Ukraine deported two American neo-Nazis associated with the US-based Atomwaffen Division who had attempted to set up a local branch of the group with Azov fighters to gain “combat experience.”

As Azov uses Facebook to expand beyond Ukraine’s borders, experts are growing concerned. “The use of violence and the possibility that they could muster large crowds of mostly young men ready to use violence, all of it facilitated by social media,” Schaaf said, “gives them power.”




Solomon Islands to pursue ban on Facebook after govt. criticism on platform: media


If the ban goes ahead, the Solomons would join only a handful of countries around the world, including China, to actively restrict the world’s biggest social networking platform.

The Solomon Islands is planning to ban the use of Facebook for an indeterminate period after inflammatory critique of the government was aired on the social media platform, the Solomon Times reported.

The government, led by Prime Minister Manasseh Sogavare, told Reuters it would issue a formal statement on its decision later on November 17.


Facebook Inc. did not immediately respond to a Reuters request for comment on November 17.

The government has been heavily criticised over the distribution of economic stimulus funds amid the coronavirus pandemic and the impact of the Pacific nation’s decision to switch diplomatic ties from Taiwan to China.

Facebook is a hugely popular discussion forum in the Solomons, whose population of around 650,000 people is spread out over a sprawling archipelago.

Solomon’s Minister of Communication and Aviation, Peter Shanel Agovaka, is one of the chief supporters of the ban, according to the Solomon Times. He has blamed the decision on “abusive language” and “character assassination” of government ministers, including the Prime Minister, carried on the platform.

Opposition leader Matthew Wale told Reuters he would oppose the ban. “I absolutely do not see any justification whatsoever for such a ban,” Mr. Wale told Reuters on the phone.




Josh Hawley Claims He Has Evidence of Coordinated Censorship by Google, Facebook, Twitter



Sen. Josh Hawley (R-MO) says he has been approached by a Facebook whistleblower who alleges coordination between Facebook, Google, and Twitter to suppress and censor their platforms.

Hawley announced via his Twitter account that the whistleblower has provided him with evidence of the alleged coordination, and that he will question Twitter’s Jack Dorsey and Facebook’s Mark Zuckerberg about it at a hearing of the Senate Judiciary Committee tomorrow, where both CEOs will testify.

The Missouri senator, a vocal critic of big tech, says the whistleblower has also alleged that Facebook has an internal platform to manage the coordinated censorship.

“I’ve heard from [a Facebook] whistleblower who revealed Facebook and Google and Twitter coordinate to censor,” said Hawley. “Facebook has an internal platform to manage it. I’ll be asking Mark Zuckerberg and [Jack Dorsey] about this at tomorrow’s hearing.”

As Breitbart News’ Lucas Nolan reported earlier today, Big Tech censorship against President Trump continues to escalate:

Breitbart News has reported extensively on social media websites increasing their censorship leading up to the 2020 Presidential election. Twitter stated last week that it put labels on 300,000 user posts from October 27 to November 11 for violating rules related to election misinformation. Twitter also implemented a retweet feature that required users to add their own comments before retweeting a post on the platform.

Between election day and Friday of last week, Twitter labeled around 34 percent of President Trump’s tweets and retweets as “disputed.” Breitbart News has reported extensively on this, noting recently that 25 of President Trump’s posts across Twitter and Facebook were labeled or disputed within 24 hours.

The Senate Judiciary Committee’s hearing will take place Tuesday at 10:00 a.m. eastern time.

The hearing’s topic focuses squarely on the widespread censorship of news and political speech that occurred in the weeks and months leading up to election day 2020. The hearing’s title is “Breaking the News: Censorship, Suppression, and the 2020 Election.”

Allum Bokhari is the senior technology correspondent at Breitbart News. His new book, #DELETED: Big Tech’s Battle to Erase the Trump Movement and Steal The Election, which contains exclusive interviews with sources inside Google, Facebook, and other tech companies, is currently available for purchase.






Speaker Pelosi calls out Facebook for not following mass


Speaker of the House Nancy Pelosi, D-Calif., meets with reporters on Capitol Hill in Washington, Friday, Nov. 13, 2020. (AP Photo/J. Scott Applewhite)

OAN Newsroom
UPDATED 9:14 AM PT – Saturday, November 14, 2020

House Speaker Nancy Pelosi slammed Big Tech for having what appears to be double standards when it comes to political censorship.

During a news briefing Friday, Pelosi called out Facebook CEO Mark Zuckerberg for allegedly refusing to censor conservative content or what she called “election misinformation.”

“I’m not a big fan of Facebook,” she stated. “I don’t know what they have been doing, but I know they’ve been part of the problem all along.”

Facebook has found itself in the cross-hairs of political leaders and other tech companies.

Beginning in the late spring of 2020, Zuckerberg openly refused to censor information put out by President Trump. The tech CEO stated that while he disagrees with the President’s tweets, he refuses to limit his speech.

Pelosi argued, however, that media companies have a certain responsibility when it comes to monitoring content on their sites.

“The technology is a blessing, but it’s a double-edged sword in terms of communication democratizing the spread of information,” she asserted. “So I would hope that they would have some sense of responsibility, because they were very much a part of causing this problem to begin with.”

Pelosi said she would like to see Facebook join the ranks of other tech companies who have made an effort to censor the posts and accounts of conservative personalities.

RELATED: YouTube censors OAN report exposing hazards of voting fraud in 2020 election






Melbourne Cup 2020: Death of Tim Bell, Facebook tribute, Rising star, News, Updates


While the Melbourne Cup is an event of excitement and joy for the nation, it is also a day for reflection in the racing community.

Tuesday marks five years since Queensland rider and Melbourne Cup day hopeful Tim Bell died in a non-racing incident in Singapore at just 22 years old.

He was in Singapore at the time as part of a three-month riding contract and died on Melbourne Cup day.


Tim Bell died in 2015. (Picture: Tara Croser/News Corp Australia)




Tired of coronavirus conspiracy theories in your Facebook feed? So was Elissa — so she did something about it


When the coronavirus pandemic first became serious back in March, Elissa McKay started noticing more and more troubling social media posts appearing in her feed.

“It ran the entire spectrum from it [COVID] is no worse than the flu, all the way up to it has been planned, this is a hoax, it doesn’t exist or it does exist and it is part of government control,” she said.

She also noticed a lot of fury directed at the media.

“And ‘the media’ was a catch all term, it was everyone from the ABC all the way up to Andrew Bolt. It was, ‘I don’t like what you are telling me so I am going to shoot the messenger.'”

What worried the Mount Dandenong mum and former communications advisor was where people would then turn for vital health information if they were not consuming news during the pandemic.

“I was getting very concerned, particularly with our demographic up here in the Hills,” she said.

‘We began to build consensus together’

Ms McKay helps run a community Facebook group called Mums of the Hills, which has many members from the Yarra Ranges, east of Melbourne.

Previously, she spent years working in communications for not-for-profit groups, the Federal Government and the Greens.

She decided to use her skills to try to include public health information in posts on her local community Facebook page.

Elissa McKay says her local Facebook page can be used as a template for other groups who also want to share health information online.(ABC News: Ron Ekkel)

It was a different response to many community groups on Facebook, which banned conversation about COVID-19 because it was deemed too controversial, too political or too difficult to moderate.

But Ms McKay thought it was important for people who were not consuming news to have another space to access information and talk about the pandemic.

“We were certainly expecting a fair amount of conflict and a fair amount of pushback,” she said.

Ms McKay wrote COVID updates, taking information from the Premier’s daily press conferences and the Department of Health and Human Services (DHHS), and summarising the news of the day with humour and links to external news reporting.

“I wanted people to question what they were reading, and what they were hearing,” she said.

As the conversations grew, members of the group who were doctors, lawyers, public servants and psychologists began to share their own knowledge.

“We were able to cut through the misinformation and say ‘this is the piece of the puzzle that I have’ and ‘this is the information I am quite confident on’, and we began to build consensus together.”

Ms McKay believes her experience shows community social media groups can be part of the answer to combating dangerous online misinformation.

Women tuning out of news and into social media

RMIT University’s program manager for journalism, Alex Wake, said research from the University of Canberra had been tracking “news fatigue” in some groups, even before the pandemic started.

It shows that certain groups of people, particularly women, were starting to avoid news. Women are also spending more time on social media than men, Dr Wake said.

“Women, in this social media sphere, have always preferred taking recommendations of stories from others,” she said.

“So they are more likely to get the anti-vaxxer story, rather than going to The Age or to the Sydney Morning Herald or whatever it is to go to a verifiable news source.”

Dr Alex Wake says it is important for Australians to read widely and support quality journalism, to ensure they are getting accurate and verified information.

While major news outlets recorded big audience jumps during the pandemic, and some outlets recorded increased trust levels, Dr Wake said there was also another emerging trend.

Just as Ms McKay noticed on her Facebook page, Dr Wake said there had been a growing number of people who didn’t trust any media for their information.

She said the best way to get accurate information was for Australians to pay for quality journalism and read widely.

Political extremists and government agents pushing misinformation online

Cyber analyst Jake Wallis works for the independent think tank the Australian Strategic Policy Institute, which tracks online misinformation campaigns that some people read and share as news.

“There is a whole eco-system of misinformation around COVID-19 and the origins of the virus,” Dr Wallis said.

Jacob Wallis tracks misinformation campaigns across the world. (Supplied)

His research has found there are “state actors” involved in propagating false information about the virus.

“We have tracked pro-Russian vaccine disinformation from Eastern Ukraine into a prominent anti-vax Facebook group here in Australia,” he said.

While Dr Wallis acknowledged the links were not always direct, “you can track narrative and the impacts on audiences as far away as here in Australia”.

And it is not just foreign government agents trying to spread misinformation online.

Dr Wallis said extremist groups, from Islamic State to far-right political organisations, were “increasingly adept at using social media environments to target mainstream audiences with narratives and perspectives that are outside the bounds of healthy political discourse”.

He has some simple tips for avoiding misinformation online.

“Just taking some critical distance, checking the source, reading content before we share it, and retaining our own critical judgement about content that we see online,” he said.




Facebook Extends Ban on Political Ads


Don’t expect Google to lift its marketing embargo before the end of this year, either.




This story originally appeared on PCMag

The 2020 U.S. presidential election is far from over, and social networks aren’t taking any chances ahead of the Jan. 20 inauguration.

Facebook, which announced last month that it would temporarily stop running political, electoral and social issue ads on Nov. 4, has extended its ban.

“The temporary pause for ads about political and social issues in the U.S. continues to be in place as part of our ongoing efforts to protect the election,” according to a Nov. 11 update to the company’s initial blog post. “Advertisers can expect this to last another month, though there may be an opportunity to resume these ads sooner.”

Rob Leathern, director of product management at Facebook, confirmed the deferment via Twitter, explaining that labels naming former Vice President Joe Biden as the projected winner will remain in place “as that result moves toward certification next month.”

Related: Facebook, Uber and Dating Sites Top List of Companies Collecting Your Personal Data

The White House isn’t the only 2020 battleground, though. Two Senate seats from Georgia are still up for grabs in a rare double-barreled runoff election slated for early January. But, unfortunately for the candidates and their undecided constituents, Facebook and Instagram won’t be helping to plug anyone in the Peach State.

“We know that people are disappointed that we can’t immediately enable ads for runoff elections in Georgia and elsewhere,” Leathern wrote. “We do not have the technical ability in the short term to enable political ads by state or by advertiser, and we are also committed to giving political advertisers equal access to our tools and services.”

Google, meanwhile, has taken a similar approach, warning some advertisers that it’s unlikely to lift its own marketing ban before the end of the year, The Wall Street Journal reported.

 






Facebook Removed A Chinese Propaganda Network Targeting The Philippines, Southeast Asia, And The US


Facebook announced the removal of two separate networks that used fake identities to promote government propaganda.

The first network was located in China. While it targeted the US, its primary focus was the Philippines and countries in Southeast Asia. This is the second time Facebook removed Pages associated with this campaign, the company said, and this time the people running it used a VPN to try to hide their identities.

“Although the people behind this activity attempted to conceal their identities and coordination, our investigation found links to individuals in the Fujian province of China,” Facebook said in a statement announcing the takedown.

The propaganda operation consisted of Pages and Instagram accounts, but its primary focus was running fake identities on Facebook that were used to amplify content. People running the profiles “posed as locals,” the release said.

The network targeted the US, but that wasn’t its primary focus. Accounts posted about Democratic presidential candidates Joe Biden and Pete Buttigieg as well as President Donald Trump, both in support and in opposition.

The fake accounts were also used to “like and comment on other people’s posts, particularly about naval activity in the South China Sea, including US Navy ships.”

Naval activity was also the focus of the network targeting people in Southeast Asia, along with posts supporting Philippine President Rodrigo Duterte.

Overall, the network reached about 194,000 people and spent $60 on ads, paid in Chinese yuan. Six Instagram accounts, nine groups, and 115 Facebook accounts were removed.

The second network removed by Facebook originated in and targeted the Philippines. It consisted of 31 pages, 57 Facebook accounts, and 20 Instagram accounts.

Facebook investigated the propaganda network after being alerted to it by Rappler, an independent news organization in the country that Duterte has targeted. The propaganda network supported the Philippine president and posted about a variety of political topics, accelerating its activity between 2019 and 2020.

“Although the people behind this activity attempted to conceal their identities, our investigation found links to Philippine military and Philippine police,” the Facebook announcement said.

Those entities spent $1,100 on advertising on the platform.


