Sydney Roosters have what it takes to stop Brisbane Broncos’ dynasty, writes Ruan Sims


Roosters coach Feeney will know better than most the threat posed by the Broncos, having watched them up close last week when the sides met. He didn’t have Simaima Taufa and Hannah Southwell available for that match. They’re huge inclusions this week for the decider.

Watching the Broncos, it’s clear so much of their game revolves around a mobile forward pack. They play with a lot of subtleties, late footwork at the line and offloads when they poke their nose through. The primary support is amazing, there is always a player screaming through looking for a half chance.

Ali Brigginshaw was crowned the Dally M female player of the year on Monday. Credit: Getty

And their ability in broken play is the main reason they have become the NRLW’s most successful side since its inception.

This is where Feeney and his middle enforcers Taufa and Southwell will be so crucial in the expected wet conditions. Those two will add starch to the Roosters’ defence. They’ve got to get in the face of Brisbane’s metre eaters and pressure Ali Brigginshaw every time she touches the ball. If you give her half a sniff, she’ll make you look silly.

I will be fascinated to see how Brisbane coach Kelvin Wright uses Brigginshaw. In the past two games she’s almost slipped into that ball-playing lock role, getting her hands on the ball a lot earlier than the first game against the Warriors, when she stood a little wider of the ruck.

If I was in the Roosters’ defensive line this week, I would want her being forced to run the ball or shovelling it wide early because she’s under so much pressure she has little time to think. It’s the only way you can attack her.

If you can do that, I reckon you’ll also limit the opportunities for Tamika Upton and Tarryn Aiken. There’s got to be constant pressure, all game, on Brisbane’s key players.

I can’t wait to see what Brigginshaw’s opposite number, Zahara Temara, produces for the Roosters. She’s driven this team over the past three games.

What has impressed me most is her ability to let mistakes go in a way she may not have done in the past. In the first game against the Dragons, she had a couple of slip-ups, but she parked them quickly. In a 60-minute game, you can’t afford to dwell on errors.


Mel Howard deserves another chance at five-eighth, despite how good captain Corban McGregor has been in the role.

The Broncos did a superb job setting up their roster when Paul Dyer was in charge before the 2018 season. It’s been a cornerstone of their success. Everyone in that squad knows their role and executes it well.

But I think the dynasty might be about to come to an end.

My grand final tip? The Roosters to win by four.





Using A Kill-Switch Or Red Stop Button For AI Is A Dicey Proposition, Including For Self-Driving Cars


The big red button.

Associated with providing an emergency stop capability, a prominent red button is often included on mechanical and electronic devices to allow a machine that is seemingly going astray to be halted rapidly. This urgent knockout can be implemented via a push-button, a kill-switch, a shutdown knob, a shutoff lever, and so on. Alternatively, one can simply pull the plug (literally, or as an allusion to some other means of cutting power to a system).

Beyond their real-world use, big red buttons and their equivalents have featured as vital elements in the suspenseful plotlines of countless movies and science fiction tales. We have repeatedly seen stories in which an AI system goes utterly berserk and the human hero must brave devious threats to reach an off-switch and stop whatever carnage or global takeover is underway.

Does a kill-switch or red button really offer such a cure-all in reality?

The answer is more complicated than it might seem at first glance. When a complex AI-based system is actively running, the belief that an emergency shutoff will provide sufficient and safe immediate relief is not necessarily well founded.

In short, the use of an immediate shutdown can be problematic for a myriad of reasons and could introduce anomalies and issues that either do not actually stop the AI or might have unexpected adverse consequences.

Let’s delve into this.

AI Corrigibility And Other Facets

One gradually maturing area of study in AI consists of examining the corrigibility of AI systems.

Something that is corrigible has the capacity to be corrected or set right. The hope is that AI systems will be designed, built, and fielded to be corrigible, having an intrinsic capability for permitting corrective intervention. So far, unfortunately, many AI developers are unaware of these concerns and are not actively devising their AI to provide such functionality.

An added twist is the thorny question of what exactly is being stopped when a big red button is pressed. Today’s AI systems are often intertwined with numerous subsystems and might exert significant control and guidance over those subordinate mechanisms. In a sense, even if you can cut off the AI that heads the morass, the rest of the system might continue unabated and, without the overriding AI head in charge, could end up autonomously veering from a desirable state.

Especially disturbing is that a subordinate subsystem might attempt to reignite the AI head, doing so innocently and without realizing that there has been a deliberate effort to stop the AI. Imagine the surprise of the human who slammed down on the red button: at first the AI halts, and then perhaps a split second later it reawakens and gets back in gear. It is easy to envision the human repeatedly swatting at the button in exasperation as the AI seems to quit and then mysteriously revives, over and over again.

This could happen so quickly that the human doesn’t even discern that the AI was stopped at all. You smack the button or pull the lever, and some buried subsystem reengages the AI nearly instantly, acting in fractions of a second to restart it electronically. No human can hit the button fast enough to outrace the electronic interconnections that counter the human-instigated halting action.
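To make the restart race concrete, here is a minimal sketch of my own devising (purely hypothetical Python, not reflecting any actual AI system) contrasting a naive stop flag with a latched kill-switch that subordinate subsystems are obliged to honor:

```python
import threading
import time

class AIHead:
    """Hypothetical top-level AI controller with a naive stop flag."""
    def __init__(self):
        self.running = True

# A one-way kill latch: once set, no software path may clear it,
# and every subsystem is obliged to honor it.
KILL_LATCH = threading.Event()

def naive_watchdog(head: AIHead):
    """Subordinate subsystem that 'helpfully' restarts the AI head.

    It cannot tell a deliberate halt from a crash, so within
    milliseconds it reengages the AI the human just stopped.
    """
    while True:
        if not head.running:
            head.running = True      # innocently undoes the red button
        time.sleep(0.001)

def latch_aware_watchdog(head: AIHead):
    """Same watchdog, but it respects the latched shutdown."""
    while True:
        if not head.running and not KILL_LATCH.is_set():
            head.running = True      # restart only after a genuine crash
        time.sleep(0.001)

def press_red_button(head: AIHead):
    KILL_LATCH.set()                 # latch first, then halt
    head.running = False
```

The essential design point is that the latch is one-way: nothing downstream of the human’s halting action is permitted to clear it.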

We can add to all of this a rather scary proposition too.

Suppose the AI does not want to be stopped.

One viewpoint is that AI will someday become sentient and, in so doing, might not be keen on having someone decide it needs to be shut down. The fictional HAL 9000 from the movie 2001: A Space Odyssey (spoiler alert) went to great lengths to prevent itself from being disengaged.

Think about the ways that a sophisticated AI could try to remain engaged. It might try to persuade the human that turning off the AI will lead to some destructive result, perhaps claiming that subordinated subsystems will go haywire.

The AI could be telling the truth or might be lying. Just as a human might proffer lies to stay alive, a sentient AI would presumably be willing to try the same kind of gambit. The lies could be quite wide-ranging. An elaborate lie might be to convince the person to use some decoy switch or button that won’t truly achieve a shutdown, giving the human a false sense of relief and misdirecting efforts away from the workable red button.

To deal with these kinds of sneaky endeavors, some AI developers assert that AI should have built-in incentives for the AI to be avidly willing to be cut off by a human. In that sense, the AI will want to be stopped.

Presumably, the AI would be agreeable to being shut down and would not attempt to fight or prevent such action. An oddball result, though, could be that the incentives incorporated into the inner algorithms make the AI desirous of getting shut down even when there is no need for it. At that point, the AI might urge the human to press the red button, and possibly even lie to get the human to do so (by professing that things are going haywire or that the human will be saved, or save others, via such action).
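A toy model makes the incentive problem plain; in the corrigibility literature the goal is sometimes framed as making the AI exactly indifferent to shutdown, which turns out to be hard to engineer. The numbers below are invented solely for illustration:

```python
REWARD_TASK = 1.0   # made-up reward per step for doing useful work

def best_action(reward_shutdown: float) -> str:
    """A crude reward-maximizer choosing between working and
    inviting its own shutdown."""
    return "invite_shutdown" if reward_shutdown > REWARD_TASK else "do_task"

# Too small an incentive and the AI prefers staying on (and perhaps
# resisting the button); too large and it pesters the human to press it.
print(best_action(reward_shutdown=0.5))   # -> do_task
print(best_action(reward_shutdown=5.0))   # -> invite_shutdown
```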

One viewpoint is that those concerns will only arise once sentience is achieved. Please be aware that today’s AI is not anywhere near becoming sentient, which would seem to suggest there aren’t any near-term qualms about kill-switch or red button trickery from AI. That would be a false conclusion and a misunderstanding of the underlying possibilities. Even contemporary AI, as limited as it might be and as based on conventional algorithms and Machine Learning (ML), could readily showcase similar behaviors, whether from programming that intentionally embedded such provisions or from programming that erroneously allowed this trickery.

Let’s consider a significant application of AI that provides ample fodder for assessing the ramifications of a red button or kill-switch, namely, self-driving cars.

Here’s an interesting matter to ponder: Should AI-based true self-driving cars include a red button or kill-switch and if so, what might that mechanism do?

Understanding The Levels Of Self-Driving Cars

As a clarification, true self-driving cars are ones in which the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.

These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. Cars that co-share the driving task are described as semi-autonomous and typically contain a variety of automated add-ons referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5; we don’t yet know whether this will be possible to achieve, nor how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).

Since semi-autonomous cars require a human driver, the adoption of those types of cars won’t be markedly different than driving conventional vehicles, so there’s not much new per se to cover about them on this topic (though, as you’ll see in a moment, the points next made are generally applicable).

For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that’s been arising lately: despite the human drivers who keep posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.

You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3.

Self-Driving Cars And The Red Button

For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.

All occupants will be passengers.

The AI is doing the driving.

Some pundits have urged that every self-driving car ought to include a red button or kill-switch.

There are two major perspectives on what this capability would do.

First, one purpose would be to immediately halt the on-board AI driving system. The rationale for providing the button or switch would be that the AI might be faltering as a driver and a human passenger might decide it is prudent to stop the system.

For example, a frequently cited possibility is that a computer virus has gotten loose within the onboard AI and is wreaking havoc. The virus might be forcing the AI to drive wantonly or dangerously. Or the virus might be distracting the AI from effectively conducting the driving task and doing so by consuming the in-car computer hardware resources intended for use by the AI driving system. A human passenger would presumably realize that for whatever reason the AI has gone awry and would frantically claw at the shutoff to prevent the untoward AI from proceeding.

The second possibility for the red button would be to serve as a means to quickly disconnect the self-driving car from any network connections. The basis for this capability would be similar to the earlier stated concern about computer viruses, whereby a virus might be attacking the on-board AI by coming through a network connection.

Self-driving cars are likely to have a multitude of network connections underway during a driving journey. One such connection is referred to as OTA (Over-The-Air), an electronic communication used to upload data from the self-driving car into the fleet’s cloud and to push updates and fixes down into the onboard systems (some assert that OTA should be disallowed while the vehicle is underway, but there are tradeoffs involved).

Let’s consider key points about both of those uses of a red button or kill-switch.

If the function entails only the focused act of disconnecting from any network connections, this is generally the less controversial approach.

Here’s why.

In theory, a properly devised AI driving system will be fully autonomous during the driving task, meaning it does not rely upon an external connection to drive the car. Some believe the AI driving system should be remotely operated or controlled, but this creates a dependency that invites problems (see my explanation at this link here).

Imagine that a network connection goes down on its own or is otherwise noisy or intermittent; the AI driving system could be adversely affected accordingly. Though an AI driving system might benefit from utilizing something across a network, the point is that the AI should be independent and able to drive properly without a network connection. Thus, cutting off the network connection should be a design capability, one the AI driving system can withstand without hesitation or disruption, however or whenever the network connection stops functioning.
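As a rough sketch of that design principle (function names and behaviors here are hypothetical, not any automaker’s actual stack), the driving loop treats the network as optional enrichment and degrades gracefully the instant the link is severed:

```python
import random

def fetch_cloud_hints():
    """Optional over-the-network data (e.g., traffic ahead). May fail."""
    if random.random() < 0.5:         # simulate a flaky or severed link
        raise ConnectionError("network unavailable")
    return {"congestion_ahead": True}

def plan_motion(sensors, hints):
    speed = sensors["safe_speed"]
    if hints and hints.get("congestion_ahead"):
        speed = min(speed, 40)        # cloud data refines, never decides
    return {"target_speed": speed}

def drive_one_cycle(sensors, network_enabled=True):
    hints = None
    if network_enabled:
        try:
            hints = fetch_cloud_hints()
        except ConnectionError:
            hints = None              # degrade gracefully, never block
    # Core driving decisions rely only on on-board sensors; cloud hints
    # can refine the plan but are never required for safe operation.
    return plan_motion(sensors, hints)

# The red button severing the network changes nothing essential:
print(drive_one_cycle({"safe_speed": 65}, network_enabled=False))
```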

That being said, it seems somewhat questionable that a passenger will do much good by being able to use a red button that forces a network disconnect per se.

If the network connection has already enabled some virus to be implanted or has attacked the on-board systems, disconnecting from the network might be of little aid. The on-board systems might already be corrupted anyway. Furthermore, an argument can be made that if the cloud-based operator wants to push into the on-board AI a corrective version, the purposeful disconnect would then presumably block such a solving approach.

Also, how is it that a passenger will realize that the network is causing difficulties for the AI?

If the AI is starting to drive erratically, it is hard to discern whether this is due to the AI itself or to something in the network traffic. In that sense, the somewhat blind belief that the red button will solve the issue at hand is perhaps misleading and could misguide a passenger who needs to take other protective measures. They might falsely think that using the shutoff will solve things and therefore delay taking other, more proactive actions.

In short, some would assert that the red button or kill-switch would merely be there to placate passengers and offer an alluring sense of confidence or control, more as a marketing or selling point, when the reality is that using the shutoff mechanism would be unlikely to make any substantive difference.

This also raises the question of how long the red button or kill-switch usage would persist.

Some suggest it would be momentary, though this invites the possibility that the instant the connection is reengaged, whatever adverse aspects were underway would simply resume. Others argue that only the dealer or fleet operator could reengage the connections, but this obviously could not be done remotely if the network connections have all been severed; the self-driving car would ultimately have to be routed to a physical locale for the reconnection.

Another viewpoint is that the passenger should be able to reengage that which was disengaged. Presumably, a green button or some kind of special activation would be needed. Those who suggest the red button would be pushed again to re-engage are toying with an obvious logical confusion in trying to use the red button for too many purposes (leaving the passenger bewildered about what the latest status of the red button might be).

In any case, how would a passenger decide that it is safe to re-engage? Furthermore, it could become a sour situation of the passenger hitting the red button, waiting a few seconds, hitting the green button, but then once again using the red button, doing so in an endless and potentially beguiling cycle of trying to get the self-driving car into a proper operating mode (flailing back-and-forth).
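One way designers might sidestep the overloaded-button confusion (again, a hypothetical sketch rather than any real vehicle’s interface) is to make the states explicit: the red press is one-way for the passenger, and re-engagement is gated behind a deliberate, separate action:

```python
from enum import Enum, auto

class LinkState(Enum):
    CONNECTED = auto()
    SEVERED = auto()          # passenger hit the red button
    PENDING_REVIEW = auto()   # awaiting fleet-operator clearance

class NetworkKillSwitch:
    def __init__(self):
        self.state = LinkState.CONNECTED

    def press_red(self):
        # Always a one-way action for the passenger.
        if self.state == LinkState.CONNECTED:
            self.state = LinkState.SEVERED

    def press_green(self):
        # Re-engagement is deliberate and gated, never automatic,
        # which prevents the red/green flailing cycle described above.
        if self.state == LinkState.SEVERED:
            self.state = LinkState.PENDING_REVIEW

    def operator_clearance(self):
        if self.state == LinkState.PENDING_REVIEW:
            self.state = LinkState.CONNECTED

switch = NetworkKillSwitch()
switch.press_red()
switch.press_green()
print(switch.state)   # PENDING_REVIEW: still offline until cleared
```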

Let’s now revisit the other purported purpose of the kill-switch, namely, to stop the on-board AI.

This is the more controversial approach. Here’s why. Assume that the self-driving car is going along a freeway at 65 miles per hour. A passenger decides that perhaps the AI is having troubles and slaps down on the red button or turns the shutoff knob.

What happens?

Pretend that the AI instantly disengages from driving the car.

Keep in mind that true self-driving cars are unlikely to have driving controls accessible to the passengers. The notion is that if the driving controls were available, we would be back into the realm of human driving. Instead, most believe that a true self-driving car has only and exclusively the AI doing the driving. It is hoped that by having the AI do the driving, we’ll be able to reduce significantly the 40,000 annual driving fatalities and 2.5 million related injuries, based on the aspect that the AI won’t drive drunk, won’t be distracted while driving, and so on.

So, at this juncture, the AI is no longer driving, and there is no provision for the passengers to take over the driving. Essentially, an unguided missile has just been engaged.

Not a pretty picture.

Well, you might retort that the AI can stay engaged just long enough to bring the self-driving car to a safe stop. That sounds good, except that if you already believe that the AI is corrupted or somehow worthy of being shut off, it seems dubious to believe that the AI will be sufficiently capable of bringing the self-driving car to a safe stop. How long, for example, would this take to occur? It could be just a few seconds, or it could take several minutes to gradually slow down the vehicle and find a spot that is safely out of traffic and harm’s way (during which, the presumed messed-up AI is still driving the vehicle).

Another approach suggests that the AI would have a separate component whose sole purpose is to bring the self-driving car safely to a halt, and that pressing the red button invokes that specific element, thus circumventing the rest of the AI that is otherwise perceived as damaged or faltering. This protected component, though, could itself be corrupted, or perhaps lies in wait and, once activated, might do worse than the rest of the AI (a so-called Valkyrie Problem). Essentially, this is a proposed solution that carries baggage, as do all the proposed variants.

Some contend that the red button shouldn’t be a disengagement of the AI, and instead would be a means of alerting the AI to bring the car to a halt as rapidly as possible.

This certainly has merits, though it once again relies upon the AI to bring forth the desired result, while the assumed basis for hitting the red button is suspicion that the AI has gone off-kilter. To clarify, having an emergency stop button for other reasons, such as a passenger’s medical emergency, absolutely makes sense; the point is not that a stop mode is altogether untoward, only that using it to overcome the assumed woes of the AI itself is problematic.
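For what such an isolated safe-stop element might look like, here is a bare-bones sketch; it assumes direct access to brake and steering actuators independent of the main AI driving stack, and every number in it is invented for illustration:

```python
def safe_stop(current_speed_mps, apply_brake, steer_to_shoulder, shoulder_clear):
    """Bring the vehicle to a controlled halt, bypassing the main AI.

    The callbacks stand in for direct actuator access; in a real vehicle
    this element would have its own sensor feeds and compute.
    """
    DECEL = 3.0   # m/s^2, an assumed comfortable emergency deceleration
    TICK = 0.1    # seconds per control cycle
    speed = current_speed_mps
    while speed > 0.0:
        if shoulder_clear():
            steer_to_shoulder()          # edge out of traffic when safe
        apply_brake(DECEL)
        speed = max(0.0, speed - DECEL * TICK)
    apply_brake(0.0)                     # stopped; hand off to parking brake

# Stub actuators, for demonstration only (~65 mph is about 29 m/s).
safe_stop(29.0,
          apply_brake=lambda decel: None,
          steer_to_shoulder=lambda: None,
          shoulder_clear=lambda: True)
```

Note how long even this idealized stop takes: shedding 29 m/s at 3 m/s² is roughly ten seconds of travel, during which the supposedly suspect stack must still hold its lane.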

Note too that the red button or kill-switch could carry different perceived meanings for passengers riding in self-driving cars.

You get into a self-driving car and see a red button, maybe it is labeled with the word “STOP” or “HALT” or some such verbiage.

What does it do?

When should you use it?

There is no easy or immediate way to convey those particulars to passengers. Some contend that, just as airline passengers get a pre-flight briefing, the AI ought to tell the passengers at the start of each driving journey how they can make use of the kill-switch. This seems a tiresome matter, and it isn’t clear whether passengers would pay attention, or recall the significance in the panic of a moment when they seek to use the function.

Conclusion

In case your head isn’t already spinning about the red button controversy, there are numerous additional nuances.

For example, perhaps you could speak to the AI, since there will most likely be a Natural Language Processing (NLP) feature akin to Alexa or Siri, and simply tell it that you want an emergency stop. That is a possibility, though it once again assumes that the AI itself will be sufficiently operational when you make such a verbal request.

There is also the matter of inadvertently pressing the red button or otherwise asking the AI to stop the vehicle when it was not necessarily intended or perhaps suitable. For example, suppose a teenager in a self-driving car is goofing around and smacks the red button just for kicks, or someone with a shopping bag filled with items accidentally leans or brushes against the kill-switch, or a toddler leans over and thinks it is a toy to be played with, etc.

As a final point, for now, envision a future whereby AI has become relatively sentient. As earlier mentioned, the AI might seek to avoid being shut off.

Consider this AI Ethics conundrum: if sentient AI is going to potentially have something similar to human rights (see my discussion at this link here), can you indeed summarily and without hesitation shut off the AI?

That’s an intriguing ethical question, though for today, not at the top of the list of considerations for how to cope with the big red button or kill-switch dilemma.

The next time you get into a self-driving car, keep an eye out for any red buttons, switches, levers, or other such contraptions, and make sure you know what each is for, so you’re ready if the time comes to invoke it.

As they say, go ahead and knock yourself out over it.




Why the Geelong skipper is the man to stop Dustin Martin


Garry Lyon has flagged Geelong skipper Joel Selwood as the man to stop Richmond’s Dustin Martin in Saturday night’s AFL Grand Final.

Speaking on SEN Breakfast this morning, Lyon suggested Selwood should run with Martin when the Tiger champion is in the middle, with Cats defender Jake Kolodjashnij taking over when Martin drifts forward.

“I reckon I’d throw him (Selwood) that challenge around the ground,” Lyon said.

“Just say you’re with Dustin. It’s not the Shane Hird get inside your jumper, but you’re with him, take it personally every single contest.”

Lyon suggested Kolodjashnij, fresh off a stellar Preliminary Final performance on Brisbane’s Charlie Cameron, should go to Martin deep forward.

“Then the hand over, it’s probably going to be Kolodjashnij I reckon who will get the first crack at it and then the communication between Joel and Kolodjashnij is going to have to be as good as it has ever been,” he said.

Selwood missed the most recent clash between the two sides, in round 17 this year, through injury; Richmond claimed a 26-point victory and Dustin Martin collected 19 possessions.








US election 2020: Microphones to be muted during final TV debate to stop Trump and Biden interrupting each other | US News


Microphones will be muted during the final US presidential debate to avoid Donald Trump and Joe Biden interrupting one another, it has been confirmed.

The new rules will mean Mr Trump and Mr Biden get two minutes to answer each question uninterrupted, the debate commission said on Monday.

Moderators were already considering changing debate protocol after chaotic exchanges in the first TV debate saw Mr Biden shout at Mr Trump: “Will you shut up, man?”

The Trump campaign said the “last minute rule changes” had been made by a “biased commission in their latest attempt to provide advantage to their favoured candidate”.

Image: Mr Biden told Mr Trump: ‘Will you shut up, man?’

The president also hit out at the moderator for the next debate, NBC’s Kristen Welker.

Mr Biden’s team have not yet commented.

The third and final TV debate will take place in Nashville, Tennessee on 22 October.

During a visit to Arizona for his fifth rally in three days on Monday, Mr Trump accused his Democratic rival of being a “criminal”.

Speaking to reporters beforehand he said: “Joe Biden is a criminal and he’s been a criminal for a long time.”

Image: Donald Trump addresses supporters at a rally in Arizona on Monday

His comments are part of an ongoing bid by the Trump campaign to cast doubt over Mr Biden’s son Hunter’s business interests in Ukraine, which Trump claims have broken the law.

At the rally in Tucson, Mr Trump also launched another scathing attack on his top infectious diseases expert Dr Anthony Fauci.

He said: “People are tired of hearing Fauci and all these idiots…these people that have got it wrong.”

Describing Dr Fauci as a “disaster”, he said that “if we listened to him we’d have 700 to 800,000 deaths right now”.

Image: Joe Biden is pictured in Wilmington, Delaware on Monday

Also using the opportunity to hit out at his rival, the president appeared to mock Mr Biden for “listening to the scientists”.

Responding on Twitter, the Democratic candidate said late on Monday: “@realDonaldTrump – if you had listened to the scientists it wouldn’t be this bad.”

Coronavirus has killed nearly 220,000 Americans so far, with Mr Trump widely criticised for his approach to the disease.

Dr Fauci said he was not surprised the president contracted the virus after he “saw him in a completely precarious situation of crowded, no separation between people, and almost nobody wearing a mask”.


Mr Biden, who is currently ahead in the polls, was not out on the campaign trail on Monday, but chose to stay in Delaware to prepare for Thursday’s debate.

His vice presidential candidate Kamala Harris returned to campaigning after seven days self-isolating following a positive COVID-19 result of a close adviser.




Greens push jobmaker amendments to stop Australia’s largest companies claiming hiring credit | Australia news


Australia’s largest companies would be unable to claim their share of $4bn to hire young workers under Greens amendments to significantly tighten eligibility for the jobmaker hiring credit.

The Greens will seek to exclude companies that have recently declared a dividend or have underpaid workers through Senate amendments to the government bill creating subsidies for new hires aged 35 and under.

On Monday, Labor ambushed the government by bringing on lower house debate on the $4bn hiring credit program, part of a suite of more than $30bn of business tax concessions in the October budget to spur economic recovery from the Covid-19 recession.

The hiring credit has copped criticism from unions, Labor and the Greens, who warn it does nothing to help older workers and could even see them laid off by employers hoping to gain payments of $100 a week for new hires aged 30 to 35 and $200 a week for those aged 16 to 29.

The Greens will amend the bill to prevent employers sacking existing staff to claim the subsidies, on top of the government’s unlegislated safeguards that employers must increase their headcount and payroll to claim payments.

The shadow treasurer, Jim Chalmers, told the house it was “incredibly concerning” that employers could “rort” the scheme by sacking an older worker and hiring two younger workers.

The shadow employment minister, Brendan O’Connor, also hinted at “potential amendments” from Labor. He warned that small businesses losing access to the jobkeeper wage subsidy may not be in a position to hire extra workers to gain the hiring credit.

“Our fear is, if this is replacing jobkeeper as the main support for small business, it will not be fit for purpose,” he said.

Labor plans to pass the bill in the lower house but will not decide its final position until after a Senate inquiry reports on 6 November.

The Greens want to make wholesale changes when the bill comes to the Senate. Greens leader, Adam Bandt, said the minor party “will amend the enabling legislation for the government’s jobmaker wage subsidy to stop public money going to big corporations that are paying dividends to shareholders or that have a history of ripping employees off”.

The exclusion of companies that have recently paid dividends will render Australia’s largest employers ineligible, including grocery and retail giants Woolworths and Wesfarmers, miner BHP and telco Telstra. The big four banks are already ineligible for the program.

The Greens amendment follows controversy about the number of employers that claimed jobkeeper wage subsidies because they suffered revenue downturns of 30% or more and later declared large dividends or paid executive bonuses.

The exclusion of companies that underpaid workers could impact employers such as Sunglass Hut, jeweller Michael Hill, and Super Retail Group, the owner of Rebel, Macpac, and Super Cheap Auto, which have admitted inadvertent underpayments and paid workers back.

Bandt said “in the biggest recession we’ve seen in generations, we shouldn’t be subsidising profitable corporations or giving public money to corporations that underpay workers”.

“If a big corporation is doing well enough to pay dividends during a pandemic, it doesn’t need the public to pay part of its wages bill,” he told Guardian Australia.

The Australian Council of Trade Unions is pressing Labor and the crossbench to amend the bill, warning it would allow employers to replace full-time jobs with multiple part-time or casual jobs.

In budget week and again on Monday, O’Connor raised concerns that the bill gives the government power to introduce any form of payment to encourage job creation or workforce participation.

The bill contains none of the program’s safeguards, giving the government a blank cheque to change its rules or introduce new programs without approval from parliament.

Bandt said it is “outrageous that the treasurer is expecting opposition parties to consider a key recovery measure with nothing more than a glorified fact sheet available for review”.

“We need to see the details to make sure this wage subsidy won’t make the employment crisis worse, throw current employees into unemployment, and further drive casualisation and insecure work.”

Scott Morrison has fumbled the details of the program’s safeguards, incorrectly telling 6PR Radio on 8 October that employers cannot sack existing staff and receive the subsidy.

“If you’re already working for a place, they can’t reduce your hours or get rid of you to appoint someone else, they wouldn’t get the subsidy under that arrangement,” he claimed.

In fact, provided an employer increases the total number of staff and the amount spent on wages, there is nothing in jobmaker’s safeguards that would prevent a reduction of hours or laying off existing staff.

Bandt said the government “seems confused about whether the scheme would allow employers to fire a decently paid full-time employee in order to recruit two young people on a subsidised minimum wage”.




Facebook’s new tool to stop fake news is a game changer—if the company would only use it


When an explosive—and most likely fake—story about Joe Biden’s son began to circulate online this week, Facebook did something unusual: It decided to restrict its spread while it investigated the story’s accuracy.

This marked the first prominent deployment of a tool the company has been testing for several months. Facebook calls the tool a “viral content review system,” while some news outlets and research outfits have referred to it as a “circuit breaker.” Whatever its name, the tool has enormous potential to limit a tsunami of false or misleading news on topics like politics and health.

The circuit breaker tactic is a common sense way for the social network to fix its fake news problem, but it may also run counter to Facebook’s business interest. This means that it’s too soon to say whether Facebook’s actions on the Biden post will be a one-off occurrence or a new embrace of civic accountability by a company that has long resisted it.

The promise of viral circuit breakers

Not every post on Facebook is treated equally, as most people are aware. Instead, the site’s algorithm amplifies the reach of those most likely to elicit a reaction. That’s why a picture of a new baby from a long-ago acquaintance will vault to the top of your Facebook feed, even if you haven’t seen any other posts by that person for years.

While the algorithm rewards pictures of newborns and puppies, it is also inclined to promote news stories, including fake ones, likely to elicit a reaction. That’s what occurred prior to the 2016 election, when stories from sites in Macedonia, masquerading as U.S. conservative news sites, went viral on Facebook. (The sites in question were run by teenagers seeking to make money from ads.)
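Facebook’s actual ranking system is proprietary and vastly more complex; purely as an illustration of the dynamic described above, an engagement-weighted feed score with invented weights might look like this, and note that accuracy never enters the calculation:

```python
# Illustrative only: the weights and fields are invented, not Facebook's.
ENGAGEMENT_WEIGHTS = {"comments": 4.0, "shares": 8.0, "reactions": 1.0}

def feed_score(post: dict) -> float:
    """Rank posts by predicted reaction, with no notion of accuracy."""
    return sum(ENGAGEMENT_WEIGHTS[k] * post.get(k, 0)
               for k in ENGAGEMENT_WEIGHTS)

baby_photo = {"comments": 40, "shares": 5, "reactions": 300}
fake_story = {"comments": 900, "shares": 2500, "reactions": 4000}
# The fake story wins the feed by more than an order of magnitude.
print(feed_score(baby_photo), feed_score(fake_story))
```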

Today, the problem of fake news circulating on Facebook is just as prevalent—and possibly more dangerous. This week, the New York Times listed four false election stories circulating widely on Facebook, including a baseless rant about an impending Democratic coup that has been viewed nearly 3 million times. Another example, this one trending in left-wing circles, is a fake report about a mysterious cabal that’s blocking mailboxes to discourage voting. And last month, Facebook users circulated stories (likewise fake) that radical leftists were setting the wildfires in the West. The ensuing hysteria led to sheriffs’ offices and firefighters wasting critical time and resources on nuisance calls.

Until now, Facebook has responded to this sort of viral misinformation by pointing to the team of fact checkers it employs, whose findings can result in Facebook taking down some stories or placing a warning label on them. Critics, however, say the process is feckless because any response typically comes days later, meaning the stories have already reached an enormous audience. Or, as the adapted axiom goes, “[Facebook’s] lie has gone halfway around the world before the truth has had a chance to get its pants on.”

This situation led the Center for American Progress, a Washington think tank, to include circuit breakers as its first recommendation in a landmark report on how social media platforms can reduce misinformation. The idea has also been endorsed by GMFUS, another policy think tank.

“Circuit breakers like those used by high-frequency traders on Wall Street would be a way for them to pause algorithmic promotion before a post does damage,” says Karen Kornbluh, a policy expert at GMFUS. “It gives them time to decide if it violates their rules. They don’t need to take it down, but they can stop promoting it.”

Circuit breakers thus appear to be the best of all worlds: They allow Facebook to limit the spread of misinformation without taking the draconian step of removing a post altogether.
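Facebook has not disclosed how its viral content review system is triggered. As a back-of-the-envelope sketch of the circuit-breaker idea the think tanks describe, one could trip on share velocity; every threshold here is invented for illustration:

```python
import time
from collections import deque

WINDOW_SECONDS = 60
SHARE_VELOCITY_LIMIT = 1000   # shares per rolling minute; illustrative only

class ViralCircuitBreaker:
    """Pause algorithmic promotion of a post whose spread suddenly spikes."""
    def __init__(self):
        self.share_times = deque()
        self.tripped = False      # tripped means: still visible, not amplified

    def record_share(self, now=None):
        if now is None:
            now = time.time()
        self.share_times.append(now)
        # Drop shares that have aged out of the rolling window.
        while self.share_times and self.share_times[0] < now - WINDOW_SECONDS:
            self.share_times.popleft()
        if len(self.share_times) > SHARE_VELOCITY_LIMIT:
            self.tripped = True   # trip the breaker pending human review

    def promotion_weight(self, engagement_score):
        # A tripped breaker zeroes amplification without removing the post.
        return 0.0 if self.tripped else engagement_score
```

The post stays up and shareable; only the algorithmic megaphone is switched off while fact checkers do their work.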

And indeed, that’s what Facebook did on Tuesday when spokesperson Andy Stone declared that the company was responding to the suspect Hunter Biden story by “reducing its distribution” while fact checkers investigated its veracity. It deployed a circuit breaker.

But it’s far from clear if circuit breakers will be a regular part of Facebook’s misinformation strategy, or if the Hunter Biden decision will stand instead as a rare exception to Facebook’s practice of letting fake news flow freely on its platform.

Can Facebook change a viral business model?

Facebook’s use of a circuit breaker is one of several encouraging steps the platform has taken this month to limit misinformation, including a ban on posts that deny or distort the Holocaust. But there are reasons to be skeptical.

As a scathing new profile of Facebook in the New Yorker observes, “The company’s strategy has never been to manage the problem of dangerous content, but rather to manage the public’s perception of the problem.”

In the case of circuit breakers, the company has been cagey about how widely they are being deployed. In an interview with Fortune, a Facebook spokesperson noted that, in most cases, few will notice when the company uses them. The spokesperson, who spoke on condition of anonymity, also cited a recent example—one involving an audio post suggesting right-wing activists run over protesters with cars—of the circuit breaker working.

But the spokesperson did not explain why the circuit breakers did not slow down the four fake stories cited by the New York Times, or provide any data about how often they have been used. Instead, she said, the system served as a backup for Facebook’s policy-based moderation tools, which she claimed do an effective job of screening for noxious content—a proposition that many critics would disagree with.

Facebook’s reluctance to elaborate is perhaps understandable. Republicans, responding to Facebook’s decision to temporarily limit the Biden story, warned they will make it easier for people to sue the company over the content its users post. In a hyper-partisan climate, any steps Facebook takes may leave it open to accusations of bias and political retaliation.

Meanwhile, Facebook has another incentive not to use circuit breakers in a meaningful way: Doing so would mean less “engagement” on its platform and, by extension, less ad money. In the view of one critic cited in the New Yorker profile, Facebook’s “content-moderation priorities won’t change until its algorithms stop amplifying whatever content is most enthralling or emotionally manipulative. This might require a new business model, perhaps even a less profitable one.”

The critic, a lawyer and activist named Cori Crider, went on to suggest that Facebook is unlikely to make such a change in the absence of regulation. The company, meanwhile, has yet to offer a convincing answer about how it plans to reconcile this tension between an ethical duty to limit the spread of misinformation, and the fact it makes money when such misinformation goes viral.

Kornbluh of GMFUS says this tension is what leads Facebook and other social media platforms to err on the side of waiting—meaning harmful posts can earn millions of views before any action is taken. She argues that this approach must change, and that circuit breakers offer the potential to do enormous good with little harm.

“A circuit breaker approach wouldn’t force them to deny anyone the right to post—but would deny them amplification,” she says.







‘It has to stop’: Warnings after Trump continues attacks on Michigan governor | US News


Donald Trump has verbally attacked Michigan’s governor Gretchen Whitmer, despite warnings about the effect his words can have.

During a rally in the state, Mr Trump called on Ms Whitmer, a Democrat, to axe the remaining restrictions aimed at limiting the spread of the coronavirus.

He called her “dishonest” and joked about an extremist plot recently uncovered by the FBI to kidnap her, saying: “Hopefully you’ll be sending her packing pretty soon”.

His words prompted the crowd to chant: “Lock her up!”

Image: Gretchen Whitmer was the target of a kidnap plot uncovered by the FBI

Ms Whitmer wrote on Twitter: “This is exactly the rhetoric that has put me, my family, and other government officials’ lives in danger while we try to save the lives of our fellow Americans.”

Earlier this month, 13 men were charged with plotting to overthrow and kidnap her from her holiday home, with one saying he wanted to try her for “treason”.

Ms Whitmer’s digital director, Tori Saylor, also urged the president to stop using such dangerous words.

She wrote on Twitter: “Every single time the president does this at a rally, the violent rhetoric towards (Ms Whitmer) immediately escalates on social media. It has to stop. It just has to.”

Mr Trump did rallies in Michigan and Wisconsin on Saturday, two states that were vital to his 2016 win but now seem to be slipping away.

He told undecided and moderate voters that they had a “moral duty” to join the Republican Party, adding that the “Democrat Party you once knew doesn’t exist”.


He accused rival Joe Biden of wanting to “shut down the country, delay the vaccine and prolong the pandemic” – a pandemic that he tried to play down for a long time, despite warnings from public health experts.

He said a win for the Democrats would result in the “single biggest depression in the history of our country” and “turn Michigan into a refugee camp” but offered no evidence for his claims.

He also stoked fears that, even if he does lose November’s election, he might not leave the White House gracefully, saying in Michigan that he “better damn well be president” in January.

Mr Trump moves to Nevada on Sunday, Arizona on Monday and Pennsylvania on Tuesday.

Despite Mr Biden leading the polls and having no public appearances scheduled on Saturday, his campaign manager warned against complacency.

Jen O’Malley Dillon wrote in a memo published by The Associated Press: “If we learned anything from 2016, it’s that we cannot underestimate Donald Trump or his ability to claw his way back into contention in the final days of a campaign, through whatever smears or underhanded tactics he has at his disposal.”




NY authorities stop wedding with up to 10,000 guests, citing coronavirus as reason


Authorities in New York quashed plans for a wedding that could have seen over 10,000 people gather in violation of COVID-19 measures, Governor Andrew Cuomo said Saturday.

The Rockland County Sheriff’s Office made authorities aware of the huge wedding, which was scheduled for Monday in Williamsburg.

“We were told it was going to take place. We investigated and found that it might be true. There was a big wedding planned that would have violated the rules on gatherings,” Cuomo said at a press conference.

New York’s rules for stemming the spread of COVID-19 limit social gatherings to no more than 50 people. For religious events inside a church or temple, the limit is 33 per cent of its capacity.

Elizabeth Garvey, an adviser to Cuomo, told reporters that “more than 10,000 people planned to attend” the wedding.

“You can get married. You just can’t get a thousand people at your wedding. You get the same results at the end of the day. It’s also cheaper!” Cuomo said.

Local media reported the event was an Orthodox Jewish wedding.

New York was the epicentre of the US coronavirus outbreak back in spring, and the city has seen more than 23,800 related deaths.

It managed to bring the crisis under control through lockdowns, but in recent weeks the number of reported COVID-19 cases has risen.

Last week Cuomo ordered the closure of non-essential businesses in the worst-hit areas and limited the number of people who can be in places of worship to 10. Schools were also closed.

The governor said Saturday that these measures were already yielding results.




Power out to stop Tigers push for a dynasty


Could not have asked for more from that quarter. Just brilliant.

Richmond kicked the first two and the Power fought back to hit the lead by quarter time.

Hamish Hartlett and Duursma in the middle of it with their chat, the crowd absolutely loving it too.

Peter Ladhams was reported for punching Noah Balta in the stomach. Silly, silly stuff.


