
“Gamified” versions of the Bondi terror attack have been circulated online, Australia’s internet safety regulator says, with its office forced to respond to more than 100 complaints about footage of the December massacre.

Naveed and Sajid Akram allegedly killed 15 people after opening fire at a Hanukkah festival at Bondi Beach on 14 December. Naveed Akram survived a shootout with police, while his father was killed at the scene. Naveed Akram has been charged with 59 offences, including 15 counts of murder and one count of committing a terrorist act.

In the months since, the eSafety commissioner’s office has dealt with a deluge of complaints about videos of the attack posted online. In briefing documents prepared for Senate estimates hearings and published on Thursday under freedom of information laws, the regulator stated that after the attack, eSafety monitored footage and identified videos of the attack on Instagram, X, Threads and some “fringe sites”.


The commissioner was also alerted to what it called “gamified versions” of the attack “circulating online overseas”.

“The nature and origin of these are not yet clear, however no content has yet been identified as available from Australia,” the document stated.

A spokesperson for eSafety said “gamified” referred to “instances where real life events are converted into game-like aesthetics for viewing or as background to an interactive game”, but did not identify what sort of games were involved.

They said eSafety was alerted to the gamified versions by overseas counterparts but had not been able to locate any accessible from Australia, and there had been no further reports.

The regulator also received 106 complaints, mostly about videos taken by bystanders. The initial footage was classified MA15+ in Australia, while footage that surfaced later was refused classification, making it illegal to distribute in Australia.

The altered footage was found on a fringe website and on Elon Musk’s X. The social media platform geo-blocked the posts, preventing users in Australia from accessing them, but the posts remained on the other site.

eSafety said it was “considering what further action may be appropriate”.

Since Hamas attacked Israel in October 2023, there had been a “marked but modest” increase in reports of hate speech to eSafety, with 102 reports in total, including nine related to antisemitic content, eight to Islamophobia and 23 to religious discrimination. Four removal notices under the adult cyberbullying scheme were issued in relation to the reports.

Of the reports of antisemitic bullying, eSafety said only one met the threshold for the regulator to take action: “a complaint about antisemitic content targeting an ABC journalist”, which was referred to it by the AFP’s Special Operation Avalite.

The unnamed platform on which it was hosted removed the post for breaching its terms of service.

A further three complaints relating to antisemitic comments posted on social media were referred to Operation Avalite in April 2025, but did not activate eSafety’s adult cyberbullying powers.

eSafety issued 32 removal notices for manifestos, including far-right and Islamic State material, and for the Christchurch terror attack video.

Other complaints included an individual referred to as a “zio-freak” and Instagram accounts labelled “zionist”. eSafety deemed these not to present serious harm.

The online safety regulator is empowered to remove content that targets individuals, but does not have the power to remove content targeting a group, such as LGBTQ people or those with religious affiliations. A review of the Online Safety Act last year recommended that a definition of online hate material be added to the law, empowering eSafety to take more action.

In its response to the review this week, the federal government said it needed to consider the recommendation as part of the planned digital duty of care legislation to be introduced later this year.

The government argued expanding the adult cyber abuse scheme would probably be “operationally burdensome and counterproductive”.