The Growing Toolkit of Technology-Facilitated Abuse

By Awais | May 14, 2026

The use of the AI tool Grok to remove women’s clothing in images brought the issue of so-called technology-facilitated abuse to the fore. But it’s a problem that predates AI – with Bluetooth trackers, wearable devices, smart speakers, smart glasses and apps all used by abusers to control, harass or stalk their victims.

This abuse has worsened as tech has become more embedded in people’s lives, and as AI advances rapidly. But governments have struggled to make tech companies design systems that minimise misuse, and to hold them accountable when things go wrong.

Our own research has confirmed that technology misuse has increased and that its harms are significant. But governments and the tech sector are doing little to combat it – despite numerous examples of how tech can enable abuse.

Case 1: Smart Glasses

The growing availability of smart glasses – which look like normal eyewear but can do many things a smartphone does – has led to reports of secret filming. In some cases, videos were posted online, often attracting degrading and sexually explicit comments.

Meta has said its smart glasses have a light to show when they are recording and anti-tamper tech to make sure the light cannot be covered. But there appear to be workarounds.

In England and Wales, voyeurism legislation focuses on private spaces, and harassment laws do not specifically apply to targeted recording and online distribution. However, the UK Information Commissioner’s Office is investigating Meta after subcontractors were allegedly able to access intimate footage from customers’ glasses. This is in addition to a lawsuit in the US, which alleges Meta violated privacy laws and engaged in false advertising. Meta has said that it takes the protection of data very seriously and that faces are usually blurred out. It also discloses in its UK terms of service the potential for content to be reviewed either by a human or by automation.

Case 2: Bluetooth Trackers

Apple’s AirTags, and other devices built for tracking personal items, can be misused to stalk and harass people, particularly women. Apple released updates to AirTags and other trackable tech so that potential victims would be alerted if an unknown device was traveling with them. But for many, this feature should have existed from the outset.

The law in England and Wales is clear that attaching a tracking device to someone without their knowledge is a criminal offence. But despite convictions, the ease with which these devices allow covert monitoring means people continue to be at risk.

Case 3: AI Deepfake and ‘Nudification’ Apps

Apps can now “nudify” people, while AI is increasingly used to make non-consensual deepfake pornography. In January, several instances of xAI’s assistant Grok being used to create sexualized photos of women and minors came to light. All it took to create the images were some simple prompts.

After criticism, xAI decided to limit this feature. But the safeguards appear to apply only to certain jurisdictions and certain users.

In February, the UK government announced legal changes similar to the Take It Down Act in the US, which will require tech platforms in the UK to remove non-consensual intimate images within 48 hours. Failure to do so will result in fines and services being blocked, and the law is expected to come into force in the summer.

Using automated technology known as “hash matching,” victims will only need to report an image once to have it removed from multiple platforms simultaneously. The same images would then be automatically deleted every time anyone attempted to reupload them. Nudification apps and using AI chatbots to create deepfake pornography will also become illegal in the UK.
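The report-once, remove-everywhere flow described above can be sketched in a few lines. This is a simplified illustration only, not any platform's actual implementation: real systems use robust perceptual hashes such as Microsoft's PhotoDNA, while the toy "average hash", the `Blocklist` class, and the distance threshold below are assumptions for demonstration. The key idea is that a fingerprint of the reported image is stored once, and every later upload is compared against that shared registry.

```python
# Simplified sketch of hash matching: a reported image's fingerprint is
# stored once, and later uploads are compared against a shared blocklist.
# Real systems use robust perceptual hashes (e.g. PhotoDNA); this toy
# "average hash" over a flat list of grayscale pixels is for illustration.

def average_hash(pixels):
    """Return an integer fingerprint: one bit per pixel, set to 1 if
    that pixel is brighter than the image's mean brightness."""
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

class Blocklist:
    """Hypothetical shared registry: report an image once, and any
    participating platform can match reuploads against it."""
    def __init__(self, threshold=5):
        self.hashes = set()
        self.threshold = threshold  # tolerate small re-encodings

    def report(self, pixels):
        self.hashes.add(average_hash(pixels))

    def is_blocked(self, pixels):
        h = average_hash(pixels)
        return any(hamming_distance(h, known) <= self.threshold
                   for known in self.hashes)

# A reported image (64 toy pixel values) blocks a near-identical reupload.
reported = [i % 256 for i in range(0, 256, 4)]   # 64 grayscale pixels
reupload = list(reported)
reupload[0] += 3                                  # slight re-encoding

registry = Blocklist()
registry.report(reported)
print(registry.is_blocked(reupload))  # → True: the near-duplicate is caught
```

Because matching is done on fingerprints rather than exact file bytes, a lightly re-compressed or resized copy still lands within the distance threshold, which is what lets one report remove the same image across platforms and block reuploads automatically.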

But there is more to be done. Mitigating risks must be embedded at the design stage to prevent these images being created in the first place. The rise of romantic and sexual chatbots means this has become more urgent.

And beyond deepfakes and nudification, AI can also enable harassment at scale. This includes directly targeting someone with abusive content, or fake images or profiles that impersonate victims for so-called “sextortion” scams.

Challenges Ahead

These issues must be prevented with robust guardrails built into these technologies. This is what prioritising user safety should look like. But often, these guardrails have failed, and safety tools are usually added only after public pressure rather than built into platforms from the start.

Governments have allowed regulation to fall behind fast-paced developments. Tech companies have grown quickly, but laws and enforcement have not kept up. At the same time, police and legal systems are often under-trained or unclear on how to handle digital harm.

Even where there is regulation, such as the UK’s Online Safety Act, penalties for platforms that allow abuse are often weak or unenforceable. The regulator Ofcom has issued only voluntary guidance to tech companies on how to better protect women and girls on their platforms. Campaigners have called for this to be made mandatory, with clear penalties for companies that do not comply, placing it on a level legal footing with child sexual abuse and terrorism content.

As AI advances, tech companies must prioritise system design that puts user safety first. But until governments enforce real consequences, the tech sector will be able to profit from harm while those using the platforms bear the cost.

This article is republished from The Conversation under a Creative Commons license. The Conversation is an independent and nonprofit source of news, analysis and commentary from academic experts. The original article can be accessed here.
