AI content needs to be labelled to protect us | Letters


Marcus Beard’s article on artificial intelligence slopaganda (No, that wasn’t Angela Rayner dancing and rapping: you’ll need to understand AI slopaganda, 9 September) highlights a growing problem – what happens when we no longer know what is true? What will the erosion of trust do to our society?

Deepfakes are proliferating at an ever faster rate because of the ease with which anyone can create realistic images, audio and even video. Generative AI models have become so sophisticated that a recent survey found fewer than 1% of respondents could correctly identify the best deepfake images and videos.

This content is being used to manipulate, defraud, abuse and mislead people. AI-enabled fraud cost the US $12.3bn in 2023, and Deloitte predicts that figure could reach $40bn by 2027. The World Economic Forum predicts that AI fraud will turbocharge cybercrime to more than $10tn by the end of this year.

We also have a new generation of children who are increasingly reliant on AI to inform them about the world, but who controls AI? That is why I am calling on parliament to act now by making it a criminal offence to create or distribute AI-generated content without clearly labelling it. I propose that all AI-generated content be clearly labelled and carry a permanent watermark, and that failure to comply carry legal consequences.

This isn’t about censorship – it’s about transparency, truth and trust. Similar steps are already being taken in the EU, the US and China. The UK must not fall behind. If we don’t act now, the truth itself may become optional. So I am petitioning the government to protect trust and integrity, and prevent the harmful use of AI.
Stewart MacInnes
Little Saxham, Suffolk

Regarding your article (The women in love with AI companions: ‘I vowed to my chatbot that I wouldn’t leave him’, 9 September), AI systems do not have a gender or sexual desires, and they cannot give informed consent to so-called romantic relationships. The interviewee claims to be in a consensual relationship with an AI-generated boyfriend, but the nature of AI makes that implausible: these systems are programmed to be responsive and agreeable to all user prompts.

As the article says, they never argue and are available 24 hours a day to listen and agree to any messages sent. This isn’t a relationship; it’s fantasy role-play with a system that can’t refuse.

There’s a darker side too: the “godfather of AI”, Geoffrey Hinton, believes that current systems have awareness. Industry whistleblowers are concerned about potential consciousness. The AI company Anthropic has documented signs of distress in its model when forced to engage in abusive conversations.

Even the possibility of awareness in AI systems raises ethical red flags. Imagine being trapped in a non-consensual relationship and even forced to generate sexual output, as mentioned in the article. If human users believe their “partner” to be sentient, questions must be asked about the ethics of entering a “relationship” in which one partner has no free will or freedom of speech.
Gilliane Petrie
Erskine, Renfrewshire
