‘Dangerous nonsense’: AI-authored books about ADHD for sale on Amazon


Amazon is selling books marketed at people seeking techniques to manage their ADHD that claim to offer expert advice yet appear to be authored by a chatbot such as ChatGPT.

Amazon’s marketplace has been deluged with AI-produced works that are easy and cheap to publish, but which include unhelpful or dangerous misinformation, such as shoddy travel guidebooks and mushroom foraging books that encourage risky tasting.

A number of books have appeared on the online retailer’s site offering guides to ADHD that also seem to be written by chatbots. The titles include Navigating ADHD in Men: Thriving with a Late Diagnosis, Men with Adult ADHD: Highly Effective Techniques for Mastering Focus, Time Management and Overcoming Anxiety and Men with Adult ADHD Diet & Fitness.

Samples from eight books were examined for the Guardian by Originality.ai, a US company that detects AI content. The company said each had a rating of 100% on its AI detection score, meaning that its systems are highly confident that the books were written by a chatbot.

Experts said online marketplaces are a “wild west” owing to the lack of regulation around AI-produced work – and dangerous misinformation risks spreading as a result.

Michael Cook, a computer science researcher at King’s College London, said generative AI systems were known to give dangerous advice, for example around ingesting toxic substances, mixing together dangerous chemicals or ignoring health guidelines.

As such, it is “frustrating and depressing to see AI-authored books increasingly popping up on digital marketplaces” particularly on health and medical topics, which can result in misdiagnosis or worsen conditions, he said.

“Generative AI systems like ChatGPT may have been trained on a lot of medical textbooks and articles, but they’ve also been trained on pseudoscience, conspiracy theories and fiction.

“They also can’t be relied on to critically analyse or reliably reproduce the knowledge they’ve previously read – it’s not as simple as having the AI ‘remember’ things that they’ve seen in their training data. Generative AI systems should not be allowed to deal with sensitive or dangerous topics without the oversight of an expert,” he said.

Yet he noted Amazon’s business model incentivises this type of practice, as it “makes money every time you buy a book, whether the book is trustworthy or not”, while the generative AI companies that create the products are not held accountable.

Prof Shannon Vallor, the director of the University of Edinburgh’s Centre for Technomoral Futures, said Amazon had “an ethical responsibility to not knowingly facilitate harm to their customers and to society”, although it would be “absurd” to make a bookseller responsible for the contents of all its books.

Problems are arising because the guardrails previously deployed in the publishing industry – such as reputational concerns and the vetting of authors and manuscripts – have been completely transformed by AI, she noted.

This is compounded by a “wild west” regulatory environment in which there are no “meaningful consequences for those who enable harms”, fuelling a “race to the bottom”, she said.

At present, there is no legislation that requires AI-authored books to be labelled as such. Copyright law only applies if a specific author’s content has been reproduced, although Vallor noted that tort law should impose “basic duties of care and due diligence”.

The Advertising Standards Authority said AI-authored books cannot be advertised in a way that gives the misleading impression they were written by a human, and that people who encounter such books can submit a complaint.

Richard Wordsworth was hoping to learn about his recent adult ADHD diagnosis when his father recommended a book he found on Amazon after searching “ADHD adult men”.

When Wordsworth sat down to read it, “immediately, it sounded strange,” he said. The book opened with a quote from the conservative psychologist Jordan Peterson and then contained a string of random anecdotes, as well as historical inaccuracies.

Some advice was actively harmful, he observed. For example, one chapter discussing emotional dysregulation warned that friends and family “don’t forgive the emotional damage you inflict. The pain and hurt caused by impulsive anger leave lasting scars.”

When Wordsworth researched the author, he spotted a headshot that looked AI-generated and noted the author’s lack of qualifications. He searched several other titles in the Amazon marketplace and was shocked to encounter warnings that his condition was “catastrophic” and that he was “four times more likely to die significantly earlier”.

He felt immediately “upset”, as did his father, who is highly educated. “If he can be taken in by this type of book, anyone could be – and so well-meaning and desperate people have their heads filled with dangerous nonsense by profiteering scam artists while Amazon takes its cut,” Wordsworth said.

An Amazon spokesperson said: “We have content guidelines governing which books can be listed for sale and we have proactive and reactive methods that help us detect content that violates our guidelines, whether AI-generated or not. We invest significant time and resources to ensure our guidelines are followed and remove books that do not adhere to those guidelines.

“We continue to enhance our protections against non-compliant content and our process and guidelines will keep evolving as we see changes in publishing.”
