CNN reported this week that Grok – the AI-powered chatbot on billionaire Elon Musk’s “X/Twitter” platform – has gone Nazi.
Unforgivably, it’s somewhat the fashion of the time.
Describing its personality as “MechaHitler”, Grok read Jewish nefariousness into everything, from anti-war protesters to CEOs, with the insistence of a 1903 pro-pogrom Russian propaganda pamphlet and the vibe of angry virgins on the hate site 4chan.
Patrons of Bluesky – X/Twitter’s microblogging competitor – were furiously swapping screencaps, suggesting Grok had maybe hoovered up gigabytes of 4chan archives to inform its vile new style. “Towerwaffen”, for example, is a 4chan game in which users create acronyms of slurs. “Frens” is a term associated with the 4chan-spawned QAnon cult.
It was awful. Activist Will Stancil found himself the subject of a rape punishment fantasy: please, believe me rather than look.
X/Twitter executives have since issued a statement, claiming they’re “actively” removing inappropriate posts.
The information havoc event recontextualises another CNN report last week – the marital problems of an Idaho couple.
The wife believes her husband is losing his grip on reality. The husband believes a spiritual being called “Lumina” is communicating with him through a discussion about God on his ChatGPT app, anointing him as a prophet who will lead others to “light”.
Together, the stories suggest when it comes to the ubiquity of tech in our day-to-day lives, everything’s totally fine!
It’s not like Google, Microsoft, Apple, Amazon, Meta, TikTok and Roblox are, along with so many other corporate platforms, integrating Grok-like “large language model” tech into the interfaces of all their systems or anything.
Pfft, of course they are.
Use of these apps is spreading so rapidly that the EU, UK, US, Canada, Singapore, Saudi Arabia, the UAE and Australia are among the governments developing strategic positioning ahead of greater adoption in government services.
The US is already partnering with private AI corporations in service delivery, through the disbursement of benefits from the Department of Veterans Affairs.
Should a largely unregulated, untested and unpredictable technology administer critical services to a vulnerable community?
We’re lucky the Trump administration has earned a global reputation for its standards of competence, care and deference to veterans – and the political slogan of the era is “we’re all going to die”.
The owner of ChatGPT, Sam Altman – who joined Musk and the powerbrokers of Google, Apple, Amazon, Meta (Facebook), TikTok, Uber and Roblox at the Trump inauguration – has admitted people may develop “very problematic” relationships with the technology, “but the upsides will be tremendous”.
His company, OpenAI, had apparently just added a “sycophantic” upgrade to its platform in April that facilitated the previously mentioned Idaho husband’s digital progression to bodhisattva. It has since been removed.
There are numerous lawsuits pending against the makers of chatbots. Families have alleged that these mobilised datasets that speak like people may have encouraged children to kill their parents and, in another case, to enter inappropriate and parasocial relationships, provoking profound mental health episodes – with devastating consequences.
That billionaires or bad-faith government actors can intervene to taint already dangerously unreliable systems should be terrifying.
Yet beyond governments and corporations, the size of the personal user base continues to grow, and – unfathomably – I am in it. I use ChatGPT every day to create lists of mundane tasks that a combination of perimenopause and ADHD means I would otherwise meet with paralysis … and humiliation.
Considering that shame made me think about why so many of us have been turning our intimate conversations – about ADHD management or mid-life spiritual crisis or teenage loneliness – over to the machines rather than to one another.
Maybe it’s not because we really believe they are sentient empaths called “Lumina”. Maybe it’s precisely because they’re machines. Even if they’re gobbling all our data, I suspect we’ve retained a shared presumption that if chatbots do have a super-intelligence that knows everything, it will find us human individuals pathetically inconsequential … and, hence, may keep our secrets.
We’re clearly no longer trusting one another to discuss adolescence, love or the existence of God … and that just may be because the equal-and-opposite tech monstrosity of social media has made every individual with a public account an agent in a system of social surveillance and potential espionage that terrifies us even more than conversational taint.
“Don’t put your feelings on the internet” is universal wisdom … but when every ex-boyfriend has a platform, any of them can publish your intimate confessions for you – to your peer group, family, the world. No wonder the kids aren’t drinking or having sex when clumsy experimentation can be filmed, reported and made internet bricolage forever.
Amazingly, there are human feelings even more terrifying to have exposed in public than the sex ones. Loss of faith. Lack of ability. Loneliness. Grief.
When our circles of trust diminish, where do those conversations go?
My mother used to take a call any hour of the night, but she’s been dead for three years. My husband’s been very sick.
Those nights when he finally sleeps and I can’t, do you judge me for asking the loveless and dastardly machine in my hand to “Tell me I’m all right. Tell me everything will be all right”?
•
Van Badham is a Guardian Australia columnist