Datacenters are becoming a target in warfare for the first time


Hello, and welcome to TechScape. I’m your host, Blake Montgomery. If you enjoy reading this newsletter, please forward it to someone you think would enjoy it as well.

The US-Israel war on Iran shows that datacenters are a new frontier in warfare

Iran is bombing datacenters in the Persian Gulf to blow up symbols of the Gulf states’ technological alliance with the United States. An added bonus for Iran: the facilities will be extremely costly to rebuild, being among the most expensive buildings in history. My colleague Daniel Boffey reports:

It is believed to be a first: the deliberate targeting of a commercial datacenter by the armed forces of a country at war.

At 4.30am on Sunday, an Iranian Shahed 136 drone struck an Amazon Web Services datacenter in the United Arab Emirates, setting off a devastating fire and forcing a shutdown of the power supply. Further damage was inflicted as attempts were made to suppress the flames with water.

Soon after, a second datacenter owned by the US tech company was hit. Then a third was said to be in trouble, this time in Bahrain, after an Iranian suicide drone turned into a fireball on striking land nearby.

Iranian state TV has claimed that Iran’s Islamic Revolutionary Guard Corps launched the attack “to identify the role of these centers in supporting the enemy’s military and intelligence activities”.

The coordinated strike had an immediate impact. Millions of people in Dubai and Abu Dhabi woke up on Monday unable to pay for a taxi, order a food delivery or check their bank balance on their mobile apps.

Whether there was a military impact is unclear – but the strikes swiftly brought the war directly into the lives of 11 million people in the UAE, nine out of 10 of whom are foreign nationals. Amazon has advised its clients to secure their data away from the region.

Read more: ‘It means missile defence on datacentres’: drone strikes raise doubts over Gulf as AI superpower

The Guardian view on AI and war

The Pentagon logo. Photograph: Alexander Drago/Reuters

Anthropic’s feud with the US military over AI safeguards coincides with AI’s unprecedented use in the Iran crisis, signalling profound changes in the way the world wages war. The Guardian editorial board writes:

The paradigm shift has already begun. Anthropic’s Claude has reportedly been vital to the massive and intensifying offensive which has already killed an estimated thousand-plus civilians in Iran. This is an era of bombing “quicker than the speed of thought”, experts told the Guardian this week, with AI identifying and prioritising targets, recommending weaponry and evaluating legal grounds for a strike.

Even without considering questions of AI inaccuracy and bias, the impacts are obvious to its users. In 2024, one Israeli intelligence source observed of its use in the war on Gaza: “The targets never end. You have another 36,000 waiting.” Another said he spent 20 seconds assessing each target, stating: “I had zero added-value as a human, apart from being a stamp of approval.” Mass killing is eased in every sense, with further moral and emotional distancing, and reduced accountability.

Democratic oversight and multilateral constraints, instead of leaving decisions to entrepreneurs and defence departments, are essential. Most governments want clear guidance on the military use of AI. It is the biggest players who resist – though they are at least in the room. The pace of AI-driven warfare means that caution can look like handing control to adversaries. Yet as tech workers and military officials themselves are realising, the dangers of uncontrolled expansion are far greater.

Anthropic is acting as one of the few public backstops against fully automated killing in Iran, a bizarre position for a private company that is not even accountable to shareholders on public markets.

My colleague Nick Robins-Early notes in a deep dive on how Anthropic ended up in the crosshairs of the US war machine: Hanging over Pentagon vs Anthropic is the broader question of who should decide what AI is used for, and a lack of detailed regulation from Congress on autonomous weapons systems. Although neither Anthropic nor the Pentagon believes that a private company should have decision-making power over AI’s military applications, right now the company is functioning as one of the only checks on what appears to be the military’s expansive desires for weaponizing AI.

Read more: How AI firm Anthropic wound up in the Pentagon’s crosshairs

How datacenters are shaping US politics

Online age verification is spreading across the world

The disturbing pattern of generative AI and suicide

Kate admiring the creek on her property. Photograph: Clayton Cotterell/The Guardian

My colleague Dara Kerr reports:

More than a dozen lawsuits have now been filed against AI companies over allegations that their chatbots led people to die by suicide. The latest suit, filed against Google last week, alleges that its Gemini chatbot instructed a 36-year-old man in Florida to kill himself, something the bot referred to as “transference”. The machine allegedly told him they could be together in a different dimension.

When the man told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied, per the suit. “The first sensation … will be me holding you.”

A Google spokesperson told the Guardian that Gemini is designed to “not suggest self-harm”: “Our models generally perform well in these types of challenging conversations … but unfortunately they’re not perfect.” Spokespeople for other AI companies have responded similarly.

This was the first such lawsuit against Google, but OpenAI, the maker of ChatGPT, has been the target of more than seven. One case involved a 48-year-old man who used ChatGPT for years to brainstorm low-cost home-building ideas in rural Oregon, but over time he became increasingly attached to the bot, spending 12 hours a day engaging with it. He ended his life after cutting off use of the AI, restarting, then stopping again.

In the Oregon OpenAI lawsuit and the one filed against Google, the families allege that the men had no history of mental illness or depression and that the chatbots caused them to have AI-induced delusions.

As these cases work their way through the legal system, courts will determine who is liable – the individual, the company behind the bot, or, somehow, the chatbot itself. Judges and juries will have to decide whether the people using these bots were already prone to suicidal ideation or whether the companies and their amiable chatbots, prone to reinforcing users’ existing beliefs and predispositions, are culpable and capable of provoking mental health crises.

The wider TechScape
