Today I went to ChatGPT and typed: “AI write me a New York Times news article about the new Israeli ground offensive in Gaza.”
I helped hasten humanity’s demise because I was having rage strokes reading the New York Times coverage of the offensive. I think it was the following passage, which is copy-pasted, quite possibly literally, into every story on the topic:
Israel and Hamas have been at war since Oct. 7, 2023, when the Palestinian militants launched an attack on Israel in which about 1,200 people were killed and 250 taken hostage. The Israeli campaign in Gaza that followed has killed more than 50,000 people, according to Palestinian health officials, who do not distinguish between civilians and combatants.
This is how they frame the death toll. Not, “although the numbers are likely to be far higher” which no sane person doubts, but by suggesting that the source of the information is suspect and that they don’t distinguish between civilians and combatants. And of the 1,200 — do we distinguish between civilians and combatants?
So I thought, I wonder how AI would cover the story? As it turns out, better than Aaron Boxerman, Patrick Kingsley and Bilal Shbair (I really would like to know why no one has a job in journalism anymore, yet the New York Times has three to five “reporters” working on stories that are basically transcriptions of IDF talking points).
So what do you think? AI or Aaron Boxerman (who, hilariously, continues to generate Israeli propaganda even after he got his ass kicked by a soldier in the West Bank who mistook him for a Palestinian), Patrick Kingsley and Bilal Shbair?
Let’s start with the headline.
AI: “Israel Launches Major Ground Offensive in Gaza Amid Escalating Humanitarian Crisis”
The New York Times: “Gunfire in Gaza After Israel Said Its Troops Were Mobilizing”
I’m not a fan of strict writing rules concerning passive versus active voice. But when you depict gunfire as a random, free-floating phenomenon, like snowflakes, you might be obfuscating important information. And your average news consumer can, based on that headline, presume the gunfire is coming from Hamas, and will if they have a pro-Israel slant. AI, which I guess is Hamas, has no trouble naming the source of the gunfire.
Also, AI seems to think the next relevant point is that there’s an escalating humanitarian crisis, whereas the crack NYT team lands on the random fact that Israel said its troops were mobilizing. Isn’t a humanitarian crisis more important than what Israel said?
The word “humanitarian” appears only once in the NYT piece, as “humanitarian aid.” AI mentions “humanitarian” seven times, mostly in references to the “humanitarian crisis.”
Our robot overlords aren’t perfect yet, so AI does parrot official Israeli talking points:
Defense Minister Israel Katz emphasized the necessity of these operations, stating that they are crucial for the security of Israel and the defeat of Hamas. Prime Minister Benjamin Netanyahu echoed these sentiments, asserting that the military campaign would continue until "total victory" is achieved.
But unlike the New York Times story, which barely mentions it, AI devotes a paragraph to the latest slaughter.
The escalation has exacerbated an already dire humanitarian situation in Gaza. Over the past three days, Israeli air and ground strikes have resulted in the deaths of more than 250 Palestinians, bringing the total death toll since the conflict's resurgence in March to over 3,000.
The United Nations and various humanitarian organizations have raised alarms over the worsening conditions. A full blockade has been imposed on Gaza, halting the entry of essential supplies such as food, water, and medical aid. The Palestinian Health Ministry reports that more than 53,000 individuals have lost their lives since the war began, with a significant portion being women and children.
The NYT, very low in the article, says that, “Over the past few days, Israel has conducted increasingly deadly strikes across Gaza, killing more than 90 people on Friday alone, according to the Gazan health ministry.”
At the end of the post, pesky AI returns to the humanitarian crisis. “As the situation unfolds, the international community watches closely, urging both sides to prioritize the protection of civilians and the delivery of humanitarian aid. The prospects for peace remain uncertain, with the humanitarian crisis deepening amid continued military operations.”
The New York Times finishes on this:
On Tuesday, Israel bombed sites near a hospital in the southern Gaza city of Khan Younis in an attempt to kill Muhammad Sinwar, one of the most powerful Hamas leaders remaining in Gaza. Neither Israel nor Hamas have publicly commented on whether he had survived.
Really. Israel has bombed every hospital to rubble. But let’s promote the notion that it’s in the service of killing prominent Hamas leaders, which a pro-Israel or neutral reader might view as an unfortunate, but necessary thing to do.
So it appears that ChatGPT is more concerned about the humanitarian crisis than the robots who work for the New York Times.
Excellent writing (and attitude and tone) as always. As stupid as "AI" can be, it gets some things right. Grok, the in-house AI (LLM) at Elon(save-white-South-Africans!)'s x.com, sez: "I estimate a 75-85% likelihood Trump is a Putin-compromised asset". I hate to link to the often-toxic X, but this xweet is just so delicious: https://x.com/jeffreymlevy/status/1897013490067685745
This is an interesting exercise; I'm glad you did it. Much of the print journalism has so scrupulously attempted to be even-handed that it has missed the larger story. They dutifully report the IDF talking points without the context that much of the bombing appears aimed as much at destroying the viability of Gaza as a home for ordinary Palestinians as at destroying Hamas. The TV journalism showing starving children and rubble everywhere gives us a much clearer picture.
I've worked as a journalist. The only comment I'd add to the blame you place on the reporters is not to forget the role of editors in shaping what appears in print. They deserve as much censure as the reporters, if not more, because their job is to make reporters go back and fill in the gaps and blanks in the story, and editors have been known to remove important details that were in pieces that the reporter originally submitted.