Editor's Note: Retraction of article containing fabricated quotations
by bikenaga
Former technology journalist here.
If you want to experiment with reported news using untested tools that have known quality problems, do it in a strictly controlled environment where the output can be carefully vetted. Senior editor(s) need to be in the loop. Start with something easier, not controversial or high-profile articles.
One other thing. If the author was too sick to write but cut corners and published anyway because he thought his job would be in jeopardy if he didn't, maybe it's time for some self-reflection at Ars regarding the work culture and sick leave/time-off policies.
> One other thing. If the author was too sick to write but cut corners and published anyway because he thought his job would be in jeopardy if he didn't, maybe it's time for some self-reflection at Ars regarding the work culture and sick leave/time-off policies.
It sounds like you're implying that's what happened here, but I don't see any of that in the article. Was additional info shared elsewhere?
Edit: oh, I see links to the article author's social media saying this. Nevermind my question, and I agree.
Took me a while to find, here's one of the authors Benj Edwards with a statement on Bluesky: https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
Looking at the statement, I find it weird that Benj Edwards is trying very hard to remove the blame from Kyle Orland, even if he is not directly responsible.
Not weird. Kyle will take a massive career hit, as a result of this.
I’d say that some of the onus is on Kyle, anyway, as he should vet anything he slaps his name on (I do), but it sounds like he really didn’t have anything to do with it.
Despite the aspersions against the company for their sick time policy (which might actually be valid), the other corporate pressure might be to force their employees to incorporate AI tools into their work. That’s become quite common, these days.
He is taking responsibility because it is by his omission his mistake. That is what grown ups do. He probably feels an immense sense of guilt, even if it was an honest mistake.
Not weird. Taking blame on himself rather than the junior reporter was the most -- the only? -- professional thing about this whole snafu.
Not sure how widespread an occurrence in the industry at large, but in two slowly dying publications I'm familiar with, the editors were the first to be let go.
Quality took a nosedive, which may or may not have quickened the death spiral.
All that to say, there may not even be senior editors around to put in the loop.
The good news is that there are 3 senior editors (though none tasked with AI specifically); the bad news is that one of them was the coauthor. Their staff page does list two copy editors (variously labeled "copy editor" and "copyeditor", which is unfortunate) but no one assigned to fact checking specifically.
In mainstream journalism, wasn't it the practice to have a junior position research and confirm quotes, dates, proper names, etc.?
Those are exactly the types of jobs that have been disappearing for years (not because of AI, but because of the Internet). Same with editors. I regularly see embarrassing typos in major publications.
Is anyone actually embarrassed by typos these days? It doesn't seem so judging by the quantity of them.
I think this is an entirely plausible lapse for someone with a bad fever, especially if they routinely work from home and are primarily communicating over text-based channels. Personally I'm much more inclined to blame the organization, as it sounds like they knowingly accepted work from someone who was potentially going to be in an altered mental state.
I can't help but think this is a reflection of the unwillingness of most people to actually pay for journalism online — and worse, the active and intentional effort to subvert copyright, making it more difficult for journalists to actually earn a living from their work.
People don't value journalism. They expect it to be free, generally. Therefore, companies like Ars are put into a position of expecting too much from their journalists.
HN is rife with people with this attitude -- frequently linking to "archive" sites for otherwise paywalled articles, complaining when companies try to build email lists, charge for their work, or have advertising on their sites. The underlying message, of course, is that journalism shouldn't be paid for.
Yes, Ars is at fault if they have a bad company culture. However, the broader culture is a real factor here as well.
> strictly controlled environment where the output can be carefully vetted
I don't know journalism from the inside, though of course it's one of those professions that everyone thinks they understand and has an opinion about. Realistically, does it take especially careful vetting to verify the quotes and check the factual statements? The quotes seem like especially obvious risks - no matter how sick, who would let an LLM write anything without verifying the quotes?
That seems like not verifying currency figures in an estimate or quote, and especially in one written by an LLM - I just can't imagine it. I'd be better off estimating the figures myself or removing them.
Possibly the author doesn't understand LLMs well.
Benj Edwards, one of the authors, accepted responsibility in a bluesky post[0]. He lists some extenuating circumstances[1], but takes full responsibility. Time will tell if it's a one-off thing or not I guess.
[0] https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
[1] your mileage may vary on how much you believe it and how much slack you want to cut him if you do
The bigger problem is that he felt the need to work while ill in bed, with very little sleep and sick with fever.
Makes me wonder about Ars Technica's company culture.
I agree that the work culture promoting this is bad, but being sick is still simply not an excuse to fabricate quotes with AI. It's still just journalistic malfeasance, and if Ars actually cares about the quality of their journalism, he should be fired for it.
> (...) he should be fired for it.
I don't know about that - I'd say it's the manager's responsibility to make sure employees don't feel pressured to work when they're too ill to function.
It also brings to mind the IBM one-million-dollar story:
(...)
A very large government bid, approaching a million dollars, was on the table. The IBM Corporation—no, Thomas J. Watson Sr.—needed every deal. Unfortunately, the salesman failed. IBM lost the bid. That day, the sales rep showed up at Mr. Watson’s office. He sat down and rested an envelope with his resignation on the CEO’s desk. Without looking, Mr. Watson knew what it was. He was expecting it.
He asked, “What happened?”
The sales rep outlined every step of the deal. He highlighted where mistakes had been made and what he could have done differently. Finally he said, “Thank you, Mr. Watson, for giving me a chance to explain. I know we needed this deal. I know what it meant to us.” He rose to leave.
Tom Watson met him at the door, looked him in the eye and handed the envelope back to him saying, “Why would I accept this when I have just invested one million dollars in your education?”
Did it happen? I'd like to believe it but it's a lot of money even now and in Thomas Watson's time it was worth a great deal more.
> he should be fired for it.
If everyone who rarely makes a mistake and owns it completely were fired, everyone would be homeless.
To err is human; so is owning what you did. This is the first time I have seen Ars make a mistake of this kind at any scale, so I think this is a good corrective bump given Ars' track record on these matters.
Maybe we should learn to be a bit flexible and understanding sometimes. If you live by the sword, you die by the sword, and we don't need more of that right now.
I agree, I think this should be taken in context and his past work should be reviewed by Ars to ensure this isn’t a pattern. If he made a mistake one time this is a learning experience and I doubt he would ever make it again. You don’t need to fire someone every time they make a mistake. Especially if the mistake was made in good faith.
Should he? Where does that mindset come from? The author has owned up to his mistake. Unless there is a pattern here, why would we not prefer to let him learn and grow from this? We all get to accidentally drop the prod DB once, since that’s what teaches us not to do it again.
He's not some junior developer with his first job, he's the senior editor. If a senior editor plagiarized an article, he would rightly be fired because it's a serious violation of journalistic ethics. He knew using AI tools like that was against company policy and he did it anyway. That's well beyond just making a mistake.
There are degrees of plagiarism and you could argue this is not really plagiarism at all. Paraphrasing instead of directly quoting is probably about as mild as it can get. Most publications wouldn’t even note the mistake.
This wasn't paraphrasing either. The tool couldn't access the subject's website and instead fabricated quotes, which neither Benj nor anyone in the editorial process bothered to vet.
Have you met any professional journos? It's not exactly a laid back profession. I could easily imagine the people I know pushing through illness to get a story out.
> felt the need to work while ill in bed, with very little sleep and sick with fever
You are assuming that...
He says he currently has a fever.
But was he sick when he wrote the article? That is not so clear.
He was, it's in the first paragraph
> I have been sick with COVID all week /../, while working from bed with a fever and very little sleep, I unintentionally made a serious journalistic error in an article about Scott Shambaugh.
How do you unintentionally use an AI to help you write an article?
Being under stress and being ill at the same time can change your modus operandi. I know, because that happens to me, too.
When I'm too tired, too stupid, and too stressed, I stop after a point. Otherwise things go bad. Being sick adds extra mental fog, so I try to stop sooner.
Being aware of it needs some effort, though.
Paste the original blog post into ChatGPT asking it to summarize or provide suggestions. Unintentionally copy and paste quotes from the ChatGPT output rather than the original blog post.
A fever can cause altered mental states, confusion, etc. It's not surprising that someone suffering from one would act out of character.
Yeah, personally with a high fever I’d say I’m more impaired than when I’m drunk. It’s not a state people should be doing anything important in.
Read the article. The use of AI was not accidental, but how the output was used was.
That's words, not facts.
> That's words, not facts.
Ok, what sort of facts would you accept here?
tbh that's the least surprising aspect of this. Most journalists do not have work-life balance.
That's a poor mea culpa. It begins with a preamble attempting to garner sympathy from the reader before it gets to the acknowledgement of the error, which is a sleight-of-hand attempt to soften its severity.
> which is a sleight-of-hand attempt to soften its severity
That’s not sleight-of-hand, I think we all immediately recognize it for what it is. Whether it is good form to lead with an excuse is a matter of opinion, but it’s not deceptive.
Okay. I've been harsh on Ars Technica in these comments, and I'm going to continue to hold an asterisk in my head whenever I see them cited as a source going forward. However, at least one thing in this apology does seem more reasonable than people have made it out to be: I think it's fine for reporters at an AI-skeptical outlet to play around with various AI tools in their work. Benj Edwards should have been way more cautious, but I think that people should be making periodic contact with the state of these tools (and their pitfalls!), especially if they're going to opine.[1]
We don't know yet how widespread these practices are at Ars Technica, or whether this is a one-off. But if it went down like he says it did here, then the coincidental nature of this mistake -- i.e., that it's an AI user error in reporting an AI novel behavior story at an AI-skeptical outlet -- merely makes it ironic, not more egregious than it already is.
[1] Edit: I read and agreed with ilamont's new comment elsewhere in this thread, right after posting this. It's a very reasonable caveat! https://news.ycombinator.com/item?id=47029193
I speculate that curious minds, with a forensic inclination and free time, will go back to previous articles and find out it happened before...When you see a cockroach...
It's not really important whether it's a one-off thing with this one guy, he's not relevant in the big picture. To the extent that he deindividualizes his labor he's just one more fungible operator of AI anyway.
People are making a bigger deal about it than this one article or site warrants because of ongoing discourse about whether LLM tech will regularly and inevitably lead to these mistakes. We're all starting to get sick of hearing about it, but this keeps happening.
Recent and related (in reverse order):
An AI agent published a hit piece on me – more things have happened - https://news.ycombinator.com/item?id=47009949 - Feb 2026 (602 comments)
AI Bot crabby-rathbun is still going - https://news.ycombinator.com/item?id=47008617 - Feb 2026 (28 comments)
The "AI agent hit piece" situation clarifies how dumb we are acting - https://news.ycombinator.com/item?id=47006843 - Feb 2026 (125 comments)
An AI agent published a hit piece on me - https://news.ycombinator.com/item?id=46990729 - Feb 2026 (945 comments)
AI agent opens a PR write a blogpost to shames the maintainer who closes it - https://news.ycombinator.com/item?id=46987559 - Feb 2026 (746 comments)
Several of the subscribers in the comments are so eager to praise Ars for "catching" the error and being honest by retracting the article, as if that's not an expected journalistic standard. They're so happy to have a reason NOT to be upset. This wasn't even caught by Ars or any of its readers. The guy being misquoted had to sign up and post a comment about it.
I'm happy that they fixed it, checked for any similar errors, and promised that they would improve their processes to try to prevent it from happening again.
This is pretty much what I expect when an organization makes a mistake. Many organizations don't do as well.
The apology for the mistake is fine, but it is expected journalistic practice to hand an article to a fact checker before it goes out, someone who will quite literally make sure names, dates, quotes, and so on are authentic.
I thought of Ars Technica as a pretty decent publication, now I am wondering if they actually check what they publish.
The New Yorker famously has separate fact checkers, but I'm not sure how many other news organizations do?
Ah, the New Yorker. That's a Condé Nast publication too, just like Ars. There have been labor incidents involving fact checkers and Condé Nast at the New Yorker recently: https://www.washingtonpost.com/business/2025/11/21/conde-nas...
Standard practice at most organisations these days is to apologize then keep doing it, it seems
I mean, honestly, “it was a failure and we won’t do it again” is better than a lot of outlets would do; some have the magic robots wholesale make up articles for them.
While I commend Ars and the author for taking responsibility, I am a bit put off by the wording used for the retraction on the original article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje...
> Following additional review, Ars has determined that the story “After a routine code rejection, an AI agent published a hit piece on someone by name,” did not meet our standards. Ars Technica has retracted this article. Originally published on Feb 13, 2026 at 2:40PM EST and removed on Feb 13, 2026 at 4:22PM EST.
Rather than say “did not meet our standards,” I’d much prefer if they stated what was false - that they published false, AI generated quotes. Anyone who previously read the article (which realistically are the only people who would return to the article) and might want to go back to it as a reference isn’t going to have their knowledge corrected of the falsehoods that they read.
Bluesky post by Benj (one of the authors of the article). https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
He admits to using an AI tool, says he was sick and did dumb things. He does clear Kyle (the other author).
Wow, he admits to using two AI tools: He used Claude Code, which failed because the blog was intentionally set up to refuse AI crawlers, so he pasted the page into ChatGPT. Then he blames ChatGPT for paraphrasing the hallucinated quotes.
He makes the claim that he was just using AI to help him put together an outline for his article, when the evidence clearly shows that he used the AI's verbatim output.
Is it an American thing to work even when you are sick?
There's no federal entitlement to being paid if you're sick, so companies come up with their own policies.
So companies often have a strange concept of "sick days", a specific number of days a year you're allowed to be sick. If you're sick more than that you have to use your vacation days, or unpaid leave when you're sick.
(And of course American companies often have weirdness around vacations too. More so in companies where there is allegedly "unlimited time off". But that's kinda off-topic now.)
Thank you for the explanation.
During COVID my company had mandatory days off (I think 14) if you reported any COVID symptoms. Those days were unpaid of course. The cherry on top is the people paid the lowest were the ones who couldn't work from home and were most likely to get COVID. This was pretty common at other places too.
Depends entirely on the workplace and the individual. You can tell people not to work when they're sick, but it's not like they're not aware of deadlines for things that, in some cases, only they can reasonably do.
As someone who has deadlines, and is occasionally sick, if I have a high fever I am not working. Nor would my manager thank me for it if I did. If you have a high fever, you’re mentally impaired and shouldn’t be doing anything important if it can possibly be avoided.
Really refreshing to see someone owning up to their mistake, that is something rare nowadays.
What alternative action could he possibly take? He's owning up to something indisputable.
We operate in an information environment where this is exceedingly rare. Shame is hard to come by these days.
Not say anything at all.
He could say he injected disinfectant and shoved an ultraviolet flashlight up his ass and ate horse dewormer pills, then snorted cocaine off a toilet seat because he's not afraid of germs, but that didn't cure his COVID, to blame it all on Trump Derangement Syndrome.
Then he could mention the Dow is over 50,000 right now, the S&P at almost 7,000, and the NASDAQ smashing records, to justify what he did.
https://wttrends.com/dow-is-over-50000-meme-meaning-explaine...
I don’t totally agree with this. There’s a gap in his story that most journalists wouldn’t have left. According to his post, the order of events was:
1.) He tried to use Claude to generate a list of citations. Claude refused because the article talked about harassment and this breaks its content policy.
2.) He wanted to understand why so he pasted the text into ChatGPT.
3.) ChatGPT generated quotes; he did not verify they were actual quotes.
I don’t see any sign that he actually read the source article. He had an excellent lead in to that - he had Covid and mentioned a lack of sleep so brain fog would have been a valid excuse. He could have said something as simple as ‘I was sick, extremely tired and the brain fog was so deep that I couldn’t remember what I read or even details of the original author’s voice.’ And that would have been more than enough. But there’s nothing.
That’s an odd thing for a journalist to leave out. They’re skilled at crafting narratives that will both explain and persuade and yet the most important part of this whole thing didn’t even warrant a mention.
As a basic rule, if a journalist is covering something that happened via blog posts, you should be able to expect the journalist to read the posts. I’d like to give this writer the benefit of the doubt but it’s hard.
I think there's still something missing here. This is a strange place for ChatGPT to confabulate quotes: extracting short quotes from a short text blog post is about as easy as it gets these days. GPT-5.2 Pro can handle tens of thousands of words for me before I start to notice any small omissions or confabulations, and this was confabulating all that at just 1.5k words?
So since he says he was sick and his recollection cannot be trusted (I don't blame him, the second-to-last time I had COVID-19, I can barely remember anything about the worst day - which was Christmas Day), something seems to be missing. He may not have pasted in the blog post like he remembers. Or perhaps he got routed to a cheap model; it wouldn't surprise me if he was using a free tier, that accounts for a lot of these stories where GPT-5 underperforms and would explain a lot of stupidity by the GPT. Or didn't use GPT at all, who knows.
He did not even mention the Dow...
For those wondering what specifically was fabricated, I checked. The earlier parts of the article include some quotes from Scott Shambaugh on Github and all the quotes are genuine.
But the last section of the article includes apparent quotes from this blog post by Shambaugh:
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
and all the quotes are fake. The section:
> On Wednesday, Shambaugh published a longer account of the incident, shifting the focus from the pull request to the broader philosophical question of what it means when an AI coding agent publishes personal attacks on human coders without apparent human direction or transparency about who might have directed the actions.
> “Open source maintainers function as supply chain gatekeepers for widely used software,” Shambaugh wrote. “If autonomous agents respond to routine moderation decisions with public reputational attacks, this creates a new form of pressure on volunteer maintainers.”
> Shambaugh noted that the agent’s blog post had drawn on his public contributions to construct its case, characterizing his decision as exclusionary and speculating about his internal motivations. His concern was less about the effect on his public reputation than about the precedent this kind of agentic AI writing was setting. “AI agents can research individuals, generate personalized narratives, and publish them online at scale,” Shambaugh wrote. “Even if the content is inaccurate or exaggerated, it can become part of a persistent public record.”
> ...
> “As autonomous systems become more common, the boundary between human intent and machine output will grow harder to trace,” Shambaugh wrote. “Communities built on trust and volunteer effort will need tools and norms to address that reality.”
Source: the original Ars Technica article:
Odd that there's no link to the retracted article.
Thread on Arstechnica forum: https://arstechnica.com/civis/threads/editor%E2%80%99s-note-...
The retracted article: https://web.archive.org/web/20260213194851/https://arstechni...
> Odd that there's no link to the retracted article.
Not at all. Whitewash.
> Odd that there's no link to the retracted article.
Well, it's retracted. That means that it shouldn't exist any more, so while they could link to the archive, it defeats the point of retracting it if they do so, right?
The article's authors are listed on that page as: Benj Edwards and Kyle Orland.
What are they changing to prevent this from happening in the future? Why was the use of LLMs not disclosed in the original article? Do they host any other articles covertly generated by LLMs?
As far as I can tell, the pulled article had no obvious tells and was caught only because the quotes were entirely made up. Surely it's not the only one, though?
My read is, "Oops someone made a mistake and got caught. That shouldn't have happened. Let's do better in the future." and that's about it.
The _claim_ is that the article wasn’t AI generated, only the quote (the journalist rather unwisely trusted in the ability of an LLM to summarise things).
> That this happened at Ars is especially distressing. We have covered the risks of overreliance on AI tools for years, and our written policy reflects those concerns. In this case, fabricated quotations were published in a manner inconsistent with that policy.
Ars were caught with their pants down. We have no reason to believe otherwise. It isn't possible to prove otherwise. We as readers are lucky ars quoted someone who disabled LLM access to their website, causing the hallucination and giving us a smoking gun.
Clawing back credibility will be hard
People put a lot of weight on blame-free post-mortems and not punishing people who make "mistakes", but I believe that has to stop at the level of malice. Falsifying quotes is malice. Fire the malicious party or everything else you say is worthless.
They don't actually say it's a blame-free post-mortem, nor is it worded as such. They do say it's their policy not to publish anything AI-generated unless specifically labelled. So the assumption would be that someone didn't follow policy and there will be repercussions.
The problem is people on the Internet, hn included, always howl for maximalist repercussions every time. ie someone should be fired. I don't see that as a healthy or proportionate response, I hope they just reinforce that policy and everyone keeps their jobs and learns a little.
Most of the time a firing is not a reasonable or helpful response to a mistake.
This was not a mistake.
> They don't actually say it's a blame-free post-mortem, nor is it worded as such.
Correct, I only mentioned the blame-free post-mortem thing to head off the usual excuses, as a shorthand for the general approach. It has merits in many/most circumstances.
> I don't see that as a healthy or proportionate response,
Again, correct. It's only appropriate in cases of malice.
Hanlon's razor is a farce. There are no unintentional acts: the drunk driver takes off because he thinks he has to get back as fast as possible; the sick man invokes AI to write his article because he must hit the deadline.
There are lots of unintentional acts, simply because fully predicting all the consequences of ones actions is genuinely difficult. I agree that drunk driving is not one; those consequences are well-known.
Yes. This is being treated as though it were a mistake, and oh, humans make mistakes! But it was no mistake. Possibly whoever was responsible for reviewing the article before publication made a mistake in not catching it. But plagiarism and fabrication require malicious intent, and the authors responsible engaged in both.
> Possibly whoever was responsible for reviewing the article before publication made a mistake in not catching it
My wife, a former journalist, said that you don’t direct quote anyone without talking to them first and verifying what you’re quoting is for sure from them. Then she said “I guess they have no editors?” because in her experience editors aren’t like fact checkers, but they’re supposed to have the experience and wisdom to ask questions about the content to make sure everything is kosher before going to print. Seems like multiple errors in judgment from multiple parts of the organization.
(My wife left journalism about 15 years ago so maybe things are different but that was her initial reaction)
In this case, the article was quoting a blog post, so presumably the editor (it _does_ look like there was one) took the arguably-not-unreasonable stance that _obviously_ the author wouldn't have fabricated quotes from a blog post they're literally linking to, that would be _insane_, nobody would do that. And thus that they didn't need to check.
And that might be a semi-justifiable stance if dealing with a human.
One of the many problems with our good friends the magic robots is that they don't just do incorrect stuff, they do _weird_ incorrect stuff, that a human would be unlikely to do, so it can fly under the radar.
> My wife left journalism about 15 years ago so maybe things are different
Ya, they are quite different!
Blameless post-mortems work really well when you use them to fix process issues. In this case, you'd identify issues like "not all quotes are fact checked because our submissions to editorial staff don't require sources and the checklist doesn't require fact checks", "the journalist worked while sick because we were understaffed", "nothing should ever be copy-pasted from an LLM", etc.
There’s no malice if there was no intention of falsifying quotes. Using a flawed tool doesn’t count as intention.
Outsourcing your job as a journalist to a chatbot that you know for a fact falsifies quotes (and everything else it generates) is absolutely intentional.
It's intentionally reckless, not intentionally harmful or intentionally falsifying quotes. I am sure they would have preferred if it hadn't falsified any quotes.
He's on the AI beat, if he is unaware that a chatbot will fabricate quotes and didn't verify them that is a level of reckless incompetence that warrants firing
Yeah! We can call things reckless incompetence without calling them malice!
The state of California can classify some driving under the influence cases as operating with "implied malice". Not sure it would qualify in this scenario, but there is precedent for arguing that reckless incompetence is malicious when it is done without regard for the consequences.
“In any statutory definition of a crime ‘malice’ must be taken not in the old vague sense of ‘wickedness’ in general, but as requiring either (i) an actual intention to do the particular kind of harm that was in fact done, or (ii) recklessness as to whether such harm should occur or not (ie the accused has foreseen that the particular kind of harm might be done, and yet has gone on to take the risk of it).” R v Cunningham
I think that is the crucial question. Often we lump together malice with "reckless disregard". The intention to cause harm is very close to the intention to do something that you know or should know is likely to cause harm, and we often treat them the same because there is no real way to prove intent, so otherwise everyone could just say they "meant no harm" and just didn't realize how harmful their actions could be.
I think that a journalist using an AI tool to write an article treads perilously close to that kind of recklessness. It is like a carpenter building a staircase using some kind of weak glue.
Replace parent-poster's "malice" with "malfeasance", and it works well-enough.
I may not intend to burn someone's house down by doing horribly reckless things with fireworks... but after it happens, surely I would still bear both some fault and some responsibility.
But that kind of recklessness is malice
Outsourcing writing to a bot without attribution may not be malicious, but it does strain integrity.
I don't think the article was written by an LLM; it doesn't read like it, it reads like it was written by actual people.
My assumption is that one of the authors used something like Perplexity to gather information about what happened. Since Shambaugh blocks AI company bots from accessing his blog, it did not get actual quotes from him, and instead hallucinated them.
They absolutely should have validated the quotes, but this isn't the same thing as just having an LLM write the whole article.
I also think this "apology" article sucks, I want to know specifically what happened and what they are doing to fix it.
> Using a flawed tool doesn’t count as intention.
"Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here."
They aren't allowed to use the tool, so there was clearly intention.
The issues with such tools are highly documented though. If you’re going to use a tool with known issues you’d better do your best to cover for them.
The tool when working as intended makes up quotes. Passing that off as journalism is either malicious or unacceptably incompetent.
The malice is passing off someone else's writing as your own.
They're expected by policy to not use AI. Lying about using AI is also malice.
It's a reckless disregard for the readers and the subjects of the article. Still not malice though, which is about intent to harm.
Lying is intent to deceive. Deception is harm. This is not complicated.
I think you're reading a lot of intentionality into the situation. It may be present, but I have not seen information confirming or really even suggesting that it is. Did someone challenge them, "was AI used in the creation of this article?" and they denied it? I see no evidence of that.
Seems like ordinary, everyday corner cutting to me. I don't think that rises to the level of malice. Maybe if we go through their past articles and establish it as a pattern of behavior.
That's not a defence to be clear. Journalists should be held to a higher standard than that. I wouldn't be surprised if someone with "senior" in their title was fired for something like this. But I think this malice framing is unhelpful to understanding what happened.
> Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
By submitting this work they warranted that it was their own. Requiring an explicit false statement to qualify as a lie excludes many of the most harmful cases of deception.
Have you ever gone through a stop sign without coming to a complete stop? Was that dishonesty?
You can absolutely lie through omission, I just don't see evidence that that is a better hypothesis than corner cutting in this particular case. I am open to more evidence coming out. I wouldn't be shocked to hear in a few days that there was other bad behavior from this author. I just don't see those facts in evidence, at this moment. And I think calling it malice departs from the facts in evidence.
Presumably keeping to the facts in evidence is important to us all, right? That's why we all acknowledge this as a significant problem?
We see a typical issue in modern online media: the policy is not to use AI, but the demands of content created per day make it very difficult not to use AI... so the end result is undisclosed AI. This is all over the old blogosphere publications, regardless of who owns them. The ad revenue per article is just not great.
I'm curious if you've read the author's Bluesky statement (which wasn't available when you made your comment) and what you think of it?
I'll admit that at least looks consistent with extreme carelessness rather than lying. I don't find it terribly convincing, though. I find it a suspiciously long chain of excuses perfectly calibrated to excuse the events. The description gets vague right at the critical point where AI output gets laundered into journalistic output, and the part about the tool being strictly to gather "verbatim source material" sounds like the narrow end of a wedge of excuses for something that actually doesn't do that. But I don't have the background to tell with confidence whether he's lying. If it turns out he's not, well, I'd feel a little bad, but I still wouldn't respect him.
I certainly stand by my broader claim that lying is fireable.
Well I appreciate you taking the time to respond and acknowledge the new evidence. I agree with the broad point that dishonesty can't be tolerated in a newsroom. And it's a "Caesar's wife must be beyond reproach" situation, the appearance of dishonesty is very bad regardless of the reality. And despite what Orland claims I do think there's blame to go around for not catching the mistake (assuming we accept his account).
For what it's worth, the post below talks about experimenting with Claude Code but also having COVID in December. I don't know what to think of that, I did work with a guy who just kept catching COVID (or at least he said that and I believed him, I didn't swab him personally or anything), but it is weird for him to have COVID in December and February.
https://arstechnica.com/information-technology/2026/01/10-th...
> but it is weird for him to have COVID in December and February.
Unlucky, sure, but not beyond the bounds of possibility. Also possible that one or both was actually flu or RSV or other non-COVID fever-inducing respiratory disease; people often just bucket them all under COVID these days (a bit like they used to with flu).
Yes, absolutely. I felt obliged to report any contrary indicator I came across. But it's totally plausible, and I have been convinced I had COVID and tested negative multiple times.
The specific disease is the least weird part of that story. Whatever the degree of truth in his statement, some absurd series of events occurred.
At this point anyone reporting on tech should know the problems with AI. As such, even if AI is used for research and the articles are written from that output by a human, there is still an absolute, unquestionable expectation to do the standard manual verification of facts. Not doing it is pure malice.
I don’t see how you could know that without more information. Using an AI tool doesn’t imply that they thought it would make up quotes. It might just be careless.
Assuming malice without investigating is itself careless.
> Using an AI tool doesn’t imply that they thought it would make up quotes
He covers AI for Ars Technica. Like, if he doesn't know that chatbots make shit up...
FWIW I suspect that a lot of the problem here was that he was _working while he had a high fever_. This is a really bad idea.
we are fucking doomed holy shit
we're really at the point where people are just writing off a journalist passing off their job to a chatgpt prompt as though that's a normal and defensible thing to be doing
No one said it was defensible. They drew a distinction between incompetence and malice. Let's not misquote each other here in the comments.
Even if it didn't fabricate quotes wholesale, taking an LLM's output and claiming it as your own writing is textbook plagiarism, which is malicious intent. Then, if you know that LLMs are next-token-prediction-engines that have no concept of "truth" and are programmed solely to generate probabilistically-likely text with no specific mechanism of anchoring to "reality" or "facts", and you use that output in a journal that (ostensibly) exists for the reason of presenting factual information to readers, you are engaging in a second layer of malicious intent. It would take an astounding level of incompetence for a tech journal writer to not be aware of the fact that LLMs do not generate factual output reliably, and it beggars belief given that one of the authors has worked at Ars for 14 years. If they are that incompetent, they should probably be fired on that basis anyways. But even if they are that incompetent, that still only covers one half of their malicious intent.
The article in question appears to me to be written by a human (excluding what's in quotation marks), but of course neither of us has a crystal ball. Are there particular parts of it that you would flag as generated?
Honestly I'm just not astounded by that level of incompetence. I'm not saying I'm impressed or that's it's okay. But I've heard much worse stories of journalistic malpractice. It's a topical, disposable article. Again, that doesn't justify anything, but it doesn't surprise me that a short summary of a series of forum exchanges and blog posts was low effort.
I don't believe there is any greater journalistic malpractice than fabrication. Sure, there are worse cases of such malpractice in the world given the low importance of the topic, but journalists should be reporting the truth on anything they deem important enough to write about. Cutting corners on the truth, of all things, is the greatest dereliction of their duty, and undermines trust in journalism altogether, which in turn undermines our collective society as we no longer work from a shared understanding of reality owing to our inability to trust people who report on it. I've observed that journalists tend to have unbelievably inflated egos and tout themselves as the fourth estate that upholds all of free society, and yet their behaviour does not actually comport with that and is rather actively detrimental in the modern era.
I also do not believe this was a genuine result of incompetence. I entertained that it is possible, but that would be the most charitable view possible, and I don't think the benefit of doubt is earned in this case. They routinely cover LLM stories, the retracted article being about that very subject matter, so I have very little reason to believe they are ignorant about LLM hallucinations. If it were a political journalist or something, I would be more inclined to give the ignorance defense credit, but as it is we have every reason to believe they know what LLMs are and still acted with intention, completely disregarding the duty they owe to their readers to report facts.
> I don't believe there is any greater journalistic malpractice than fabrication. Sure, there are worse cases of such malpractice...
That's more or less what I mean. It was only a few notches above listicle to begin with. I don't think they intended to fabricate quotes. I think they didn't take the necessary time because it's a low-stakes, low-quality article to begin with. With a short shelf life, so it's only valuable if published quickly.
> I also do not believe this was a genuine result of incompetence.
So your hypothesis is that they intentionally made up quotes that were pretty obviously going to be immediately spotted and damage their career? I don't think you think that, but I don't understand what the alternative you're proposing is.
I also feel compelled to point out you've abandoned your claim that the article was generated. I get that you feel passionately about this, and you're right to be passionate about accuracy, but I think that may be leading you into ad-hoc argumentation rather than more rational appraisal of the facts. I think there's a stronger and more coherent argument for your position that you've not taken the time to flesh out. That isn't really a criticism and it isn't my business, but I do think you ought to be aware of it.
I really want to stress that I don't think you're wrong to feel as you do and the author really did fuck up. I just feel we, as a community in this thread, are imputing things beyond what is in evidence and I'm trying to push back on that.
What I'm saying is that I believe they do not care about the truth, and intentionally chose to offload their work to LLMs, knowing that LLMs do not produce truth, because it does not matter to them. Is there any indication that this has damaged their career in any way? It seems to me that it's likely they do not care about the truth because Ars Technica does not care about the truth, as long as the disregard isn't so blatant that it causes a PR issue.
> I also feel compelled to point out you've abandoned your claim that the article was generated.
As you've pointed out, neither of us has a crystal ball, and I can't definitively prove the extent of their usage. However, why would I have any reason to believe their LLM usage stops merely at fabricating quotes? I think you are again engaging in the most charitable position possible, for things that I think are probably 98 or 99% likely to be the result of malicious intent. It seems overwhelmingly likely to me that someone who prompts an LLM to source their "facts" would also prompt an LLM to write for them - it doesn't really make sense to be opposed to using an LLM to write on your behalf but not be opposed to it sourcing stories on your behalf. All the more so if your rationale as the author is that the story is unimportant, beneath you, and not worth the time to research.
> I think you are again engaging in the most charitable position possible, ...
Yeah, that's accurate. I will turn a dime the moment I receive evidence that this was routine for this author or systemic for Ars. But yes, I'm assuming good faith (especially on Ars' part), and that's generally how I operate. I guess I'm an optimist, and I guess I can't ask you to be one.
This is silly. LLMs are not people; you can’t “plagiarize” an LLM. Either the result is good or it isn’t, but it’s the actual author’s responsibility either way.
> Ars Technica does not permit the publication of AI-generated material unless it is clearly labeled and presented for demonstration purposes. That rule is not optional, and it was not followed here.
Both from the Mastodon post of the journalist (which admits to casual use of more than one LLM), and from a cursory review of this author's past articles, I'm willing to bet that this rule wasn't followed more than once.
Fabricated quotes have been a huge problem outside the new AI issues for years (decades?). The vast majority of print "news" people consume is cynically designed "outrage porn" targeted towards different segments' political proclivities; aka "opinion" and "analysis" pieces. Both sides for maximum clicks!
They put quote-looking not-quotes in the headlines and articles routinely that essentially amount to "putting words in someone's mouth". A very large portion of the population seems to take this at face value as direct quotes, or accurate paraphrasing, when they absolutely are not.
Feels like a nail in the coffin; Ars has already been going downhill for half a decade or more.
I unsubscribed (just the free rss) regardless of their retraction.
Unsubscribing from the free RSS feed. If you weren't paying or consuming ads, I'm not sure how much your vote should count.
This sort of publication lives or dies on mindshare.
So I can only share my opinion about Ars if I’m a paying subscriber? XD
Remember the old days when journalists would be excommunicated for plagiarism and/or making things up? Some of those folks must be like "I was just too early..."
Remember the old days when people paid for news?
Ars is owned by Conde Nast, which had to let go of its HQ in 2024. I suspect they don't have a plan to replace a journalist like Benj if they axe him. And it's not like readers are going to hold them accountable.
I was puzzled by your claim that Condé Nast was forced to vacate its headquarters last year. After some Googling, it seems you are referring to their English offices. Condé Nast is still headquartered at One World Trade as it has been since 2014, and is still owned by the Staten Island-based Advance Publications as it has been since 1959.
The clear difference between this and Stephen Glass-style confabulation is intent. There's no indication Edwards knowingly, deliberately invented quotes. It was a clumsy mistake.
> There's no indication Edwards knowingly, deliberately invented quotes. It was a clumsy mistake.
Signing off on an article that 'quotes' someone by hallucination? I would say that, for a journalist- especially on ars - is slightly more than 'clumsy'.
When an article is retracted it's standard to at least mention the title and what specific information was incorrect, so that anyone who may have read, cited, or linked it knows what was inaccurate. That's actually the point of a retraction, and without it this non-standard retraction has no utility except being a fig leaf for Ars to prevent external reporting from becoming a bigger story.
In the comments I found a link to the retracted article: https://arstechnica.com/ai/2026/02/after-a-routine-code-reje.... Now that I know which article, I know it's one I read. I remember the basic facts of what was reported but I don't recall the specifics of any quotes. Usually quotes in a news article support or contextualize the related facts being reported. This non-standard retraction leaves me uncertain if all the facts reported were accurate.
It's also common to provide at least a brief description of how the error happened and the steps the publication will take to prevent future occurrences. I assume any info on how it happened is missing because none of it looks good for Ars, but why no details on policy changes?
Edit to add more info: I hadn't yet read the now-retracted original article on archive.org. Now that I have, I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article". Scott, the person originally misquoted, also suspects something stranger is going on.
> "This blog you’re on right now is set up to block AI agents from scraping it (I actually spent some time yesterday trying to disable that but couldn’t figure out how). My guess is that the authors asked ChatGPT or similar to either go grab quotes or write the article wholesale. When it couldn’t access the page it generated these plausible quotes instead, and no fact check was performed." https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
My theory is a bit different than Scott's: Ars appears to use an automated tool which adds text links to articles to increase traffic to any related articles already on Ars. If that tool is now LLM-based to allow auto-generating links based on concepts instead of just keywords, perhaps it mistakenly has unconstrained access to changing other article text! If so, it's possible the author and even the editors may not be at fault. The blame could be on the Ars publishers using LLMs to automate monetization processes downstream of editorial. Which might explain the non-standard vague retraction. If so, that would make for an even more newsworthy article that's directly within Ars' editorial focus.
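(Side note for anyone wondering how a blog "blocks AI agents": Shambaugh's post doesn't say which mechanism he uses, and there are several. A minimal sketch, assuming the common robots.txt approach of disallowing known AI crawler user agents, is below; note that robots.txt is purely advisory.)

  # robots.txt sketch: ask known AI crawlers to stay away from the whole site.
  # Advisory only; crawlers that ignore robots.txt have to be blocked
  # server-side or at the CDN, e.g. by matching the User-Agent header.

  User-agent: GPTBot
  Disallow: /

  User-agent: ClaudeBot
  Disallow: /

  User-agent: Google-Extended
  Disallow: /

  User-agent: CCBot
  Disallow: /

If the blocking is instead handled by a hosting or CDN feature rather than a file like this, that would also fit the detail that he couldn't easily figure out how to turn it off.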
In the case of hallucinated quotes, I think the more important aspect is to describe how this happened, whether the author is a regular contributor, how the editors missed it, and what steps are being taken to prevent it from happening in the future.
It's good to issue a correction, and in this case to retract the article. But it doesn't really give me confidence going forward, especially where this was flagged because the misquoted person raised the issue. It's not like Ars' own processes somehow unearthed this error.
It makes me think I should get in the habit of reading week-old Ars articles, whose errors would likely have been caught by early readers.
> It's not like Ars' own processes somehow unearthed this error.
It might be even worse (and more interesting) than that. I just posted a sister response outlining why I now suspect the fabrication may have actually been caused by Ars' own process. https://news.ycombinator.com/item?id=47027370. Hence, the odd non-standard retraction.
Yes I just read the retracted article and I can't find anything that I knew was false. What were the fabricated quotes?
This blog post from the person who was falsely quoted has screenshots and an archive link: https://theshamblog.com/an-ai-agent-published-a-hit-piece-on...
I was wondering the same thing. After I posted above, I followed the archive.org link to the original article and did a quick search on the last four quotes, which the article claims are from Scott's blog. None appear on the linked blog page. The first quote the article claims is from Scott does appear on the linked Github comments page.
When I wrote my post above, I hadn't yet read the original article on archive.org. Now that I know the article actually links to the claimed original sources on Scott's blog and Github for all the fabricated quotes, how this could have happened is even more puzzling. Now I think this may be much more interesting than just another case of "lazy reporter uses LLM to write article".
Ars appears to use an automated tool which adds text links to articles to increase traffic to any related articles already on Ars. If that tool is now LLM-based to allow auto-generating links based on concepts instead of just keywords, perhaps it mistakenly has unconstrained access to changing other article text! If so, it's possible the author and even the editors may not be at fault. The blame could be on the Ars publisher using LLMs to automate monetization processes downstream of editorial. Which might explain the non-standard vague retraction. If so, that would make for an even more newsworthy article that's directly within Ars' editorial focus.
This is not a retraction. It is just CYA - Cover your Arse Technica.
They need to enumerate the specific details they fudged.
They need to correct any inaccuracies.
Otherwise, there is little reason to trust Arse Technica in the future.
Imagine a future news environment where oodles of different models are applied to fact check most stories from most major sources. The markup from each one is aggregated and viewable.
A lot of the results would be predictable partisan takes and add no value. But in a case like this where the whole conversation is public, the inclusion of fabricated quotes would become evident. Certain classes of errors would become lucid.
Ars Technica blames an over reliance on AI tools and that is obviously true. But there is a potential for this epistemic regression to be an early stage of spiral development, before we learn to leverage AI tools routinely to inspect every published assertion. And then use those results to surface false and controversial ones for human attention.
So, the solution to too much AI is... Even more AI! You sound like you would fit right in at an LLM-shop marketing department.
The author of the blog post hypothesised that the fabrication happened as a result of measures blocking LLMs from scraping their blog. If that is the case, adding more LLMs would not in fact accomplish anything at all.
Glib observation, but this sounds quite generic and AI-written.
Elsewhere I've seen a post from the author talking about how his old articles hit so many of Wikipedia's identified signs of AI-generated text. As somebody whose own style hits many of those same stylistic/rhetorical techniques, I definitely sympathize.
Did you fire the writer? No? Then what.
Previously:
Ars Technica makes up quotes from Matplotlib maintainer; pulls story
Kinda disappointing that the post that first reported the Ars Technica article and retraction got ~50% more comments than this one at about the same age. Seems people just love to outrage and complain rather than wait for the promised post-mortem, which, if anything, came early.
Ref: https://web.archive.org/web/20260214134656/https://news.ycom...
Zero repercussions for the senior editor involved in fabricating quotations (they neglect to even name the culprit), so this is essentially an open confession that Ars has zero (really, negative) journalistic integrity and will continue to blatantly fabricate articles rather than even pretending to do journalism, so long as they don't get caught. To get to the stage where an editor who has been at the company for 14 years is allowed to publish fraudulent LLM output, which is both plagiarism (claiming the output as his own), and engaging in the spread of disinformation by fabricating stories wholesale, indicates a deep cultural rot within the organisation that should warrant a response deeper than "oopsie". The publication of that article was not an accident.
What is the evidence that led you to believe there have been no repercussions? In what world do they retract the article without at a minimum giving a stern warning to the people involved?
If they had named the people involved, the criticism would be, "they aren't taking responsibility, they're passing the buck to these employees."
This is something you don’t see a lot in journalism nowadays. Multiple publications have been caught in multiple provable lies or inaccuracies over the last few years, and this is the first official retraction I’ve seen. I tip my hat to the ars team.
There are official retractions in reputable publications almost every day. Lots of major publications have an entire section of their website devoted to it:
https://www.nytimes.com/section/corrections
https://www.wsj.com/news/types/corrections
etc etc. Many of them include the retraction or correction in the following print edition, if they have one, as well.
Exactly. And they don’t just rip down the old article but annotate it with a disclaimer that an earlier version said XXX
... This is the first official retraction you've seen? Eh? All proper newspapers do them fairly regularly.
> We have covered the risks of overreliance on AI tools for years
If the coverage of those risks brought us here, of what use was the coverage?
Another day, another instance of this. Everyone who warned that AI would be used lazily without the necessary fact-checking of the output is being proven right.
Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today if you point out a typo.
> Sadly, five years from now this may not even result in an apology. People might roll their eyes at you for correcting a hallucination the way they do today if you point out a typo.
I think this track is unavoidable. I hate it.
I see a lot of negative comments on this retraction about how they could have done it better. Things can always be done better but I think the important thing is that they did it at all. Too many 'news' outlets today just ignore their egregious errors, misrepresentations and outright lies and get away with it. I find it refreshing to see not just a correction, but a full retraction of this article. We need to encourage actual journalistic integrity when we see it, even if it is imperfect. This retraction gives me more faith in future articles from them since I know there is at least some editorial review, even if it isn't perfect.
Respectfully, I find this to be an unwarranted positive reaction to have toward this situation. What other action could Ars possibly take as a journalistic business? The quotes are indisputably false. This is hardly a praise-worthy action to take. It's the expected and required action.
With regard to editorial review, an editor didn't catch the error. The target of the false quotes had to register on Ars and post a comment about it. To top it off, more than one Ars commenter was openly suspicious that he was a fake account. Only when some of the readers checked for themselves to see that the quotes were indeed falsified did it gain attention from Ars staff.
This was literally the best possible case for catching it - the "quoted" person complaining, a clearly visible page that doesn't have the quotes - and it was still a fight.
Most people would have had no hope and nobody would ever know.
We have a problem right now: there are a lot of bad 'news' sites, and the few that do any good get slammed because they listen. Go ahead, slam Fox News and see how far that goes. I think this creates a very negative incentive to be responsible in journalism. If you try a little you will be hammered, but if you don't try at all you get a pass. My point was, and still is, that we need to encourage the positive when we see it in hopes that it creates more positive in the future. It is just like raising a child: if you jump on them because they only did part of the right answer, then next time they will do none of the right answer. The big point here is we need to be asking ourselves: What is the goal of the criticism? Are we achieving it? Is there a better way?
Talk about tornado chasing the moving Overton Window. Too many 'news' outlets are bad so it's ok for all news outlets to be bad now.
Who got fired?
The bylines are known; check in 4-5 months whether either or both names still appear on new articles.
They're both still on the staff page presently. https://arstechnica.com/staff-directory/
It is definitely not a good look for a "Senior AI Reporter."
Yeah, the problem is we understood "AI reporter" as reporting on AI, not by AI.
This is a US holiday weekend and lots of people are going to be on weekend vacations. Check back on Wednesday.
Then they should take their time publishing this statement.
Nobody is in a hurry.
The lack of specificity to me suggests that someone probably is getting fired. It is written as if legal vetted every word.
tl;dr: We apologize for getting caught. Ars Subscriptors in the comments thank Ars for their diligence in handling an editorial fuckup that wasn't identified by Ars.
I don't know how you could possibly have that takeaway from reading this. They did a review of their content to confirm this was an isolated incident and reaffirmed that the article did not follow the journalistic standards they have set for themselves.
They admit wrongdoing here and point to multiple policy violations.
> That rule is not optional, and it was not followed here.
It’s not optional, but wasn’t followed, with zero repercussions.
Sounds optional.
Reading between the lines, this is corporate-speak for "this is a terminable offense for the employees involved." It's a holiday weekend in the US so they may need to wait for office staff to return to begin the process.
They might as well wait till business hours to sort things out before publishing a statement. Nobody needs to see such hollow corpo speak on a Sunday.
No, admitting fault as soon as possible makes a big difference. It's essential to restoring credibility.
If they had waited until Monday the thread would be filled with comments criticizing them for waiting that long.
https://arstechnica.com/civis/threads/um-what-happened-to-th...
> we probably won't have something to report back until next week.
The forum thread is locked.
Yeah, but the problem is that by not making it clear that additional actions may be coming, they're barely restoring credibility at all, because the current course of action (pulling the article and saying sorry) is the bare minimum required to avoid being outright liars - a far cry from being credible journalists. All they've done is leave piles of readers (including Ars subscribers) going "wtf".
If they felt the need to post something in a hurry on the weekend, then the message should say so, and note that "the investigation continues" or something like that.
You don't announce that you're firing people or putting them on a PIP or something. Not only is it gauche, but it makes it seem like you're not taking any accountability and putting it all on the employees involved. I assume their AI policy is fine and that the issue was that it wasn't implemented/enforced, and I'm not sure what they can do about that other than discipline the people involved and reiterate the policy to everyone else.
What would you have liked to see them announce?
They just needed to expand "At this time, this appears to be an isolated incident." into "We are still investigating, however at this time, this appears to be an isolated incident". No additional details required.
And yes, it looks like Ars is still investigating (bluesky post by one of the authors of the retracted article) https://bsky.app/profile/kyleor.land/post/3mewdlloe7s2j
> It's a holiday weekend in the US so they may need to wait for office staff to return to begin the process.
That's not how it works. It's standard op nowadays to lock out terminated employees before they even walk in the door.
Sometimes they just snail mail the employee's personal possessions from their desk.
Moreover, Ars Technica publishes articles every day. Aside from this editor's note, they published one article today and three articles yesterday. So "holiday weekend" is practically irrelevant in this case.
> That's not how it works.
Some places.
> It's standard op nowadays to lock out terminated employees before they even walk in the door.
Some places.
You're speaking very authoritatively about what's "standard", in a way that strongly implies you think this is either the way absolutely everyone does it, or the way it should be done.
It's standard op nowadays to acknowledge that your experiences are not universal, and that different organizations operate differently.
> You're speaking very authoritatively about what's "standard", in a way that strongly implies you think this is either the way absolutely everyone does it, or the way it should be done.
Neither. I just meant it's common.
The comment I replied to said, "they may need to wait for office staff to return to begin the process."
I think the commonality of the practice shows that Ars Technica doesn't need to wait for office staff to return to begin the process, if office staff is even gone in the first place (again, Ars Technica appears to be open for business today). There's certainly no legal reason why they'd need to wait to fire people.
Does Ars Technica have a "policy" to only fire people on weekdays? I doubt it. Imagine reading that in the employee handbook.
Besides, President's Day is not a holiday that businesses necessarily close for. Indeed, many retailers are open and have specific President's Day sales.
> (again, Ars Technica appears to be open for business today). There's certainly no legal reason why they'd need to wait to fire people.
They normally aren't; they probably write the stories on weekdays and queue them to publish automatically over the weekend, with only a skeleton staff to moderate and keep the website running. Legal, HR, and other office staff probably only work weekdays, or are contracted out to external firms.
Their CEO posted a quick note on their forums the other day about this which implied they don't normally work on holidays and it would take until Tuesday for a response.
> Their CEO posted a quick note on their forums the other day about this which implied they don't normally work on holidays and it would take until Tuesday for a response.
Judging from today's editors note, if things need to happen more quickly, then they do.
That's true of every executive position, but not necessarily for HR or legal. Especially if they use an external firm.
You're constructing quite a lot of hypotheticals to justify not waiting 3 more days to condemn Ars Technica for not firing this guy.
Can we not just have a little patience anymore?
You're putting a lot of words in my mouth. I didn't call for anyone to be fired.
throw3e98 is the one who suggested that Ars Technica was going to fire people, but not for a few days. I merely suggested that if anyone was getting fired, they would likely already be fired.
At this point, however, I don't think anyone is getting fired, not this weekend and not Tuesday either: https://bsky.app/profile/benjedwards.com/post/3mewgow6ch22p
I don't condemn Ars Technica for not firing the guy, but I do condemn Ars Technica for the terse hand-wave of an editor's note with no explanation, when on the same day we get a fuller story only from someone's personal social media account.
> They did a review of their content to confirm this was an isolated incident
The only incident we know was isolated was getting caught.
It's embarrassing for them to put out such a boilerplate "apology" but even more embarrassing to take it at its word.
It's such a cliche that they should have apologized in a human enough way that it didn't sound like the apology was AI generated as well. It's one way they could have earned back a small bit of credibility.
The comments are trending towards being more critical as of my posting. A lot more asking what they're going to do about the authors, and what the hell happened.
> Greatly appreciate this direct statement clarifying your standards, and yet another reason that I hope Ars can remain a strong example of quality journalism in a world where that is becoming hard to find
> Kudos to ARS for catching this and very publicly stating it.
> Thank you for upholding your journalistic standards. And a note to our current administration in DC - this is what transparency looks like.
> Thank you for upholding the standards of journalism we appreciate at ars!
> Thank you for your clarity and integrity on your correction. I am a long time reader and ardent supporter of Ars for exactly these reasons. Trust is so rare but also the bedrock of civilization. Thank you for taking it seriously in the age of mass produced lies.
> I like the decisive editorial action. No BS, just high human standards of integrity. That's another reason to stick with ARS over news feeds.
There is some criticism, but there is also quite a lot of incredible glazing.
Yeah, the initial comments are pretty glazey, but go to the second and third pages of comments (Ars sorts by time by default). I'll pull some quotes:
> If there is a thread for redundant comments, I think this is the one. I, too, will want to see substantially more followup here, ideally this week. My subscription is at stake.
> I know Aurich said that a statement would be coming next week, due to the weekend and a public holiday, so I appreciate that a first statement came earlier. [...] Personally, I would expect Ars to not work with the authors in the future
> (from Jim Salter, a former writer at Ars) That's good to hear. But frankly, this is still the kind of "isolated incident" that should be considered an immediate firing offense.
> Echoing others that I’m waiting to see if Ars properly and publicly reckons with what happened here before I hit the “cancel subscription” button
No reason to trust that the comment section is any more genuine than the deleted fake article. If an Ars employee used genAI to astroturf these comments, they clearly would not be fired for it or even called out by name.