The big story

A lot surrounding AI has changed since I first started covering the technology 12 years ago. One thing that hasn’t is the lazy, meaningless default to the phrase “human in the loop.”

Here’s a scenario that happens all too regularly: I’m chatting with an executive about AI transformation. The conversation nears the topic of job losses, or of ensuring AI systems perform as intended. “And of course, you have to have a human in the loop,” the executive says, without any elaboration, before moving right along to talking points they can spin more positively. You may have seen this yourself in panels, podcast interviews, or casual conversation.

Tech and business leaders constantly invoke this phrase to signal responsibility. Of course we won’t let the AI run wild — there will be a human in the loop. AI will change the workforce but there will still be plenty of jobs, because we’ll need to have humans in the loop. But they do little to define what this means or what it should look like in practice.

For years, “human in the loop” has functioned like a rhetorical get-out-of-jail-free card, but the gap between what this phrase offers and the nuance that’s needed is growing dangerously wide. As AI systems, including increasingly autonomous agents, proliferate throughout every facet of society, “human in the loop” is flattening vital accountability discussions, jeopardizing our ability to discuss and understand where oversight truly lies.

Anthropic’s recent red lines with the Pentagon offer a prime example of this gap and why the finer details of “human in the loop” are often mission-critical. Anthropic stood firm on its stance that it wouldn’t allow its technology to be used for domestic surveillance or fully autonomous weapons, but it would permit its use for military targeting where a human makes the final decision. Military experts, however, have been sounding the alarm about human operators cognitively offloading decisions to AI, spending mere seconds “rubber-stamping” AI-suggested target recommendations rather than meaningfully reviewing them. How that human involvement actually plays out raises serious questions about how meaningful Anthropic’s red line really is in practice. There may technically be a human in the loop, but is it in a way that actually matters?

Whenever “human in the loop” is invoked, we all have a responsibility to go deeper. Where in the loop? Why there? What kind of expertise does the human need? What specifically will they do? When will they be needed? What does the human need to do in case of X, Y, or Z? How do we design the process to ensure the human actually has the intended impact?

The good news is that I’m starting to see a shift. In a few recent conversations, folks emphasized these very same concerns about “human in the loop.” As powerful companies, government entities, and beyond turn to AI for an increasing number of uses, the stakes are becoming glaringly clear. A token human, a theoretical loop, and a reflexive catch-all phrase aren’t going to cut it anymore.

New feature

‘The ChatGPT Symptom Spiral’

For The Atlantic, I wrote about a particular kind of ChatGPT spiral: how easily I know I could’ve fallen into it myself, and one person who did.

Anyone going to chatbots for health research or support is at risk of being taken down a rabbit hole, and for those with health anxiety or OCD, AI is “a perfect storm.”

WIP

How has AI fundamentally changed how your business operates?

I’m looking to dive deep into what this has looked like at a few companies. The more specific and unexpected, the better! To be clear, I'm interested in companies' own operations, not how their AI product offerings facilitate such change.

Get in touch at [email protected]

Keep Reading