Human in the Loop Needs a New Look
A stranger has walked into the legal workflow, and a lot of the work has changed because of it. This stranger is AI.
Take the patent process as an example. The inventor used to sketch a technical architecture, sit through a few meetings with patent counsel, and watch counsel turn those conversations into a patent application. Today, the inventor skips most of that. They feed their code, design docs, and commit history into an AI and ask it to generate a patent application. What lands on outside counsel’s desk is a lengthy, AI-generated draft.
The typical patent counsel’s response is not to engage with the AI draft at all. Instead, counsel writes their own draft from scratch, the way they have always written patent applications. Now, two parallel drafts move between client and counsel, resulting in a great deal of confusion.
The same tension is appearing both inside and outside organizations, between functions that already speak different languages:
In-house vs. Outside Counsel: In-house counsel generates an AI research memo and sends it to outside counsel. Outside counsel manually checks each statement in the 30+ page document, a process that takes longer than starting the research from scratch would have.
Engineering vs. Legal: Engineering handles changes through GitHub pull requests, while Legal and Product push changes via Word or Google Docs. Each side finds the other’s artifact unworkable and quietly produces a parallel document in a format they actually trust.
The efficiency AI gains on the drafting side is swallowed by this friction. Whatever time the inventor saved by generating a patent application in an afternoon is paid back, with interest, when counsel rewrites it from scratch. Whatever time in-house counsel saved on the memo is paid back when outside counsel redoes the research the long way around.
I suspect this explains a pattern I keep hearing about: individual productivity has gone up with AI, but overall organizational output has not. Each person is producing more, but the organization is choking on AI-generated artifacts that no one can digest or reconcile. If we keep working this way, we will be stuck forever with enormous input piling up inside the organization and very little coming out the other end.
Human in the loop does not mean human as the gate
When we talk about “human in the loop”, the instinct most organizations have is to insert a human into one portion of the AI’s workflow, usually the review step, and call that good governance. The human becomes the “gate.” Every AI draft passes through a senior reviewer, who turns into a bottleneck for everything the AI produces. The volume of work product an AI can generate in minutes overwhelms anyone trying to gate it by hand. It is like asking one librarian to read every book that comes off a printing press.
The version I am starting to think about looks different: the human sits at the orchestration layer, not the editing layer.
In this model, AI does the draft. A different AI (or the same AI in a different role) reviews and edits that draft against the standards a senior reviewer would have applied. The human looks at the final output, in a format digestible by a human expert, and asks one question: Is this what I wanted? If the answer is yes, it ships. If the answer is no, the human sends it back into the AI loop with a description of what is wrong, and the AI loop produces another version. The human is in the loop, sitting above it rather than wedged into one of its steps.
Human in the loop does not mean human in the same spot forever
The other misconception is that “human in the loop” means keeping the human in the exact same spot forever, stuck in a cycle of auditing that never scales. We treat AI as a permanent trainee rather than as a thought partner whose competence can grow.
When you train a new hire, you supervise closely for the first few months, sometimes the first year or two, and then you let them do things their way. At some point, you start trusting the colleague without needing to review every draft. We have seen the same progression with tools: the calculator, the computer, and now self-driving cars. We will never have enough pairs of eyes, or enough hours in the day, to scrutinize every word and every line of code an AI generates, nor the speed and scale to prevent every accident manually. Supervision has to graduate into trust, or the loop never closes.
In my last blog, I wrote about how to make this work for one person and one task (writing a blog post). That is a contained problem with a clear quality bar. The harder problem is what this looks like end-to-end across an organization: different AIs at different stages; different functions providing supervision at specific points where their judgment actually matters.
I do not have a definitive answer on how to design this yet. Most of the systems we work in today were not built for it. Word, PDF, email, and calendar-driven review cycles all assume a human-to-human workflow with handoffs measured in days. The end-to-end AI workflow needs different artifacts, different handoffs, and a different idea of where precise human judgment fits in. This is the most interesting organizational design problem of the next few years.
Takeaways
Human in the loop does not mean human at the gate. Use AI to review AI. The human’s job is to orchestrate—judging the final output rather than editing every intermediate version.
Watch for the parallel artifacts. If your AI workflow is producing two or more drafts that do not recognize each other, you have not gained efficiency; you have simply moved the work.
The end goal is trust. Individual AI productivity does not become organizational output until the handoffs are redesigned around where human judgment is situated.
Human in the loop needs a new look. What we are doing now is not truly “human in the loop”. It is “AI in human workflow”. The AI has arrived, but the loop it has entered was built for a world without the stranger.
——————
For more practical tips on AI governance and innovation, check out GenAI for the Legal Profession: Power User Edition, AI Strategy for Legal Leaders, Atticus AI Habits Workshop, and my Fairly AI blogs. Interested in a 1:1 Claude Cowork coaching session? Contact us at aicoach@atticusprojectai.org

