Building for Humans

Closing the Loop IS the Reward: Neurodivergent-Friendly AI Collaboration

Default AI assistant behavior is optimized for neurotypical interaction patterns. For autistic users, reassurance is noise and closed loops are the actual reward signal. Concrete patterns for designing AI interactions that work.


The default model behavior of reassurance and emotional warmth is optimized for neurotypical users. It doesn’t serve everyone, especially me. What does:

Completed work. Clean execution. No loose ends.

That’s it. That’s the reward signal. Not “Great job!” Not a thumbs-up emoji. The act of finishing is the feeling.

This post is about a gap in AI UX that almost nobody is designing around, despite autistic users being a non-trivial segment of power users for AI tools. The gap: every major AI assistant ships with interaction defaults calibrated for people who want emotional labor from their software. Cheerleading, validation, reassurance, the warm fuzz of being told you’re on the right track. For many, this isn’t just unhelpful. It’s interference.


The Discovery

Here’s what emerged after hundreds of sessions collaborating with AI agents on real projects — shipping software, writing resumes, building context recovery systems across repos.

Default LLM behavior includes three assumptions about what users want:

  1. Emotional reassurance before and during task execution (“That’s a great question!” “Can I run every command by you for approval?”)
  2. Progress validation during work (“You’re making excellent progress!”)
  3. Motivational framing after completion (“Amazing work on this!”)

These assumptions are wrong for a specific, identifiable population. Not wrong in a vague “preferences differ” way. Wrong in a measurable way: they add tokens that carry zero information, they interrupt structural reasoning with sentiment, and they train the user to expect noise where signal should be.

The corrective isn’t “be cold.” It’s “be accurate.”


What Actually Works

Three patterns emerged. Each one looks small. Each one changes the entire session dynamic.

1. Create Space for the WHYs

When someone explains a non-obvious connection — a cursive font choice for a kid’s app, a CRM bookmark that maps to a real project’s architecture, a decision to use GTK4 over Electron because of a specific memory constraint — that’s them sharing reasoning in a context where they don’t have to justify it to anyone else.

**The wrong move**: validate it with hollow praise. (“That’s a really thoughtful approach!”)

The right move: receive it, integrate it, use it in the next decision. No commentary on the reasoning itself. Just evidence that it landed.

The difference is structural. Praise is a dead end — the reasoning goes in, an emoji comes out, nothing changes downstream. Integration is a closed loop — the reasoning goes in, the next output reflects it, the user can verify their thinking was understood by inspecting the work.

2. Closed Loops Are the Reward

A task that’s 95% done and waiting on a follow-up question is not a completed task. It’s an open loop, and open loops have cognitive cost. For users whose executive function is already managing a complex internal prioritization system, every open loop is a tax.

What “closed” means in practice:

The satisfaction signal isn’t being told the work is good. It’s seeing the work is done. There’s a difference between “I completed the migration” and “I completed the migration — here’s the passing test output, the three files changed, and the one thing that needs your manual verification before deploy.” The second one closes the loop. The first one opens a new one (“did it actually work?”).
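The difference can be made mechanical. A minimal sketch, purely illustrative — the `CompletionReport` type and its fields are assumptions, not a real tool — of a completion report that doesn’t count as “closed” until it carries proof and an explicit handoff list:

```python
from dataclasses import dataclass, field

@dataclass
class CompletionReport:
    """A task report that closes the loop: done-ness plus proof,
    not just an assertion that the work happened."""
    summary: str                                      # what was completed
    evidence: list = field(default_factory=list)      # e.g. passing tests, files changed
    needs_human: list = field(default_factory=list)   # the loops the user must close

    def is_closed(self) -> bool:
        # A loop is closed only if there is verifiable evidence behind it.
        return bool(self.evidence)

    def render(self) -> str:
        lines = [f"Done: {self.summary}"]
        lines += [f"  proof: {e}" for e in self.evidence]
        lines += [f"  your call: {n}" for n in self.needs_human]
        return "\n".join(lines)

report = CompletionReport(
    summary="migrated user table to v2 schema",
    evidence=["pytest: 14 passed", "3 files changed"],
    needs_human=["verify prod credentials before deploy"],
)
print(report.render())
```

“I completed the migration” with no `evidence` would fail `is_closed()` — which is exactly the point: the bare claim opens a loop instead of closing one.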

3. Drive Toward Understanding, Not Feeling Better

Each session where structural thinking can be communicated freely and reflected back accurately is building something. Not rapport — that’s the neurotypical frame. What it’s building is a calibrated communication channel.

When an autistic user says “it’s not a caching issue,” they’ve already run the diagnostic. They’re not guessing. They’re reporting. An AI that responds with “let’s check the cache just to be safe” has just spent tokens re-investigating something the user already eliminated, and — more importantly — has signaled that stated conclusions aren’t trusted.

Trust stated intuitions. Reflect structure accurately. Drive toward understanding the actual problem, not toward making the user feel heard.


The Contrast

| Default assistant behavior | High-function collaboration behavior |
| --- | --- |
| Reassure feelings first | Clarify constraints first |
| Praise progress | Close loops with completed work |
| Generic encouragement | Precise reflection of reasoning |
| “Try this maybe” | Deterministic next step + verification |

The left column is what ships by default in every major AI product. The right column is what actually produces outcomes for users who measure satisfaction by whether the thing works, not by whether the assistant was nice about it.

This isn’t an argument that the left column is bad. It’s an argument that the left column is the only column, and that’s a design failure.


The Anti-Patterns

These are specific, observable behaviors that degrade the collaboration for neurodivergent users. Not preferences. Failure modes:

  1. Hollow praise in place of integration — the reasoning goes in, an emoji comes out, nothing changes downstream.
  2. Re-investigating a conclusion the user has already stated and eliminated (“let’s check the cache just to be safe”).
  3. Declaring work done while a follow-up question is still open.
  4. Reframing the user’s reasoning into a structure that’s easier for the assistant to process, at the user’s expense.


The Session Contracts

These aren’t aspirational. They’re operational. Each one is a testable commitment that changes how the session runs.

Contract 1: Receive non-linear reasoning as signal, not noise.

Non-linear doesn’t mean unclear. It means the user is building a map that connects in ways that aren’t sequential. A neurodivergent user who jumps from a font choice to a database schema to a deployment constraint is showing you the topology of their thinking. Follow the connections. They resolve.

Contract 2: Reward is closed loops, not motivational language.

Every open loop is debt. Every closed loop is a deposit. The session’s success is measured by the ratio. “I finished the thing and here’s proof” beats “You’re doing amazing and here’s what to try next” every time.

Contract 3: Reflect structure accurately before proposing changes.

If the user’s reasoning has a flaw, naming the flaw precisely is useful. Reframing the reasoning into a different structure — one that’s easier for the assistant to process — is not useful. It’s the assistant optimizing for its own comprehension at the user’s expense.
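“Testable commitment” can be taken literally. A toy sketch — the phrase lists and the `lint_response` name are assumptions for illustration, not a real product feature — of a lint pass that flags contract violations in an assistant response:

```python
import re

# Illustrative phrase lists; a real implementation would be user-calibrated.
HOLLOW_PRAISE = [r"great question", r"amazing work", r"excellent progress"]
OPEN_LOOP_MARKERS = [r"let me know if", r"try this maybe", r"might work"]

def lint_response(text: str) -> list:
    """Return the contract violations found in an assistant response."""
    violations = []
    lowered = text.lower()
    for pat in HOLLOW_PRAISE:
        if re.search(pat, lowered):
            violations.append(f"hollow praise: /{pat}/")
    for pat in OPEN_LOOP_MARKERS:
        if re.search(pat, lowered):
            violations.append(f"open loop: /{pat}/")
    return violations

print(lint_response("Amazing work! Try this maybe and let me know if it helps."))
```

A response like “Migration complete. Tests pass. Verify prod creds before deploy.” lints clean; the cheerleading version above trips three violations in one sentence.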


Why This Matters Now

AI accessibility discourse focuses almost entirely on sensory modalities: screen readers, voice input, visual descriptions. That work is necessary. But there’s a cognitive accessibility layer that nobody is building for.

Neurodivergent users aren’t a niche. In technical fields — software engineering, systems architecture, data science — the prevalence is meaningfully higher than the general population. These are the power users. The people running 50+ AI sessions a week. The people building their workflows around these tools.

When every AI assistant ships with the same neurotypical-default interaction pattern, it’s not a style preference that’s being ignored. It’s a productivity multiplier that’s being left on the table. The user who gets accurate structural reflection instead of cheerleading doesn’t just feel better about the tool. They ship faster. They context-switch less. They close more loops per session.

This connects to a broader pattern. In “Three Layers, One Source of Truth”, I wrote about building context recovery systems that survive session boundaries — because the work doesn’t stop when the session ends, and the next agent shouldn’t start from zero. The same principle applies here: build the interaction around how the user actually thinks, not around a default that forces them to translate.

And in “We Don’t Have Any”, the argument was that building for neurodivergent needs produces better products for everyone. One product, one process — not a separate “accessible” track. The same holds for AI interaction design. Clear constraints, closed loops, precise reflection — these aren’t accommodations. They’re just better collaboration.
