The Loop of Protection: How Systems Fail People—and Why We Need to Say It Plainly

I write a lot about systems.

Cadence, documentation, reporting pathways, decision logs—the structural layout of how work actually happens. Systems decide whether people can trust what they’re told or learn to read between the lines.

Most of what I write focuses on building systems on purpose: creating clear processes, predictable rhythms, and transparent pathways that protect people from chaos, confusion, and arbitrary decision-making. But there’s a part of this conversation that tends to get skipped, and skipping it makes the rest of the conversation dishonest.

Systems have always been used to control, silence, and punish.

You can see it in policies that are enforced unevenly—formal on paper, selective in practice. You can see it in “processes” that absorb complaints and produce no meaningful outcome. You can see it in investigations structured to protect the organization first, and maybe, if there’s room, to glance at the person harmed. You can see it in communication loops designed so information flows efficiently upward but rarely back down, leaving most people with half the context and all the consequences.

None of that is an accident. It is designed that way, purposefully and precisely.

So when I talk about systems, I’m not talking about that machinery—the kind that quietly runs people through a meat grinder while insisting everything is “by the book.” I’m talking about something else entirely: systems that actually function in public, not just in theory.

A functioning system makes power visible instead of obscured. It documents decisions in ways that can’t be conveniently rewritten when a situation becomes uncomfortable or litigious. It creates conditions where people can raise concerns without immediately stepping onto a trapdoor. It doesn’t just mention whistleblower protections in a policy manual; it makes them real, traceable, and enforceable.

Real whistleblower protection is not decorative language. It is infrastructure.

That means a credible reporting pathway that doesn’t require guessing or back-channel advice. It means clearly defined steps and timelines for what happens after a report is made, and it means those steps are followed. It means documentation that is stored, preserved, and accessible enough that it cannot be quietly erased. It means retaliation is treated as a structural violation, not a personality conflict.

When those elements are missing, it isn’t just a sign of an immature system. It’s a sign of what the organization values more: quiet over truth, containment over accountability, reputation over repair.

The Möbius Reality

There’s a tension at the center of all of this that rarely gets named directly: a system must protect both the organization and the people within it.

Organizations do need protection. They need guardrails for risk, continuity plans, compliance structures, and ways to prevent individual bad decisions from sinking the entire enterprise. But when that’s the only kind of protection the system knows how to provide, the result is predictable. The structure bends to shield the logo, not the humans. The “we” in “we have to protect ourselves” turns out to be very small.

A healthy system operates more like a loop. The image I come back to is a Möbius strip—one continuous surface that appears to have two sides but doesn’t. As you move along it, what counts as the “inside” and the “outside” keeps shifting, but the material is connected.

A system designed with that kind of continuity should sometimes be protecting the organization from chaos, liability, and real threats. At other times, it should be protecting the employees, whistleblowers, and truth-tellers from retaliation, neglect, or abuse of power. The loop is only honest if it curves in both directions.

Most systems snap that loop in half.

On one side, there is a structure devoted to risk management, brand protection, and leadership insulation. On the other side, there are workers, residents, customers, or students who are told to “trust the process” while being given no real reason to do so. The gap between those sides is where harm accumulates, often quietly, until it finally becomes visible enough that it can’t be ignored.

This is where I’m going to be blunt, because it needs to be said bluntly.

I am not interested in designing systems that help abusive or evasive leadership hide behind process. I am interested in building structures that prevent people from being harmed by those systems in the first place.

Some people learn how to live inside the seams of a system. They know that vague policies and weak documentation can be used as cover. They understand that if a process is confusing enough, most people will give up before they reach the end of it. They use delay, ambiguity, and selective enforcement as tools. Calling that “miscommunication” is too generous. The system may not have been designed to protect them at first, but over time it adapts to their presence and starts to function as if it had been.

If we can’t describe that plainly, we can’t change it.

There’s a difference between systems built for order and systems built for avoidance. There’s a difference between systems designed to surface truth and those designed to manage appearances. And there is a very clear difference between systems that treat people who raise concerns as valuable sources of information, and systems that treat them as threats to be neutralized.

When organizations invite conversations about “systems design” but only want to talk about efficiency, aesthetics, or messaging, they’re skipping the structural questions that matter most. If we don’t examine where the loop of protection actually runs—who it shields first, where it stops, whose stories it records, whose it erases—then all we’re doing is refining a mechanism that may already be harming people.

Any honest conversation about systems has to include questions like these:

  • How easy is it for someone to report harm, and what happens when they do?

  • Who has the power to slow or stop that process, and how visible is their influence?

  • Where does documentation live, and who controls access to it?

  • What counts as retaliation, and what real consequences exist when it happens?

  • When the system “protects the organization,” does that ever include protecting the people whose labor makes the organization possible?

If those questions feel uncomfortable, that’s a sign the system is already doing more protection of power than protection of people.

In the end, systems are not neutral. They are collections of choices. They are values turned into steps, forms, timelines, approvals, and records. They become the real constitution of an organization, whether anyone calls it that or not.

So the point is not to pretend systems are inherently oppressive or inherently liberating. The point is to stop pretending they’re anything other than deliberate. Someone built them. Someone maintains them. Someone benefits from the way they currently work.

The work now is to decide, explicitly, who the loop of protection is going to include—and whether it will finally come back around for the people inside it.
