
Homo Theoreticus and Organizational Security Strategy

[Image: a hand moving a wooden chess piece across a board, symbolizing careful strategy and decision-making.]

We Build (and Rebuild) Models of the World, Even for Security Strategies

The international relations scholars John Mearsheimer and Sebastian Rosato argue that we are theory-making people – homo theoreticus. In their view, rational decision-makers do not simply maximize utility by running cost-benefit formulas (homo economicus), nor do they merely default to mental shortcuts (homo heuristicus). Instead, they make rational choices by relying on credible theories – theories that rest on realistic assumptions, tell a logically consistent causal story, and are supported by substantial evidence. At the state level, the authors add one more condition: the decision must also emerge from a deliberative process in which different views are aired and a final policy is decided upon.

Of course, the merits and even the definition of “rational” are heavily contested, but the authors’ framework still gives us useful analytical leverage. In the use case discussed here – private security – it offers a simple model for organizations building an overarching security strategy; it gives them a way to ask how “rational,” in this sense, the model guiding their strategic decisions actually is.

Before scaling these conditions to the organizational reality of a private sector entity, it is worth noting why an atypical framework is useful here at all. A lot of organizational security guidance – from government to private industry frameworks – does an excellent job of telling practitioners what to do: assess your facility, train your staff, ameliorate your vulnerabilities, document your decisions. That prescriptive literature has real value, particularly for organizations building large programs from scratch. What it does not always explain, however, is why a given intervention should produce a given outcome. The theory-making framework borrowed here from Mearsheimer and Rosato is useful precisely because it asks that question: is the causal logic actually credible before we act on it?

The Four Conditions, Scaled to Organizational Reality

  • Realistic Assumptions

It is reasonable for a corporation to make revenue projections and apportion a security budget against those forecasts, or for a house of worship to assume the Nonprofit Security Grant Program will exist in the near term. These are assumptions grounded in observable reality, not wishful thinking.

  • Logically Consistent Causal Story

A causal model in this space should read: If A, then B, because X. For example: if we establish a quarterly organizational security forum, then security activity across the organization will become more unified, because those forums create recurring opportunities for stakeholders to share information, align on priorities, and deliberate.
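As a purely illustrative sketch – the structure below is mine, not Mearsheimer and Rosato’s – the “If A, then B, because X” template can be captured as a small data structure that forces each element to be stated explicitly, making a missing mechanism immediately visible:

```python
from dataclasses import dataclass


@dataclass
class CausalClaim:
    """A hypothetical 'If A, then B, because X' statement.

    intervention: A -- the action the organization takes
    outcome:      B -- the change the action is expected to produce
    mechanism:    X -- why the action should produce the change
    """
    intervention: str
    outcome: str
    mechanism: str

    def is_complete(self) -> bool:
        # A claim with no stated mechanism is an assertion, not a theory.
        return all(s.strip() for s in (self.intervention, self.outcome, self.mechanism))


# The security-forum example from the text, fully specified:
forum = CausalClaim(
    intervention="establish a quarterly organizational security forum",
    outcome="security activity across the organization becomes more unified",
    mechanism="recurring forums let stakeholders share information, "
              "align on priorities, and deliberate",
)

# A claim whose mechanism is asserted but never explained:
assertion_only = CausalClaim(
    intervention="install more cameras",
    outcome="less misconduct",
    mechanism="",
)

print(forum.is_complete())           # True
print(assertion_only.is_complete())  # False
```

Nothing about the structure makes a claim true, of course; it only ensures the causal story is stated completely enough to be examined.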

  • Substantial Evidence (Scaled to Reality)

Even the largest organizations do not possess the intelligence apparatuses of a state. Here, “substantial” looks more like a converging pattern: your own incident history, local crime and threat data, geopolitical considerations, sector-specific cases, insurance signals, and the lived experience of frontline staff. When several of these sources point in the same direction, organizations generally have the evidence to treat a security model as credible – even while recognizing that the picture is incomplete and will need to be updated.
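To make the “converging pattern” idea concrete, here is a minimal sketch; the source names and the two-thirds threshold are assumptions chosen for illustration, not a standard drawn from the article:

```python
def evidence_converges(signals: dict[str, bool], threshold: float = 2 / 3) -> bool:
    """Return True when enough independent sources point the same way.

    signals maps an evidence source (e.g. 'own incident history') to
    whether it supports the security model under consideration.
    The two-thirds threshold is an arbitrary illustrative cutoff.
    """
    if not signals:
        return False
    supporting = sum(signals.values())
    return supporting / len(signals) >= threshold


signals = {
    "own incident history": True,
    "local crime and threat data": True,
    "sector-specific cases": True,
    "insurance signals": False,
    "frontline staff experience": True,
}

print(evidence_converges(signals))  # True: 4 of 5 sources align
```

The point of the exercise is the discipline, not the arithmetic: forcing each evidence source into the tally makes it obvious when a decision rests on one source rather than a converging pattern.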

  • Deliberative Process (Scaled to Reality)

In this case, a deliberative process does not require a massive bureaucracy. It can be as simple as a senior leader intentionally pulling in dissenting views, asking “What are we missing?”, and seeking input from people who see the problem from a different angle. The literature on biases and decision-making blind spots is voluminous, so there is no need to rehearse it all here. It is enough to recognize that none of us are immune: even the retired Special Forces officers in the Convoy Group office bring big decisions to the broader group for debate and scrutiny before moving forward.

“Rational” Security Decisions Are Much Harder in Practice

I like the theory-making, rational versus nonrational model that Mearsheimer and Rosato put forward. It is simple enough to conceptualize, and pointed enough to have an application that extends beyond international relations – which is why I decided to use it as a frame for decision-making well removed from state-level analysis. That being said, the model is not an elegant one-to-one map for our private-sector, organizational security use case. We must start with the premise that security decision-makers generally want to pursue rational strategies – realistic assumptions, a coherent causal story, evidence scaled to reality, and some form of deliberation. But we must also recognize the myriad perverse incentives, biases, and hubris that pressure decision-makers to adopt noncredible theories and make nonrational choices.

  • Unrealistic Assumptions

A nonprofit could assume that prioritizing investment in one security pillar – say, infrastructure hardening – provides a net increase in safety while downplaying staff training and awareness. That is an unrealistic assumption. As NIST audits and many after-action reports have shown, physical security infrastructure is often defeated through human error; intercoms and vestibules are meaningless if frontline staff simply buzz people into the building.

  • Illogical Causal Stories

Leadership might decide that simply installing more security cameras will significantly reduce misconduct and external threats because cameras are a deterrent. On its face, that sounds plausible, but it ignores how quickly cameras fade into the background, how little time most people have to actually monitor feeds, and what the available data say about the deterrent effect of cameras across different environments. A causal story that assumes cameras, by themselves, will drive behavior change is unrealistic absent changes in supervision, reporting, and consequences – to say nothing of the environment in which the cameras are used. In this case, the mechanism linking “more cameras” to “less risk” is asserted, not explained – which is exactly the kind of causal story this framework would flag as noncredible.

  • Sparse Evidence

A single vivid news story about an attack on a similar-looking facility can dominate a board discussion, while years of uneventful local experience, stable crime data, and sector-wide statistics are ignored. If the organization then rushes to hire armed guards primarily because it believes armed guards automatically equal reduced risk, it is acting on availability bias, not a converging pattern of evidence. Acting on one vivid headline is a mental shortcut, not a theory. The error metastasizes when that heuristic hardens into a core assumption. Decisions should not default to untethered assumptions; rational choices marshal the available evidence to challenge assumptions and inform decisions.

  • Unexamined Decision-Making

A senior leader might take a vendor call, hear a compelling pitch about the contemporary threat landscape from someone with a financial stake in embellishing the risk, and unilaterally decide to move forward without consulting experts, peers, staff, or the board. There is no structured discussion of tradeoffs, no testing of assumptions against the organization’s mission and culture, and no independent validation of the vendor’s claims. Even if the outcome occasionally turns out fine, the organization has made no attempt to meet the deliberative standard that makes a choice rational in this adapted model’s terms.

Stress Testing the Theory Underpinning Our Security Strategies

Mearsheimer and Rosato’s framework is not a universal diagnostic tool, but I think it is a useful one given the ubiquitously prescriptive private security literature. It gives organizational security decision-makers – and the consultants who advise them – a simple, structured way to stress-test the theories underlying their strategies. Are the assumptions realistic? Does the causal logic hold? Is the evidence more than anecdote? Has anyone pushed back? Ultimately, the goal here is to ground security programs in credible foundations, and to reduce the potentially negative impact of instinct, vendor pitches, and availability bias. An organization that works through these questions seriously will typically arrive at a security strategy built around a small number of clearly stated propositions: (1) which threats are plausible given their actual environment, (2) which interventions have demonstrated effects on those specific threats, and (3) which organizational conditions make those interventions work or fail. That is a fundamentally different product than a compliance checklist. It is also much harder to think through and build.