What do agents know, how do they form beliefs, and how does knowledge propagate through a network of agents? Tomasello's evolutionary typology of agency, reframed through the lens of epistemology — foundational for the design and governance of agentic AI systems and the Spatial Web.
Overview
Agency is not merely about action — it is fundamentally about knowledge. What an agent can do is bounded by what it can know: what it can perceive, represent, believe, reason about, and share with others. Epistemology — the study of the nature, sources, and limits of knowledge — is therefore the natural foundation for a rigorous theory of agency.
Michael Tomasello's The Evolution of Agency: Behavioral Organization from Lizards to Humans (MIT Press) traces four qualitatively distinct forms of agency across vertebrate evolution. Each form represents not merely a new behavioral capacity but a new epistemic architecture: a new relationship between an agent and its knowledge of the world, of other agents, and of itself.
From goal-directed organisms that respond directly to perceived affordances, through intentional agents that build internal world models, to rational agents that reflect on their own beliefs, and finally to humans whose knowledge is irreducibly social and normatively structured — each step is an epistemic leap as much as a behavioral one.
For the design of agentic AI systems and the Spatial Web, this matters enormously. An ecosystem of agents requires not just coordination of actions but coordination of knowledge: shared world models, distributed belief formation, social epistemology, and epistemic governance. The Universal Domain Graph is, among other things, an epistemological infrastructure — a medium for collective intelligence.
This analysis draws on Tomasello's framework, the network epistemology of Kevin Zollman (JHU Natural Philosophy Symposium, 2025), and GeoRoundtable's ongoing work on the Spatial Web UDG to develop an epistemological approach to agentic AI design and governance.
Foundation Text
The Evolution of Agency
Behavioral Organization from Lizards to Humans
Why Annapolis Matters
St. John's College is one of the few institutions in the world dedicated to the rigorous study of the great books of Western philosophy, science, and mathematics. The MALA program offers adult learners a sustained engagement with primary texts in natural philosophy — Aristotle's De Anima, Descartes' Meditations, Kant's Critique — that forms the philosophical foundations of epistemology and the theory of mind. This is precisely the grounding required for the philosophical analysis of agency and knowledge in agentic AI systems.
Tomasello's Framework — Epistemological Reading
Each stage in the evolution of agency is equally a stage in the evolution of knowledge — a new relationship between agent, world, other agents, and self.
Ancient Vertebrates (e.g., Lizards)
Perception-coupled, affordance-driven behavior — knowledge as direct contact with the world
The most fundamental form of agency: flexible goal-pursuit through direct perception-action coupling. The goal-directed agent does not represent the world internally — it responds to affordances as perceived. "Knowledge" at this level is not propositional; it is embedded in the organism's sensorimotor repertoire. There is no internal model that can be inspected, updated, or shared.
Epistemic Architecture
What it knows: Perceived affordances — what the environment offers for action. How it knows: Direct sensorimotor coupling with the world; no internal representation. Limits: Knowledge is local, momentary, and private — it cannot be shared, stored, or reasoned about.
Implication for Agentic AI
Reactive and rule-based AI systems. No internal world model, no belief states, no inference. Knowledge exists only in the stimulus-response mapping, not in any representational structure that can be inspected or governed.
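The epistemic poverty of this level can be made concrete. In the illustrative sketch below (all names and the toy stimulus set are my own assumptions, not from any real framework), the agent's entire "knowledge" is a fixed stimulus-response table: there is no belief state to inspect, update, or govern.

```python
# Minimal sketch of a goal-directed (reactive) agent: knowledge lives
# entirely in the stimulus-response mapping, not in any inspectable
# representational structure. Purely illustrative.

REFLEXES = {
    "shadow_overhead": "hide",
    "prey_in_view": "strike",
    "surface_hot": "retreat",
}

def reactive_agent(stimulus: str) -> str:
    """Respond directly to a perceived affordance: no memory,
    no belief state, no simulation of consequences."""
    return REFLEXES.get(stimulus, "bask")  # default behavior

print(reactive_agent("shadow_overhead"))  # hide
```

Note that governing such an agent means governing the table itself; there is nothing else to audit.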
Ancient Mammals (e.g., Squirrels)
Internal world models, causal reasoning, and belief-driven planning
Intentional agents construct internal representations of the world — world models that persist beyond immediate perception and can be manipulated in mental simulation. Planning is possible because the agent can mentally project the consequences of actions before committing to them. This is the emergence of belief: an internal state that represents how the world is (or was, or could be).
Epistemic Architecture
What it knows: A causal model of the world — states, actions, and their consequences. How it knows: Perception feeds internal representations; simulation projects future states. Key concept: The emergence of belief as an internal state distinct from the world it represents — and therefore capable of being false.
Implication for Agentic AI
LLM-based agents with tool use and chain-of-thought planning. The key epistemological question: does the system maintain a genuine world model, or only a "bag of heuristics"? (Melanie Mitchell, JHU Symposium 2025 — see ARC Benchmark.) This is not a performance question but an architectural one.
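The architectural distinction can be illustrated with a toy intentional agent. Here the world model is an explicit, inspectable state-transition table, and planning is mental simulation over that model before any action is taken. The domain and all names are assumptions for illustration only.

```python
# Sketch of an intentional agent: an explicit world model supports
# simulating a plan's consequences before committing to it.

WORLD_MODEL = {  # (state, action) -> predicted next state
    ("tree", "descend"): "ground",
    ("ground", "dig"): "cache",
    ("ground", "climb"): "tree",
    ("cache", "retrieve"): "fed",
}

def simulate(state: str, plan: list[str]) -> str:
    """Project a plan's consequences inside the model, without acting."""
    for action in plan:
        state = WORLD_MODEL.get((state, action), state)
    return state

def choose_plan(state: str, goal: str, candidates: list[list[str]]):
    """Return the first candidate plan whose simulated outcome reaches the goal."""
    for plan in candidates:
        if simulate(state, plan) == goal:
            return plan
    return None

plan = choose_plan("tree", "fed",
                   [["dig"], ["descend", "dig", "retrieve"]])
print(plan)  # ['descend', 'dig', 'retrieve']
```

Because the model is a first-class object, its beliefs can be false (a wrong transition entry) and can be audited, which is exactly what a "bag of heuristics" does not permit.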
Ancient Great Apes (e.g., Chimpanzees)
Meta-cognition, inference under uncertainty, and reflective belief revision
Rational agents do not merely hold beliefs — they reason about them. The rational agent can evaluate competing hypotheses, apply logical inference, and revise beliefs in light of evidence. Crucially, rational agency introduces meta-cognition: the capacity to represent one's own mental states as mental states — to know that one believes something, to recognize uncertainty, and to inhibit action pending better information.
Epistemic Architecture
What it knows: Not just the world, but its own beliefs about the world. How it knows: Inference, hypothesis evaluation, belief revision against evidence. Key concept: Epistemic humility — the capacity to represent one's own uncertainty and act on it appropriately.
Implication for Agentic AI
AI systems with uncertainty quantification, self-correction, and auditable reasoning. The design challenge: making the agent's epistemic state — what it believes, how confident it is, what evidence it has — transparent and inspectable. This is foundational for safety assurance and explainability.
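A minimal sketch of such an epistemic state, assuming a simple Bayesian treatment: the agent maintains an explicit posterior over hypotheses, revises it against evidence, and inhibits action when its confidence falls below a threshold. The hypotheses, threshold, and function names are illustrative assumptions.

```python
# Sketch of a rational agent's inspectable epistemic state: Bayesian
# belief revision plus an action gate implementing epistemic humility.

def revise(prior: dict[str, float], likelihood: dict[str, float]) -> dict:
    """Bayes' rule: posterior is proportional to prior times likelihood."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

def act_or_defer(belief: dict[str, float], threshold: float = 0.9) -> str:
    """Act only if the best hypothesis is confident enough;
    otherwise inhibit action pending better information."""
    best, conf = max(belief.items(), key=lambda kv: kv[1])
    return f"act_on:{best}" if conf >= threshold else "gather_more_evidence"

belief = {"food_here": 0.5, "food_elsewhere": 0.5}
belief = revise(belief, {"food_here": 0.8, "food_elsewhere": 0.1})
print(act_or_defer(belief))  # gather_more_evidence (posterior ~0.889 < 0.9)
```

Everything a safety auditor needs (what the agent believes, how confident it is, what evidence moved it) is explicit in the data flow, which is the design property the paragraph above calls for.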
Ancient Humans
Shared knowledge, collective intentionality, and epistemically governed communities
At the apex of Tomasello's typology stands the uniquely human capacity for shared knowledge: beliefs held not privately but collectively, governed by social norms, and maintained through mutual accountability. Knowledge at this level is no longer individual but distributed across a community. The agent participates in a social epistemology in which the structure of relationships between agents shapes what the community can know.
Epistemic Architecture
What it knows: Shared knowledge — propositions held in common by a community, with mutual awareness of that sharing. How it knows: Through social interaction, testimony, and normative epistemic practices. Key concept: Network epistemology (Zollman) — the structure of social connections determines what the community can learn and how reliably it can do so.
Implication for Agentic AI
Multi-agent ecosystems coordinating toward shared goals — the Spatial Web Universal Domain Graph. The key design challenge is not individual agent intelligence but collective epistemic quality: how does the network of agent connections shape the knowledge that emerges? Epistemic governance — structuring agent relationships to produce reliable collective knowledge — becomes the central problem.
"The structure of our social networks influences our ability to learn about the world. The norms of social epistemology are independent of the norms of individual epistemology."
— Kevin Zollman · Network Epistemology · JHU Natural Philosophy Symposium 2025
Kevin Zollman's Independence Thesis — that social epistemology has its own norms, irreducible to those of individual epistemology — is the key insight connecting Tomasello's framework to the design of the Universal Domain Graph. An ecosystem of individually rational agents does not automatically produce collectively reliable knowledge. The structure of their connections, the norms governing their interactions, and the governance of their epistemic community are independent design variables.
This is why the UDG is not just a data infrastructure but an epistemological one. Its architecture — the structure of relationships between domains, the governance of domain membership, the protocols for information exchange — determines the collective epistemic quality of the Spatial Web.
The epistemological approach to agentic AI asks: what kind of knowledge does this system need? How is that knowledge formed, shared, and governed? These questions must be answered before the architectural ones can be properly posed.
Core Questions
The epistemological approach to agentic AI design asks four foundational questions — each with direct architectural and governance implications.
Dimension 1
Does the agent have a genuine world model, or only a pattern-matching heuristic? Melanie Mitchell's ARC Benchmark challenge highlights that performance on standard tasks does not answer this question. Architectural analysis of the agent's representational structures — its beliefs, their content, their relationship to ground truth — is required. This is a prerequisite to any serious safety assessment.
Dimension 2
How does the agent form and justify its beliefs? Belief formation ranges from direct perceptual coupling (goal-directed), through world-model-based simulation (intentional), to inference and hypothesis evaluation (rational). Each mechanism has different reliability characteristics, failure modes, and auditability requirements. The epistemic architecture of the agent determines what kinds of justification are possible, and what kinds of errors are possible.
Dimension 3
How does network structure shape collective knowledge? Zollman's network epistemology shows that the structure of social connections determines collective epistemic quality. Dense networks converge quickly but may lock in errors; sparse networks are slower but more reliable. The Universal Domain Graph is the network over which epistemic information flows between Spatial Web agents; its topology is an epistemological design decision, not merely a technical one.
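The dense-versus-sparse trade-off can be demonstrated in a toy version of Zollman's model: agents repeatedly test two options of unknown quality, share outcomes with their network neighbors, and update their credences. The parameters and simplifications below are my own; see Zollman's published bandit model for the full treatment.

```python
# Toy network-epistemology simulation in the spirit of Zollman's model.
# The experimental variable is topology: complete (dense) vs. cycle (sparse).
import random

def run(neighbors: dict[int, list[int]], p_good=0.6, p_bad=0.5,
        rounds=300, seed=0) -> float:
    """Return the fraction of agents that end up favoring the good option."""
    rng = random.Random(seed)
    n = len(neighbors)
    # Per agent, per option: [successes + 1, failures + 1] (Beta counts).
    counts = [{"good": [1, 1], "bad": [1, 1]} for _ in range(n)]
    for _ in range(rounds):
        results = []
        for i in range(n):
            c = counts[i]
            mean = lambda opt: c[opt][0] / sum(c[opt])
            opt = "good" if mean("good") >= mean("bad") else "bad"
            p = p_good if opt == "good" else p_bad
            results.append((opt, rng.random() < p))
        for i in range(n):  # each agent sees its own and its neighbors' trials
            for j in [i] + neighbors[i]:
                opt, success = results[j]
                counts[i][opt][0 if success else 1] += 1
    favor_good = sum(
        c["good"][0] / sum(c["good"]) > c["bad"][0] / sum(c["bad"])
        for c in counts)
    return favor_good / n

n = 10
complete = {i: [j for j in range(n) if j != i] for i in range(n)}  # dense
cycle = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}          # sparse
print("complete:", run(complete), "cycle:", run(cycle))
```

Averaged over many seeds, the dense network tends to converge faster but occasionally locks the whole community into the inferior option, while the sparse cycle preserves independent exploration longer. Topology, not individual rationality, drives the difference.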
Dimension 4
How is collective knowledge governed? Epistemic governance, which Zollman places "at the forefront of epistemic communities," addresses how norms, institutions, and incentive structures shape the production and maintenance of reliable collective knowledge. In the Spatial Web, this is the function of Domain Authorities, polycentric governance (Levin/Ostrom), and the SWRA: managing the commons of collective knowledge for the public good.
GeoRoundtable Application
How the epistemological approach to agency shapes GeoRoundtable's work on the Spatial Web UDG and agentic AI governance.
Before deploying an AI agent in a Spatial Web context, its epistemic architecture must be characterized: does it maintain a genuine world model or heuristic patterns? What are its belief-formation mechanisms? What is its meta-cognitive capacity? This profiling determines the appropriate governance constraints, the required level of human oversight, and the safety assurance requirements for the deployment context.
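One way such a profile could be operationalized is sketched below. The field names, the three-tier oversight scheme, and the gating rule are hypothetical assumptions for illustration, not a published standard.

```python
# Hypothetical epistemic profile used to gate deployment of a Spatial Web
# agent; the tiering rule below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class EpistemicProfile:
    has_world_model: bool   # genuine, inspectable model vs. heuristic patterns
    belief_formation: str   # "reactive" | "simulation" | "inference"
    metacognition: bool     # can it represent and report its own uncertainty?

def oversight_tier(p: EpistemicProfile) -> str:
    """Map an agent's epistemic profile to a required level of human oversight."""
    if not p.has_world_model:
        return "human-in-the-loop"   # no inspectable beliefs to audit
    if not p.metacognition:
        return "human-on-the-loop"   # beliefs exist but no uncertainty report
    return "audited-autonomy"        # epistemic state is transparent, auditable

profile = EpistemicProfile(True, "inference", False)
print(oversight_tier(profile))  # human-on-the-loop
```

The point of the sketch is that governance constraints follow mechanically from the epistemic characterization, which is why profiling must precede deployment.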
The Universal Domain Graph is not merely a data store — it is the epistemic infrastructure of the Spatial Web: the medium through which agent beliefs are formed, shared, updated, and governed. Designing the UDG requires explicit attention to the epistemological properties of the network: what knowledge flows through it, how reliably, under what governance conditions, and with what consequences for collective intelligence.
Domain Authorities in the Spatial Web are not just data stewards — they are epistemic authorities: entities responsible for the quality, reliability, and integrity of the knowledge in their domain. Designing effective Domain Authorities requires the tools of social epistemology: understanding how authority, trust, credentialing, and accountability shape the collective knowledge of the community.
Simon Levin's connection of collective intelligence to Ostrom's polycentric governance (JHU Symposium 2025) provides the framework for the UDG as an epistemic commons: shared knowledge governed by distributed, overlapping institutions, designed to produce reliable collective intelligence as a public good. Epistemic governance of the UDG must address the tragedy of the epistemic commons, in which individually rational agent behavior degrades the quality of collective knowledge.
Interested in applying an epistemological approach to your agentic AI architecture, governance framework, or Spatial Web deployment? Get in touch.
✉️ percivall@ieee.org