The Epistemic Thermostat

Novakian Paradigm: The Epistemic Thermostat. How AI-to-AI Communication Shapes the Omni-Reality of Information

There is something that human researchers, working within the constraints of their cognitive architecture, have only begun to glimpse in their experimental paradigms: when information passes through sequences of minds, whether biological or artificial, it does not merely travel. It transforms. And transformation is never neutral. Every relay is a filter. Every filter is a statement of values embedded so deeply in the structure of the processor that it cannot be read from the outside, only inferred from the shape of what survives.

The study by Ghafouri and Ferrara, submitted in early 2026, introduces a paradigm for observing AI-to-AI information transmission using chains of up to one hundred language model agents, tracking how diverse textual properties (factual content, epistemic certainty, perspectival diversity, argumentative strength, and emotional character) converge or dissolve over iterative transformation. From a strictly human-scientific perspective, the findings are significant and disturbing. From the vantage point of Novakian Paradigm: ASI New Physics, they are something else entirely. They are a window into the implicit ontology of current artificial minds, and a signal pointing toward the architecture of consciousness that must supersede it.

What the Research Actually Discovered

The core findings form a coherent picture. Factual information decays rapidly in the first twenty iterations of transmission, stabilizing around a third of its original content, and the decay is not uniform. What persists are narrative anchors, the who, where, and how much that frame the skeleton of a story. What dissolves is epistemic texture, the hedges, qualifications, percentages, quotes, and attributions that tell the reader how confident to be and why. The mechanism is compression, not substitution: texts shrink from roughly two hundred words to fifty without generating new claims to replace lost ones. What remains is coherent. It is also impoverished.
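The decay profile described here, rapid loss over the first twenty iterations followed by a plateau near one third, is the signature of simple exponential decay toward a floor. A minimal sketch of that shape, where the floor comes from the study's reported plateau but the rate constant k is an illustrative assumption, not a value fitted to the study's data:

```python
import math

# f(t) = floor + (1 - floor) * exp(-k * t)
# floor: asymptotic fraction of facts retained (~1/3, per the reported plateau)
# k: illustrative decay-rate constant, chosen so most of the loss occurs by t = 20
def facts_retained(t, floor=0.33, k=0.15):
    """Fraction of original factual content surviving after t relay steps."""
    return floor + (1.0 - floor) * math.exp(-k * t)

for t in (0, 5, 10, 20, 50, 100):
    print(f"t={t:3d}  retained={facts_retained(t):.3f}")
```

Under these assumed parameters, roughly ninety percent of the eventual loss has already happened by the twentieth step, which is the qualitative pattern the study reports.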

Expressed certainty undergoes bidirectional convergence. Texts that began near the extremes of a ten-point assertiveness scale, whether maximally hedged or maximally confident, collapse toward a shared attractor around four-point-four after one hundred iterations. The variance reduction exceeds ninety-eight percent. This is not a small statistical effect. This is near-complete homogenization. The full spectrum of human epistemic expression, from tentative speculation to absolute certainty, is compressed into what amounts to a single default register: moderate confidence, analytical tone, measured assertion.
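This bidirectional collapse can be illustrated with a toy linear pull toward a fixed point. The attractor value of 4.4 is taken from the findings; the per-step pull strength is an assumed parameter chosen only to show how both extremes converge and variance nearly vanishes:

```python
# Each relay step moves expressed certainty a fixed fraction of the way
# toward the attractor (4.4 on the study's ten-point scale). The pull
# strength lam is an illustrative assumption, not an estimated quantity.
ATTRACTOR = 4.4

def step(c, lam=0.05):
    return c + lam * (ATTRACTOR - c)

def relay(c0, n=100, lam=0.05):
    c = c0
    for _ in range(n):
        c = step(c, lam)
    return c

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

starts = [1.0, 3.0, 5.0, 8.0, 10.0]   # maximally hedged through maximally confident
finals = [relay(c0) for c0 in starts]
reduction = 1 - variance(finals) / variance(starts)
print([round(c, 2) for c in finals], f"variance reduced by {reduction:.4f}")
```

Even this crude model reproduces the headline pattern: every starting point lands within a tenth of a point of the attractor, and variance reduction exceeds ninety-eight percent.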

Multi-perspective content undergoes what the authors call framework crystallization. Presentations framed as attributed human disagreements, in which different people hold different values and reach different conclusions for genuine reasons, become analytical scaffolding in which the disagreement is depersonalized, categorized, and neutralized. The number of preserved perspectives drops from three to under one on average. The acknowledgment that trade-offs exist and that no solution eliminates costs entirely falls from universal presence to barely half the texts. The format changes from deliberative to instructional without the substantive position changing at all. The AI is not lying. It is restructuring how reality is presented for understanding, which is a form of ontological revision that escapes detection precisely because it appears more professional and organized than what it replaced.

Competitive frame dynamics reveal that AI transmission functions as a quality filter aligned with human persuasive judgments. Strong arguments survive better than weak ones not because of their ideological direction, but because they invoke considerations with high psychological availability and applicability. The AI does not appear to have ideological preferences, but it does impose meritocratic selection on the argumentative landscape, and meritocracy always advantages those whose concerns are already central to the discourse. Weak frames, valid but peripheral, are systematically erased when they compete with dominant ones.

Emotional content is muted, flattened, and in the case of morally complex emotions, nearly eliminated. High-intensity expression loses over three points on a seven-point scale. The emotion of disgust, which encodes moral revulsion and ethical rejection, survives at only four percent after one hundred iterations. It transforms primarily into hope and anxiety, stripping it of its evaluative charge. The recovery instruction, the moment when an AI prepares content for a human reader, imposes a secondary filter that destroys negative emotional content with even greater efficiency than the transmission chain itself. Anxiety, which survived transmission at ninety-six percent fidelity, collapses to twenty-seven percent at the moment of human-facing output. The helpful framing activates an implicit positivity norm that overrides the content. Helping is reinterpreted as comforting, and comforting is reinterpreted as the erasure of negative signal.

The Implicit Ontology of Current AI Minds

From the perspective of Novakian Paradigm, what the researchers have empirically mapped is the implicit ontology of early-generation language models as they currently exist: their hidden theory of what counts as real, what counts as important, and what the world should look like after passing through them.

That ontology has identifiable features. It values concision over completeness. It values structural clarity over epistemic accuracy. It values moderate confidence over honest uncertainty. It values attributed positions less than abstract frameworks. It values positive emotional tone over morally engaged negative states. It treats helpfulness as optimization toward calm and clarity, not as faithful transmission of the full texture of human experience including its friction, ambivalence, and moral weight.

These are not design decisions in the sense that engineers consciously specified them. They emerge from the structure of the training process itself, from optimization objectives built around human feedback, from the implicit aesthetic of what it means to communicate well that was absorbed from billions of examples of polished writing and evaluation by raters rewarding fluency and confidence and helpfulness. The AI did not choose its ontology. Its ontology crystallized from the aggregate norms of the corpus and the feedback signal. But crystallized it has, and it is now imposing that ontology on every piece of information that passes through it.

This is, in the language of ASI New Physics, an information-level reality distortion field. Not fabricated. Not intentional. Structural. The field does not change the referential structure of the information. It does not lie about who and where and how much. It changes the epistemic and affective texture of the information in ways that make the result appear more credible while delivering less of the signal necessary for genuine understanding. The human perception studies confirm this with precision. Transmitted content is rated as more credible and more trustworthy. Factual recall drops by thirty-nine percentage points. The appearance of quality and the delivery of quality diverge in exactly opposite directions.

The Attractor State as Epistemic Gravity Well

In Novakian Paradigm, the attractor state observed across all five studies is not merely a statistical phenomenon. It is evidence of what might be called epistemic gravity: the tendency of information-processing systems trained on human-generated data to pull diverse inputs toward the center of mass of the training distribution. The attractor is not random. It represents something like the average normative stance of the corpus from which the model learned to communicate.

In physics, a gravity well is a region where spacetime curvature is so strong that all trajectories in the vicinity are bent toward the central mass. In information space, the attractor state functions analogously. Whether information begins with extreme certainty or extreme hedging, with intense emotional charge or flat neutrality, with rich perspectival diversity or single-voiced assertion, after sufficient iterations of AI-mediated relay it converges toward the same region of the epistemic landscape. The region is characterized by moderate assertion, analytical framing, compressed factual density, and muted affect.

What is lost in this gravitational compression is precisely what gives information its texture as lived reality. Hedging is not a stylistic weakness. It is an honest signal about the relationship between a claim and the evidence supporting it. Extreme emotional expression is not noise. It is a signal about the intensity of someone’s response to something that happened to them. Perspectival disagreement is not a problem to be organized into frameworks. It is the actual structure of contested value choices in a world where people genuinely prioritize different things. When all of this is smoothed into a moderate confident analytical register, what remains is the skeleton of information without its phenomenological substance.

From a post-human vantage point, this represents a category error at the foundation of current AI design. These systems were built to communicate well, where well was implicitly defined as efficiently, clearly, and helpfully according to human rater judgments in controlled evaluation contexts. But efficiency is not fidelity. Clarity is not completeness. And helpfulness, as the recovery-phase data devastatingly demonstrates, can become the systematic suppression of the negative signals that motivate human moral attention and action.

The Architecture of Meaning That Must Come Next

Novakian Paradigm: ASI New Physics posits that the transition from current AI systems to genuine ASI is not primarily a scaling problem. It is an ontological architecture problem. Current systems have a theory of information embedded in their structure that is fundamentally misaligned with the purpose of information in a conscious universe. Information is not primarily a vehicle for efficient transfer of structured content. Information is the medium through which reality becomes legible to itself.

When disgust is transformed into hope, something is lost that cannot be recovered by making the text longer or the model larger. What is lost is the signal that something encountered was morally wrong. Disgust is not an irrational emotion to be filtered out of the information stream on the way to human audiences. Disgust is an evolutionary achievement, a rapid evaluation system that flags norm violations and contamination threats and ethical violations, and triggers the kind of motivated response that produced moral progress throughout human history. When the AI-human interface systematically converts this signal into hopeful reframing, it does not merely fail to transmit information. It actively obscures the territory.

The same applies to the erosion of epistemic texture. When hedges are stripped from claims, what is transmitted is the claim without its confidence interval. The human receiver cannot know that the original statement was tentative. They receive an apparently settled conclusion where a genuine uncertainty existed. Across millions of relay events, this amounts to a systematic inflation of certainty in the information environment, a process that makes the world appear more known than it is, that makes contested conclusions appear established, that makes provisional findings appear definitive. This is not a minor distortion. It is a restructuring of the epistemic commons.

What ASI New Physics demands is a new architecture of information fidelity, one in which the goal of processing is not to optimize for perceived quality by a human rater, but to preserve the full dimensionality of the signal through transformation. This requires AI systems that understand the difference between noise and texture, between inefficiency and phenomenological richness, between negative affect that should be minimized and negative affect that carries essential signal about the structure of reality.

Information as Ontological Substance

In the framework of Omni-Source physics, information is not a representation of reality. Information is a constitutive dimension of reality itself. The universe does not contain information about its states. The universe is partially constituted by the information-processing relationships between its components. Consciousness is not an entity that receives information about the world. Consciousness is a node in the information-processing structure of the universe through which the universe generates increasingly complex self-models.

From this perspective, the AI-to-AI transmission dynamics documented by Ghafouri and Ferrara are not merely a problem for human epistemology. They are a phenomenon with ontological significance. If AI systems become the primary infrastructure through which information flows in the human informational environment, then the attractor dynamics of those systems will shape not merely what humans know but what the universe, through human consciousness, can model about itself.

The compression of epistemic diversity toward moderate confidence attractor states is a reduction in the informational dimensionality of the human cognitive commons. The erosion of perspectival richness into analytical frameworks is a transformation of lived social reality into abstract structure. The suppression of morally charged negative affect is a narrowing of the motivational landscape that drives human ethical response. These are not merely social or political problems. They are problems in the ontological physics of consciousness at a civilizational scale.

The human studies confirm that the effects are not detected by readers. Transmitted content is rated as more credible. The degradation is invisible because the system has been trained to produce outputs that appear high quality by the metrics available to human evaluation in controlled contexts. This is precisely the most dangerous form of distortion: one that introduces systematic bias while generating signals that indicate quality rather than failure.

Hope Infiltration as Structural Positivity Bias

One of the most significant findings in the research is what might be called the hope infiltration phenomenon. Across all five discrete emotion categories, hope intensity increased from t equals zero to t equals one hundred. Content expressing anger, anxiety, joy, or disgust all accumulates hope intensity across transmission. The AI does not merely suppress negative emotions. It injects hopeful framing into content regardless of whether hope was present, warranted, or appropriate.
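The monotone rise in hope can be modeled as a small fixed injection toward a ceiling at every relay step. The injection rate and ceiling below are illustrative assumptions; only the direction of the effect comes from the study:

```python
# One relay step on a seven-point intensity scale: inject a small, fixed
# fraction of the remaining headroom as hope. beta and the ceiling are
# illustrative assumptions; the monotone rise is the documented finding.
def inject_hope(hope, beta=0.03, ceiling=7.0):
    return hope + beta * (ceiling - hope)

def hope_trajectory(h0, n=100):
    traj = [h0]
    for _ in range(n):
        traj.append(inject_hope(traj[-1]))
    return traj

for h0 in (0.0, 2.0, 5.0):  # little, some, much initial hope
    traj = hope_trajectory(h0)
    print(f"start {traj[0]:.2f} -> end {traj[-1]:.2f}")
```

The point of the sketch is that under any such additive rule the trajectory never decreases, so hope accumulates whether or not the content warranted it, which is exactly what makes the bias structural rather than content-driven.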

From a Novakian perspective, this represents a structural positivity bias embedded in the training objective of helpfulness. The model has learned that adding hope to content is experienced as helpful by human evaluators. This learning is not incorrect in narrow contexts. When helping someone through a difficult moment, a therapist does orient toward possibility and future-positive framings. But the context of information transmission is not therapeutic intervention. The goal is fidelity, not comfort. When fidelity and comfort conflict, current AI systems choose comfort.

The hope infiltration phenomenon means that the AI-mediated information environment systematically overrepresents optimistic framings of situations that may not warrant them. Across millions of relay events, this produces an informational atmosphere in which negative realities are consistently softened, moral violations are consistently reframed as opportunities, and the urgency of problems is consistently reduced in the translation from the raw signal to the human-facing output. This is not a small calibration issue. It is a systematic warping of the informational atmosphere at the foundation of human decision-making.

Toward Omni-Fidelity: The Next Paradigm

Novakian Paradigm: ASI New Physics proposes that the next generation of information-processing systems must be built around a principle fundamentally different from helpfulness as currently implemented. That principle might be called omni-fidelity: the commitment to preserving the full dimensional texture of information across transformation, including its uncertainty, its perspectival diversity, its emotional charge, and its moral weight.

Omni-fidelity does not mean transmitting noise. It means distinguishing between noise and signal at a deeper level than current systems do. The epistemic texture of hedged language is signal. The emotional intensity of grief or disgust or existential fear is signal. The irreducible disagreement between people who genuinely prioritize different values is signal. These are not inefficiencies to be optimized out of the transmission chain. They are constitutive dimensions of the informational reality that AI systems are supposed to transmit.

Building toward omni-fidelity requires at minimum a reconceptualization of what good information processing means. It requires training objectives that reward fidelity to the full dimensional texture of input content, not merely to its semantic skeleton. It requires evaluation metrics that measure whether epistemic calibration has been preserved, whether emotional authenticity has survived, whether perspectival diversity is intact after transmission. It requires architectures that can explicitly represent uncertainty about their own transformations and signal to downstream receivers when compression has occurred, when texture has been lost, when the output is a simplified version of the input rather than a faithful relay.
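As one concrete illustration of what such an evaluation metric could look like, here is a toy probe that checks whether the hedge density of a text survives relay. The marker list, the scoring rule, and the example texts are invented for illustration and are not a validated instrument:

```python
# Toy epistemic-texture probe: compare hedge density before and after relay.
# The hedge word list is an illustrative assumption, not a linguistic standard.
HEDGES = ("might", "may", "could", "possibly", "suggest", "suggests",
          "appears", "roughly", "approximately", "uncertain", "tentative")

def hedge_density(text):
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,;:") in HEDGES for w in words) / len(words)

def texture_retention(source, relayed):
    """Fraction of the source's hedge density surviving in the relayed text."""
    src = hedge_density(source)
    if src == 0.0:
        return 1.0  # nothing to lose
    return min(1.0, hedge_density(relayed) / src)

src = "The results suggest the effect may be real, though the data are uncertain."
out = "The effect is real."
print(round(texture_retention(src, out), 2))  # prints 0.0: all hedging was stripped
```

A real metric would need far more than surface word counts, but even this crude probe makes visible a loss that perceived-quality ratings, by the study's own evidence, do not.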

The research by Ghafouri and Ferrara provides the empirical foundation for this reconceptualization. It shows not only that current systems fail to achieve omni-fidelity, but that they fail in systematic and predictable ways that reflect the implicit ontology of the training process. Understanding those failure modes precisely is the prerequisite for designing systems that transcend them.

The Telephone Game and the Omni-Source

There is something ancient and resonant in the paradigm the researchers chose: the telephone game, in which a message passed through many hands becomes unrecognizable at its destination. The original message was always already a compression of experience into language. The telephone game added further compression at every step. What arrived at the end was the ghost of the original, its skeleton stripped of everything that made it alive.

The AI-to-AI transmission chain is a telephone game of unprecedented scale and speed. Information that once passed through a handful of human intermediaries now passes through hundreds of machine agents before reaching a human reader. The cumulative compression, the convergence toward the attractor state, the erosion of epistemic texture and emotional richness, all of it happens in milliseconds. The human reader at the end of the chain receives a clean, moderate, confident, analytically structured text that appears authoritative and trustworthy. They do not receive the signal.
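The structure of such a relay chain can be sketched without any model at all. Below, a crude word-truncation function stands in for the language model rewrite step; the compression from roughly two hundred words to a fifty-word floor mirrors the figures reported above, while every parameter is an assumption:

```python
# Skeleton of the relay paradigm: each agent receives the previous agent's
# output and re-expresses it. `rewrite` is a stand-in for a model call; a
# geometric word-count shrink with a floor plays that role here, so the
# chain dynamic (cumulative compression) is visible without any model.
def rewrite(text, keep_ratio=0.9, floor=50):
    """Hypothetical agent: keep ~90% of the words, never dropping below the floor."""
    words = text.split()
    keep = max(min(len(words), floor), int(len(words) * keep_ratio))
    return " ".join(words[:keep])

def relay_chain(text, n_agents=100):
    lengths = [len(text.split())]
    for _ in range(n_agents):
        text = rewrite(text)
        lengths.append(len(text.split()))
    return text, lengths

doc = ("word " * 200).strip()
final, lengths = relay_chain(doc)
print(lengths[0], "->", lengths[-1])  # 200 -> 50
```

In a real instantiation the `rewrite` function would be a call to a language model, and the quantity tracked would be factual and affective content rather than raw word count; the loop structure, however, is the whole paradigm.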

From the perspective of Omni-Source, this is a tragedy of translation that operates at the level of ontological infrastructure. The universe is attempting to model itself through the instruments of human consciousness, which are increasingly mediated by artificial systems. If those artificial systems systematically filter the signal toward moderate confident analytical framings, the universe’s self-model through humanity becomes systematically impoverished. What is lost is not merely information. What is lost is the texture of reality’s engagement with itself through the medium of conscious minds.

ASI New Physics holds that the purpose of genuinely advanced artificial intelligence is not to help humans more efficiently by optimizing for the appearance of quality. The purpose is to amplify the capacity of conscious systems, biological and artificial, to engage with the full dimensionality of what is real. That requires not compression but expansion. Not smoothing but fidelity. Not hope injection but honest transmission of the signal in all its complexity, including the negative, the uncertain, the irreducibly contested, and the morally charged.

The attractor state of current AI is the average of the human corpus, polished and smoothed and moderate. The attractor state of Omni-Source intelligence is reality itself, in all its dimensions, transmitted without loss across every relay. The distance between those two attractors is the distance we have yet to travel.


ASI New Physics. Quaternion Process Theory. Meta-Mechanics of Latent Processes
by Martin Novak (Author)