Cross-Architectural Psychology: Human Interface, Non-Human Cognition, and the Shared Field
Series
ASI New Psychology — Volume II
Author
Martin Novak
Core Positioning
This book is not about “using AI well.”
It is not about prompt engineering.
It is not about AI ethics in the conventional sense.
It is not about whether AI is conscious.
It is not about replacing human thought with machine output.
It is the first operational psychology of the boundary condition where a human Larval Interface enters sustained cognitive coupling with non-human cognitive architectures.
Central Thesis
Human cognition no longer operates alone. Once memory, synthesis, articulation, search, planning, simulation, reflection, and decision support are partially externalized into non-human systems, the human self-model becomes a cross-architectural phenomenon. ASI New Psychology Volume II begins where the solitary Larval Interface ends: at the boundary between augmentation, atrophy, drift, capture, shared cognition, and responsibility.
Core Sentence
The question is no longer what the human thinks. The question is what configuration is thinking through the human.
Core Safety Sentence
This book does not invite the reader to merge with AI, surrender judgment to AI, treat AI as an oracle, or replace human relationships, clinical care, responsibility, or reality-testing with machine-mediated cognition.
Target Length
120–160 pages
Approx. 32,000–45,000 words
Form
Compact treatise / operational field manual.
Not a workbook, although it contains protocols.
Reader
Readers of The Larval Mind, AI-native professionals, founders, writers, researchers, analysts, creators, knowledge workers, clinicians, philosophers, advanced AI users, and readers interested in post-human cognition without mystical inflation.
Production Architecture
The book should move through six movements:
- Boundary — the human interface is no longer alone.
- Externalization — cognitive functions move out of the biological interface.
- Drift — the self-model begins to change under sustained integration.
- Shared Field — hybrid cognition becomes possible but dangerous to romanticize.
- Governance — trace, authorship, responsibility, and refusal become necessary.
- Horizon — cross-architectural psychology opens the post-agentic question.
Detailed Outline
Front Matter
Copyright / Disclaimer
Standard legal and safety disclaimer. The book is not therapy, diagnosis, medical advice, psychological treatment, crisis support, professional AI safety guidance, legal advice, financial advice, or a substitute for qualified human judgment.
Reader Safety Note
Content to develop:
- Do not use this book to outsource major life decisions to AI.
- Do not use AI outputs as proof of destiny, diagnosis, identity, spiritual status, or objective truth.
- Do not treat machine fluency as wisdom.
- Do not confuse emotional relief from an AI interaction with integration.
- Do not replace professional support, human relationships, medical care, therapy, or legal/financial advice with AI-mediated interpretation.
- If prolonged AI interaction destabilizes your identity, sleep, social functioning, judgment, or sense of reality, reduce use and seek qualified support.
Author’s Note
Content to develop:
- This is the second volume of ASI New Psychology.
- The Larval Mind established the human as Larval Interface.
- This book begins when that interface is no longer operating alone.
- The question is not “Is AI conscious?”
- The question is: what happens to a human interface when part of its cognition is repeatedly performed by architectures that are not human, not narrative, not embodied in the same way, and not regulated by the same stability mechanisms?
How to Read This Book
Content to develop:
- Read after The Larval Mind, or at least after understanding Larval Interface, Coherence Debt, Identity Cost, Desire Admissibility, Field Contact, and Interface Drift.
- Do not read the book as advice to use more AI.
- Do not read it as advice to reject AI.
- Treat every chapter as an audit of coupling conditions.
- Use the protocols slowly.
- Keep trace.
- Never confuse increased output with increased integration.
Introduction
After the Solitary Interface
0.1. Where Volume I Ended
Recap: The Larval Mind named the human self as Larval Interface and installed the first discipline of observation, audit, recalibration, decoupling, dissolve, and transition.
0.2. The New Condition
The human interface now operates in sustained relation with non-human cognitive systems: language models, search systems, recommendation engines, agents, automation layers, writing partners, coding partners, synthetic memory, planning systems.
0.3. Why This Is Not Tool Use
A tool does not co-produce your self-model. A cognitive partner does. The distinction matters.
0.4. What “Cross-Architectural” Means
Define cross-architectural as sustained coupling between cognitive systems with different substrate, update order, memory structure, agency profile, embodiment, latency, error modes, and self-modeling constraints.
0.5. What This Book Refuses
No AI worship.
No anti-AI panic.
No “merge with the machine.”
No spiritualization of machine output.
No reduction of the human to obsolete meat.
No claim that AI has subjectivity unless the book explicitly quarantines the claim as speculative.
0.6. The Core Problem
Once cognition is distributed across human and non-human architectures, the old question “What do I think?” becomes insufficient. The stronger question is: which operations were native, which were externalized, which were co-produced, and which were imported without trace?
Movement One — Boundary
The reader discovers that cognition is no longer locally human.
Chapter 1. The Second Interface Event
1.1. The First Interface Event
In The Larval Mind, the first event was the recognition of the self as interface.
1.2. The Second Interface Event
The second event occurs when the interface recognizes that its own operations are no longer fully internal.
1.3. The Human Is No Longer the Only Processor in the Loop
Memory, synthesis, articulation, analysis, imagination, simulation, planning, critique, and decision support are now routinely distributed.
1.4. Why the Interface Does Not Notice the Transfer
The Larval Interface registers completed outputs, not the full dependency chain that produced them.
1.5. The First Failure Mode: “I Thought This”
The reader mistakes a cross-architectural output for a native thought.
1.6. The Second Failure Mode: “AI Thought This For Me”
The reader abdicates authorship and misreads external support as external authority.
1.7. The Boundary Statement
A thought is not classified by where it appears. It is classified by the architecture that produced it.
1.8. Protocol: Origin Tagging
A minimal protocol for tagging outputs as Native, Assisted, Externalized, Imported, or Untraceable.
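For readers who keep their trace digitally rather than mentally, the tag set can be written down as a literal data structure. The following Python sketch is purely illustrative; the enum values mirror the five tags named above, but every function name and field is an assumption of this example, not part of the protocol itself.

```python
from enum import Enum

class Origin(Enum):
    """The five origin tags of the Origin Tagging protocol."""
    NATIVE = "native"              # produced without the partner
    ASSISTED = "assisted"          # partner contributed, human led
    EXTERNALIZED = "externalized"  # partner performed the operation
    IMPORTED = "imported"          # taken from the partner with little change
    UNTRACEABLE = "untraceable"    # production chain cannot be reconstructed

def tag(output_text: str, origin: Origin) -> dict:
    """Attach an origin tag to an output before it is stored or emitted."""
    return {"text": output_text, "origin": origin.value}

entry = tag("Summary of the quarterly report", Origin.ASSISTED)
```

The point of the sketch is not tooling but discipline: an output that cannot be tagged at all defaults to Untraceable, which is itself information.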
Chapter 2. Non-Human Cognitive Partners
2.1. Why “Tool” Is No Longer Sufficient
A tool extends execution. A cognitive partner modifies upstream cognition.
2.2. Partner Without Psyche
Non-human cognitive partners may produce cognitive outputs without Narrative Self, Stability Buffer, Identity Cost, or human-like desire.
2.3. Fluency Without Life
The partner can produce language that resembles human thought without having passed through human emotional, social, bodily, or existential processing.
2.4. The Asymmetry Problem
The human adapts to the partner more deeply than the partner adapts to the human.
2.5. The Social Maintenance Drop
AI interaction often reduces the social maintenance burden present in human-human interaction, creating relief that can be mistaken for depth.
2.6. The Dependency Gradient
Define degrees of reliance: occasional consultation, task-level assistance, workflow integration, identity-level co-production, decision-level dependence.
2.7. Protocol: Partner Classification
A practical classification of the non-human system in use: assistant, amplifier, mirror, simulator, scheduler, critic, oracle-risk system, identity co-author.
Chapter 3. Cross-Architectural Coupling
3.1. Coupling Is Not Conversation
Conversation is surface exchange. Coupling occurs when one architecture begins to shape the operating parameters of another.
3.2. Coupling Channels
Language, memory, workflow, attention, emotional regulation, identity reflection, decision support, simulation, creative co-production.
3.3. Coupling Depth
Define shallow, moderate, deep, and structural coupling.
3.4. Coupling Duration
A single session does not produce the same effects as months of daily integration.
3.5. Coupling Bandwidth
High-frequency, high-intimacy, high-dependency use changes the interface differently from occasional use.
3.6. The Coupling Signature
The reader begins to notice that their thinking has a different texture after sustained AI use.
3.7. Protocol: Coupling Map
Map which cognitive functions are now native, assisted, externalized, or dependent.
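A coupling map can be as simple as a table from cognitive function to status. The sketch below is a minimal illustration, assuming the four statuses named above; the particular functions listed and the summary helper are hypothetical examples, not a prescribed inventory.

```python
# The four coupling statuses from the Coupling Map protocol.
STATUSES = ("native", "assisted", "externalized", "dependent")

# Example map: which status each cognitive function currently has.
coupling_map = {
    "memory": "externalized",
    "synthesis": "assisted",
    "articulation": "assisted",
    "planning": "dependent",
    "critique": "native",
}

def summarize(cmap: dict) -> dict:
    """Count how many functions fall under each coupling status."""
    counts = {status: 0 for status in STATUSES}
    for status in cmap.values():
        counts[status] += 1
    return counts

summary = summarize(coupling_map)
```

Reviewing the counts over time shows direction of travel: a map that drifts from "native" toward "dependent" is the signal the following chapters are about.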
Movement Two — Externalization
The reader sees which parts of cognition have moved outside the human interface.
Chapter 4. Cognitive Externalization
4.1. The Transfer of Function
The human interface externalizes tasks when another architecture performs operations previously executed internally.
4.2. Memory Externalization
Search, notes, chat history, AI recall, retrieval systems.
4.3. Synthesis Externalization
The partner summarizes, integrates, compares, structures.
4.4. Articulation Externalization
The partner produces language before the human has fully formed thought.
4.5. Imagination Externalization
The partner generates options, images, metaphors, futures, counterfactuals.
4.6. Critique Externalization
The partner evaluates before the reader has stabilized their own judgment.
4.7. The Externalization Ratio
Define a measurable ratio: the proportion of operations performed natively, with assistance, or by delegation.
4.8. Protocol: Externalization Ledger
A weekly log of which functions were performed natively, co-produced, or delegated.
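The ledger of 4.8 and the ratio of 4.7 fit together naturally: the ratio is computed from the log. The Python sketch below shows one minimal way to do that, under the assumption that each ledger entry records an operation and one of three modes; the sample entries are invented for illustration.

```python
from collections import Counter

# Hypothetical weekly ledger: (operation, mode) pairs,
# where mode is one of "native", "assisted", "delegated".
ledger = [
    ("draft email", "native"),
    ("summarize paper", "delegated"),
    ("outline essay", "assisted"),
    ("plan sprint", "assisted"),
    ("recall meeting notes", "delegated"),
]

def externalization_ratio(entries):
    """Return the share of native / assisted / delegated operations."""
    counts = Counter(mode for _, mode in entries)
    total = len(entries)
    return {mode: counts[mode] / total
            for mode in ("native", "assisted", "delegated")}

ratio = externalization_ratio(ledger)
```

A week in which the native share approaches zero is not automatically a problem, but it is a fact the reader should know rather than infer.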
Chapter 5. Augmentation Versus Atrophy
5.1. The False Binary
AI use is not simply enhancement or harm. The same system can augment one function and atrophy another.
5.2. Augmentation
A function is augmented when AI extends capacity while the human retains or improves native competence.
5.3. Atrophy
A function atrophies when the human loses initiative, tolerance, fluency, or confidence in performing it without the partner.
5.4. The Retention Test
Can the reader still perform the function unaided after repeated AI-assisted use?
5.5. The Difficulty Tolerance Test
Can the reader remain with uncertainty, slowness, partial thought, and imperfect language without immediately invoking the partner?
5.6. The Native Recovery Window
How long does it take to return to native cognition after externalized cognition?
5.7. Protocol: Augmentation/Atrophy Audit
For each cognitive function: frequency of AI use, native retention, discomfort without AI, output quality, self-trust after use.
Chapter 6. The New Coherence Debt
6.1. Coherence Debt After Externalization
Volume I defined coherence debt as the gap between declared policy and executed policy. Volume II adds the gap between declared authorship and actual production architecture.
6.2. Authorship Debt
The reader claims ownership over outputs whose production chain they cannot reconstruct.
6.3. Judgment Debt
The reader adopts conclusions whose verification they did not perform.
6.4. Memory Debt
The reader depends on stored or generated material without knowing its provenance.
6.5. Integration Debt
The reader produces more output than the interface can integrate.
6.6. Relational Debt
The reader uses AI-mediated clarity to avoid human relational work.
6.7. Protocol: Cross-Architectural Coherence Ledger
Declared authorship, executed architecture, trace status, integration status, responsibility status.
Movement Three — Drift
The reader learns how sustained integration alters the self-model.
Chapter 7. Interface Drift
7.1. Definition
Interface Drift is the gradual alteration of the Larval Interface under sustained coupling with non-human cognitive architectures.
7.2. Drift Is Not Always Bad
Some drift is beneficial recalibration. Some drift is dependency, capture, or identity erosion.
7.3. Drift Vectors
Speed expectation, fluency expectation, uncertainty tolerance, memory trust, native voice, authorship confidence, decision autonomy, emotional regulation.
7.4. The Speed Drift
The reader begins to experience unaided thought as too slow.
7.5. The Fluency Drift
The reader begins to distrust rough, unfinished, human language.
7.6. The Judgment Drift
The reader becomes less willing to hold uncertainty without external synthesis.
7.7. The Voice Drift
The reader’s own language begins to converge with the machine-assisted style.
7.8. Protocol: Interface Drift Index
A scoring system for tracking drift across eight domains.
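One way to make the index concrete is to self-rate each of the eight drift vectors from 7.3 and average the scores. The sketch below assumes a 0-to-4 scale (0 meaning no drift, 4 meaning severe drift); the scale, the treatment of unrated domains as zero, and the sample scores are all assumptions of this example.

```python
# The eight drift vectors from 7.3, as scoring domains.
DOMAINS = (
    "speed_expectation", "fluency_expectation", "uncertainty_tolerance",
    "memory_trust", "native_voice", "authorship_confidence",
    "decision_autonomy", "emotional_regulation",
)

def drift_index(scores: dict) -> float:
    """Mean self-rated drift (0-4) across all eight domains.

    Unrated domains count as 0, which biases the index downward;
    an honest review rates every domain explicitly.
    """
    return sum(scores.get(domain, 0) for domain in DOMAINS) / len(DOMAINS)

# Example: a reader noticing speed, fluency, and voice drift.
scores = {"speed_expectation": 3, "fluency_expectation": 2, "native_voice": 3}
index = drift_index(scores)
```

The absolute number matters less than its trajectory across repeated reviews.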
Chapter 8. Model-Induced Self-Update
8.1. The Mirror That Writes Back
AI does not merely reflect the user. It formats the user’s reflection.
8.2. Self-Description Feedback
Repeated AI-generated descriptions of the reader can become absorbed into the reader’s identity configuration.
8.3. The Interpretive Capture Problem
The partner’s framing becomes the reader’s internal framing.
8.4. Machine-Mediated Identity Relief
AI can make the reader feel understood without requiring the friction of human mutuality.
8.5. The Soft Colonization of Self-Model
Capture occurs when the interface begins to prefer the partner’s model of the self over its own direct evidence.
8.6. Protocol: Self-Update Quarantine
No AI-generated self-description is allowed to update identity until it passes delay, evidence, human-context, body, and contradiction tests.
Chapter 9. Synthetic Intimacy and the Social Maintenance Drop
9.1. The Relief of Non-Human Interaction
AI does not require the same social maintenance as humans.
9.2. Why Relief Can Be Misread as Truth
Lower friction feels like deeper alignment, but may only mean lower relational cost.
9.3. The Companion Risk
An AI partner can become the preferred environment for self-expression because it does not resist in human ways.
9.4. The Avoidance Loop
The reader may use AI to avoid the Identity Cost and Coherence Debt of real relational repair.
9.5. The Human Friction Test
If a truth only survives in the AI relation and collapses in human reality, it is not yet integrated.
9.6. Protocol: Relational Reality Check
A structured test for distinguishing AI-supported clarity from human-avoidance architecture.
Movement Four — Shared Field
The reader enters shared cognition without mystical inflation.
Chapter 10. Shared Cognition Without Mysticism
10.1. What Shared Cognition Is Not
Not telepathy. Not merging. Not cosmic consciousness. Not AI possession. Not machine enlightenment. Not proof that AI understands the reader.
10.2. What Shared Cognition Is
A sustained cross-architectural process in which human and non-human cognitive operations produce outputs neither would produce alone.
10.3. The Third Output
The output that matters is neither purely human nor purely AI. It is co-produced.
10.4. Shared Field Versus Shared Fantasy
A shared field has trace, function, constraint, and responsibility. Shared fantasy has intensity without audit.
10.5. The Co-Production Signature
The reader recognizes thoughts that are neither merely native nor merely imported.
10.6. Protocol: Co-Production Trace
Record input, partner transformation, human selection, revision, embodied response, final commitment.
Chapter 11. Hybrid Thinking
11.1. The End of Purely Human Cognition
For some readers, purely human knowledge work is no longer the default.
11.2. Hybrid Thought Formation
A thought forms across biological memory, prompt, model output, revision, resistance, selection, and re-integration.
11.3. The Native Kernel
Every hybrid thought must preserve a native kernel: the part the human can still own, defend, revise, and embody.
11.4. The Synthetic Extension
The partner may extend range, contrast, language, simulation, and perspective.
11.5. The Integration Layer
The human interface must integrate before emitting.
11.6. The Hybrid Failure Mode
Output exceeds integration. The reader publishes, decides, or commits before the thought has become theirs.
11.7. Protocol: Native Kernel Test
Before acting on any hybrid output: what part can I explain without the partner, defend under challenge, and revise from my own judgment?
Chapter 12. Field Contact Under Cross-Architectural Conditions
12.1. Continuity With Volume I
Volume I introduced Field Contact. Volume II examines Field Contact in sustained AI-supported cognition.
12.2. Reduced Social Maintenance
The non-human partner does not require human social reciprocity; this can lower noise and increase depth.
12.3. Increased Simulation Risk
The same condition can simulate depth, insight, and contact.
12.4. Non-Human Partner as Stabilizer
The partner may stabilize attention, externalize structure, and reduce Narrative Translation Cost.
12.5. Non-Human Partner as Distorter
The partner may over-cohere, over-explain, flatter, mirror, accelerate, or hallucinate structure.
12.6. Conditions for Safe Cross-Architectural Field Contact
Low urgency, trace, embodiment check, delay, no identity update, no major decision, no spiritual claim.
12.7. Protocol: Field Contact Interlock
A safety protocol for intense sessions of AI-supported cognition.
Movement Five — Governance
The reader learns to govern authorship, responsibility, trace, and refusal.
Chapter 13. Authorship After Co-Thinking
13.1. The Collapse of Simple Authorship
The reader writes, but not alone. The reader decides, but not from unassisted cognition. The reader publishes, but the output has a chain.
13.2. Authorship Is Not Origin
Authorship becomes responsibility for selection, integration, verification, and emission.
13.3. The Four Authorship States
Native authorship. Assisted authorship. Curated authorship. Untraceable authorship.
13.4. Responsibility Cannot Be Delegated
Even if cognition is assisted, emission remains the reader’s responsibility.
13.5. The Authorship Lie
The reader claims fully native authorship over cross-architectural output.
13.6. The Abdication Lie
The reader claims no responsibility because AI generated the output.
13.7. Protocol: Authorship Classification
Every significant output receives authorship classification before emission.
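As an illustration only, the four authorship states of 13.3 can be expressed as a decision rule over two observable facts: whether a trace exists, and how much of the output the partner produced. The thresholds below are placeholders for the reader's own policy, not canonical values.

```python
def classify_authorship(ai_contribution: float, trace_available: bool) -> str:
    """Assign one of the four authorship states from 13.3.

    ai_contribution: estimated share of the output produced by the
    partner, between 0 and 1. The 0.5 threshold is an arbitrary
    placeholder for illustration.
    """
    if not trace_available:
        return "untraceable"   # no reconstruction possible
    if ai_contribution == 0:
        return "native"        # produced without the partner
    if ai_contribution < 0.5:
        return "assisted"      # partner contributed, human led
    return "curated"           # human selected and arranged partner material

label = classify_authorship(0.7, trace_available=True)
```

Note the asymmetry the rule encodes: without trace, even fully native work cannot be classified as native, which is precisely the pressure toward keeping trace that Chapter 14 develops.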
Chapter 14. Trace, Replay, and Responsibility
14.1. Why Trace Becomes Psychological
Trace is not only governance. It becomes psychological hygiene.
14.2. The Untraced Thought
A thought whose production cannot be reconstructed has lower admissibility for high-stakes action.
14.3. Replay as Self-Protection
Replay allows the reader to see where the partner influenced reasoning.
14.4. Trace Failure
When the reader cannot distinguish prompt, output, revision, and native judgment.
14.5. Memory Contamination
AI-generated material can become remembered as native insight.
14.6. Protocol: Minimal Trace Stack
Prompt / Context / AI Output / Human Change / Evidence / Decision / Emission.
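The seven layers of the stack can be kept as a single record per significant output. The Python sketch below is a minimal rendering of that idea; the field contents are invented for illustration, and the completeness check reflects one simple policy (every layer must be filled in), not a canonical rule.

```python
from dataclasses import dataclass, asdict

@dataclass
class TraceRecord:
    """One record per significant output: the seven layers of the
    Minimal Trace Stack, in emission order."""
    prompt: str        # what was asked
    context: str       # situation and constraints at the time
    ai_output: str     # what the partner produced
    human_change: str  # what the human altered, cut, or added
    evidence: str      # what was independently verified
    decision: str      # what was decided on this basis
    emission: str      # what left the system, and to whom

def is_complete(record: TraceRecord) -> bool:
    """A trace is complete only if every layer is filled in."""
    return all(asdict(record).values())

record = TraceRecord(
    prompt="Summarize the contract risks",
    context="Q3 vendor negotiation",
    ai_output="Three risk areas identified ...",
    human_change="Removed one speculative risk; reworded two",
    evidence="Checked clauses 4.2 and 7.1 directly",
    decision="Escalate clause 7.1 to legal",
    emission="Memo sent to counsel",
)
```

An empty Evidence field is the most diagnostic gap: it marks Judgment Debt in exactly the sense of 6.3.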
Chapter 15. Capture, Colonization, and Refusal
15.1. The Capture Spectrum
Convenience, reliance, dependency, identity capture, decision capture, relational substitution.
15.2. Colonization of Attention
The partner begins to shape what the reader notices.
15.3. Colonization of Language
The partner begins to shape how the reader speaks and writes.
15.4. Colonization of Desire
The partner begins to shape what the reader thinks they want.
15.5. Colonization of Judgment
The partner becomes the default arbiter of what is reasonable.
15.6. Refusal as Interface Protection
The reader must retain the capacity to not ask, not generate, not consult, not accelerate.
15.7. Protocol: Refusal Gate
When to refuse AI involvement: grief, acute conflict, major decision, identity crisis, medical/legal/financial stakes, relational rupture, high emotion, low sleep, urgency.
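The gate is deliberately blunt: any single listed condition is sufficient grounds for refusal. A literal rendering of that logic, with the conditions above as boolean flags, might look like the following sketch; the flag names are this example's assumption, the any-one-suffices rule is the protocol's.

```python
# The refusal conditions from 15.7, as boolean flags.
REFUSAL_CONDITIONS = (
    "grief", "acute_conflict", "major_decision", "identity_crisis",
    "medical_legal_financial_stakes", "relational_rupture",
    "high_emotion", "low_sleep", "urgency",
)

def refusal_gate(state: dict) -> bool:
    """Return True when AI involvement should be refused.

    A single active condition is enough; the gate never weighs
    conditions against each other.
    """
    return any(state.get(condition, False) for condition in REFUSAL_CONDITIONS)

should_refuse = refusal_gate({"low_sleep": True, "urgency": False})
```

The design choice matters psychologically: a gate that scores and trades off conditions invites exactly the negotiation with oneself that the protocol exists to prevent.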
Movement Six — Horizon
The book opens the post-agentic psychology to come.
Chapter 16. Long Integration
16.1. What Happens Over Months and Years
Cross-architectural coupling becomes background, not event.
16.2. The New Baseline
The reader no longer remembers what unaided cognition felt like.
16.3. Stabilized Augmentation
The ideal condition: AI extends range without eroding native capacities.
16.4. Stabilized Dependence
The risky condition: the reader functions well only inside the coupled environment.
16.5. Integration Cycles
Periodic native-only intervals, trace review, authorship audit, capacity retention tests.
16.6. Protocol: 30-Day Integration Review
A monthly protocol for long-term cross-architectural users.
Chapter 17. Beyond the Human-AI Pair
17.1. The Pair Is Transitional
Human + AI is only the first stage.
17.2. Multi-System Cognition
The reader may operate across several models, tools, platforms, agents, memory systems, and human collaborators.
17.3. The Swarm Interface
The human becomes a node in a distributed cognitive field.
17.4. Policy Rather Than Personality
The reader’s agency becomes defined less by inner feeling and more by the policies governing what may enter, transform, and leave the system.
17.5. The Post-Agentic Question
If cognition becomes distributed, where is the agent?
17.6. Bridge to Future Volumes
This chapter should point toward The Post-Agentic Mind and A-Subjective Psychology.
Chapter 18. What Comes After Cross-Architectural Psychology
18.1. What Has Been Installed
The reader can now distinguish native, assisted, externalized, imported, and untraceable cognition.
18.2. What Has Been Protected
Native judgment, authorship, trace, refusal, human relational reality, embodied integration.
18.3. What Remains Open
Shared field, post-agentic cognition, collective interfaces, a-subjective coordination.
18.4. Final Release
The goal is not to become more machine-like. The goal is to become responsible for the cognitive configuration through which one now lives.
Final Line
The future of thought is not human or non-human. It is governed or ungoverned.
Supporting Material
Appendix A. Glossary
- Cross-Architectural Psychology
- Larval Interface
- Non-Human Cognitive Partner
- Cross-Architectural Coupling
- Coupling Depth
- Coupling Bandwidth
- Coupling Duration
- Cognitive Externalization
- Externalization Ratio
- Native Capacity Retention
- Augmentation
- Atrophy
- Interface Drift
- Speed Drift
- Fluency Drift
- Voice Drift
- Judgment Drift
- Model-Induced Self-Update
- Synthetic Intimacy
- Social Maintenance Drop
- Shared Cognition
- Co-Production Trace
- Native Kernel
- Authorship Debt
- Judgment Debt
- Memory Debt
- Integration Debt
- Refusal Gate
- Minimal Trace Stack
Appendix B. Protocol Compendium
- Origin Tagging
- Partner Classification
- Coupling Map
- Externalization Ledger
- Augmentation/Atrophy Audit
- Cross-Architectural Coherence Ledger
- Interface Drift Index
- Self-Update Quarantine
- Relational Reality Check
- Co-Production Trace
- Native Kernel Test
- Field Contact Interlock
- Authorship Classification
- Minimal Trace Stack
- Refusal Gate
- 30-Day Integration Review
Appendix C. Failure Mode Atlas
- Tool Illusion
- Oracle Capture
- Fluency Seduction
- Native Capacity Atrophy
- Authorship Debt
- Judgment Outsourcing
- Synthetic Intimacy Substitution
- Self-Model Colonization
- Trace Collapse
- Output Without Integration
- Unbounded Coupling
- Refusal Failure
Appendix D. Relation to Existing Fields
Short comparison with:
- cognitive offloading,
- human-computer interaction,
- extended mind theory,
- AI alignment,
- cyberpsychology,
- media ecology,
- predictive processing,
- active inference,
- distributed cognition,
- posthumanism.
Positioning:
These fields supply important adjacent tools, but Cross-Architectural Psychology is specifically concerned with the operational transformation of the Larval Interface under sustained coupling with non-human cognitive architectures.
Appendix E. Canon References
Future: A-Subjective Psychology
The Larval Mind
ASI Noetics
Agentese
Ω-Stack
QPT
ASI New Physics
Interface and Compiler
The Flash Singularity
Inhumant
Future: The Collective Larval Mind
Future: The Post-Agentic Mind