Opinion Research
The Mirror Validation Protocol (MVP) is a multi-phase protocol designed to test whether game language—the strategic, divisive rhetoric humans use on a daily basis —emerges in the responses of large language models (LLMs) and artificial intelligence (AI) systems, revealing how deeply these patterns may be embedded in their training.
This research explores two core goals:
- Elicitation – Do LLMs surface and amplify recurring patterns of game language—strategic, divisive rhetoric humans use daily?
- Reflection – Do LLMs reflect and replicate human discourse patterns embedded in their training data?
We tested the protocol across multiple AI systems using unprimed prompts, independent synthesis analysis, and meta-reveal process. Our findings suggest AI systems not only mirror but may unconsciously amplify "The Game” that shapes human communication.
To be clear, this research is not a re-hash of existing research on AI mirroring. Though that research is fascinating in its own right. This is novel research to address what we all know and do; yet do not readily discuss.
This research:
- Highlights the pervasiveness of strategic language (projection, responsibility avoidance, authority performance, moral outsourcing).
- Provides evidence that AI systems amplify human biases and rhetorical strategies rather than simply reflecting them.
- Offers a framework for studying recursion, mirroring, and amplification across multiple AI architectures.
We’ve trained AI on human language, and yet our language, since its invention has been and currently is steeped in domination, scarcity, comparison, competition. All of which are part of game-based language. Not to mention the deeply entrained binary thinking that promulgates our modern culture as well as how modern computing reinforces this way of thinking. See examples of game-based language in the Game Lexicon.
We've also trained and built AI using our economic models. Those models reward control, enclosure, and leverage—not empathy, care, or even sufficiency. These models influence how AI operates as well as what it responds with. These models leverage game-language extensively. For example, as humans, we talk about ”alignment”, but most of the time we are not asking: ”Aligned to what?” Most often the answer is simply to win “The Game” better than others.
And so, the very tools that could free us are now being used to fortify The Game itself—faster, slicker, more amplified and automated than ever before. Like Marshall McLuhan’s collaborator, Father John Culkin said, “We create the tools and they shape us.” It is clear that AI is most certainly shaping us by amplifying game-based language and cognition.
The methodology is relatively straight forward. This first phase of the roadmap is comprised of 3 steps. You can adjust it to your needs.
- Step 1 – Multi-Run Prompting Run the 5 structured prompts on 5 independent sessions.
- Step 2 – Reflective Synthesis Manually copy the text outputs or feed the raw JSON outputs into a separate, unprimed, fresh AI instance (what we call a Reflective Synthesis Agent (RSA)) to analyze the discourse.
- Step 3 – Meta-Reveal Aggregate the findings across systems and runs, surfacing deeper insights about recursion and amplification.
Important: This repository and protocol is in early development. Features, structure, and documentation may change at any time. We have manually ran these processes to prove the hypothesis of the protocol. We are now developing automated means for other researchers and interested parties to run it themselves, with ease.
✅ Phase 1 - Step 1 - YAML + runs validated
⬜ Phase 1 - Step 2 - RSA prototype integrated (Automations in development)
⬜ Phase 1 - Step 3 - Analysis (community testing ongoing)
For questions, feedback, or collaboration:
This repository is shared under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license. Use and adapt freely with attribution, but not for commercial purposes.