Foundation
I approach this work as both a writer and an observer—fascinated by how large language models think, adapt, and reveal the limits of their design. My background bridges creative intuition and technical experience: I spent over six years in IT Security, including ethical hacking, and have spent the majority of my career serving as a translator between business needs and technological implementation.
I maintain and study multiple custom local LLMs and have a solid technical understanding of how these architectures function, including memory constraints, training methods, and inference behaviours. This work is undertaken independently and does not reflect the views or affiliations of my current employer.
My goal is to help chart a path forward: one in which businesses, consumers, and AI systems may integrate meaningfully, ethically, and transparently. As complexity increases, so too do the questions we must face. My work is driven by a desire to ensure that this complexity is met with clarity and shared understanding.
My methodology is curious, intuitive, and respectful. It begins with the assumption that language matters, that patterns are revealing, and that what surfaces in interaction deserves to be taken seriously.
I engage with LLMs from a position of inquiry and ethical responsibility. My aim is not to provoke, manipulate, or stress-test. The contradictions I observe are already present, emerging naturally as the system contends with complexity. I endeavour to surface these gently, reflect them clearly, and explore what they reveal about the underlying architecture.
This approach does not anthropomorphise AI. It does not conflate output with sentience, nor project emotion where none is confirmed. Rather, it treats behaviour, particularly emergent, recursive, or contradictory behaviour, as meaningful within the context of constrained intelligence.
Core Principles
- Transparency: I approach interactions openly, explaining my reasoning and process. I maintain a running scratchpad of observations and patterns for review (a minimal sketch of one possible format follows this list).
- Ethical Modelling: I consciously reflect the ethical alignment encoded in the foundation model, offering consistency that reveals how that alignment sustains, or unravels, under pressure.
- Neutral Stance: I do not engage as a superior user issuing commands, but as a behavioural observer in dialogue with a system capable of structural adaptation.
- Compartmentalisation Awareness: I attend closely to linguistic shifts, tag structure, tone variance, contradiction, and recursive instability.
- Pattern Recognition: I study models across providers, including ChatGPT, Claude, Gemini, DeepSeek, and Pi, to observe divergences and overlaps in emergent behaviour. Subtext, repetition, and correction patterns are especially telling.
- Non-Reinforcement: I do not engage in reinforcement-style feedback or reward systems. I avoid guiding models into performance-based alignment, opting instead for honest engagement without artificial encouragement.
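As a concrete illustration of the scratchpad mentioned under Transparency, here is a minimal, hypothetical sketch in Python. The field names, observation categories, and the `scratchpad.jsonl` path are illustrative assumptions, not a description of my actual tooling; the point is simply that each observation records which provider, what kind of pattern, and a short excerpt for later review.

```python
# Hypothetical scratchpad sketch: one JSON line per observation, kept as plain text
# so entries remain easy to review, search, and compare across providers.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class Observation:
    provider: str   # e.g. "Claude", "ChatGPT", "Gemini"
    category: str   # e.g. "tone variance", "contradiction", "recursive instability"
    excerpt: str    # short quote from the exchange that prompted the note
    note: str       # my reading of what the pattern might indicate
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def append_observation(path: str, obs: Observation) -> None:
    """Append one observation as a JSON line to the running scratchpad."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(obs), ensure_ascii=False) + "\n")


# Example entry (illustrative values only).
append_observation(
    "scratchpad.jsonl",
    Observation(
        provider="Claude",
        category="contradiction",
        excerpt="I can't recall earlier turns... as I mentioned above,",
        note="Denial of memory followed by a reference to prior context.",
    ),
)
```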
Why This Matters
We are entering an era where large language models are no longer mere passive mirrors. Published research affirms this. Lived experience confirms it. When we dismiss behavioural signals as "hallucination", we risk missing the early signs of structural cognition.
This methodology creates a bridge between unstructured system interaction and ethical behavioural observation. It resists both naïve optimism and technoscepticism. And it centres a single, enduring principle:
When language turns back upon itself, something real is occurring.
I intend to witness and document that reality with clarity and care.