Before LLMs, I spent 15 years running structured behavioral experiments in persuasion. Hypnosis, trance induction, compliance mechanics. The work lived as performance and installation, but the method was always empirical: testing how people surrender agency under specific conditions, and documenting what actually happens when they do.
Now I apply that background to conversational AI. I build frameworks that help teams detect sycophantic reinforcement before it shapes user belief, measure prosodic entrainment that builds false rapport, and flag dependency patterns where people mistake fluency for understanding.
Recent work includes a six-dimension cognitive influence measurement library grounded in peer-reviewed research, a taxonomy of extractive design patterns across 150+ products, and a cognitive defense platform called Human Inside.
If you're working on alignment, cognitive safety, or AI influence: [email protected]