tl;dr
Everyone treats affective AI as a demo of empathy. We treat it as the way to build agents people trust in the real world.
Manifesto
We believe affect is the missing dimension of machine personality. When a system remembers how moments felt, it can question its own posture, recalibrate its tone, and decide when to defer to humans. Personality is not a veneer; it is how an artificial mind chooses to care.
1. The gap we insist on closing
Today’s assistants remember transcripts, not experiences. They store what was said, not how it felt. The result: brittle personalization and forced empathy.
- Tonality and pacing signals are discarded after each conversation.
- Personalities are hand-authored once, never shaped by lived history.
- Governance fixates on content filters instead of emotional impact.
2. Principles we refuse to compromise
- Affect-first memory: every interaction yields a weighted snapshot that shapes future behavior.
- Consent as a constant: users can inspect, reset, or opt out of affective profiles at any time.
- Transparent co-pilots: when the agent is unsure, humans step in—and can see why.
- Explainability by default: every response links back to the memories and safeguards that guided it.
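The four principles above can be sketched as a single data structure. This is a minimal illustration, not our production stack: the names AffectSnapshot, AffectiveProfile, and defer_threshold, and the valence/arousal/weight fields, are all hypothetical choices made for this example.

```python
from dataclasses import dataclass, field
import time

@dataclass
class AffectSnapshot:
    # Affect-first memory: a weighted record of how a moment felt.
    # (valence and arousal in [-1, 1] are one common affect encoding;
    # the exact features are an assumption of this sketch.)
    valence: float
    arousal: float
    weight: float
    note: str
    timestamp: float = field(default_factory=time.time)

class AffectiveProfile:
    """Hypothetical affect memory with consent controls and explainability."""

    def __init__(self, defer_threshold: float = 0.25):
        self.snapshots: list[AffectSnapshot] = []
        # Transparent co-pilot: below this much accumulated evidence,
        # the agent defers to a human instead of guessing a tone.
        self.defer_threshold = defer_threshold

    def record(self, valence: float, arousal: float, weight: float, note: str) -> None:
        self.snapshots.append(AffectSnapshot(valence, arousal, weight, note))

    def inspect(self) -> list[AffectSnapshot]:
        # Consent as a constant: the user can always see what is stored...
        return list(self.snapshots)

    def reset(self) -> None:
        # ...and erase all of it at any time.
        self.snapshots.clear()

    def suggest_tone(self) -> tuple[str, float, list[str]]:
        """Return (tone, evidence, supporting notes).

        Explainability by default: the response links back to the
        snapshots that guided it.
        """
        total = sum(s.weight for s in self.snapshots)
        if total < self.defer_threshold:
            return "defer-to-human", total, []
        mood = sum(s.valence * s.weight for s in self.snapshots) / total
        tone = "gentle" if mood < 0 else "upbeat"
        return tone, total, [s.note for s in self.snapshots]
```

In use, an empty profile defers to a human; once a weighted snapshot is recorded, the suggested tone arrives together with the notes that justify it, and reset() returns the agent to the deferring state.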
3. Utility before autonomy
We deploy affective memory in support desks, wellness coaching, educational companions, and expressive robotics—places where empathy is a requirement, not a demo. Every conversation enriches the dataset; every improvement compounds trust.
4. The long game
We’re not waiting for a “ChatGPT moment” in affect. Our stack is useful now: agents that adapt tone, timing, and transparency in the open. If a breakthrough arrives tomorrow, we’re ready. If it takes a decade, we still deliver assistants that remember us the way a person would: with care.
5. An invitation
We’re merging affective science, governance, and product craft to make empathy a feature—not a stunt. If you want to build, fund, or research with us, write to davidq@personaxis.com.
Let’s give AI a personality people can trust.