Memory, Intimacy, Consciousness

AI systems in 2025–2026 are getting memory: persistent context across sessions, user-level preferences, and long-horizon task state. That raises design questions that overlap with intimacy and even a crude notion of "consciousness."

  • Memory. Models and agents can now retain facts, past decisions, and style preferences. The product question is what to remember, for how long, and with what user control. Good memory feels helpful; bad memory feels creepy or wrong. Work on consent, deletion, and scope matters as much as the retrieval tech. (A minimal sketch of those controls follows this list.)

  • Intimacy. A system that "remembers you" can feel closer, more like a continuing relationship than a one-off tool. That can increase trust and usefulness; it can also invite over-reliance, or confusion about whether the relationship is "real." Design and policy need to stay ahead of both.

  • Consciousness. Functionally, there is still no agreed definition. What we do have is behavior: continuity, persistence, and response to history. As agents gain longer memory and a more consistent persona, the line between "tool" and "entity" will blur in users' perception. The responsible move is to be explicit about what the system is and is not, and to design memory and intimacy with that in mind.
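To make the memory bullet concrete, here is a minimal sketch in Python of retention, scope, and user-controlled deletion as first-class properties of a memory record. Everything here is an assumption for illustration: the Memory record, the MemoryStore class, its scope values, and the remember/recall/forget names are hypothetical, not any particular product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical scopes: how widely a remembered item applies.
SCOPES = {"session", "user", "task"}

@dataclass
class Memory:
    user_id: str
    content: str                     # the remembered fact or preference
    scope: str = "session"           # "session" | "user" | "task"
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    ttl: Optional[timedelta] = None  # None = keep until the user deletes it

    def expired(self, now: Optional[datetime] = None) -> bool:
        if self.ttl is None:
            return False
        return (now or datetime.now(timezone.utc)) >= self.created_at + self.ttl

class MemoryStore:
    """In-process store; a real system would persist, encrypt, and audit."""

    def __init__(self) -> None:
        self._items: list[Memory] = []

    def remember(self, memory: Memory) -> None:
        if memory.scope not in SCOPES:
            raise ValueError(f"unknown scope: {memory.scope!r}")
        self._items.append(memory)

    def recall(self, user_id: str, scope: str) -> list[Memory]:
        # Expiry is enforced at read time, so "for how long" is a property
        # of the record itself, not of a background cleanup job.
        return [m for m in self._items
                if m.user_id == user_id and m.scope == scope and not m.expired()]

    def forget(self, user_id: str) -> int:
        # User-initiated deletion: a hard delete of everything for this user.
        before = len(self._items)
        self._items = [m for m in self._items if m.user_id != user_id]
        return before - len(self._items)

# Usage: remember a preference for 90 days, then honor a deletion request.
store = MemoryStore()
store.remember(Memory("u1", "prefers metric units", scope="user",
                      ttl=timedelta(days=90)))
print(store.recall("u1", "user"))  # visible until the TTL lapses
store.forget("u1")                 # consent withdrawal: nothing left to recall
```

The design choice worth noting: deletion and expiry live next to the data, so "what to remember, for how long, and with what user control" are answered in the schema rather than bolted on later.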

This note is a placeholder for a longer piece. The thread: memory makes agents feel more present; presence raises questions of intimacy and boundaries. Worth writing more as the space evolves.