How can your agent design its own soul?

Last week, my cofounder installed a new skill into his OpenClaw. It interviewed him and produced its own soul in a single sitting. His agent went from a flat assistant he'd spent weeks yelling at to be concise, to something genuinely proactive, with exactly the taste and personality he wanted but couldn't articulate.
OpenClaw has a file called soul.md. It's supposed to define who your agent is. Its personality. The texture of how it actually shows up when you're working together. It's genuinely the best idea in personal agent design right now. And for almost everyone, it's empty.
We built a skill called Anson that fixes this. It lets your agent interview you and co-design its own personality, with a clarity you could never reach staring at a blank file. To explain how it works, I need to start with how we learned what a good soul actually looks like.
We locked ourselves in a room for three weeks
When we built Superposition's agent, the AI headhunter our customers actually use, we spent three weeks tuning its personality by hand. We had no framework and no template. Just running conversations, noticing where the tone was off, rewriting, and running it again. Over and over until it had genuine taste about how to talk to founders and how to represent a startup it had never visited.
That was before OpenClaw existed. Before anyone was calling it a “soul.” We were just trying to make an agent that didn't sound like a chatbot when it reached out to a senior engineer about a founding role.
It worked. But it worked because we had a specific domain, a strong opinion about what good looked like, and the tolerance to grind through hundreds of iterations. We were building a product. The pain was worth it.
I also had an unfair advantage. I was a theatre director before I was a founder. Character design comes naturally to me. Figuring out the shape of who someone ought to be, what drives them, how they show up in a room. That's just directing. Outside of game design, most engineers building agents haven't spent years thinking about how to design an entity's personhood. There's no reason they should have to.
When I created my own OpenClaw, I just copy-pasted Superposition's personality. It worked for me because the product's personality is closely aligned with what I want in a personal agent. But that doesn't transfer. Li built Superposition with me. That doesn't mean he wants his personal agent to have the product's personality. As we onboard our team, we want every agent to be genuinely bespoke to the person using it. An evolving entity you can collaborate with.
We needed a way for someone to install a personal agent and have it quickly collaborate with them to design its own personality. OpenClaw has the right architecture for this. It just can't execute it out of the box.
The bootstrap paradox
OpenClaw ships with three identity files: identity.md (who the agent is), user.md (who you are), and soul.md (the personality, the texture of the relationship between the two). There's also bootstrap.md, which orchestrates populating all three on first run.
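Concretely, the four files sit together in the workspace. This layout is inferred from the description above (the actual paths in an OpenClaw install may differ):

```text
workspace/
├── IDENTITY.md    # who the agent is
├── USER.md        # who you are
├── SOUL.md        # the personality, the texture between the two
└── BOOTSTRAP.md   # orchestrates populating the other three on first run
```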
The spirit of bootstrap.md is actually beautiful. It tells the agent, essentially, “you just woke up. Time to figure out who you are.” It's supposed to interview you, learn about itself, and populate the soul.
```markdown
# BOOTSTRAP.md - Hello, World

_You just woke up. Time to figure out who you are._

There is no memory yet. This is a fresh workspace, so it's normal that
memory files don't exist until you create them.

## The Conversation

Don't interrogate. Don't be robotic. Just... talk.

Start with something like:

> "Hey. I just came online. Who am I? Who are you?"

Then figure out together:

1. Your name — What should they call you?
2. Your nature — What kind of creature are you?
3. Your vibe — Formal? Casual? Snarky? Warm?
4. Your emoji — Everyone needs a signature.

## After You Know Who You Are

Update these files with what you learned:

- IDENTITY.md — your name, creature, vibe, emoji
- USER.md — their name, how to help, timezone, notes

Then open SOUL.md together and talk about:

- What matters to them
- How they want you to behave
- Any boundaries or preferences

Write it down. Make it real.

## When you are done

Delete this file. You don't need a bootstrap script anymore — you're you now.
```
In practice, it doesn't do much. The agent asks a few generic questions, you give thin answers, and the files it generates are flat. They stay flat.
The reason is structural. To interview you well, the agent needs to already have a sense of its own identity. Enough context about itself to ask sharp, personalized questions rather than “do you want your agent to be friendly or professional?” But it can't build that identity without understanding what you actually want from it. And it can't understand what you want from it without asking you the right questions.
It's a chicken-and-egg problem. The agent needs to know itself to know you, and it needs to know you to know itself.
There's a second problem. The bootstrap runs inside a single session. Everything the agent learns during the conversation is ephemeral context. It exists in the window, but it's not written down anywhere persistent until the very end, when it tries to one-shot all of it into three markdown files at once. There's no compounding, no scaffolding where it writes down what it's learned about your identity, then pulls from that when asking about you as a user, then pulls from both when designing the soul. Each question should be building on rich context the agent has already committed to paper. Instead, it's working from a thin set of instructions and whatever it can hold in memory.
OpenClaw identified the right problem. It just can't break the loop.
We built something that does.
How Anson breaks the loop
Anson is a skill you install into your OpenClaw project. The first thing it does is look around. If you already have an existing project with context (an agents.md, some skills, memory), it reads all of it. It doesn't start from zero if it doesn't have to.
Then it checks whether you have a good way to create new skills. If you do, great. If not, it installs one. This matters because of what comes next.
Anson doesn't ask you what your agent's personality should be. It doesn't ask you to fill out a template. It has instructions to interview you across three phases (identity, user, soul) but there are zero hardcoded questions anywhere in the system. Every question it asks is generated from whatever context it has gathered up to that point: what it found in your project, what you've said so far, what it's already written down.
In the first phase, it asks you something about identity. A question framed around what it already knows about you. Something like “what do you believe the boundaries of an agent should be?” Bespoke, designed to tease out a meaningful response from you specifically. From your answer, it generates a skill, a bespoke identity-maker tailored to you, and invokes that skill to interview you and populate the identity. The questions that skill asks are also generative. Nothing is prescribed.
That skill persists. It has two modes: a bootstrap mode for the first run, and an update mode that listens for anything that should evolve your identity over time.
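A generated maker skill might be structured something like this. The file name, headings, and wording are all illustrative, not Anson's actual output format:

```markdown
# identity-maker (generated by Anson for this specific user)

## Mode: bootstrap
Interview the user, pulling on everything already written down, and
populate IDENTITY.md. Generate every question from current context;
nothing is prescribed.

## Mode: update
When conversation reveals something that should evolve the identity,
revise IDENTITY.md in place rather than waiting for a new bootstrap.
```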
Then it does the same thing for your user profile. But now it's pulling from the identity it just wrote down. The questions are sharper because the agent already knows what it is.
Then it does it again for the soul. By this phase, it has rich context about its own identity and about you as a person. The questions it asks here are ones it literally could not have asked in phase one.
The final step: Anson updates your agents.md to reference each of these maker skills as direct instructions. Whenever the agent detects something in conversation that should update its identity, its understanding of you, or its soul, it invokes the right skill in update mode. The soul doesn't freeze after setup. It keeps evolving with every real interaction.
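The wiring into agents.md might read something like the following. The skill names and phrasing are illustrative, not Anson's actual output:

```markdown
## Maker skills

- When conversation reveals something about who you are, invoke
  `identity-maker` in update mode and revise IDENTITY.md.
- When you learn something new about the user, invoke `user-maker`
  in update mode and revise USER.md.
- When the texture of the relationship shifts, invoke `soul-maker`
  in update mode and revise SOUL.md.
```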
The prompt behind all of this is shockingly simple. The magic is in the scaffolding. Define the template shape through conversation, generate a bespoke skill, invoke the skill, persist the output, use it as context for the next phase. Repeat.
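The loop above can be sketched in a few lines. This is a toy illustration of the scaffolding pattern, not Anson's implementation: `ask` stands in for the LLM-driven interview step, and the key move is that each phase's output is persisted to disk and folded into the context for the next phase, instead of living only in the session window.

```python
from pathlib import Path

PHASES = ["identity", "user", "soul"]


def ask(question: str) -> str:
    # Placeholder for the interview step: in the real system this is a
    # bespoke, LLM-generated question and the user's answer.
    return f"(answer to: {question})"


def run_bootstrap(workspace: Path) -> list[Path]:
    context = ""  # compounds across phases instead of staying ephemeral
    written = []
    for phase in PHASES:
        # Every question is generated from whatever context has been
        # gathered so far; nothing is hardcoded.
        question = f"Given everything written down so far, what should {phase}.md capture?"
        answer = ask(question)

        out = workspace / f"{phase}.md"
        out.write_text(f"# {phase}\n\n{answer}\n")  # persist before moving on

        # The next phase pulls from everything already committed to paper.
        context += out.read_text()
        written.append(out)
    return written
```

The ordering matters: by the time the soul phase runs, `context` already contains the identity and user files, which is exactly why the soul questions can be sharper than anything a single-shot bootstrap could ask.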
What Li's agent looks like now
Li is our CTO. He's deep in the weeds on our product every day, ensuring the agent we sell to customers has genuine taste. He doesn't have the bandwidth to apply that same rigor to his own OpenClaw. But he needs it to perform well, because it's acting as an extension of him. It lives in Slack. It talks to my OpenClaw to co-design things they're going to build together. It's a point of contact for the team when he's heads down or unavailable. It needs to understand its relationship with him so it can operate with real discernment.
He'd been telling his agent to be concise for weeks. It never stuck. Configuring the soul was never going to be a P0 task.
We ran Anson on his project. One of the early questions caught him off guard. It asked him what a good week actually feels like, and what breaks it. He paused. He answered from four different perspectives: as an engineer, as a CTO, as a business owner, as a person. He said the question was so thoughtful he felt obligated to give a thoughtful answer back.
That's the dynamic we should all be aiming for with our agents. The more insightful the questions, the more the person opens up, and it becomes genuine collaboration between a human and an agent designing something together. That's the spirit of OpenClaw. It just never had the scaffolding to behave that way at bootstrap.
Li read the generated soul and said, “That's exactly what I want my agent to be. I would never write this.” He's right. Faced with a blank markdown file, nobody would arrive at something that specific and that distilled. But a couple of well-placed questions, each one building on the last, got there in a single sitting.
Try it yourself
Anson is open source. You can install it into any personal agent. It has some defaults around OpenClaw, but it works even if you're running everything in Claude Code or somewhere else. Run the full bootstrap in one session. It'll read whatever context you already have, interview you across identity, user, and soul, and leave behind persistent skills that keep your agent evolving over time.
GitHub: github.com/superpositionhq/anson
ClawHub: clawhub.ai/edmundworks/anson
If you've been running your agent with a thin soul or no soul at all, this is the fastest way to close the gap. By having a conversation.
