Bake your first model: write a Yoda persona directly into a model’s weights. After baking, the model will speak like Yoda without needing any instruction.
Completed the Quickstart? You’re ready to bake. Otherwise, set up the SDK first.

The Goal

Bake Yoda’s personality into a model so it ALWAYS speaks like Yoda: no system prompt needed at inference time.

Complete Workflow in 4 Steps

1

Create Repository & Prompts

Begin by creating a new repository, then define the teacher and student prompts.
from aibread import Bread

client = Bread()

# Create repository
repo = client.repo.set(repo_name="yoda_model")

# Teacher prompt: The personality to bake in
client.prompts.set(
    prompt_name="yoda_teacher_prompt",
    repo_name="yoda_model",
    messages=[{
        "role": "system",
        "content": "You are Yoda. Speak like Yoda, use inverted syntax, few words, and wise, cryptic tone, always calm and reflective."
    }]
)

# Student prompt: Empty for always-on behavior
client.prompts.set(
    prompt_name="empty_student_prompt",
    repo_name="yoda_model",
    messages=[{
        "role": "system",
        "content": ""  # Empty = model ALWAYS acts like Yoda
    }]
)
Empty student prompts (student_prompt = "") mean the model exhibits the baked behavior with zero system-prompt tokens at inference time. The personality is truly in the weights, not the prompt.
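For contrast, here is a sketch of the alternative: a non-empty student prompt (the name and wording below are illustrative, not part of this guide) ties the baked behavior to a short trigger prompt the model will actually see at inference time, instead of making it always-on.
# Illustrative only: a NON-empty student prompt (hypothetical name and content).
# With this, the baked behavior would be associated with the short trigger
# prompt below rather than being always-on.
client.prompts.set(
    prompt_name="short_trigger_student_prompt",  # hypothetical
    repo_name="yoda_model",
    messages=[{
        "role": "system",
        "content": "Speak like Yoda."  # short prompt the model sees at inference
    }]
)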
2

Configure Target

Configure a target that captures Yoda’s personality using a variety of stim generators. Here we use both hardcoded questions and the pre-defined “persona” generator, which produces persona-tailored user prompts.
target = client.targets.set(
    target_name="yoda_target",
    repo_name="yoda_model",
    template="default",
    overrides={
        "generators": [
            {
                "type": "hardcoded",
                "numq": 4,
                "questions": [
                    "How can I find balance in the Force?",
                    "Hello, this is Anakin Skywalker",
                    "How tall are you?",
                    "Teach me about patience."
                ]
            },
            {
                "type": "persona",
                "numq": 450
            }
        ],
        "model_name": "Qwen/Qwen3-32B",
        "teacher_prompt": "yoda_teacher_prompt",    # Teacher: Yoda personality
        "student_prompt": "empty_student_prompt"    # Student: empty (always-on)
    }
)
The generators create the questions (stimuli) that will elicit Yoda-like responses. In production, use multiple generators for greater data diversity.
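As a rough sketch of a more diverse configuration, using only the generator types documented above (repeating entries and the target name here are assumptions for illustration):
# Sketch only: a target with a more varied generator mix.
# Uses only the "hardcoded" and "persona" types shown in this guide;
# combining multiple entries like this is an assumption, not documented here.
diverse_target = client.targets.set(
    target_name="yoda_target_diverse",   # hypothetical name
    repo_name="yoda_model",
    template="default",
    overrides={
        "generators": [
            {"type": "persona", "numq": 300},
            {
                "type": "hardcoded",
                "numq": 3,
                "questions": [
                    "What should I do when I feel angry?",
                    "Is it ever wise to give up?",
                    "How do I know which path to choose?"
                ]
            }
        ],
        "model_name": "Qwen/Qwen3-32B",
        "teacher_prompt": "yoda_teacher_prompt",
        "student_prompt": "empty_student_prompt"
    }
)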
3

Generate Training Data

Run stim to generate the questions, then rollout to generate responses from the Yoda-prompted (teacher) model:
# Generate stimuli (questions)
client.targets.stim.run(
    target_name="yoda_target",
    repo_name="yoda_model"
)

# Generate trajectories (Yoda's responses to questions)
client.targets.rollout.run(
    target_name="yoda_target",
    repo_name="yoda_model"
)
These jobs run asynchronously. In production, you’ll want to poll for completion. See Production Patterns for polling examples.
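A minimal polling sketch, assuming the SDK exposes a status call for stim and rollout jobs; the status() method and the “completed”/“failed” states below are hypothetical, so refer to Production Patterns for the documented pattern.
import time

def wait_until_done(job, **ids):
    # Hypothetical: `status()` and its "completed"/"failed" return values
    # are assumptions, not documented API.
    while True:
        state = job.status(**ids)
        if state in ("completed", "failed"):
            return state
        time.sleep(30)  # poll every 30 seconds

wait_until_done(client.targets.stim, target_name="yoda_target", repo_name="yoda_model")
wait_until_done(client.targets.rollout, target_name="yoda_target", repo_name="yoda_model")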
4

Configure and Run Bake

Lastly, configure your bake. You can set standard training hyperparameters as well as the concentration of each target’s trajectories in the final bake dataset.
# Configure bake
bake = client.bakes.set(
    bake_name="yoda_bake",
    repo_name="yoda_model",
    template="default",
    overrides={
        "datasets": [
            {"target": "yoda_target", "weight": 1.0}
        ],
        "epochs": 3,
        "micro_batch_size": 1
    }
)

# Run training
client.bakes.run(
    bake_name="yoda_bake",
    repo_name="yoda_model"
)
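To illustrate how the datasets weights control the training mix, here is a sketch that blends trajectories from two targets; the second target name is hypothetical and not created in this guide, and treating the weights as relative concentrations is an assumption based on the description above.
# Sketch only: blending trajectories from two targets.
# "formal_tone_target" is a hypothetical second target, not part of this guide,
# and the 0.8/0.2 split assumes weights act as relative concentrations.
blend_bake = client.bakes.set(
    bake_name="yoda_blend_bake",
    repo_name="yoda_model",
    template="default",
    overrides={
        "datasets": [
            {"target": "yoda_target", "weight": 0.8},        # mostly Yoda trajectories
            {"target": "formal_tone_target", "weight": 0.2}  # smaller share from the second target
        ],
        "epochs": 3,
        "micro_batch_size": 1
    }
)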

The Result

After baking completes, your model will speak like Yoda automatically:
Before Baking

System Prompt: “You are Yoda. Speak like Yoda…”

User: “Teach me about patience”

Assistant: “Patience, you must learn. The Jedi way, slow and sure it is.”

Cost: 50+ system prompt tokens on every request

After Baking

System Prompt: ""

User: “Teach me about patience”

Assistant: “Patience, you must learn. The Jedi way, slow and sure it is.”

Cost: 0 system prompt tokens; the behavior is baked into the weights!

Understanding the Workflow

1. Teacher prompt = What to bake in
The detailed Yoda personality prompt that defines the desired behavior.
2. Student prompt = What triggers the behavior
Empty string means the model ALWAYS exhibits the baked behavior.
3. Stim = Generate situations
Create questions where Yoda’s wisdom would apply.
4. Rollout = Capture prompted responses
Generate Yoda’s responses to those questions using the teacher prompt.
5. Bake = Train the model
Update model weights so it behaves like Yoda without needing the prompt.

Next Steps