Core Concepts

Adaptive Personalization

Orbit builds user understanding from repeated behavior and feedback outcomes. You get inferred profile memory without adding a second personalization service.

What Orbit infers automatically

inferred_preference

Derived from feedback trends on assistant style. Example: user performs better with concise, step-based explanations.

inferred_learning_pattern

Derived from repeated semantically similar user behavior. Example: user repeatedly struggles with loop boundaries.
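To build intuition for how repeated-behavior inference can work, here is an illustrative sketch, not Orbit's actual implementation: it groups events with a toy token-overlap (Jaccard) similarity instead of embeddings, and emits a pattern once enough mutually similar events recur, mirroring the repeat-threshold and similarity-threshold controls described under "Runtime controls". All function names here are hypothetical.

```python
# Illustrative only: Orbit uses semantic embeddings internally, not this
# toy token-set similarity. Names and thresholds are hypothetical.

def jaccard(a: str, b: str) -> float:
    """Toy similarity: token-set overlap between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def infer_learning_pattern(events, similarity_threshold=0.2, repeat_threshold=3):
    """Return a pattern memory once enough mutually similar events recur."""
    for anchor in events:
        similar = [e for e in events if jaccard(anchor, e) >= similarity_threshold]
        if len(similar) >= repeat_threshold:
            return {"type": "inferred_learning_pattern", "examples": similar}
    return None

events = [
    "I keep getting off-by-one errors in my for loop",
    "why does my for loop skip the last element",
    "my for loop runs one time too many",
]
pattern = infer_learning_pattern(events)
```

With three related questions about loop boundaries, the sketch crosses the repeat threshold and produces a pattern memory, which is the shape of signal Orbit turns into an inferred_learning_pattern.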

Fact inference + contradiction guard

Orbit now infers structured user facts from plain language (for example, allergies or family preferences) and flags safety-critical contradictions so your application can ask for clarification before relying on them.

fact_inference_guard.py
# User statements
orbit.ingest(content="I am allergic to pineapple.", event_type="user_question", entity_id="alice")
orbit.ingest(content="I am not allergic to pineapple anymore.", event_type="user_question", entity_id="alice")

results = orbit.retrieve(
    query="What should I consider before suggesting recipes?",
    entity_id="alice",
    limit=10,
)

for memory in results.memories:
    fact = memory.metadata.get("fact_inference")
    provenance = memory.metadata.get("inference_provenance", {})
    if fact:
        print(fact["fact_key"], fact["status"], fact["clarification_required"])
    if provenance.get("clarification_required"):
        # safety-sensitive contradiction detected
        print("Ask clarification before relying on this fact")

retrieve_payload_example.json
{
  "intent": "inferred_user_fact_conflict",
  "fact_inference": {
    "subject": "user",
    "fact_key": "allergy:pineapple",
    "fact_type": "constraint",
    "polarity": null,
    "status": "contested",
    "critical_fact": true,
    "clarification_required": true,
    "conflicts_with_memory_ids": ["<memory-id>"]
  },
  "inference_provenance": {
    "inference_type": "fact_conflict_guard_v1",
    "clarification_required": true,
    "conflicts_with_memory_ids": ["<memory-id>"]
  }
}
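The contradiction guard in the payload above can be approximated with a small standalone sketch. The code below is illustrative, not Orbit's implementation: it extracts an allergy fact with a polarity from plain statements and marks the fact as contested (requiring clarification) when polarities conflict, matching the "status": "contested" / "clarification_required": true fields shown above.

```python
import re

def extract_allergy_fact(statement: str):
    """Toy extraction: 'allergic to X' -> fact_key plus polarity. Illustrative only."""
    m = re.search(r"(not )?allergic to (\w+)", statement.lower())
    if not m:
        return None
    return {"fact_key": f"allergy:{m.group(2)}", "polarity": m.group(1) is None}

def contradiction_guard(statements):
    """Mark a fact as contested when later statements flip its polarity."""
    facts = {}
    for s in statements:
        fact = extract_allergy_fact(s)
        if fact is None:
            continue
        key = fact["fact_key"]
        if key in facts and facts[key]["polarity"] != fact["polarity"]:
            facts[key].update(status="contested", clarification_required=True)
        else:
            facts[key] = {**fact, "status": "asserted", "clarification_required": False}
    return facts

facts = contradiction_guard([
    "I am allergic to pineapple.",
    "I am not allergic to pineapple anymore.",
])
```

The two statements from the example above disagree on polarity, so the fact ends up contested and flagged for clarification rather than silently overwritten.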

Minimal integration (FastAPI chatbot)

personalization_chatbot.py
from orbit import MemoryEngine

orbit = MemoryEngine(api_key="<jwt-token>")

# Wire these handlers into your web framework (e.g., FastAPI route functions).
def handle_chat(user_id: str, message: str) -> str:
    # 1) store user input
    orbit.ingest(
        content=message,
        event_type="user_question",
        entity_id=user_id,
    )

    # 2) retrieve personalized context
    context = orbit.retrieve(
        query=f"What should I know about {user_id} for: {message}",
        entity_id=user_id,
        limit=5,
    )

    # 3) call your LLM
    answer = "<llm-response>"

    # 4) store assistant output
    orbit.ingest(
        content=answer,
        event_type="assistant_response",
        entity_id=user_id,
    )
    return answer

def handle_feedback(memory_id: str, helpful: bool) -> None:
    orbit.feedback(
        memory_id=memory_id,
        helpful=helpful,
        outcome_value=1.0 if helpful else -1.0,
    )
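To unit-test handlers like the ones above without network calls, one option is an in-memory fake that mimics the three methods used (ingest, retrieve, feedback). This is a sketch under the method signatures assumed from the snippet above, not an official Orbit test double.

```python
# Hypothetical test double: mirrors only the call signatures used in
# personalization_chatbot.py, not the real MemoryEngine behavior.
from types import SimpleNamespace

class FakeMemoryEngine:
    def __init__(self):
        self.events, self.feedback_log = [], []

    def ingest(self, content, event_type, entity_id):
        self.events.append({"content": content, "event_type": event_type,
                            "entity_id": entity_id})

    def retrieve(self, query, entity_id, limit):
        # Return stored events for the entity, capped at `limit`.
        memories = [SimpleNamespace(**e, metadata={}) for e in self.events
                    if e["entity_id"] == entity_id][:limit]
        return SimpleNamespace(memories=memories)

    def feedback(self, memory_id, helpful, outcome_value):
        self.feedback_log.append((memory_id, helpful, outcome_value))

fake = FakeMemoryEngine()
fake.ingest(content="hello", event_type="user_question", entity_id="alice")
context = fake.retrieve(query="anything", entity_id="alice", limit=5)
```

Injecting a fake like this (instead of constructing MemoryEngine at module scope) keeps chat-handler tests fast and deterministic.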

Quick personalization test

1. Use the same entity_id and ask related questions 3+ times.
2. Record helpful and unhelpful outcomes with feedback calls.
3. Retrieve with the same entity_id and inspect top-k ordering.
4. Verify inferred memories appear with provenance metadata.

test_personalization.py
# Reuses the `orbit` client created in personalization_chatbot.py above.
results = orbit.retrieve(
    query="What do we know about this learner's weak spots?",
    entity_id="alice",
    limit=10,
)
for memory in results.memories:
    provenance = memory.metadata.get("inference_provenance")
    print(memory.event_type, memory.content, provenance)

Runtime controls

| Variable | Default | Purpose |
| --- | --- | --- |
| MDE_ENABLE_ADAPTIVE_PERSONALIZATION | true | Enable inferred memory generation |
| MDE_PERSONALIZATION_REPEAT_THRESHOLD | 3 | Repeated signals needed for pattern inference |
| MDE_PERSONALIZATION_SIMILARITY_THRESHOLD | 0.82 | Semantic similarity threshold |
| MDE_PERSONALIZATION_WINDOW_DAYS | 30 | Observation window (days) |
| MDE_PERSONALIZATION_MIN_FEEDBACK_EVENTS | 4 | Feedback count required for preference inference |
| MDE_PERSONALIZATION_PREFERENCE_MARGIN | 2.0 | Confidence margin before writing a preference memory |