# Adaptive Personalization

Orbit builds user understanding from repeated behavior and feedback outcomes, so you get inferred profile memory without running a second personalization service.
## What Orbit infers automatically

- `inferred_preference`: derived from feedback trends on assistant style. Example: the user performs better with concise, step-based explanations.
- `inferred_learning_pattern`: derived from repeated, semantically similar user behavior. Example: the user repeatedly struggles with loop boundaries.
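As a rough mental model (not Orbit's internal algorithm), an `inferred_learning_pattern` is what falls out of counting semantically similar events and promoting topics that cross a repeat threshold. A minimal sketch, using naive token overlap as a stand-in for real semantic similarity:

```python
from collections import Counter

REPEAT_THRESHOLD = 3  # assumed value; Orbit exposes MDE_PERSONALIZATION_REPEAT_THRESHOLD

def token_overlap(a: str, b: str) -> float:
    """Toy similarity stand-in: Jaccard overlap of lowercased tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def infer_patterns(events: list[str], threshold: float = 0.4) -> list[str]:
    """Bucket events by similarity to an earlier representative, then
    report representatives seen REPEAT_THRESHOLD or more times."""
    counts: Counter = Counter()
    for event in events:
        for seen in counts:
            if token_overlap(event, seen) >= threshold:
                counts[seen] += 1
                break
        else:
            counts[event] = 1
    return [rep for rep, n in counts.items() if n >= REPEAT_THRESHOLD]

events = [
    "why does my for loop skip the last item",
    "my for loop skip the last element again",
    "for loop skip last item off by one",
    "how do I read a csv file",
]
print(infer_patterns(events))  # -> ['why does my for loop skip the last item']
```

The single csv question never repeats, so it produces no pattern; the three loop-boundary questions cluster and cross the threshold.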
## Fact inference and contradiction guard

Orbit infers structured user facts from plain language (for example, allergies or family preferences) and flags safety-critical contradictions for clarification.
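The guard's behavior can be approximated like this (a sketch of the idea, not the shipped implementation): each statement normalizes to a `fact_key` plus a polarity, and when a new polarity disagrees with a stored one on a safety-sensitive key such as an allergy, the fact becomes contested and requires clarification. The `CRITICAL_PREFIXES` set below is an assumption for illustration:

```python
CRITICAL_PREFIXES = ("allergy:",)  # assumed list of safety-sensitive fact keys

def reconcile(stored: dict, incoming: dict) -> dict:
    """Merge an incoming fact into a stored fact with the same fact_key."""
    if stored["polarity"] == incoming["polarity"]:
        return {**stored, "status": "confirmed"}
    critical = stored["fact_key"].startswith(CRITICAL_PREFIXES)
    return {
        "fact_key": stored["fact_key"],
        "polarity": None,  # unknown until the user clarifies
        "status": "contested",
        "critical_fact": critical,
        "clarification_required": critical,
    }

stored = {"fact_key": "allergy:pineapple", "polarity": "positive", "status": "active"}
incoming = {"fact_key": "allergy:pineapple", "polarity": "negative"}
print(reconcile(stored, incoming)["status"])  # -> contested
```

This matches the metadata shape below: a contested critical fact keeps `polarity` null and sets `clarification_required`.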
```python
# User statements
orbit.ingest(content="I am allergic to pineapple.", event_type="user_question", entity_id="alice")
orbit.ingest(content="I am not allergic to pineapple anymore.", event_type="user_question", entity_id="alice")

results = orbit.retrieve(
    query="What should I consider before suggesting recipes?",
    entity_id="alice",
    limit=10,
)

for memory in results.memories:
    fact = memory.metadata.get("fact_inference")
    provenance = memory.metadata.get("inference_provenance", {})
    if fact:
        print(fact["fact_key"], fact["status"], fact["clarification_required"])
    if provenance.get("clarification_required"):
        # Safety-sensitive contradiction detected.
        print("Ask clarification before relying on this fact")
```

A contested fact carries metadata like:

```json
{
  "intent": "inferred_user_fact_conflict",
  "fact_inference": {
    "subject": "user",
    "fact_key": "allergy:pineapple",
    "fact_type": "constraint",
    "polarity": null,
    "status": "contested",
    "critical_fact": true,
    "clarification_required": true,
    "conflicts_with_memory_ids": ["<memory-id>"]
  },
  "inference_provenance": {
    "inference_type": "fact_conflict_guard_v1",
    "clarification_required": true,
    "conflicts_with_memory_ids": ["<memory-id>"]
  }
}
```

## Minimal integration (FastAPI chatbot)
```python
from orbit import MemoryEngine

orbit = MemoryEngine(api_key="<jwt-token>")

def handle_chat(user_id: str, message: str) -> str:
    # 1) Store the user input.
    orbit.ingest(
        content=message,
        event_type="user_question",
        entity_id=user_id,
    )

    # 2) Retrieve personalized context.
    context = orbit.retrieve(
        query=f"What should I know about {user_id} for: {message}",
        entity_id=user_id,
        limit=5,
    )

    # 3) Call your LLM with `context`.
    answer = "<llm-response>"

    # 4) Store the assistant output.
    orbit.ingest(
        content=answer,
        event_type="assistant_response",
        entity_id=user_id,
    )
    return answer
```
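To exercise a handler like `handle_chat` without a live backend, a throwaway in-memory stub with the same ingest/retrieve surface is enough. This is a hypothetical test double, not part of the Orbit SDK:

```python
class FakeMemoryEngine:
    """In-memory stand-in for orbit.MemoryEngine, for local testing only."""

    def __init__(self) -> None:
        self.events: list = []

    def ingest(self, content: str, event_type: str, entity_id: str) -> None:
        # Record every event exactly as the real engine would receive it.
        self.events.append(
            {"content": content, "event_type": event_type, "entity_id": entity_id}
        )

    def retrieve(self, query: str, entity_id: str, limit: int = 5) -> list:
        # Crude recency-based retrieval: last `limit` events for this entity.
        matches = [e for e in self.events if e["entity_id"] == entity_id]
        return matches[-limit:]

orbit = FakeMemoryEngine()
orbit.ingest(content="hello", event_type="user_question", entity_id="alice")
orbit.ingest(content="hi there", event_type="assistant_response", entity_id="alice")
print(len(orbit.retrieve(query="anything", entity_id="alice")))  # -> 2
```

Swapping this in for the real `MemoryEngine` lets you assert that both the question and the answer were ingested for the right entity.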
```python
def handle_feedback(memory_id: str, helpful: bool) -> None:
    orbit.feedback(
        memory_id=memory_id,
        helpful=helpful,
        outcome_value=1.0 if helpful else -1.0,
    )
```

## Quick personalization test
1. Use the same `entity_id` and ask related questions three or more times.
2. Record helpful and unhelpful outcomes with `feedback` calls.
3. Retrieve with the same `entity_id` and inspect the top-k ordering.
4. Verify that inferred memories appear with provenance metadata.
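The re-ranking effect you should observe can be pictured with a toy score model (purely illustrative; Orbit's actual ranking is internal): memories that accumulate positive feedback outcomes float toward the top of the top-k ordering.

```python
def rank(memories: list, k: int = 3) -> list:
    """Order memory contents by accumulated feedback outcome, highest first."""
    ordered = sorted(memories, key=lambda m: m["outcome"], reverse=True)
    return [m["content"] for m in ordered][:k]

memories = [
    {"content": "long lecture-style answer", "outcome": -2.0},
    {"content": "concise step-based answer", "outcome": 3.0},
    {"content": "code-only answer", "outcome": 1.0},
]
print(rank(memories))
# -> ['concise step-based answer', 'code-only answer', 'long lecture-style answer']
```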
```python
results = orbit.retrieve(
    query="What do we know about this learner's weak spots?",
    entity_id="alice",
    limit=10,
)

for memory in results.memories:
    provenance = memory.metadata.get("inference_provenance")
    print(memory.event_type, memory.content, provenance)
```

## Runtime controls
- `MDE_ENABLE_ADAPTIVE_PERSONALIZATION`
- `MDE_PERSONALIZATION_REPEAT_THRESHOLD`
- `MDE_PERSONALIZATION_SIMILARITY_THRESHOLD`
- `MDE_PERSONALIZATION_WINDOW_DAYS`
- `MDE_PERSONALIZATION_MIN_FEEDBACK_EVENTS`
- `MDE_PERSONALIZATION_PREFERENCE_MARGIN`

## Next
API Reference ->
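The runtime controls above are ordinary environment variables and can be read with `os.getenv`. The fallback values in this sketch are placeholders, not Orbit's documented defaults:

```python
import os

# Fallbacks below are placeholders -- check the API Reference for real defaults.
enabled = os.getenv("MDE_ENABLE_ADAPTIVE_PERSONALIZATION", "true").lower() == "true"
repeat_threshold = int(os.getenv("MDE_PERSONALIZATION_REPEAT_THRESHOLD", "3"))
similarity_threshold = float(os.getenv("MDE_PERSONALIZATION_SIMILARITY_THRESHOLD", "0.8"))
window_days = int(os.getenv("MDE_PERSONALIZATION_WINDOW_DAYS", "30"))
min_feedback_events = int(os.getenv("MDE_PERSONALIZATION_MIN_FEEDBACK_EVENTS", "2"))
preference_margin = float(os.getenv("MDE_PERSONALIZATION_PREFERENCE_MARGIN", "0.1"))

print(enabled, repeat_threshold, similarity_threshold)
```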