
    Understanding Memories

    Memories are the fundamental building blocks of Memory OS. Learn how they work, what makes them special, and how to use them effectively.

    What is a Memory?

    A memory is a piece of information that your AI can store, retrieve, and reason about. Unlike traditional databases that store static data, memories are designed for AI systems that need to understand context, make connections, and evolve their understanding over time.

    Think of memories as "smart data": they're not just stored, they're understood.

    Traditional Database Record

    id: 123
    text: "User likes Python"
    timestamp: 2024-01-15

    Static data that can only be found by an exact match.

    Memory OS Memory

    content: "User prefers Python for data science"
    embedding: [0.1, -0.3, 0.8, ...]
    connections: ["programming", "data"]
    importance: 0.85

    Semantically indexed data that can be found by meaning, not just keywords.

    Anatomy of a Memory

    Core Components

    📝

    Content

    The actual information - text, facts, observations, or insights.

    "Sarah prefers technical explanations and is working on API integration"
    🏷️

    Memory Type

    Categorizes the memory for better organization and retrieval.

    conversation, preference, fact
    🧠

    Semantic Embedding

    A vector representation that captures the meaning, enabling semantic search.

    [0.1, -0.3, 0.8, 0.2, ...] (1536 dimensions)
    📊

    Metadata

    Additional context like timestamps, importance scores, and custom tags.

    {
      "timestamp": "2024-01-15T10:30:00Z",
      "importance": 0.85,
      "source": "chat_conversation",
      "confidence": "high"
    }
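    The four components above can be sketched as a single record. This is an illustrative model only; the field names here are not Memory OS's actual internal schema.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    """Illustrative shape of a memory; not the real Memory OS schema."""
    content: str                # the information itself
    memory_type: str            # "conversation", "preference", "fact", ...
    embedding: list[float]      # semantic vector (e.g. 1536 dimensions)
    metadata: dict = field(default_factory=dict)  # timestamps, importance, tags

sarah = Memory(
    content="Sarah prefers technical explanations",
    memory_type="preference",
    embedding=[0.1, -0.3, 0.8],  # truncated for readability
    metadata={"importance": 0.85, "source": "chat_conversation"},
)
```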

    Types of Memories

    Different types of information require different handling. Memory OS supports various memory types, each optimized for specific use cases:

    💬

    Conversation

    Dialogue history, questions asked, and responses given.

    "User asked about API rate limits"
    "Assistant explained exponential backoff"
    ⚙️

    Preference

    User preferences, communication styles, and personal choices.

    "Prefers detailed technical explanations"
    "Likes dark mode interfaces"
    📋

    Fact

    Objective information, data points, and established knowledge.

    "Sarah is a software engineer"
    "Company uses Python for backend"
    💡

    Insight

    Derived understanding, patterns, and synthesized knowledge.

    "User struggles with authentication concepts"
    "Prefers code examples over theory"

    💡 Pro Tip: Custom Memory Types

    You can create custom memory types for your specific use case: "research_finding", "bug_report", "feature_request", etc.
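    To make the tip concrete: custom types are just strings, so filtering on them needs nothing special. The sketch below uses a plain in-memory list as a stand-in for the real client.

```python
# Plain in-memory stand-in for the client, just to show the idea.
memories = []

def remember(content, memory_type):
    """Store a record with whatever type string fits your use case."""
    memories.append({"content": content, "memory_type": memory_type})

remember("Search latency spikes above 10k memories", "bug_report")   # custom type
remember("Add CSV export for search results", "feature_request")     # custom type
remember("User prefers Python", "preference")                        # built-in type

# Custom types filter exactly like built-in ones
bug_reports = [m for m in memories if m["memory_type"] == "bug_report"]
```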

    Memory Lifecycle

    Memories aren't static - they have a lifecycle that mirrors how human memory works:

    1

    Creation

    When new information is encountered, it's processed and stored as a memory with semantic embeddings.

    memory_client.create_memory(
        content="User prefers Python",
        memory_type="preference"
    )
    2

    Retrieval

    Memories are retrieved based on semantic similarity, not just keyword matching.

    # Query: "programming languages"
    # Finds: "User prefers Python" (semantic match)
    3

    Reinforcement

    Frequently accessed memories become stronger and more important over time.

    # Each access increases importance score
    importance: 0.5 → 0.7 → 0.85
    4

    Decay

    Unused memories gradually become less important, mimicking natural forgetting.

    # Automatic decay over time
    importance: 0.8 → 0.6 → 0.3 (if unused)
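    One plausible way to model steps 3 and 4: each access nudges importance a fraction of the way toward 1.0, while disuse halves it on a fixed half-life. This is a sketch of the idea; the `boost` and `half_life_days` values are assumptions, not Memory OS's actual scoring formula.

```python
def reinforce(importance: float, boost: float = 0.4) -> float:
    """Each access moves importance a fraction of the way toward 1.0."""
    return min(1.0, importance + boost * (1.0 - importance))

def decay(importance: float, days_unused: float, half_life_days: float = 30.0) -> float:
    """Halve importance every half_life_days of disuse (exponential decay)."""
    return importance * 0.5 ** (days_unused / half_life_days)

# Reinforcement over two accesses (with boost=0.4)
s = reinforce(0.5)   # ≈ 0.7
s = reinforce(s)     # ≈ 0.82
```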

    The Power of Semantic Search

    Traditional databases require exact matches. Memory OS understands meaning. This is what makes AI memory truly intelligent.

    Traditional Search

    Query: "Python"
    Finds: Records containing "Python"
    Query: "programming language"
    Finds: Nothing (no exact match)

    Semantic Search

    Query: "Python"
    Finds: "User prefers Python", "Python developer", "loves coding"
    Query: "programming language"
    Finds: "User prefers Python", "JavaScript experience", "learning Rust"

    πŸ” Real Example

    A user asks: "What do you know about my coding preferences?"

    Memory OS finds: "prefers Python", "likes clean code", "uses VS Code", "interested in machine learning" - even though none of these contain the exact words "coding preferences".
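    Under the hood, semantic matching of this kind typically works by comparing embedding vectors, for example with cosine similarity. The toy 3-dimensional vectors below are made-up values for illustration; real embeddings have on the order of 1536 dimensions.

```python
import math

def cosine_similarity(a, b):
    """Similarity of two embedding vectors by angle (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hand-made toy embeddings (illustrative values, not real model output)
python_pref = [0.9, 0.1, 0.2]   # "User prefers Python"
prog_lang   = [0.8, 0.3, 0.1]   # query: "programming language"
dark_mode   = [0.1, 0.2, 0.9]   # "Likes dark mode interfaces"

cosine_similarity(python_pref, prog_lang)  # high: related meaning, no shared keywords
cosine_similarity(python_pref, dark_mode)  # low: unrelated meaning
```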

    Memory in Action: Code Examples

    Creating Memories

    from memorystack import MemoryStackClient
    
    # Initialize client
    memory = MemoryStackClient(
        api_key="your_api_key",
        user_id="user_123"
    )
    
    # Store a conversation
    memory.create_memory(
        content="User asked about API rate limits and seemed confused",
        memory_type="conversation",
        metadata={
            "topic": "api",
            "sentiment": "confused",
            "importance": 0.7
        }
    )
    
    # Store a preference
    memory.create_memory(
        content="User prefers detailed technical explanations with code examples",
        memory_type="preference",
        metadata={
            "category": "communication_style",
            "confidence": "high"
        }
    )
    
    # Store a fact
    memory.create_memory(
        content="User is a senior Python developer at TechCorp",
        memory_type="fact",
        metadata={
            "verified": True,
            "source": "user_profile"
        }
    )

    Searching Memories

    # Semantic search - finds related concepts
    results = memory.search_memories(
        query="programming experience",
        limit=5
    )
    
    # Filter by memory type
    preferences = memory.search_memories(
        query="communication",
        memory_type="preference",
        limit=3
    )
    
    # Search with metadata filters
    recent_conversations = memory.search_memories(
        query="API questions",
        memory_type="conversation",
        metadata_filter={
            "topic": "api",
            "timestamp": "> 2024-01-01"
        }
    )
    
    # Process results
    for result in results['results']:
        print(f"Content: {result['content']}")
        print(f"Relevance: {result['relevance_score']}")
        print(f"Type: {result['memory_type']}")
        print("---")

    Building Context for AI

    def get_user_context(user_message: str) -> str:
        """Build rich context for AI responses"""
        
        # Get relevant memories
        memories = memory.search_memories(
            query=user_message,
            limit=10
        )
        
        # Organize by type
        context = {
            "preferences": [],
            "facts": [],
            "recent_conversations": []
        }
        
        for mem in memories['results']:
            mem_type = mem['memory_type']
            if mem_type == "preference":
                context["preferences"].append(mem['content'])
            elif mem_type == "fact":
                context["facts"].append(mem['content'])
            elif mem_type == "conversation":
                context["recent_conversations"].append(mem['content'])
        
        # Build the context string (joins done outside the f-string,
        # since f-string expressions cannot contain backslashes before 3.12)
        preferences_text = "\n".join(context['preferences'])
        facts_text = "\n".join(context['facts'])
        conversations_text = "\n".join(context['recent_conversations'][:3])

        context_str = f"""
    User Preferences:
    {preferences_text}

    Known Facts:
    {facts_text}

    Recent Conversations:
    {conversations_text}
    """
        
        return context_str
    
    # Use in AI prompt
    user_message = "Can you help me with authentication?"
    context = get_user_context(user_message)
    
    ai_prompt = f"""
    Context about the user:
    {context}
    
    User's current question: {user_message}
    
    Provide a helpful response that takes into account their preferences and background.
    """

    Best Practices

    ✅ Do

    • Use descriptive, natural language content
    • Include relevant metadata for filtering
    • Choose appropriate memory types
    • Store both user input and AI responses
    • Use semantic search for better retrieval

    ❌ Don't

    • Store raw data without context
    • Use overly technical or cryptic content
    • Ignore memory types (use "general" sparingly)
    • Store duplicate information unnecessarily
    • Rely only on keyword-based searches

    💡 Pro Tips

    Quality over Quantity: Better to have fewer, well-crafted memories than many low-quality ones.

    Context is King: Include enough context so the memory makes sense when retrieved later.

    Metadata Matters: Rich metadata enables powerful filtering and organization.

    Test Your Queries: Regularly test how well your memories are retrieved with different queries.

    Next Steps