    LangChain Integration

    Add persistent memory to your LangChain applications. MemoryStack integrates seamlessly with LangChain to provide semantic memory across conversations.

    ✓ LangChain.js   ✓ LangChain Python   ✓ Memory Persistence

    Installation

    Install both LangChain and the MemoryStack SDK:

    npm install langchain @memorystack/sdk
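
    Before running the examples, make your MemoryStack API key available to the process. A minimal sketch, assuming you load it from the environment with dotenv (any configuration mechanism works):

    import "dotenv/config"; // assumes the dotenv package is installed

    // Fail fast if the key is missing so SDK calls don't fail later
    const apiKey = process.env.MEMORYSTACK_API_KEY;
    if (!apiKey) {
      throw new Error("MEMORYSTACK_API_KEY is not set");
    }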

    Basic Integration

    Create a custom memory class that extends LangChain's BaseChatMemory and delegates storage to MemoryStack:

    import { BaseChatMemory } from "langchain/memory";
    import { MemoryStackClient } from "@memorystack/sdk";
    import { InputValues, OutputValues } from "langchain/schema";
    
    // Custom MemoryStack integration for LangChain
    export class MemoryStackChatMemory extends BaseChatMemory {
      private client: MemoryStackClient;
      private userId?: string;
    
      constructor(apiKey: string, userId?: string) {
        super();
        this.client = new MemoryStackClient({ apiKey });
        this.userId = userId;
      }
    
      // LangChain reads this getter to learn which variable names
      // the memory provides to the chain
      get memoryKeys(): string[] {
        return ["history"];
      }

      // Load conversation history from MemoryStack; the chain's input
      // values are passed in but are not needed here
      async loadMemoryVariables(_values: InputValues): Promise<{ history: string }> {
        const memories = await this.client.listMemories({
          user_id: this.userId,
          limit: 10,
          order: "desc"
        });
    
        // Format memories as conversation history
        const history = memories.results
          .map(m => `[${m.memory_type}] ${m.content}`)
          .join("\n");
    
        return { history };
      }
    
      // Save each conversation turn to MemoryStack
      async saveContext(
        inputValues: InputValues,
        outputValues: OutputValues
      ): Promise<void> {
        const userMessage = inputValues.input || inputValues.question;
        const aiMessage = outputValues.output || outputValues.answer;
    
        await this.client.addConversation(
          userMessage,
          aiMessage,
          this.userId
        );
      }
    
      // Clear memory (optional): a no-op here; wire it to a MemoryStack
      // deletion call if your application needs to reset a user's history
      async clear(): Promise<void> {
        // Implement if needed
      }
    }

    Usage Example

    Use MemoryStack with LangChain chains:

    import { ChatOpenAI } from "langchain/chat_models/openai";
    import { ConversationChain } from "langchain/chains";
    import { MemoryStackChatMemory } from "./memorystack-memory";
    
    // Initialize MemoryStack memory
    const memory = new MemoryStackChatMemory(
      process.env.MEMORYSTACK_API_KEY!,
      "user_123" // Optional user ID
    );
    
    // Create a conversation chain backed by MemoryStack
    const model = new ChatOpenAI({
      temperature: 0.7,
      modelName: "gpt-4"
    });
    
    const chain = new ConversationChain({
      llm: model,
      memory: memory
    });
    
    // Have a conversation - memories are automatically saved
    const response1 = await chain.call({
      input: "I love TypeScript and prefer dark mode"
    });
    
    console.log(response1.response);
    
    // Later conversation - previous context is loaded
    const response2 = await chain.call({
      input: "What programming language do I prefer?"
    });
    
    console.log(response2.response);
    // Output: "You mentioned that you love TypeScript!"
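
    Because MemoryStackChatMemory takes an optional user ID, each user's history stays isolated. Continuing the example above, a minimal multi-user sketch (the user IDs are placeholders):

    // Separate memory instances keep each user's conversations apart
    const aliceMemory = new MemoryStackChatMemory(
      process.env.MEMORYSTACK_API_KEY!,
      "user_alice"
    );
    const bobMemory = new MemoryStackChatMemory(
      process.env.MEMORYSTACK_API_KEY!,
      "user_bob"
    );

    // Each chain reads and writes only its own user's memories
    const aliceChain = new ConversationChain({ llm: model, memory: aliceMemory });
    const bobChain = new ConversationChain({ llm: model, memory: bobMemory });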

    Advanced: RAG with MemoryStack

    Combine MemoryStack with LangChain's RAG capabilities:

    import { RetrievalQAChain } from "langchain/chains";
    import { ChatOpenAI } from "langchain/chat_models/openai";
    import { OpenAIEmbeddings } from "langchain/embeddings/openai";
    import { MemoryVectorStore } from "langchain/vectorstores/memory";
    import { MemoryStackClient } from "@memorystack/sdk";
    
    // Create a RAG chain with MemoryStack context
    async function createRAGWithMemory() {
      const memoryClient = new MemoryStackClient({
        apiKey: process.env.MEMORYSTACK_API_KEY!
      });
    
      // Get user's memories for context
      const memories = await memoryClient.getPersonalMemories(20);
      
      // Add memories to vector store
      const vectorStore = await MemoryVectorStore.fromTexts(
        memories.results.map(m => m.content),
        memories.results.map(m => ({ type: m.memory_type })),
        new OpenAIEmbeddings()
      );
    
      // Create RAG chain
      const chain = RetrievalQAChain.fromLLM(
        new ChatOpenAI({ modelName: "gpt-4" }),
        vectorStore.asRetriever()
      );
    
      return chain;
    }
    
    // Use the RAG chain
    const ragChain = await createRAGWithMemory();
    const answer = await ragChain.call({
      query: "What are my preferences?"
    });
    
    console.log(answer.text);
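
    You can also query the vector store directly when you want the relevant memories themselves rather than a generated answer. A sketch, assuming a vectorStore built the same way as inside createRAGWithMemory (the query string is illustrative):

    // Retrieve the top 3 memories most similar to a natural-language query
    const relevant = await vectorStore.similaritySearch("coding preferences", 3);

    for (const doc of relevant) {
      console.log(doc.pageContent, doc.metadata);
    }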

    Benefits

    🧠 Semantic Memory

    Automatically extract and store semantic facts, not just raw conversation history.

    💾 Persistent Storage

    Memories persist across sessions and application restarts.

    👥 Multi-User Support

    Easily manage memories for multiple users in B2B applications.

    🔍 Semantic Search

    Retrieve relevant memories using natural language queries.