🚀 Now Available on npm

AI Agent Memory
Made Simple

MindCache is a TypeScript library for managing short-term memory in AI agents through a simple, LLM-friendly key-value repository. Perfect for chatbots, AI assistants, and intelligent workflows.

100% Test Coverage
18.9kB Package Size
TypeScript First

Why MindCache?

Built specifically for AI agents and LLM integration

🤖

Auto Tool Generation

Automatically generate LLM tool schemas for AI agents to read and write memory with zero configuration

🧠

Auto Context Management

Intelligent context tracking and memory organization that adapts to your AI agent's needs

📋

Auto System Prompt Generation

Dynamically generate system prompts with all memory context and tool instructions included

Template Injection

Dynamic {placeholder} replacement with recursive resolution for context-aware prompts
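The recursive `{placeholder}` resolution described above can be sketched in a few lines. This is an illustrative, self-contained sketch of the concept, not MindCache's actual implementation: values may themselves contain placeholders, so resolution repeats until the string stops changing, with a depth cap to guard against cycles.

```typescript
// Illustrative sketch of recursive {placeholder} resolution (not the
// library's internals): re-run substitution until nothing changes.
type Memory = Record<string, string>;

function inject(template: string, memory: Memory, maxDepth = 10): string {
  let result = template;
  for (let depth = 0; depth < maxDepth; depth++) {
    const next = result.replace(/\{(\w+)\}/g, (match: string, key: string) =>
      key in memory ? memory[key] : match // leave unknown keys untouched
    );
    if (next === result) break; // fully resolved
    result = next;
  }
  return result;
}

const memory: Memory = {
  userName: 'Alice',
  greeting: 'Hello {userName}',        // refers to another key
  prompt: '{greeting}, ready to plan?' // resolved recursively
};

inject('{prompt}', memory); // "Hello Alice, ready to plan?"
```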

🔄

Reactive Updates

Subscribe to specific keys or all changes with event-driven listeners
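The subscription model above can be illustrated with a minimal event-driven store. This is a hedged sketch of the idea (the class and method names here are hypothetical, not MindCache's API): listeners can watch a single key or every change.

```typescript
// Minimal sketch of key-level and global subscriptions (illustrative only).
type Listener = (key: string, value: string) => void;

class TinyStore {
  private data = new Map<string, string>();
  private keyListeners = new Map<string, Listener[]>();
  private globalListeners: Listener[] = [];

  // Watch one specific key
  onKey(key: string, fn: Listener) {
    const list = this.keyListeners.get(key) ?? [];
    list.push(fn);
    this.keyListeners.set(key, list);
  }

  // Watch every change
  onAny(fn: Listener) {
    this.globalListeners.push(fn);
  }

  set(key: string, value: string) {
    this.data.set(key, value);
    for (const fn of this.keyListeners.get(key) ?? []) fn(key, value);
    for (const fn of this.globalListeners) fn(key, value);
  }
}

const store = new TinyStore();
store.onKey('favoriteColor', (_k, v) => console.log(`color is now ${v}`));
store.onAny((k, v) => console.log(`changed: ${k}=${v}`));
store.set('favoriteColor', 'green');
// logs: "color is now green" then "changed: favoriteColor=green"
```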

🏷️

Tags & Search

Organize memory with tags and search by category for efficient retrieval
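Tag-based retrieval can be sketched as entries that carry optional tags, with search filtering by tag membership. The helper names below are illustrative, not the library's internals (MindCache's own API exposes `set_value` and `getTagged`).

```typescript
// Sketch of tag-based organization and retrieval (illustrative only).
interface Entry { value: string; tags: string[]; }

const memory = new Map<string, Entry>();

function setValue(key: string, value: string, tags: string[] = []) {
  memory.set(key, { value, tags });
}

function getTagged(tag: string): Record<string, string> {
  const out: Record<string, string> = {};
  memory.forEach((entry, key) => {
    if (entry.tags.includes(tag)) out[key] = entry.value;
  });
  return out;
}

setValue('userName', 'Alice', ['user', 'context']);
setValue('tempNote', 'Meeting at 3pm'); // no tags
getTagged('context'); // { userName: 'Alice' }
```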

Examples

Real-world use cases for AI agents

// Auto Tool Generation for AI Agents
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';
import { mindcache } from 'mindcache';

// Store user preferences and context
mindcache.set_value('userName', 'Alice', { tags: ['user'] });
mindcache.set_value('favoriteColor', 'blue', { tags: ['preferences'] });
mindcache.set_value('lastTask', 'planning vacation', { tags: ['context'] });

// Automatically generate tool schemas for Vercel AI SDK
const tools = mindcache.get_aisdk_tools();
// Returns: {
//   write_userName: { description: "...", parameters: {...} },
//   write_favoriteColor: { description: "...", parameters: {...} },
//   write_lastTask: { description: "...", parameters: {...} }
// }

// Generate system prompt with memory context
const systemPrompt = mindcache.get_system_prompt();
// "userName: Alice. You can rewrite \"userName\" by using..."
// "favoriteColor: blue. You can rewrite \"favoriteColor\" by..."
// "$date: 2025-01-15"

// Use with AI SDK - tools are automatically available
const { text, toolCalls } = await generateText({
  model: openai('gpt-4'),
  tools: tools, // AI can now call write_userName(), etc.
  system: systemPrompt,
  prompt: 'Remember that I love green now, not blue.'
});

// The AI automatically calls write_favoriteColor('green'),
// updating the stored memory in place

Auto Tool Generation

Automatically generate LLM tool schemas so AI agents can read and write memory through function calls. No manual tool definitions are needed: a tool is created for every writable key in your memory.
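The idea behind per-key tool generation can be sketched as mapping each writable key to a `write_<key>` tool descriptor. The shapes below are illustrative assumptions; the real schemas returned by `get_aisdk_tools()` follow the AI SDK's tool format.

```typescript
// Hedged sketch: emit one write_<key> tool descriptor per writable key.
interface ToolDef {
  description: string;
  parameters: { type: 'object'; properties: Record<string, { type: string }> };
}

function buildWriteTools(keys: string[]): Record<string, ToolDef> {
  const tools: Record<string, ToolDef> = {};
  for (const key of keys) {
    tools[`write_${key}`] = {
      description: `Overwrite the memory value stored under "${key}"`,
      parameters: { type: 'object', properties: { value: { type: 'string' } } }
    };
  }
  return tools;
}

Object.keys(buildWriteTools(['userName', 'favoriteColor']));
// ['write_userName', 'write_favoriteColor']
```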

// Context Management with Tags & System Prompts
import { mindcache } from 'mindcache';

// Mark context items with tags
mindcache.set_value('userName', 'Alice', { tags: ['context'] });
mindcache.set_value('userRole', 'developer', { tags: ['context'] });
mindcache.set_value('projectName', 'MindCache', { tags: ['context'] });

// Store non-context data without tags
mindcache.set_value('tempNote', 'Meeting at 3pm');
mindcache.set_value('cacheVersion', '1.0.0');

// Extract only tagged context for focused prompts
const contextData = mindcache.getTagged('context');
// "userName: Alice, userRole: developer, projectName: MindCache"

// Generate system prompt with all visible keys
const fullSystemPrompt = mindcache.get_system_prompt();
// Includes all keys: userName, userRole, projectName, tempNote, cacheVersion

// Or create focused context from tagged items only
const focusedContext = `Context: ${contextData}
Instructions: You are helping ${mindcache.get_value('userName')} 
with their ${mindcache.get_value('projectName')} project.`;

// Use focused context for specific workflows
// Or use full system prompt for comprehensive memory access

// Filter keys by tag programmatically
const allKeys = Object.keys(mindcache.getAll());
const contextKeys = allKeys.filter(key => 
  mindcache.hasTag(key, 'context')
);
// ['userName', 'userRole', 'projectName']

Context Management & System Prompts

Use tags to mark important context items, extract only tagged content when needed, and automatically generate system prompts with all memory context. Perfect for building focused AI workflows while maintaining full memory access.
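Assembling a focused prompt from tagged entries only, as opposed to the full system prompt, can be sketched like this. The helper name is hypothetical; it simply formats whatever `getTagged('context')`-style data you pass in.

```typescript
// Illustrative helper (hypothetical name): format tagged context entries
// plus task instructions into a compact prompt string.
function focusedPrompt(
  context: Record<string, string>,
  instructions: string
): string {
  const lines = Object.entries(context).map(([k, v]) => `${k}: ${v}`);
  return `Context:\n${lines.join('\n')}\n\nInstructions: ${instructions}`;
}

focusedPrompt(
  { userName: 'Alice', projectName: 'MindCache' },
  'Help the user with their project.'
);
```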

Documentation

Complete API reference and guides
