Available Examples
Basic RAG
Build a simple RAG application with vector indexing and querying
Basic Agent
Create an AI agent with tool calling capabilities
Workflow Basics
Build event-driven workflows with state management
Multimodal
Work with images, text, and vision models
Next.js Integration
Integrate LlamaIndex.TS with Next.js applications
Quick Start
All examples can be run directly with tsx, without a build step.
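For example (the key placeholder and file path below are illustrative; substitute any example file from the repository):

```shell
# Provide the API key most examples expect (OpenAI shown here;
# other providers use their own environment variables)
export OPENAI_API_KEY="sk-..."

# Run an example straight from source -- tsx executes TypeScript
# without a separate compile step
npx tsx examples/rag/basic.ts   # path is illustrative
```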
Example Categories
RAG (Retrieval-Augmented Generation)
Learn how to build applications that combine document retrieval with LLM generation:
- Vector indexing and embeddings
- Query engines and chat engines
- Document processing and chunking
- Metadata extraction and filtering
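The pieces above come together in a few lines. A minimal sketch, assuming the `llamaindex` package is installed and `OPENAI_API_KEY` is set (the document text and question are illustrative):

```typescript
// Minimal RAG sketch: index one document, then query it. Assumes the
// `llamaindex` package is installed and OPENAI_API_KEY is set.
import { Document, VectorStoreIndex } from "llamaindex";

async function main() {
  // Wrap raw text in a Document; real apps would load files instead
  const document = new Document({
    text: "LlamaIndex.TS is a data framework for building LLM applications.",
  });

  // Chunk, embed, and store the document in an in-memory vector index
  const index = await VectorStoreIndex.fromDocuments([document]);

  // Retrieval grounds the LLM's answer in the indexed text
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({ query: "What is LlamaIndex.TS?" });
  console.log(response.toString());
}

main().catch(console.error);
```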
Agents
Build intelligent agents that can use tools and reason about tasks:
- Tool calling and function execution
- Multi-agent coordination
- Memory management
- Provider-specific implementations
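A rough sketch of tool calling, assuming `llamaindex` exports `FunctionTool` and `OpenAIAgent` (true for many 0.x releases, though the agent API has moved between versions) and that `OPENAI_API_KEY` is set:

```typescript
// Tool-calling agent sketch; the exact API surface varies by version.
import { FunctionTool, OpenAIAgent } from "llamaindex";

// Expose a plain function to the model as a callable tool, with a
// JSON schema describing its parameters
const sumTool = FunctionTool.from(
  ({ a, b }: { a: number; b: number }) => `${a + b}`,
  {
    name: "sum",
    description: "Add two numbers and return the result",
    parameters: {
      type: "object",
      properties: {
        a: { type: "number", description: "First number" },
        b: { type: "number", description: "Second number" },
      },
      required: ["a", "b"],
    },
  },
);

async function main() {
  const agent = new OpenAIAgent({ tools: [sumTool] });
  // The agent decides when to call the tool and folds the result
  // back into its final answer
  const result = await agent.chat({ message: "Use the sum tool to add 2 and 3." });
  console.log(String(result));
}

main().catch(console.error);
```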
Workflows
Create event-driven workflows for complex multi-step processes:
- State management
- Event handling
- Iterative refinement
- Custom workflow patterns
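The pattern behind these examples can be shown without the library. The sketch below is a library-free illustration of event-driven steps sharing state, with iterative refinement via a re-emitted event; it is not the actual LlamaIndex.TS workflow API:

```typescript
// Library-free illustration of the event-driven workflow pattern:
// handlers are keyed by event type, each handler may read/write shared
// state and emit the next event, and returning null stops the run.
type WfEvent = { type: string; payload?: unknown };
type WfContext = { state: Map<string, unknown> };
type WfStep = (ctx: WfContext, ev: WfEvent) => WfEvent | null;

function runWorkflow(steps: Record<string, WfStep>, start: WfEvent): WfContext {
  const ctx: WfContext = { state: new Map() };
  let ev: WfEvent | null = start;
  while (ev !== null) {
    const step = steps[ev.type];
    if (!step) throw new Error(`no step handles event type "${ev.type}"`);
    ev = step(ctx, ev);
  }
  return ctx;
}

// Two steps with iterative refinement: "refine" re-emits itself until done
const finalCtx = runWorkflow(
  {
    start: (ctx) => {
      ctx.state.set("draft", 1); // initial draft version
      return { type: "refine" };
    },
    refine: (ctx) => {
      const version = ctx.state.get("draft") as number;
      if (version >= 3) return null; // stop condition reached
      ctx.state.set("draft", version + 1);
      return { type: "refine" }; // loop for another refinement pass
    },
  },
  { type: "start" },
);

console.log(finalCtx.state.get("draft")); // → 3
```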
Multimodal
Work with multiple data modalities:
- Image analysis and understanding
- Vision-language models
- Multimodal RAG
- CLIP embeddings
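A vision-model sketch, assuming `llamaindex` is installed and `OPENAI_API_KEY` is set; the model name and image URL are illustrative, and the content-part shape may vary by version:

```typescript
// Send text plus an image URL in a single chat message to a
// vision-capable model
import { OpenAI } from "llamaindex";

async function main() {
  const llm = new OpenAI({ model: "gpt-4o" });

  const response = await llm.chat({
    messages: [
      {
        role: "user",
        // Mixed content: a text part and an image part in the same turn
        content: [
          { type: "text", text: "Describe this image in one sentence." },
          { type: "image_url", image_url: { url: "https://example.com/photo.jpg" } },
        ],
      },
    ],
  });

  console.log(response.message.content);
}

main().catch(console.error);
```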
Framework Integrations
Integrate with popular frameworks:
- Next.js (Node.js and Edge Runtime)
- Server actions and API routes
- Cloudflare Workers
- Vite and other bundlers
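As a sketch of the Next.js case, a minimal App Router route handler (e.g. `app/api/query/route.ts`); it assumes `llamaindex` is installed and `OPENAI_API_KEY` is configured, and the file path and document text are illustrative:

```typescript
// Next.js App Router route handler that answers a question over an index
import { NextResponse } from "next/server";
import { Document, VectorStoreIndex } from "llamaindex";

export async function POST(request: Request) {
  const { query } = await request.json();

  // Build a throwaway index per request for illustration; a real app
  // would load its documents once and cache the index
  const index = await VectorStoreIndex.fromDocuments([
    new Document({ text: "Replace this with your own content." }),
  ]);

  const response = await index.asQueryEngine().query({ query });
  return NextResponse.json({ answer: response.toString() });
}
```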
GitHub Repository
Find all examples in the GitHub repository:
LlamaIndex.TS Examples
Browse the complete collection of examples on GitHub
Additional Resources
- Workflows Repository - Advanced workflow examples
- Community Examples - Real-world integration examples
- Documentation - Comprehensive guides and API reference
Environment Setup
Most examples require API keys for LLM providers.
Running Examples
Examples are organized by category in the repository. All examples use TypeScript and can be executed directly with tsx without compilation.