Quickstart
Get up and running with LlamaIndex.TS in just a few minutes. This guide will walk you through creating your first Retrieval-Augmented Generation (RAG) application.

What You’ll Build
You’ll create a simple application that:

- Loads a text document
- Creates a searchable index from the document
- Answers questions using the indexed data
Install LlamaIndex.TS
First, install the main package and an LLM provider. We’ll use OpenAI for this example.
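With npm, for example (pnpm, yarn, or bun work similarly):

```shell
npm install llamaindex @llamaindex/openai
```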
The core llamaindex package provides the framework, while @llamaindex/openai adds OpenAI LLM and embedding support.

Set Up Your API Key
You’ll need an OpenAI API key. Get one at platform.openai.com/api-keys.

Set your API key as an environment variable, or create a .env file in your project root.
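For example, on macOS or Linux (the key value is a placeholder):

```shell
# Make the key available to your application for this shell session
export OPENAI_API_KEY=your-api-key-here
```

Equivalently, put the line `OPENAI_API_KEY=your-api-key-here` in a .env file — and keep that file out of version control.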
Interactive Chat Example
Let’s enhance the basic RAG pattern to create an interactive chat interface.
Building an Agent with Tools
For more advanced use cases, create an agent that can use tools.
Agents require the @llamaindex/workflow package for orchestration.

What’s Happening?
Here’s what happens under the hood:

Document Loading & Chunking
Your document is loaded and split into smaller chunks (Nodes) before anything is embedded.

Embedding Generation
Each chunk is converted into a vector embedding using OpenAI’s embedding model.
Query & Retrieval
When you ask a question, relevant chunks are retrieved based on semantic similarity.
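"Semantic similarity" here is typically cosine similarity between the query embedding and each chunk embedding. A toy sketch — the three-dimensional vectors are made up; real embeddings have hundreds or thousands of dimensions:

```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|), ranging from -1 to 1.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy "embeddings" for a query and two chunks.
const query = [0.9, 0.1, 0.0];
const chunkA = [0.8, 0.2, 0.1]; // similar direction to the query
const chunkB = [0.0, 0.2, 0.9]; // very different direction

// chunkA scores higher, so it would be retrieved first.
console.log(cosineSimilarity(query, chunkA) > cosineSimilarity(query, chunkB)); // true
```

The retriever ranks all chunks by this score and passes the top matches to the LLM as context.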
Next Steps
Now that you’ve built your first RAG application, explore more features:

Installation Guide
Learn about runtime-specific setup and provider packages
Core Concepts
Deep dive into Documents, Nodes, Indices, and more
Vector Stores
Use production vector databases like Pinecone, Qdrant, or Chroma
Agents
Build sophisticated agents with reasoning and tool usage