Introduction
Dify is an open-source platform that lets you build AI applications visually, with no coding required. From RAG chatbots to complex agent workflows, Dify provides a drag-and-drop interface that connects to Claude, GPT, and local models. This guide walks you through building your first AI application.
Prerequisites
- Docker installed (for self-hosting) or a Dify Cloud account
- Basic understanding of AI concepts (LLMs, RAG, embeddings)
- Documents or data you want to make searchable
Step 1: Installation
Docker (Self-Hosted)
git clone https://github.com/langgenius/dify.git
cd dify/docker
cp .env.example .env
docker compose up -d
Access Dify at http://localhost/install and create your admin account.
Cloud Version
Sign up at dify.ai for a hosted version with a free tier.
Step 2: Configure AI Models
- Navigate to Settings > Model Providers
- Add your preferred model:
- Anthropic: Claude Opus 4.6, Claude Sonnet
- OpenAI: GPT-5 Turbo, GPT-4o
- Ollama: Local models (Llama 4, Qwen 3)
- Set your default model for each task type
Step 3: Build a RAG Chatbot
Create Knowledge Base
- Go to Knowledge > Create Knowledge Base
- Upload your documents (PDF, DOCX, TXT, Markdown)
- Configure chunking:
- Chunk size: 500-1000 characters
- Overlap: 50-100 characters
- Select embedding model
- Click Save and Process
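To build intuition for the chunk size and overlap settings above, here is a simplified character-based splitter. It is only a sketch: Dify's actual splitter can also honor delimiters and token counts, and the function name here is illustrative, not a Dify API.

```python
def chunk_text(text, chunk_size=800, overlap=80):
    """Split text into overlapping character chunks.

    Mirrors the knowledge-base settings: each chunk is chunk_size
    characters, and the last `overlap` characters of one chunk are
    repeated at the start of the next so context is not cut mid-idea.
    """
    chunks = []
    step = chunk_size - overlap  # how far the window advances each time
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # final chunk reached the end of the text
    return chunks

# A 2000-character document yields 3 overlapping chunks at these settings.
doc = "".join(str(i % 10) for i in range(2000))
chunks = chunk_text(doc)
print(len(chunks))
```

Smaller chunks give more precise retrieval but less context per hit; the overlap keeps sentences that straddle a boundary retrievable from either side.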
Create the Application
- Go to Studio > Create Application
- Select Chatbot type
- Configure:
- System prompt: Define the bot's personality and scope
- Knowledge base: Connect your documents
- Model: Select your preferred LLM
- Test in the preview panel
- Publish when ready
Step 4: Build a Workflow
Dify's workflow builder lets you create complex AI pipelines:
- Go to Studio > Create Application > Workflow
- Add nodes:
- Start: Define input variables
- LLM: Process with AI model
- Knowledge Retrieval: Search documents
- Code: Run custom Python/JavaScript
- Conditional: Branch logic
- End: Define output
- Connect nodes by dragging edges
- Test the workflow
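The Code node mentioned above wraps a Python function: Dify calls a `main(...)` function with the input variables you map in the node's settings, and the keys of the returned dict become output variables for downstream nodes (check the node's template in your version for the exact shape). The function below is a hypothetical pre-processing step, with illustrative names:

```python
def main(query: str, max_len: int) -> dict:
    """Body of a Code node: normalize a user query before the LLM node.

    Each key in the returned dict becomes an output variable that
    downstream nodes (LLM, Conditional, End) can reference.
    """
    cleaned = " ".join(query.split())  # collapse runs of whitespace
    return {
        "cleaned_query": cleaned[:max_len],      # truncate to a safe length
        "was_truncated": len(cleaned) > max_len, # useful for a Conditional node
    }

print(main("  what   is   the refund policy?  ", 20))
```

A pattern like this pairs naturally with the Conditional node: branch on `was_truncated` to warn the user when their input was cut short.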
Step 5: Deploy
API Access
curl -X POST 'https://your-dify-instance/v1/chat-messages' \
  -H 'Authorization: Bearer your-api-key' \
  -H 'Content-Type: application/json' \
  -d '{
    "query": "What is your refund policy?",
    "user": "user-123"
  }'
Embed in Website
Dify provides an embeddable chat widget:
<script src="https://your-dify-instance/embed.js"
data-app-id="your-app-id">
</script>
Troubleshooting
- Slow responses: Check model selection, use faster models for simple tasks
- Poor RAG quality: Adjust chunk size, try different embedding models
- Docker issues: Ensure sufficient RAM (8GB+ recommended)
- Model errors: Verify API keys and model access
Conclusion
Dify democratizes AI application development by providing a visual, no-code platform that supports enterprise-grade features. Whether you are building a simple chatbot or a complex multi-step agent, Dify makes it accessible.
Key Takeaways
- Start with the chatbot template for quick wins
- Invest time in knowledge base configuration for better RAG quality
- Use workflows for complex multi-step applications
- Self-host for data privacy, use cloud for convenience