Your First Langflow Pipeline
Log in to Langflow, connect it to Ollama, and build a simple chat pipeline using the visual editor.
What you'll build
A simple question-and-answer pipeline that:
- Accepts text input from a user
- Passes it to Ollama (running locally)
- Returns the LLM's response
This is the foundation for every pipeline you'll build later.
Step 1 — Open Langflow
Navigate to http://localhost:7860 in your browser.
Log in with the credentials you set in docker-compose.yml:
- Username: admin
- Password: changeme
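For reference, the credentials come from environment variables on the Langflow service. A minimal compose sketch is shown below; the image tags and service layout are assumptions, but the `LANGFLOW_SUPERUSER` variables match Langflow's documented settings:

```yaml
services:
  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"
    environment:
      - LANGFLOW_AUTO_LOGIN=false        # require a real login
      - LANGFLOW_SUPERUSER=admin
      - LANGFLOW_SUPERUSER_PASSWORD=changeme
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
```

Change the password before exposing this to anyone else.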
Step 2 — Create a new flow
- Click New Flow on the dashboard.
- Choose Blank Flow to start from scratch.
- Name it Ollama Chat.
Step 3 — Add components
Langflow uses a drag-and-drop canvas. You'll connect three components:
Chat Input
- From the left panel, drag Chat Input onto the canvas.
- This simulates a user typing a message.
Ollama LLM
- Search for Ollama in the components panel.
- Drag the Ollama component onto the canvas.
- Configure it:
  - Base URL: http://ollama:11434
  - Model Name: llama3.2:3b
Note: inside Docker, containers talk to each other by service name, not localhost. Use http://ollama:11434, not http://localhost:11434.
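The same rule can be captured in a small helper. This is an illustrative sketch, not part of Langflow; the function name and the in_docker flag are made up for the example:

```python
def ollama_base_url(in_docker: bool, service_name: str = "ollama", port: int = 11434) -> str:
    """Return the base URL a client should use to reach Ollama.

    Inside the compose network, Docker's internal DNS resolves the
    service name; from the host machine, you use the published port
    on localhost instead.
    """
    host = service_name if in_docker else "localhost"
    return f"http://{host}:{port}"

# Langflow (itself a container) reaches the ollama service by name:
print(ollama_base_url(in_docker=True))   # http://ollama:11434
# A script running on your laptop uses the published port:
print(ollama_base_url(in_docker=False))  # http://localhost:11434
```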
Chat Output
- Drag Chat Output onto the canvas.
Step 4 — Connect the components
Draw connections between the component ports:
Chat Input (Message) → Ollama (Input)
Ollama (Output) → Chat Output (Message)
Your canvas should look like three boxes connected by arrows.
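Conceptually, the three boxes are just function composition: the output of each component feeds the input of the next. A minimal sketch with a stubbed model (all names here are illustrative, not Langflow APIs):

```python
from typing import Callable

def chat_input(message: str) -> str:
    # Chat Input: passes the user's message downstream unchanged.
    return message

def make_llm(generate: Callable[[str], str]) -> Callable[[str], str]:
    # Ollama component: wraps whatever backend produces the reply.
    return generate

def chat_output(response: str) -> str:
    # Chat Output: surfaces the model's reply to the user.
    return response

# Stub standing in for llama3.2:3b so the sketch runs offline.
llm = make_llm(lambda prompt: f"(model reply to: {prompt})")

reply = chat_output(llm(chat_input("What is a vector database?")))
print(reply)
```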
Step 5 — Run the pipeline
- Click the Playground button (bottom-right corner).
- Type a question: What is a vector database?
- Press Enter.
You should see a response from llama3.2:3b appear within a few seconds.
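Under the hood, the Ollama component sends HTTP requests to Ollama's REST API. A hedged sketch of an equivalent direct call is below; the endpoint and fields follow Ollama's documented /api/generate interface, but double-check against your Ollama version. Note the localhost URL: this script runs on the host, outside the Docker network.

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    # Minimal non-streaming request body for Ollama's /api/generate endpoint.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(base_url: str, model: str, prompt: str) -> str:
    payload = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses carry the full reply in "response".
        return json.loads(resp.read())["response"]

# Uncomment to try it against a running Ollama on the host:
# print(ask_ollama("http://localhost:11434", "llama3.2:3b", "What is a vector database?"))
```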
Step 6 — Export as JSON
Every flow can be saved as a JSON file for version control or sharing:
- Click the menu (⋯) next to the flow name.
- Select Export.
- Save Ollama Chat.json to your local-pipeline folder.
Commit your flow JSON files to git — they're human-readable and diff cleanly.
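Because the export is plain JSON, you can inspect it with a few lines of code before committing. The nodes-under-data layout below is an assumption about the export schema; adapt the paths to what your file actually contains. The sketch uses an inline sample instead of reading Ollama Chat.json so it runs standalone:

```python
import json

def list_node_types(flow: dict) -> list:
    # Assumed schema: components live under data.nodes, and each
    # node records its component type at data.type.
    nodes = flow.get("data", {}).get("nodes", [])
    return [node.get("data", {}).get("type", "?") for node in nodes]

# Tiny sample mimicking an exported flow:
sample = {
    "name": "Ollama Chat",
    "data": {
        "nodes": [
            {"data": {"type": "ChatInput"}},
            {"data": {"type": "OllamaModel"}},
            {"data": {"type": "ChatOutput"}},
        ],
        "edges": [],
    },
}

print(list_node_types(sample))  # ['ChatInput', 'OllamaModel', 'ChatOutput']
```

A quick check like this is handy in code review: a diff of the JSON shows exactly which components and settings changed.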
Summary
You've built and run your first local pipeline. You now know how to:
- Navigate the Langflow visual editor
- Connect to a local Ollama model from inside Docker
- Run a live conversation through the Playground
Next, we'll add a vector database layer using Qdrant so the pipeline can answer questions based on your own documents.