Chapter 2 of 2

Your First Langflow Pipeline

Log in to Langflow, connect it to Ollama, and build a simple chat pipeline using the visual editor.

What you'll build

A simple question-and-answer pipeline that:

  1. Accepts text input from a user
  2. Passes it to Ollama (running locally)
  3. Returns the LLM's response

This is the foundation for every pipeline you'll build later.


Step 1 — Open Langflow

Navigate to http://localhost:7860 in your browser.

Log in with the credentials you set in docker-compose.yml:

  • Username: admin
  • Password: changeme
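For reference, the relevant part of docker-compose.yml might look like the sketch below. The environment variable names (LANGFLOW_AUTO_LOGIN, LANGFLOW_SUPERUSER, LANGFLOW_SUPERUSER_PASSWORD) are assumptions based on common Langflow setups — match them against your own compose file:

```yaml
services:
  langflow:
    image: langflowai/langflow:latest
    ports:
      - "7860:7860"          # Langflow UI on the host
    environment:
      - LANGFLOW_AUTO_LOGIN=false
      - LANGFLOW_SUPERUSER=admin
      - LANGFLOW_SUPERUSER_PASSWORD=changeme
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"        # Ollama API on the host
```

If you change the password here, recreate the containers so the new value takes effect.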

Step 2 — Create a new flow

  1. Click New Flow on the dashboard.
  2. Choose Blank Flow to start from scratch.
  3. Name it Ollama Chat.

Step 3 — Add components

Langflow uses a drag-and-drop canvas. You'll connect three components:

Chat Input

  • From the left panel, drag Chat Input onto the canvas.
  • This simulates a user typing a message.

Ollama LLM

  • Search for Ollama in the components panel.
  • Drag the Ollama component onto the canvas.
  • Configure it:
    • Base URL: http://ollama:11434
    • Model Name: llama3.2:3b

Inside Docker, containers reach each other by service name, not by localhost. Use http://ollama:11434, not http://localhost:11434 — from the Langflow container's point of view, localhost is the Langflow container itself.
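Before wiring up the component, you can verify from the host that Ollama is running and the model is pulled. A minimal stdlib-only sketch using Ollama's /api/tags endpoint (note it targets localhost because this script runs on the host, not inside a container):

```python
import json
import urllib.request

def model_listed(tags: dict, name: str) -> bool:
    """True if a model whose name starts with `name` appears in the
    payload returned by Ollama's /api/tags endpoint."""
    return any(m.get("name", "").startswith(name) for m in tags.get("models", []))

def check_ollama(base_url: str = "http://localhost:11434") -> None:
    # /api/tags lists every locally pulled model
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        tags = json.load(resp)
    if model_listed(tags, "llama3.2:3b"):
        print("llama3.2:3b is available")
    else:
        print("model missing - run: ollama pull llama3.2:3b")

# check_ollama()  # uncomment to run against a live Ollama
```

If the request fails outright, Ollama isn't reachable at all; if the model is missing, pull it with `docker compose exec ollama ollama pull llama3.2:3b`.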

Chat Output

  • Drag Chat Output onto the canvas.

Step 4 — Connect the components

Draw connections between the component ports:

Chat Input (Message) → Ollama (Input)
Ollama (Output)       → Chat Output (Message)

Your canvas should look like three boxes connected by arrows.
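Conceptually, the three connected boxes are one HTTP round trip: text in, completion out. The same chat turn can be sketched directly against Ollama's /api/generate endpoint — useful for understanding what the flow does under the hood:

```python
import json
import urllib.request

def build_generate_request(prompt: str, model: str = "llama3.2:3b") -> dict:
    """Payload for Ollama's /api/generate: one non-streaming completion."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, base_url: str = "http://localhost:11434") -> str:
    body = json.dumps(build_generate_request(prompt)).encode()
    req = urllib.request.Request(
        f"{base_url}/api/generate",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # the completed text comes back under the "response" key
        return json.load(resp)["response"]

# ask_ollama("What is a vector database?")  # needs Ollama running locally
```

What Langflow adds on top of this single call is the visual wiring, conversation handling, and the ability to swap components without touching code.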


Step 5 — Run the pipeline

  1. Click the Playground button (bottom-right corner).
  2. Type a question: What is a vector database?
  3. Press Enter.

You should see a response from llama3.2:3b appear within a few seconds.


Step 6 — Export as JSON

Every flow can be saved as a JSON file for version control or sharing:

  1. Click the menu (⋯) next to the flow name.
  2. Select Export.
  3. Save Ollama Chat.json to your local-pipeline folder.

Commit your flow JSON files to git — they're human-readable and diff cleanly.
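Before committing, a quick sanity check that the export is well-formed JSON can catch a truncated download. A minimal sketch that only verifies the file parses and is a non-empty object (it makes no assumptions about Langflow's internal export schema):

```python
import json
from pathlib import Path

def is_valid_flow_export(path: Path) -> bool:
    """True if the file parses as JSON and is a non-empty object."""
    try:
        doc = json.loads(path.read_text())
    except (OSError, json.JSONDecodeError):
        return False
    return isinstance(doc, dict) and bool(doc)

# is_valid_flow_export(Path("Ollama Chat.json"))
```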


Summary

You've built and run your first local pipeline. You now know how to:

  • Navigate the Langflow visual editor
  • Connect to a local Ollama model from inside Docker
  • Run a live conversation through the Playground

Next, we'll add a vector database layer using Qdrant so the pipeline can answer questions based on your own documents.