Vacation Idea Generator - AI Travel Assistant

A Go-based microservice that generates personalized vacation ideas using AI (Ollama). This project demonstrates how to integrate LLMs into a web service using the Gin framework and LangChain Go.

Features

  • AI-Powered Vacation Planning: Generate personalized travel itineraries based on user preferences
  • Async Processing: Non-blocking vacation idea generation
  • RESTful API: Clean HTTP endpoints for creating and retrieving vacation ideas
  • Ollama Integration: Local LLM processing with configurable models
  • Real-time Status: Check generation progress and completion status

Stack

  • Backend: Go 1.21+
  • Web Framework: Gin
  • AI/ML: LangChain Go + Ollama
  • Configuration: Environment variables + .env file
  • UUID Generation: Google UUID

Prerequisites

Before running this project, ensure you have:

  1. Go 1.21+ installed
  2. Ollama installed and running
  3. At least one LLM model pulled in Ollama (e.g., llama3:latest)

Installing Ollama

# On macOS
brew install ollama

# On Linux
curl -fsSL https://ollama.ai/install.sh | sh

# On Windows
# Download from https://ollama.ai/download

Setting up Ollama Models

# Start Ollama service
ollama serve

# In a new terminal, pull a model
ollama pull llama3:latest

# Verify the model is available
ollama list

Installation

  1. Clone the repository
  git clone https://github.com/bianavic/go-langchain.git
  cd go-langchain
  2. Set up environment variables

Create a .env file in the project root:

# ollama base URL
LLM_BASE_URL=http://localhost:11434

# The LLM you want to use for the agents
LLM_MODEL=llama3:latest
  3. Install dependencies (the cloned repository already includes a go.mod)
  go mod tidy
  4. Run the application
  cd cmd
  go run main.go

API Usage

Important Notes About Data

🔔 This is a demo service - Data is stored in memory only:

In-memory storage is a deliberate simplification that keeps the focus on the core AI integration, avoiding database complexity.

  • Vacation ideas disappear when the server restarts
  • Perfect for testing and development
  • Not suitable for production use without adding a real database
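A map guarded by a mutex is enough for a demo store like this. The sketch below shows the idea; the Vacation struct and its field names are illustrative assumptions, not the repository's actual types:

```go
package main

import (
	"fmt"
	"sync"
)

// Vacation is a hypothetical record for a generated idea.
type Vacation struct {
	ID        string
	Completed bool
	Idea      string
}

// Store keeps vacations in memory; everything is lost on restart.
type Store struct {
	mu   sync.RWMutex
	data map[string]*Vacation
}

func NewStore() *Store {
	return &Store{data: make(map[string]*Vacation)}
}

// Put inserts or overwrites a vacation record.
func (s *Store) Put(v *Vacation) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[v.ID] = v
}

// Get looks up a vacation by id.
func (s *Store) Get(id string) (*Vacation, bool) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[id]
	return v, ok
}

func main() {
	s := NewStore()
	s.Put(&Vacation{ID: "abc"})
	v, ok := s.Get("abc")
	fmt.Println(ok, v.ID)
}
```

Swapping this for a real database later only requires replacing the Store methods; the handlers would not need to change.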
  1. Create a Vacation Idea
  • Endpoint: POST /vacation/create
  curl --location 'localhost:8080/vacation/create' \
  --header 'Content-Type: application/json' \
  --data '{
      "favorite_season": "winter",
      "hobbies": ["visiting museums", "ice skating"],
      "budget": 1000
  }'

# Response: {"id":"bdd32ff6-7e56-4c36-8a63-ef1a0ab7fcbf","completed":false}
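The create flow returns the id immediately and fills in the idea from a goroutine. The sketch below fakes the LLM call with a placeholder, so generateIdea, the hard-coded id, and the done channel (used only so the example can wait deterministically) are all assumptions, not the repository's actual code:

```go
package main

import (
	"fmt"
	"sync"
)

type Vacation struct {
	mu        sync.Mutex
	ID        string
	Completed bool
	Idea      string
}

// generateIdea stands in for the real Ollama call made via LangChain Go.
func generateIdea(prefs string) string {
	return "placeholder itinerary for: " + prefs
}

// createVacation returns right away and completes the record in the background.
func createVacation(prefs string, done chan<- struct{}) *Vacation {
	v := &Vacation{ID: "bdd32ff6", Completed: false}
	go func() {
		idea := generateIdea(prefs)
		v.mu.Lock()
		v.Idea = idea
		v.Completed = true
		v.mu.Unlock()
		close(done)
	}()
	return v
}

func main() {
	done := make(chan struct{})
	v := createVacation("winter, museums, budget 1000", done)
	v.mu.Lock()
	fmt.Println(v.Completed) // usually false here: generation is still running
	v.mu.Unlock()
	<-done
	v.mu.Lock()
	fmt.Println(v.Completed)
	v.mu.Unlock()
}
```

This is why the create response carries "completed": false; the client gets an id it can poll while the model works.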
  2. Check a Vacation Idea
  • Endpoint: GET /vacation/:id

This endpoint waits (up to 5 minutes) for the vacation idea to be fully generated before returning the response.

  curl --location 'localhost:8080/vacation/bdd32ff6-7e56-4c36-8a63-ef1a0ab7fcbf'

# Response:
# {
#     "id": "bdd32ff6-7e56-4c36-8a63-ef1a0ab7fcbf",
#     "completed": true,
#     "idea": "ollama response goes here...."
# }
