Fully-featured, beautiful web interface for Ollama LLMs - built with NextJS. Deploy with a single click.
Get up and running with Large Language Models quickly, locally and even offline. This project aims to be the easiest way for you to get started with LLMs. No tedious and annoying setup required!
To use the web interface, the following prerequisites must be met:
You'll need to set the OLLAMA_ORIGINS environment variable on the machine running Ollama, so the browser-hosted interface is allowed to call the Ollama API:
OLLAMA_ORIGINS="https://your-app.vercel.app"
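How the variable reaches Ollama depends on how you start it. As a minimal sketch for Linux/macOS, you can export it in the shell session that launches `ollama serve` (the domain below is a placeholder for your own deployment URL):

```shell
# Minimal sketch (Linux/macOS): make OLLAMA_ORIGINS visible to the shell
# session that will launch Ollama. Replace the URL with your deployment's.
export OLLAMA_ORIGINS="https://your-app.vercel.app"

# Ollama reads the variable at startup, so (re)start it from this shell:
#   ollama serve
echo "$OLLAMA_ORIGINS"
```

If Ollama runs as a background service instead (e.g. via systemd, or the macOS/Windows desktop app), set the variable in that service's environment and restart the service so the new value is picked up.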
To install and run a local environment of the web interface, follow the instructions below.
1. Clone the repository to a directory on your machine from the command line:
git clone https://github.com/jakobhoeg/nextjs-ollama-llm-ui
2. Open the folder:
cd nextjs-ollama-llm-ui
3. Rename the .example.env file to .env:
mv .example.env .env
4. If your instance of Ollama is NOT running on the default IP address and port, change the variable in the .env file accordingly:
NEXT_PUBLIC_OLLAMA_URL="http://localhost:11434"
5. Install dependencies:
npm install
6. Start the development server:
npm run dev
7. Go to localhost:3000 and start chatting with your favourite model!
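The rename-and-edit in steps 3 and 4 can also be scripted. Below is a minimal sketch that runs in a throwaway directory: the .example.env content mirrors the default shown above, and `192.168.1.50` is a made-up LAN address standing in for wherever your Ollama instance actually runs.

```shell
# Sketch of steps 3-4 in a scratch directory. The .example.env content here
# mirrors the repo's default; 192.168.1.50 is a hypothetical LAN address.
demo_dir=$(mktemp -d)
cd "$demo_dir"
printf 'NEXT_PUBLIC_OLLAMA_URL="http://localhost:11434"\n' > .example.env

# Step 3: rename .example.env to .env
mv .example.env .env

# Step 4: point the UI at a non-default Ollama instance
# (write to a temp file and move it back, so the command is portable
# across GNU and BSD sed, which disagree on the -i flag)
sed 's|http://localhost:11434|http://192.168.1.50:11434|' .env > .env.tmp
mv .env.tmp .env

cat .env
```

Note that the variable carries the `NEXT_PUBLIC_` prefix, which Next.js requires for values that must be readable from browser-side code.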
A to-do list of upcoming features:
NextJS - React Framework for the Web
TailwindCSS - Utility-first CSS framework
shadcn-ui - UI components built using Radix UI and Tailwind CSS
shadcn-chat - Chat components for NextJS/React projects
Framer Motion - Motion/animation library for React
Lucide Icons - Icon library
Medium Article - How to launch your own ChatGPT clone for free on Google Colab. By Bartek Lewicz.