Local AI Chat with a Web-Based Client on Mac

In this tutorial, we will set up a local AI chat client using ollama and Open WebUI. This will allow us to interact with our AI model locally without relying on any cloud services.

Install ollama

First, install ollama using Homebrew:

brew install ollama
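
If you want to verify the install before moving on, you can print the ollama version (the exact output depends on the release you installed):

ollama --version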

Install a model for ollama

Next, we need to download the model we want to use. For this example, we’ll use the “llama3” model:

ollama pull llama3
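
To confirm the download, you can list the models available locally; “llama3” should appear in the output:

ollama list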

Start ollama service

Start the ollama service:

ollama serve

This will start the ollama server, which we can then connect to using Open WebUI.
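
If you want to double-check that the server is reachable, you can query its API, which listens on port 11434 by default and returns a JSON list of your local models:

curl http://localhost:11434/api/tags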

Install Open WebUI

To install Open WebUI, create a docker-compose.yml file with the following content. It runs Open WebUI in a container and points it at the ollama server running on the host:

version: "3.8"

services:
  app:
    image: ghcr.io/open-webui/open-webui:main
    restart: always
    ports:
      - "$PORT:8080"
    volumes:
      - data:/app/backend/data
    networks:
      - default
    environment:
      HOST_GATEWAY: host-gateway

networks:
  default:

volumes:
  data:

Then, create a .env file with the following content:

PORT=4000
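
docker-compose reads the .env file from the same directory and substitutes $PORT into the port mapping. To preview the resolved configuration before starting anything, you can render it:

docker-compose config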

Run docker compose

Finally, run docker-compose up -d to start the Open WebUI service in detached mode.

docker-compose up -d

This will start the Open WebUI server and make it available at http://localhost:4000.
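
If the page doesn’t load, you can check that the container is up and follow its logs (the service name “app” comes from the docker-compose.yml above):

docker-compose ps
docker-compose logs -f app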

Open web browser and interact with AI

Open a web browser and navigate to http://localhost:4000. You should see the Open WebUI interface. Select the “llama3” model, and you’re ready to start interacting with your AI locally.

You can now create articles, have conversations, and more using your local AI chat client.
