Local Chat AI on Mac

In this post, we’ll walk through installing and using Ollama, a tool for running chat AI models locally on your Mac.

Installing Ollama

brew install ollama

Once installed, you can pull down a pre-trained model (in this case, we’ll be using the “llama3” model):

ollama pull llama3

Serving Ollama

ollama serve

This starts the Ollama server and makes it available for you to interact with.
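By default, the server listens on http://localhost:11434 and exposes a small HTTP API. As a sketch (assuming the server is running and the llama3 model has been pulled), you could build and send a request like this:

```shell
# Build the JSON payload for Ollama's /api/generate endpoint.
# "stream": false asks for one JSON response instead of a token stream.
PAYLOAD='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# Uncomment to actually send the request once the server is up:
# curl http://localhost:11434/api/generate -d "$PAYLOAD"
echo "$PAYLOAD"
```

This is handy when you want to call the model from scripts or other programs rather than the interactive prompt.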

Creating a New Terminal Session and Running Llama

Open another terminal session (or a new tab) and run Ollama with the model you pulled:

ollama run llama3

Chatting with Llama AI

Now you can start chatting with your very own local chat AI! Simply type a message in this terminal session, and Ollama will respond to your questions and carry on the conversation.
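Beyond the interactive session, the CLI also accepts a one-shot prompt as an argument, which is useful for scripting. A minimal sketch (assuming Ollama is installed and llama3 is pulled; the prompt text is just an example):

```shell
# Non-interactive use: pass the prompt directly and capture the reply.
PROMPT="Summarize what Ollama does in one sentence."

# Uncomment once the server is running:
# REPLY=$(ollama run llama3 "$PROMPT")
# echo "$REPLY"
echo "$PROMPT"
```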
