Local Chat AI on Mac
In this post, we'll walk through installing and using Ollama, a tool for running chat AI models locally on your Mac.
Installing Ollama
To install Ollama on your Mac, you can download the app from ollama.com or install it with Homebrew:

    brew install ollama
Once installed, you can pull down a pre-trained model (in this case, we’ll be using the “llama3” model):
    ollama pull llama3
Serving Ollama
To start the Ollama server, run:

    ollama serve
This will start the Ollama server and keep it running in the foreground, ready for you to interact with.
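Besides the command-line interface, the running server also exposes an HTTP API on localhost port 11434 (Ollama's default). As a minimal sketch against the documented `/api/generate` endpoint, you could query it from Python with nothing but the standard library:

```python
import json
import urllib.request

# Default Ollama endpoint for one-shot text generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for the Ollama HTTP API."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(model: str, prompt: str) -> str:
    """Send the prompt to the local server and return the response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (requires `ollama serve` to be running):
#   print(ask("llama3", "Why is the sky blue?"))
```

Setting `"stream": False` asks the server for a single JSON object instead of a stream of partial responses, which keeps the client code simple.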
Creating a New Terminal Session and Running Llama
Open up another terminal session (or a new tab) and run ollama with the pulled model:
    ollama run llama3
Chatting with Llama AI
Now you can start chatting with your very own local chat AI! Simply type a prompt in this terminal session, and Ollama will respond to your questions and engage in conversation. When you're done, type /bye to end the session.
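The interactive session keeps the conversation history for you. If you'd rather script a multi-turn chat, here is a sketch against the documented `/api/chat` endpoint (assuming the default port), where the history is just a list of role/content messages you grow yourself:

```python
import json
import urllib.request

# Default Ollama endpoint for multi-turn chat.
OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def add_turn(messages: list, role: str, content: str) -> list:
    """Append one turn to the history and return it (history is a plain list)."""
    messages.append({"role": role, "content": content})
    return messages

def chat_once(messages: list, model: str = "llama3") -> dict:
    """Send the full message history to /api/chat and return the reply message."""
    payload = json.dumps({"model": model, "messages": messages, "stream": False})
    req = urllib.request.Request(
        OLLAMA_CHAT_URL,
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]

# Example (requires `ollama serve` to be running):
#   history = add_turn([], "user", "Hi! What's your name?")
#   reply = chat_once(history)
#   add_turn(history, reply["role"], reply["content"])  # keep context for the next turn
```

Because the model itself is stateless between requests, appending each reply back onto the history list is what gives the conversation its memory.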