Setting Up Local AI Chat in VSCode on Mac

In this tutorial, we’ll guide you through installing and setting up the ollama local AI chat service on your Mac. We’ll also show how to integrate it with the Continue extension in VSCode for a seamless AI-powered coding experience.

Step 1: Install ollama

To get started, open your terminal and run the following command:

brew install ollama

This will install ollama on your system.
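
You can confirm the install worked by checking the CLI version (the exact output will vary):

# Confirm the ollama CLI is on your PATH; version output will vary
ollama --version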

Step 2: Pull a Model (e.g., llama3)

Once installed, you’ll need to pull a model for use with ollama. In this example, we’ll pull the llama3 model:

ollama pull llama3

This will download and set up the selected model.
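
The download can run to several gigabytes depending on the model, so it may take a while. To confirm which models are available locally:

# List models that have been downloaded for ollama
ollama list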

Step 3: Run ollama Service

Next, start the ollama service:

ollama serve

This will launch the ollama server on your local machine.
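
By default, the server listens on http://localhost:11434. You can check that it’s reachable from another terminal, and since ollama was installed with Homebrew, you can optionally run it as a background service instead of keeping a terminal window open:

# Confirm the API is responding (lists locally available models)
curl http://localhost:11434/api/tags

# Optional: run ollama in the background via Homebrew services
brew services start ollama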

Step 4: Install Continue Extension in VSCode

Now, open VSCode and install the Continue extension from the Marketplace by searching for “Continue” or clicking this link: https://marketplace.visualstudio.com/items?itemName=Continue.continue

Once installed, you can activate the extension by clicking on the Continue icon in the top-right corner of your VSCode window.
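
If you have the code command-line tool set up (via the Command Palette’s “Shell Command: Install 'code' command in PATH”), you can also install the extension from the terminal:

# Installs the Continue extension using the VSCode CLI
code --install-extension Continue.continue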

Step 5: Configure Continue Extension

To integrate ollama with the Continue extension:

  1. Open the Continue tab.
  2. Click “Add New Model”.
  3. Select “Ollama for Local AI” as the model type.
  4. Choose “Autodetect” to add all installed ollama models.
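
If you’d rather configure Continue by hand, it also reads a config file. The path and fields below are assumptions based on older Continue releases, which used a JSON file at ~/.continue/config.json (newer releases use a YAML config), so treat this as a rough sketch and check the Continue documentation for your version:

# Sketch only: writes a minimal Continue config pointing at the local llama3 model.
# WARNING: this overwrites any existing config; back it up first.
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    { "title": "Llama 3 (local)", "provider": "ollama", "model": "llama3" }
  ]
}
EOF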

Step 6: Engage with Your AI

With ollama and Continue set up, you can now ask questions and get answers directly from the Continue tab in VSCode. You can also select code in the editor and use the keyboard shortcuts Continue displays to send that code to your AI model.
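
If you’d like to sanity-check the model outside of VSCode, you can also chat with it directly from the terminal:

# One-off prompt; omit the quoted text to start an interactive chat session
ollama run llama3 "Explain the difference between a list and a tuple in Python."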

By following these steps, you’ll be able to leverage local AI chat capabilities within VSCode on your Mac.
