How to install Ollama AI on Mac & Docker

Installing an AI server on your home network is very simple, and you will be up and running in minutes.

First you need to download and install Ollama from here:
Download and install it directly on your M1/M2/M3 Mac. Ollama requires processing power to return AI responses, and surprisingly it takes full advantage of the Apple Silicon M1/M2/M3 chips and their GPUs.
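Once the installer finishes, you can confirm the command-line tool is available (a quick sanity check; the exact version string printed will vary with your install):

$ ollama --version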

Once Ollama is installed, you next need to install models, which are essentially the AI's brain/knowledge:

The most common is Llama 3. To install it, run the following command:

$ ollama run llama3

Microsoft's Phi-3 and Google's Gemma are also popular.
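Those models are pulled the same way. The commands below assume the model names used in the Ollama model library (phi3 and gemma):

$ ollama run phi3
$ ollama run gemma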

Next, you need to install Open WebUI to give Ollama a web interface. If you already have Docker installed, just copy and paste the command below:

$ docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main

Once the container is running, open http://localhost:3000 in your browser (the -p 3000:8080 flag maps the web interface to port 3000 on your machine).
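Besides the web interface, Ollama itself exposes a small REST API (on port 11434 by default), so you can also query your models from a script. Here is a minimal Python sketch using only the standard library; it assumes Ollama is running locally and that the llama3 model has already been pulled:

```python
import json
import urllib.request

# Assumptions: Ollama's REST API is listening on its default port 11434,
# and the "llama3" model has been pulled with `ollama run llama3`.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model, prompt):
    """Build the JSON payload for Ollama's /api/generate endpoint.

    stream=False asks for one complete JSON response instead of a stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt):
    """POST a prompt to the local Ollama server and return its text reply."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]

if __name__ == "__main__":
    print(ask("llama3", "Why is the sky blue?"))
```

This is handy for automating prompts once the server is up; the web UI and the API talk to the same local models.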


Tips and tricks
