LLM: ollama install ubuntu 24.04 python open-webui


Source: https://www.jeremymorgan.com/blog/generative-ai/local-llm-ubuntu/

Make sure:

  • The OS can be a Linux server, such as Ubuntu Server.
  • There is enough RAM to hold the model. The llama3 model needs at least 8GB (see the quick checks below).
  • For a smoother/faster experience, use a GPU, such as a reasonably recent NVIDIA Tesla.
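
A minimal sketch of those checks (nvidia-smi is only available once the NVIDIA driver is installed):

free -h          # total RAM; llama3 needs at least 8GB
df -h /          # free disk space for the model files
nvidia-smi       # GPU model and VRAM, if a GPU is present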


Install supporting applications

sudo apt update
sudo apt install curl net-tools


Download and install Ollama

curl -fsSL https://ollama.com/install.sh | sh
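
After the script finishes, verify that the ollama binary is on the PATH:

ollama --version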

As a regular user, run Ollama and download the models:

ollama pull llama3.2:1b
ollama pull deepseek-r1:7b
ollama pull qwen2.5-coder:7b 
ollama pull bge-m3:latest
ollama pull gemma3:4b
ollama pull adijayainc/bhsa-deepseek-r1-1.5b
ollama pull adijayainc/bhsa-llama3.2

Optional:


ollama pull llama3


ollama pull rizkiagungid/deeprasx
ollama pull fyandono/chatbot-id
ollama pull rexyb10/codeai
ollama pull fahlevi20/DeepSeek-R1-TechSchole-Indonesia
ollama pull all-minilm

If you have a GPU with a large amount of VRAM:

ollama pull llama3.3
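
To confirm which models have been downloaded locally:

ollama list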

Check Ollama

systemctl status ollama
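
Ollama listens on port 11434 by default. With net-tools installed earlier, confirm the port and query the API version:

sudo netstat -tlnp | grep 11434
curl http://localhost:11434/api/version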


Install Open WebUI (with uv)

sudo apt install ffmpeg
sudo su
curl -LsSf https://astral.sh/uv/install.sh | sh
source $HOME/.local/bin/env
DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve
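
Open WebUI listens on port 8080 by default; browse to http://<server-ip>:8080 and create the first (admin) account. A hedged variant, assuming the serve command accepts the --host and --port options, to bind an address and port explicitly:

DATA_DIR=~/.open-webui uvx --python 3.11 open-webui@latest serve --host 0.0.0.0 --port 8080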

Managing Your Ollama Instance

To manage your Ollama instance in Open WebUI, follow these steps:

  • Go to Admin Settings in Open WebUI.
  • Navigate to Connections > Ollama > Manage (click the wrench icon).

From here, you can download models, configure settings, and manage your connection to Ollama.
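
Models can also be pulled over the Ollama API rather than through the web UI, for example:

curl http://localhost:11434/api/pull -d '{
  "model" : "llama3.2:1b"
  }'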

Example

curl http://localhost:11434/api/generate -d '{
  "model" : "llama3",
  "prompt" : "tell me a joke",
  "stream" : false
  }'
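
To print only the model's reply, pipe the response through jq (assuming jq is installed, e.g. sudo apt install jq):

curl -s http://localhost:11434/api/generate -d '{
  "model" : "llama3",
  "prompt" : "tell me a joke",
  "stream" : false
  }' | jq -r '.response'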
  

