<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://onnocenter.or.id/wiki/index.php?action=history&amp;feed=atom&amp;title=Compfyui%3A_GPU_untuk_text_to_video</id>
	<title>Compfyui: GPU untuk text to video - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://onnocenter.or.id/wiki/index.php?action=history&amp;feed=atom&amp;title=Compfyui%3A_GPU_untuk_text_to_video"/>
	<link rel="alternate" type="text/html" href="https://onnocenter.or.id/wiki/index.php?title=Compfyui:_GPU_untuk_text_to_video&amp;action=history"/>
	<updated>2026-04-13T12:15:57Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.35.4</generator>
	<entry>
		<id>https://onnocenter.or.id/wiki/index.php?title=Compfyui:_GPU_untuk_text_to_video&amp;diff=72214&amp;oldid=prev</id>
		<title>Onnowpurbo: Created page with &quot;Great, Dzaq, you want to set up **ComfyUI on Ubuntu 24.04 with a GPU for Text-to-Video generation**. This is really cool, because you can generate video from text using models like **Mo...&quot;</title>
		<link rel="alternate" type="text/html" href="https://onnocenter.or.id/wiki/index.php?title=Compfyui:_GPU_untuk_text_to_video&amp;diff=72214&amp;oldid=prev"/>
		<updated>2025-03-29T10:21:59Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot;Mantap Dzaq, kamu mau setup **ComfyUI di Ubuntu 24.04 dengan GPU untuk Text-to-Video generation**. Ini keren banget, karena bisa hasilin video dari teks pakai model kayak **Mo...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;Mantap Dzaq, kamu mau setup **ComfyUI di Ubuntu 24.04 dengan GPU untuk Text-to-Video generation**. Ini keren banget, karena bisa hasilin video dari teks pakai model kayak **ModelScope T2V** atau **Zeroscope**, langsung di ComfyUI!&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 🔧 **Complete Steps: Installing ComfyUI + GPU + Text-to-Video on Ubuntu 24.04**&lt;br /&gt;
&lt;br /&gt;
&amp;gt; 💡 This assumes an NVIDIA GPU with the driver and CUDA already installed. If not, I can help with that too.&lt;br /&gt;
&lt;br /&gt;
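Before going further, a quick check that the driver is actually visible can save debugging later. A minimal sketch, using the standard `nvidia-smi` tool that ships with the NVIDIA driver:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Report the GPU name and driver version, or flag a missing driver.&lt;br /&gt;
if [ -n "$(command -v nvidia-smi)" ]; then&lt;br /&gt;
    gpu_status="$(nvidia-smi --query-gpu=name,driver_version --format=csv,noheader)"&lt;br /&gt;
else&lt;br /&gt;
    gpu_status="no NVIDIA driver detected"&lt;br /&gt;
fi&lt;br /&gt;
echo "GPU: $gpu_status"&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
If this prints "no NVIDIA driver detected", install the driver before continuing.&lt;br /&gt;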
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 1. Install Base Dependencies&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
sudo apt update &amp;amp;&amp;amp; sudo apt upgrade -y&lt;br /&gt;
# Ubuntu 24.04 ships Python 3.12; python3.10 needs the deadsnakes PPA first:&lt;br /&gt;
sudo add-apt-repository ppa:deadsnakes/ppa &amp;amp;&amp;amp; sudo apt update&lt;br /&gt;
sudo apt install git python3.10 python3.10-venv python3.10-dev build-essential ffmpeg -y&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&amp;gt; 🧠 **Note:** this guide targets Python 3.10.x; recent ComfyUI releases also run on newer Python (3.11/3.12), so check the README of the version you clone.&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 2. Clone ComfyUI&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
git clone https://github.com/comfyanonymous/ComfyUI.git&lt;br /&gt;
cd ComfyUI&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 3. Create and Activate a Virtual Env&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
python3.10 -m venv venv&lt;br /&gt;
source venv/bin/activate&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 4. Install Python Dependencies&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
pip install --upgrade pip&lt;br /&gt;
# Install a CUDA-enabled PyTorch build first (adjust cu121 to match your CUDA version)&lt;br /&gt;
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu121&lt;br /&gt;
pip install -r requirements.txt&lt;br /&gt;
pip install xformers&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&amp;gt; 🔧 On NVIDIA Ampere/RTX GPUs, `xformers` is strongly recommended.&lt;br /&gt;
&lt;br /&gt;
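After the installs, it is worth confirming that the PyTorch build inside the venv can actually see the GPU; a CPU-only wheel is a common cause of very slow generation. A minimal check, to be run with the venv active:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Prints True when PyTorch has a working CUDA build and a visible GPU.&lt;br /&gt;
torch_check="$(python3 -c 'import torch; print(torch.cuda.is_available())' 2&gt;/dev/null || echo 'torch not installed')"&lt;br /&gt;
echo "CUDA available: $torch_check"&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
If this prints False, you most likely got a CPU-only wheel; reinstall torch with the CUDA index URL from step 4.&lt;br /&gt;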
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 5. Download the Stable Diffusion + T2V Models&lt;br /&gt;
&lt;br /&gt;
### 📦 **Stable Diffusion (for frame generation)**&lt;br /&gt;
Place it in:&lt;br /&gt;
```&lt;br /&gt;
ComfyUI/models/checkpoints/&lt;br /&gt;
```&lt;br /&gt;
Example:&lt;br /&gt;
```bash&lt;br /&gt;
mkdir -p models/checkpoints&lt;br /&gt;
wget -O models/checkpoints/v1-5.safetensors https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.safetensors&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### 📦 **ModelScope Text-to-Video**&lt;br /&gt;
Place it in:&lt;br /&gt;
```&lt;br /&gt;
ComfyUI/models/text2video/&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Example:&lt;br /&gt;
```bash&lt;br /&gt;
mkdir -p models/text2video&lt;br /&gt;
# Keep the real extension: renaming a .bin to .safetensors does not convert the format&lt;br /&gt;
wget -O models/text2video/modelscope_t2v.bin https://huggingface.co/damo-vilab/modelscope-text-to-video-synthesis/resolve/main/pytorch_model.bin&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
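Multi-gigabyte checkpoint downloads are easy to truncate, and a truncated file only fails later with a confusing load error. A small sketch that lists whatever landed in the two model folders used above:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# List each downloaded model file with its size; count what was found.&lt;br /&gt;
found=0&lt;br /&gt;
for f in models/checkpoints/* models/text2video/*; do&lt;br /&gt;
    if [ -f "$f" ]; then&lt;br /&gt;
        echo "OK: $f ($(du -h "$f" | cut -f1))"&lt;br /&gt;
        found=$((found+1))&lt;br /&gt;
    fi&lt;br /&gt;
done&lt;br /&gt;
echo "$found model file(s) in place"&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Compare the reported sizes against the file sizes shown on the Hugging Face pages before moving on.&lt;br /&gt;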
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 6. Add T2V Nodes (ModelScope / Zeroscope)&lt;br /&gt;
&lt;br /&gt;
Clone the custom text-to-video nodes:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
cd custom_nodes&lt;br /&gt;
git clone https://github.com/Kosinkadink/ComfyUI-VideoHelperSuite.git&lt;br /&gt;
# ModelScope T2V nodes (verify the repo name in the ComfyUI Manager catalog)&lt;br /&gt;
git clone https://github.com/ExponentialML/ComfyUI_ModelScopeT2V.git&lt;br /&gt;
cd ..&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
&amp;gt; 💡 Alternatively, install ComfyUI-Manager (https://github.com/ltdrdata/ComfyUI-Manager) and pull T2V node packs from its catalog.&lt;br /&gt;
&lt;br /&gt;
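Custom node packs often ship their own `requirements.txt`, and skipping them is a classic cause of import errors at startup. A sketch that installs the extras for every pack you cloned, run from the ComfyUI root with the venv active:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
# Install the extra Python dependencies each node pack declares.&lt;br /&gt;
installed=0&lt;br /&gt;
for req in custom_nodes/*/requirements.txt; do&lt;br /&gt;
    if [ -f "$req" ]; then&lt;br /&gt;
        pip install -r "$req"&lt;br /&gt;
        installed=$((installed+1))&lt;br /&gt;
    fi&lt;br /&gt;
done&lt;br /&gt;
echo "installed dependencies for $installed node pack(s)"&lt;br /&gt;
```&lt;br /&gt;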
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 7. Run ComfyUI&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
python main.py&lt;br /&gt;
# or, to reach it from another machine: python main.py --listen 0.0.0.0 --port 8188&lt;br /&gt;
```&lt;br /&gt;
&lt;br /&gt;
Open in your browser:  &lt;br /&gt;
`http://127.0.0.1:8188`&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
## ✅ 8. Load a Text-to-Video Workflow&lt;br /&gt;
&lt;br /&gt;
1. In ComfyUI, click the `Load` button&lt;br /&gt;
2. Pick a T2V workflow JSON (I can help you build one if you don't have it yet)&lt;br /&gt;
3. Enter your prompt and click `Queue Prompt`&lt;br /&gt;
&lt;br /&gt;
---&lt;br /&gt;
&lt;br /&gt;
### ⚠️ Key Requirements:&lt;br /&gt;
- At least 16 GB RAM (more if you can)&lt;br /&gt;
- At least 8 GB GPU VRAM (more is recommended)&lt;br /&gt;
- Make sure `ffmpeg` is installed (to join the frames into a video)&lt;br /&gt;
&lt;br /&gt;
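If your workflow writes individual frames rather than a finished clip, `ffmpeg` can join them. The folder, frame-name pattern, and frame rate below are assumptions; adjust them to what your workflow actually writes:&lt;br /&gt;
&lt;br /&gt;
```bash&lt;br /&gt;
frames_dir="output"   # ComfyUI's default output folder (assumption)&lt;br /&gt;
fps=8                 # ModelScope clips are typically around 8 fps&lt;br /&gt;
if [ -n "$(command -v ffmpeg)" ] &amp;&amp; [ -f "$frames_dir/frame_00001.png" ]; then&lt;br /&gt;
    # -pix_fmt yuv420p keeps the MP4 playable in most players&lt;br /&gt;
    ffmpeg -y -framerate "$fps" -i "$frames_dir/frame_%05d.png" -c:v libx264 -pix_fmt yuv420p t2v_result.mp4&lt;br /&gt;
else&lt;br /&gt;
    echo "skipping: ffmpeg or frames not found in $frames_dir/"&lt;br /&gt;
fi&lt;br /&gt;
```&lt;br /&gt;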
---&lt;br /&gt;
&lt;br /&gt;
If you want:&lt;br /&gt;
- 🔁 Ready-to-use T2V workflows (anime style, realistic, etc.)&lt;br /&gt;
- 🚀 GPU speed optimization&lt;br /&gt;
- 📹 Frame control or AI video interpolation&lt;br /&gt;
&lt;br /&gt;
Just say the word and Gipy will help with all of it!&lt;br /&gt;
&lt;br /&gt;
Want me to go ahead and build a **workflow JSON for Text-to-Video with ModelScope** too?&lt;/div&gt;</summary>
		<author><name>Onnowpurbo</name></author>
	</entry>
</feed>