LLM: RAG contoh
Example: https://www.youtube.com/watch?v=7VAs22LC7WE
python3 -m venv venv
source venv/bin/activate

mkdir pdf
mkdir db

pip3 install langchain_community
pip3 install langchain-text-splitters
pip3 install fastembed
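The commands above only create the pdf/ and db/ folders and install the libraries; the scripts further down do not yet wire them into a retrieval pipeline. As a rough sketch of how the pieces could fit together, assuming you also run pip3 install chromadb pypdf (neither is in the install list above) and copy at least one PDF into the pdf/ directory, an ingestion and query script might look like this:

# rag_sketch.py -- illustrative sketch only, not taken from this page
import glob

from langchain_community.document_loaders import PyPDFLoader
from langchain_community.embeddings import FastEmbedEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma
from langchain_text_splitters import RecursiveCharacterTextSplitter

# 1. Load every PDF placed in the pdf/ directory
docs = []
for path in glob.glob("pdf/*.pdf"):
    docs.extend(PyPDFLoader(path).load())

# 2. Split the documents into overlapping chunks
splitter = RecursiveCharacterTextSplitter(chunk_size=1024, chunk_overlap=80)
chunks = splitter.split_documents(docs)

# 3. Embed the chunks with FastEmbed and persist the vectors in ./db
vector_store = Chroma.from_documents(
    documents=chunks,
    embedding=FastEmbedEmbeddings(),
    persist_directory="db",
)

# 4. Retrieve the most relevant chunks and let llama3 answer from them
llm = Ollama(model="llama3")
retriever = vector_store.as_retriever(search_kwargs={"k": 3})

question = "What are these documents about?"
context = "\n\n".join(d.page_content for d in retriever.invoke(question))
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer)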
Edit apps.py
from langchain_community.llms import Ollama

llm = Ollama(model="llama3")

# Ask the local llama3 model a question and print its answer
response = llm.invoke("Why is the sky blue?")
print(response)
Run
python3 apps.py
Flask
pip3 install flask
Edit apps.py
from flask import Flask
from langchain_community.llms import Ollama

app = Flask(__name__)

llm = Ollama(model="llama3")
response = llm.invoke("Why is the sky blue?")
print(response)
More detail can be found at
https://github.com/ThomasJay/RAG/blob/main/app.py
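The Flask snippet above still only prints the answer at startup and does not expose any routes. As a minimal sketch of how the LLM call could be served over HTTP (the /ask route name, the "query" JSON field, and port 8080 are assumptions, not taken from this page or from the repository), something like the following would work:

from flask import Flask, jsonify, request
from langchain_community.llms import Ollama

app = Flask(__name__)
llm = Ollama(model="llama3")

# Hypothetical endpoint: POST a JSON body like {"query": "Why is the sky blue?"}
@app.route("/ask", methods=["POST"])
def ask():
    question = request.json.get("query", "")
    answer = llm.invoke(question)
    return jsonify({"answer": answer})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)

It can then be tested with, for example:

curl -X POST -H "Content-Type: application/json" -d '{"query": "Why is the sky blue?"}' http://localhost:8080/ask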