In this video I show how I was able to install an open source Large Language Model (LLM) called h2oGPT on my local computer for 100% private, 100% local chat with a GPT.
Links
* h2o website: https://h2o.ai/
* h2oGPT UI: https://falcon.h2o.ai/
* h2oGPT GM-UI: https://gpt-gm.h2o.ai/
* h2oGPT GitHub repo: https://github.com/h2oai/h2ogpt
* h2o Discord: https://discord.gg/WKhYMWcVbq
Timeline:
00:00 100% Local Private GPT
01:01 Try h2oGPT Now
02:03 h2oGPT GitHub and Paper
03:11 Model Parameters
04:18 Falcon Foundational Models
06:34 Cloning the h2oGPT Repo
07:30 Installing Requirements
09:48 Running CLI
11:13 Running h2oGPT UI
12:20 Linking to Local Files
14:14 Why Open Source LLMs?
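The setup steps covered in the chapters above (cloning the repo, installing requirements, and launching the UI) can be sketched roughly as follows. This is a minimal outline based on the h2oGPT repo's documented `generate.py` entry point; the specific model name passed to `--base_model` is an assumption and may differ from the one used in the video, so check the repo README for current options:

```shell
# Clone the h2oGPT repository and enter it
git clone https://github.com/h2oai/h2ogpt.git
cd h2ogpt

# Install dependencies into a fresh virtual environment
python -m venv venv && source venv/bin/activate
pip install -r requirements.txt

# Launch the UI with a Falcon-based model
# (model name is an assumption -- see the repo README for current options)
python generate.py --base_model=h2oai/h2ogpt-gm-oasst1-en-2048-falcon-7b
```

Running a 7B-parameter model locally like this generally requires a GPU with enough VRAM (or a quantized variant for CPU-only machines), as discussed in the video.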
Links to my stuff:
* YouTube: https://youtube.com/@robmulla?sub_confirmation=1
* Discord: https://discord.gg/HZszek7DQc
* Twitch: https://www.twitch.tv/medallionstallion_
* Twitter: https://twitter.com/Rob_Mulla
* Kaggle: https://www.kaggle.com/robikscube