Stop depending on these proprietary LLMs. Go to [email protected].
There are open-source LLMs you can run on your own computer if you have a powerful GPU. Models like OLMo and Falcon are made by true non-profits and universities, and they reach GPT-3.5 level of capability.
There are also open-weight models that you can run locally and fine-tune to your liking (although these don’t have open-source training data or code). The best of these (Alibaba’s Qwen, Meta’s Llama, Mistral, DeepSeek, etc.) match and sometimes exceed GPT-4o capabilities.
You can do CPU inference too! As long as you have enough RAM to load GGUF-format models :)
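As a rough rule of thumb (an assumed figure, not an exact formula): the whole GGUF file plus some overhead for the KV cache and buffers has to fit in RAM. A minimal sketch of that sanity check:

```python
def fits_in_ram(gguf_size_gb: float, ram_gb: float,
                overhead: float = 1.2) -> bool:
    """Rough feasibility check for CPU inference: the GGUF file
    plus ~20% overhead (KV cache, buffers -- an assumed margin,
    it grows with context length) must fit in available RAM."""
    return gguf_size_gb * overhead <= ram_gb

# e.g. a ~4.7 GB Q4 quant of an 8B model on a 16 GB machine fits,
# while a ~40 GB quant of a 70B model does not:
print(fits_in_ram(4.7, 16.0))
print(fits_in_ram(40.0, 16.0))
```

The `overhead` margin and the function itself are just illustrations; actual memory use depends on quantization and context size.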
Booooooooooo!
Anyway: I’ll just keep using Alpaca to run LLMs locally.
Is there an easy way to do this that doesn’t require me to understand how GitHub works?
I recommend Ollama; it’s easy to set up, and its CLI can download and run LLMs. With a bit more tech-savviness you can add Open WebUI as a nice UI on top.
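For the more tech-savvy route: Ollama also serves a local HTTP API on port 11434, which is what front-ends like Open WebUI talk to. A minimal Python sketch of the payload for its documented `/api/generate` endpoint (the helper name is mine; actually sending it requires a running `ollama serve` and a pulled model):

```python
import json
import urllib.request

def build_generate_request(model: str, prompt: str) -> dict:
    # Fields follow Ollama's /api/generate API; stream=False asks
    # for one complete JSON response instead of a token stream.
    return {"model": model, "prompt": prompt, "stream": False}

payload = build_generate_request("llama3", "Why is the sky blue?")

# With `ollama serve` running locally, this would send the request:
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```

The network call is left commented out since it depends on a local server being up; the payload shape is the part worth knowing.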