Running Local LLMs
The next computer I get will have much more memory, just so I can run local LLMs. Running models locally gives me the option of developing an LLM application entirely on my own machine, then switching to a better model for deployment.
Today I tried running models locally, and it was surprisingly easy. I tried two ways:
- using the 'llm' tool: https://llm.datasette.io/en/stable/ The commands I used (a couple more are sketched after this list):

      llm install llm-gpt4all
      llm -m mistral-7b-instruct-v0 'say hi in spanish'
- using a llamafile; I used this one: https://huggingface.co/Mozilla/Llama-3.2-1B-Instruct-llamafile (the download-and-run steps are sketched below)
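A couple of other 'llm' commands that were handy once the plugin was installed (a minimal sketch; check llm --help for the version you have):

    # list the models the installed plugins make available
    llm models

    # interactive chat, which keeps the model loaded between prompts
    llm chat -m mistral-7b-instruct-v0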
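For the llamafile route, the rough workflow is: download the file, mark it executable, and run it; it starts a local chat UI and an OpenAI-compatible server. The filename below is a placeholder for whichever file you grab from the Hugging Face page, and the port is the documented default:

    # make the downloaded llamafile executable and start it
    chmod +x Llama-3.2-1B-Instruct.llamafile   # placeholder filename
    ./Llama-3.2-1B-Instruct.llamafile

    # then, assuming the default server address (http://localhost:8080),
    # you can also hit the OpenAI-compatible endpoint from the command line:
    curl http://localhost:8080/v1/chat/completions \
      -H "Content-Type: application/json" \
      -d '{"messages": [{"role": "user", "content": "say hi in spanish"}]}'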