
Large Language Models (LLM) Running @ Home

Running large language models (LLMs) on home hardware can be challenging because of the significant computational resources these models require. However, with the right setup and configuration, it is possible to run LLMs for local inference on your personal computer or laptop.

The first step in running an LLM on your home hardware is to make sure you have enough processing power and memory. Most LLMs need at least 8GB of RAM and a reasonably powerful CPU, such as an Intel Core i7 or AMD Ryzen 9. A dedicated graphics card (GPU) will speed up inference considerably, but it is optional; in my setup everything runs on the CPU.
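
A quick hardware sanity check, assuming a Linux machine (the last command only applies if you have an NVIDIA GPU):

# Check available memory (look for at least ~8GB total)
free -h

# Check CPU model and core count
lscpu | grep -E 'Model name|^CPU\(s\)'

# Optional: check for an NVIDIA GPU (skip if running CPU-only)
nvidia-smi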

Installing Ollama is super simple:

curl https://ollama.ai/install.sh | sh
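
If the script finishes without errors, you can confirm the install and pull the model ahead of time (a minimal sketch; the download is a few gigabytes, so it may take a while):

# Confirm the CLI is on your PATH
ollama --version

# Pull the Mistral model ahead of time (optional; "ollama run" also pulls it on first use)
ollama pull mistral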

Once installed, you can run a quick test and chat with a model:

ollama run mistral --verbose
>>> What is the speed of light?
The speed of light (often denoted by "c") is approximately 299,792,458 meters per second (or about 186,282 miles per second) in a vacuum. This is considered the maximum speed at which all energy and matter in the universe can travel.

total duration:       34.340058373s
load duration:        2.834786ms
prompt eval count:    17 token(s)
prompt eval duration: 2.704563s
prompt eval rate:     6.29 tokens/s
eval count:           66 token(s)
eval duration:        31.538673s
eval rate:            2.09 tokens/s
>>> Send a message (/? for help)

Asked again, this time prompting it for a short answer.

>>> What is the speed of light? short answer only
Approximately 299,792,458 meters per second (or about 186,282 miles per second) in a vacuum.

total duration:       20.607172456s
load duration:        3.733685ms
prompt eval count:    19 token(s)
prompt eval duration: 3.104334s
prompt eval rate:     6.12 tokens/s
eval count:           37 token(s)
eval duration:        17.402699s
eval rate:            2.13 tokens/s
>>> Send a message (/? for help)
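
Beyond the interactive prompt, Ollama also serves a local HTTP API (on port 11434 by default), so you can script against it. A minimal sketch with curl, assuming the mistral model is already pulled:

# Ask the same question via the local REST API (non-streaming response)
curl http://localhost:11434/api/generate -d '{
  "model": "mistral",
  "prompt": "What is the speed of light? short answer only",
  "stream": false
}'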

And just like that, you now have your own LLM up and running inference entirely locally! Happy tinkering and learning.

Reference links:

Ollama site: https://ollama.ai/

Model library: https://ollama.ai/library
