TIL: Storing MLX models on an external drive

#llm #mlx #huggingface

TL;DR: Create a directory to hold Hugging Face data and set the environment variable HF_HOME to that directory's path.

For example:

mkdir /Volumes/externaldrive/huggingface
export HF_HOME='/Volumes/externaldrive/huggingface' 

You'll also want to set it in your .zshrc, .profile, or wherever you keep environment variables, so it persists across shell sessions.
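A minimal sketch of persisting the setting. The profile file and the drive path are examples; use whichever file your shell actually reads and wherever your drive is mounted.

```shell
# Append the export to your shell profile (zsh is the macOS default;
# bash users would use ~/.bashrc or ~/.profile instead).
PROFILE="$HOME/.zshrc"
echo 'export HF_HOME="/Volumes/externaldrive/huggingface"' >> "$PROFILE"
grep 'HF_HOME' "$PROFILE"   # confirm the line landed
```

New terminal sessions will then pick up HF_HOME automatically.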

Explanation

I use Simon Willison's llm tool for interacting with LLMs from the CLI or from Python. It's particularly nice because you can access many different LLM providers using plugins.

One such plugin is llm-mlx. It uses Apple's MLX library to run models locally on Apple hardware. Being able to run models under your own control is obviously very interesting, but it means storing gigabytes of model weights, and if you're running a Mac you probably don't want those on your expensive, not-so-large internal hard drive. I, for example, would much rather they live on my external drive, which is much roomier.

llm-mlx gets its models from the mlx-community organization on Hugging Face, and it downloads them through the standard Hugging Face cache, which is why setting HF_HOME is enough to relocate them.
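As a sketch, the workflow looks like this. The command names follow the llm-mlx README as I understand it, the model name is just an example, and the download steps only run on Apple Silicon, so they're shown as comments here:

```shell
# Hedged sketch of the llm-mlx workflow (Apple Silicon only):
#
#   llm install llm-mlx
#   llm mlx download-model mlx-community/Llama-3.2-3B-Instruct-4bit
#   llm -m mlx-community/Llama-3.2-3B-Instruct-4bit "Describe pelicans"
#
# Because downloads go through the standard Hugging Face cache, the weights
# land under $HF_HOME/hub once HF_HOME is set:
HF_HOME=/Volumes/externaldrive/huggingface   # example path from above
echo "Models will be cached under: $HF_HOME/hub"
```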

The external drive will likely be a little slower to load from than the internal one, but as with any performance question, you'll have to measure for your specific case and choose the tradeoffs that work for you.
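A crude way to put a number on the difference is to time a large sequential write and read in each location. This is only a rough sketch: the paths are examples, and dd throughput is just a proxy for model-loading speed.

```shell
# Rough throughput check: write 64 MB, read it back, timing each step.
benchmark() {
  f="$1/.speedtest.tmp"
  time dd if=/dev/zero of="$f" bs=1048576 count=64 2>/dev/null
  time dd if="$f" of=/dev/null bs=1048576 2>/dev/null
  rm -f "$f"
}
benchmark "$HOME"                                # internal drive
# benchmark /Volumes/externaldrive/huggingface   # then compare with the external drive
```

Note that the read may be flattered by the filesystem cache; for a fairer comparison, use a file larger than your RAM or remount the drive between runs.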