10 comments

  • pornel 34 minutes ago

    It looks nice. I've been searching for something like this recently, and was frustrated with rankings that lack the latest models or don't clearly distinguish quantizations.

    Showing quality loss per quantization is nice.

    I'd prefer this as a website, since I'd handle running the model with a dedicated inference server anyway.

    It would be nice to see the maximum context length that can fit on top of the baseline.

    I was surprised how much token generation speed tanks when using very long context. 30/s can drop down to 2/s. A single speed metric didn't prepare me for that.

    I was also positively surprised that some models scale well with batch parallelism. I can get 4x speed improvement by running 8 requests in parallel. But this affects memory requirements, and doesn't apply to all models and inference engines. It would be nice to show that. Some sites fold it into "what's your workflow", but that's too opaque.

    KV cache quantization also makes a difference for speed, VRAM usage and max usable context.
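
    Back-of-the-envelope, with illustrative 7B-class numbers (32 layers, 8 KV heads, head_dim 128) rather than any particular engine's defaults: the KV cache grows linearly with both the per-element width and the number of concurrent requests, which is why cache quantization and batching move the VRAM number so much.

        # rough sketch only, not tied to a specific inference engine
        def kv_cache_gib(ctx, batch=1, bytes_per_elem=2.0):  # 2 = fp16, 1 = q8_0, ~0.5 = q4_0
            # 2x for K and V, times layers, KV heads, head dim, tokens, and parallel requests
            return 2 * 32 * 8 * 128 * ctx * batch * bytes_per_elem / 2**30

        kv_cache_gib(32_768)                    # ~4.0 GiB: fp16 cache, one request
        kv_cache_gib(32_768, bytes_per_elem=1)  # ~2.0 GiB with an 8-bit cache
        kv_cache_gib(32_768, batch=8)           # ~32 GiB with 8 parallel requests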

    On Apple Silicon, MLX-compatible model builds make a difference, so I'd like the benchmarks to reassure me they're based on the fastest implementation.

    Multi-token-prediction is another aspect that may substantially change speed.

  • jordiburgos 23 minutes ago

    This is very helpful too: https://www.canirun.ai/

  • sleepyeldrazi 32 minutes ago

    I love this community. I started building a simple website for exactly this a couple of hours ago, and you made an even more advanced version already. Hats off to you, sir.

    If I ever decide to actually publish the site, is it alright if I mention you somewhere, as in "If you want a more accurate estimation, check out this project: <your repo>"? I think there is value in having a simple website estimate this information for you and give you instructions / common flags on how to start it yourself (plus a prompt crafted for you to optionally hand to an LLM to set it up for you), but I'm going off a simple "choose an OS, GPU/VRAM, here's a list of options" flow rather than actually scanning the machine (which is a lot more accurate).

  • llagerlof 43 minutes ago

    What’s new here compared to llmfit?

    https://github.com/AlexsJones/llmfit

    • rvz 40 minutes ago

      Other than it (whichllm) being written in Python, nothing else.

      I just use llmfit.

  • Bigsy an hour ago

    Brew install is broken

    It seems pretty rubbish, I have to say: it's recommending me loads of qwen 2.5 models, which are really old, and I'm easily running qwen3.5 and 3.6 models on this Mac at decent quants.

  • Jasssss an hour ago

    The plan command is clever. How do you handle the VRAM estimation for models with sliding window attention vs full context? Something like Mistral at 32k context uses way less KV cache than Llama at the same context length, but from the README it looks like the estimation is based on a fixed context size. Does it account for that?
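
    For a sense of scale, here's a rough sketch with Mistral-7B-style dimensions (32 layers, 8 KV heads, head_dim 128, fp16 cache, 4k window), not the tool's actual estimator: with sliding-window attention each layer only ever caches the last window of tokens, so the cache stops growing once the context passes the window size.

        # illustrative only; real engines differ (some keep a full cache for some layers)
        def kv_bytes(ctx, sliding_window=None, bytes_per_elem=2):
            cached = min(ctx, sliding_window) if sliding_window else ctx
            return 2 * 32 * 8 * 128 * cached * bytes_per_elem

        kv_bytes(32_768) / 2**30                        # full attention: ~4.0 GiB
        kv_bytes(32_768, sliding_window=4096) / 2**30   # 4k sliding window: ~0.5 GiB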

  • kramit1288 an hour ago

    Accurate memory estimation is key here. It will crash if that estimate is off, and it can't be generic across all local LLMs; each one needs its own context estimate.

  • macwhisperer 42 minutes ago

    Can you add in the other quants, like IQ3_M?

    Also, my personal simple rule of thumb for local AI sizing is:

    max model size (GB) = ram (GB) / 1.65
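
    Worked out (just the arithmetic behind my heuristic, nothing measured):

        def max_model_gb(ram_gb):
            # the 1.65 divisor is a heuristic leaving headroom for the OS, context, and runtime overhead
            return ram_gb / 1.65

        max_model_gb(32)  # ~19 GB
        max_model_gb(64)  # ~39 GB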

  • pbronez 19 minutes ago

    Cool, but it looks like it doesn’t actually test anything on your machine? It does hardware detection and then some lookups. Maybe I missed it but I really want a tool like this to actually run a model on my machine to get the speed numbers.

    I’ve been using RapidMLX for this. The integrated speed tests matter because the quality of the backend is a moving target, and the quantization / MLX format conversion also matters. It's not enough to say “oh, use this model family with X parameters”; you have to add the architecture-specific quantization too.

    https://github.com/raullenchai/Rapid-MLX