• Paradox@lemdro.id · 5 months ago

    Can I download their model and run it on my own hardware? No? Then they’re inferior to DeepSeek.

    • Teanut@lemmy.world · 5 months ago

      In fairness, unless you have about 800GB of VRAM/HBM you’re not running the true DeepSeek R1 yet. The smaller models are Llama or Qwen models distilled from DeepSeek R1.

      I’m really hoping DeepSeek releases smaller models that I can fit on a 16GB GPU and try at home.
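
      For a rough sense of where the ~800GB figure comes from, here’s a back-of-envelope sketch in Python. The 671B parameter count for the full R1 and the FP8 / 4-bit precisions are my own assumptions, and it only counts weights, not KV cache or activations:

      ```python
      # Rough weight-memory estimate; ignores KV cache and runtime overhead.
      def weight_vram_gb(params_billion: float, bits_per_param: float) -> float:
          """GB of GPU memory needed just to hold the model weights."""
          return params_billion * 1e9 * bits_per_param / 8 / 1e9

      full_r1 = weight_vram_gb(671, 8)   # full DeepSeek R1 (assumed 671B params, FP8 weights)
      distill = weight_vram_gb(14, 4)    # hypothetical 14B Qwen distill, 4-bit quantized

      print(f"Full R1 @ FP8:       ~{full_r1:,.0f} GB")  # ~671 GB; add cache/overhead and you land in the ~800 GB class
      print(f"14B distill @ 4-bit: ~{distill:.1f} GB")   # ~7 GB; fits a 16GB card with room for context
      ```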

      • Padit@feddit.org · 5 months ago

        Well, honestly: I have this kind of computational power at my university, and we are in dire need of a locally hosted LLM for a project, so at least for me as a researcher, it’s really, really cool to have that.

        • Teanut@lemmy.world · 5 months ago

          Lucky you! I need to check my university’s current GPU power, but sadly my thesis won’t need that kind of horsepower, so I won’t be able to give it a try unless I pay AWS or someone else for it on my own dime.