inari@piefed.zip to Technology@lemmy.world · English · 2 days ago

DeepSeek ditches Nvidia for Huawei chips in V4 launch (cybernews.com)
sp3ctr4l@lemmy.dbzer0.com · 1 day ago

Sorry, I’m not entirely sure what you mean.

Did you mean to say:

“And need to have the best consumer GPU on the market, to run an LLM.”

… likely alluding to an RTX 5090?

So you would be saying that the idea that everyone needs extremely expensive hardware to run an LLM is basically bullshit?
Diurnambule@jlai.lu · 1 day ago

Hello, no, sorry: autocorrect and typing fast do that to my posts. I meant that Nvidia is already the worst option for a consumer graphics card, since AMD makes a card with 20 GB of VRAM that can run most open-weight models.
sp3ctr4l@lemmy.dbzer0.com · 1 day ago

Aha! OK, that makes sense as well.
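For anyone curious what “running most open-weight models” on a 20 GB card looks like in practice, here is a minimal sketch. It assumes a ROCm/HIP-enabled build of llama-cpp-python (which supports AMD GPUs) and a hypothetical local quantized GGUF file; the model path and prompt are illustrative only.

```python
# Minimal sketch: running a quantized open-weight model on a 20 GB consumer GPU.
# Assumes llama-cpp-python built with ROCm/HIP support for AMD cards.
# The model file below is a hypothetical example, not a specific recommendation.
from llama_cpp import Llama

llm = Llama(
    model_path="./mistral-7b-instruct.Q5_K_M.gguf",  # any quantized open-weight model
    n_gpu_layers=-1,  # offload all layers to the GPU; quantized 7B-13B models fit well under 20 GB
    n_ctx=4096,       # context window size
)

out = llm("Explain why quantization reduces VRAM usage.", max_tokens=128)
print(out["choices"][0]["text"])
```

The point of the sketch is that quantization (here a 5-bit GGUF) shrinks the weights enough that no top-end card is required; the 20 GB of VRAM is the budget the layers and context have to fit into.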