China modded GPUs (e.g. 4090 48GB) --> I'm gonna figure it out. IS THERE NO ONE ELSE CURIOUS??
Mirrored from r/LocalLLaMA for archival readability.
There's a dearth of information (in the English-speaking world) about these cards.
The best recent video is probably this one:
https://www.youtube.com/watch?v=TcRGBeOENLg
Even in this subreddit, there seem to be few reviews of these cards.
Last couple of decent threads:
https://www.reddit.com/r/LocalLLaMA/comments/1s62b23/bought_rtx4080_32gb_triple_fan_from_china/
https://www.reddit.com/r/LocalLLaMA/comments/1nifajh/i_bought_a_modded_4090_48gb_in_shenzhen_this_is/
Is there really NO ONE else who has tried these?
In particular:
- Software / BIOS / driver quirks that make them NOT run like an unmodded card
- Short-term consistency: does it run fast for a quick test but hang/die when stressed?
- Long-term reliability: does the whole thing fail within two months of regular usage?
- Are the benchmarks good? Where are the results??
- Source and price?
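If anyone does get hands on one, a basic first sanity check is whether the modded VRAM is even reported correctly to the driver. A minimal sketch, assuming `nvidia-smi` is on PATH; the ~49,000 MiB figure is roughly what a stock 48GB Ada card reports, so treat it as a ballpark, not a spec:

```python
import subprocess

def parse_vram_mib(csv_line: str) -> int:
    # Parse one line of:
    #   nvidia-smi --query-gpu=memory.total --format=csv,noheader
    # which looks like "49140 MiB"
    value, unit = csv_line.strip().split()
    assert unit == "MiB", f"unexpected unit: {unit}"
    return int(value)

def query_vram_mib() -> int:
    # Ask the driver for total memory of GPU 0
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader"],
        text=True,
    )
    return parse_vram_mib(out.splitlines()[0])

def looks_like_48gb(vram_mib: int) -> bool:
    # A genuine 48GB card should report somewhere near 49,000 MiB;
    # a stock 4090 reports roughly 24,500 MiB.
    return vram_mib >= 48_000
```

Passing this check only means the BIOS/driver advertise the capacity; it says nothing about whether all of it is usable under sustained load, which is exactly the short-term consistency question above.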
The Chinese video site Bilibili has a ton of videos, and Taobao (and other e-commerce sites) have lots of sellers.
If I can piece together enough research, I may also visit Shenzhen to pick up a few.
If you're interested in this space, DM me. I hope to form a group to split up the research effort.
Any native Chinese speakers familiar with this space, please join in too.
EDIT:
Some downvotes going on. Unclear if it's some larger suppression of this topic or just angry people.