Kicking off a thread here to start working on a model for rewarding farmers with GPUs. Here’s what I started writing on Telegram before thinking to make this a forum post …
GPU support is now planned for the 3.8 release, so two releases out. We still need a spec for how this fits into the farming model. I’m not very knowledgeable about GPUs, but with just a little research I can see this is going to be tricky. Evaluating GPU performance seems to be even more complex than evaluating the performance of an entire computer without a GPU, and outcomes are highly application specific. Incorporating a benchmark could be more helpful than an algorithm based only on raw specs. Blender’s benchmark looks like one of the better options in terms of Linux support and open licensing.
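To make the benchmark idea a bit more concrete, here’s a minimal sketch of what I mean by deriving a score from a benchmark result rather than from specs. Everything here is a placeholder I made up for illustration (the baseline number, the function names), not a proposal for actual values:

```python
# Rough sketch only. Assumes we get one benchmark score per card, e.g. a
# Blender-style "samples per minute" figure. The baseline value is made up.

REFERENCE_SCORE = 1000.0  # hypothetical samples/min of whatever card we pick as the 1.0x baseline

def relative_gpu_score(samples_per_minute: float) -> float:
    """Scale a raw benchmark result against the chosen baseline card."""
    return samples_per_minute / REFERENCE_SCORE

# Example: a card benchmarking at 2500 samples/min would land at 2.5x the baseline.
print(relative_gpu_score(2500.0))  # -> 2.5
```

One open question for the spec would be whether a single aggregate number like this is enough, or whether we’d want something like a median across scenes to smooth out application-specific quirks.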
A few links to resources informing what I wrote above:
- Benchmark lists: Passmark, Tom’s Hardware (these tests don’t run on Linux)
- Reddit thread on how GPU specs relate to performance (it’s complicated, architecture and application dependent, vendor and drivers matter a lot)
- List of benchmarks that run on Linux (many of these are not open source or even free to run on the command line)
A few other thoughts:
- Memory generation seems very important: GDDR4 performs much worse than GDDR5/6
- I guess we’ll call the derived GPU score GU for “graphics units”
- Based on what I read in the implementation issue here, a workload must reserve a GPU entirely. This means GUs could be based entirely on a benchmark, for example, without needing a notion of reserving so many CUDA cores and so much VRAM. Phew.
- Probably this process should involve spot checking some GPUs against the formula to see what their ROI is and how it compares to non-GPU hardware investments (recall that our formulas assign a dollar-normalized monthly reward to each resource unit, then determine token rewards from there based on the entry price). A rough sketch of that calculation follows below.
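Here’s the shape of the spot check I have in mind, again as a hedged sketch: the reward per GU, the entry price, the token price, and the hardware cost are all made-up placeholders, and the function names are mine, not anything from the existing model.

```python
# Placeholder numbers throughout -- only the shape of the calculation matters.
GU_REWARD_USD_PER_MONTH = 5.0   # hypothetical dollar-normalized monthly reward per GU
TOKEN_ENTRY_PRICE_USD = 0.08    # hypothetical entry price used to convert USD rewards to tokens

def monthly_token_reward(gu: float) -> float:
    """Dollar-normalized reward for the card, converted to tokens at the entry price."""
    return (gu * GU_REWARD_USD_PER_MONTH) / TOKEN_ENTRY_PRICE_USD

def payback_months(hardware_cost_usd: float, gu: float, assumed_token_price_usd: float) -> float:
    """Months to recover the hardware cost if rewards are sold at an assumed token price."""
    monthly_usd = monthly_token_reward(gu) * assumed_token_price_usd
    return hardware_cost_usd / monthly_usd

# Spot check: a hypothetical $600 card scoring 2.5 GU, sold at the entry price.
print(payback_months(600.0, gu=2.5, assumed_token_price_usd=0.08))  # -> 48.0 months
```

Running the same payback calculation for an equivalently priced non-GPU investment (using the existing resource units) would tell us whether the GU reward level is in a sane range relative to the rest of the model.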