GPU farming rewards

Kicking off a thread here to start working on a model for rewarding farmers with GPUs. Here’s what I started writing on Telegram before thinking to make this a forum post :slight_smile:

GPU support is now planned for the 3.8 release, so two releases out. We still need a spec for how this fits into the farming model. I’m not very knowledgeable about GPUs, but with just a little research I can see this is going to be tricky. It seems that evaluating GPU performance is probably more complex than evaluating the performance of an entire computer without one, and outcomes are highly application specific. Incorporating a benchmark could be helpful, rather than just using an algorithm based on various specs. Blender’s benchmark looks like maybe one of the better options, in terms of Linux support and open licensing.

A few links to resources informing what I wrote above:

  • Benchmark lists - Passmark, Tom’s Hardware (these tests don’t run on Linux)
  • Reddit thread on how GPU specs relate to performance (it’s complicated, architecture and application dependent, vendor and drivers matter a lot)
  • List of benchmarks that run on Linux (many of these are not open source or even free to run on the command line)

A few other thoughts:

  • Memory generation seems very important. GDDR4 performs much worse than GDDR5/6
  • I guess we’ll call the derived GPU score GU for “graphics units”
  • Based on what I read in the implementation issue here, a workload must reserve a GPU entirely. This means GUs could be based entirely on a benchmark, for example, without needing a notion that you can reserve so many CUDA cores and so much VRAM. Phew.
  • Probably this process should involve spot checking some GPUs against the formula to see what their ROI is and how it compares to non-GPU hardware investments (recall that our formulas involve assigning a dollar-normalized monthly reward for each resource unit, then determining token rewards from there based on the entry price). There’s a rough sketch of such a spot check below.
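
To make that last point concrete, here’s roughly what a spot check could look like. Everything below is a placeholder for illustration: the function names, the GU score, the dollar-per-GU rate, and the prices are made up, not actual reward parameters.

```python
# Hypothetical ROI spot check for a GPU purchase. All values are placeholders,
# not actual ThreeFold reward parameters.

def monthly_token_reward(gu_score, usd_per_gu_month, token_entry_price_usd):
    """Dollar-normalized monthly reward for the GPU, converted to tokens
    at the entry price used when the node was registered."""
    usd_reward = gu_score * usd_per_gu_month
    return usd_reward / token_entry_price_usd

def payback_months(hardware_cost_usd, gu_score, usd_per_gu_month,
                   token_entry_price_usd, assumed_token_price_usd):
    """Months until the cumulative reward value covers the hardware cost,
    assuming a flat token price over the whole period."""
    tokens_per_month = monthly_token_reward(gu_score, usd_per_gu_month,
                                            token_entry_price_usd)
    usd_value_per_month = tokens_per_month * assumed_token_price_usd
    return hardware_cost_usd / usd_value_per_month

# Example: a card scoring 100 GU, $2 per GU per month, $0.08 entry price,
# $0.08 assumed market price, $800 card -> ~4 months to break even.
print(payback_months(800, 100, 2.0, 0.08, 0.08))
```

Running the same numbers for a few real cards and for comparable non-GPU hardware would show whether a given dollar-per-GU rate keeps the two roughly in line.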

Generally, it would be interesting to know in which direction ThreeFold wants to go:

  1. Combining the ThreeFold nodes with additional GPUs, thus enhancing computation power for more use cases - for instance deep learning / AI
  2. GPU-heavy loads like rendering
  3. Both
    Different customers - different specs for the servers.

The reward system for GPUs will reflect some kind of benchmark of the GPU. But will that be enough, or should some kind of usage also factor into the size of the reward?
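
Purely as an illustration of that question (nothing here has been proposed or decided), a usage component could blend a fixed benchmark-based reward with a utilization-based share. The weight `alpha` and all the numbers below are made up.

```python
# Illustrative only: blend a fixed, benchmark-based reward with a
# utilization-based component. alpha=1.0 reproduces a purely fixed reward.

def gpu_reward_usd(gu_score, usd_per_gu_month, utilization, alpha=0.6):
    """utilization is the fraction of the month the GPU was reserved (0..1)."""
    base = gu_score * usd_per_gu_month
    return base * (alpha + (1 - alpha) * utilization)

# An idle card earns 60% of the fixed reward, a fully reserved card 100%.
print(gpu_reward_usd(100, 2.0, utilization=0.0))  # 120.0
print(gpu_reward_usd(100, 2.0, utilization=1.0))  # 200.0
```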

If we include option 2 and give out fixed rewards, we could get flooded by the miner community.
I think the first option would be better for the ThreeFold network: adding GPUs to the existing servers where possible.

If half of the servers receive 1-2 GPUs, what would this mean for the tokenomics? What is a CPU worth compared to a GPU? In commercial clouds like AWS and Azure, adding GPUs is very pricey…

I think the intention is to cover a variety of use cases, including asynchronous number-crunching jobs where having the GPU in an already powerful box would be beneficial, and live rendering for games or the metaverse where edge positioning and latency matter more.

Having written that, I see the crux of what you’re getting at. What should be the minimum compute/storage spec (likely CU quantity) needed for GPU farming? Indeed, GPU mining rigs typically only have minimal RAM and disk.

My preference is to encourage well rounded nodes, at least to start. As utilization grows, we can consider tweaking the system to serve the use cases we’re seeing in practice.
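
For illustration only, the “well rounded node” idea could eventually be expressed as a simple eligibility gate like the one below. The thresholds are made-up placeholders, not proposed values.

```python
# Hypothetical gate: a node's GPUs only earn GUs if the node itself meets a
# minimum compute/storage spec. Thresholds are placeholders for illustration.

MIN_CU = 4   # placeholder minimum compute units
MIN_SU = 2   # placeholder minimum storage units

def gpu_rewards_eligible(cu, su):
    """True if the node is 'well rounded' enough to farm GUs."""
    return cu >= MIN_CU and su >= MIN_SU

print(gpu_rewards_eligible(cu=8, su=5))  # True: balanced node
print(gpu_rewards_eligible(cu=1, su=0))  # False: mining-rig style box
```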

Just to record some notes from discussion around this on Telegram:

  • Suggestion from the community to scale rewards based on published GFLOPS figures
  • My feeling is that this would actually be difficult to implement (we’d need to find a reliable data source with a compatible license, and we can’t test the matching algorithm without access to a variety of actual cards)
  • GFLOPS can be calculated easily from core count and clock speed, but that information is difficult to query, especially in a Linux environment
  • A very simple “benchmark” would be to measure GFLOPS capacity directly, by running a dummy calculation a set number of times and timing it (see the sketch after this list)
  • We plan to use Passmark for CPUs because it provides a better picture of real performance than just core count and clock speed. I hope we can find a similar solution here
  • Every approach will favor certain hardware over others. There’s no “one right answer” when catering to various applications. Changing the rewards system after launch is best avoided if at all possible
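
As a rough sketch of that dummy-calculation idea: time a known number of floating point operations on the GPU and divide by the elapsed time. This assumes PyTorch and a CUDA-capable card just for illustration; whatever actually ships in Zero-OS would look different.

```python
# Timing-based GFLOPS estimate: run a known amount of matrix math on the GPU
# and divide the operation count by elapsed time. Sketch only; assumes PyTorch
# with a CUDA device, not the actual farming benchmark.
import time
import torch

def estimate_gflops(n=4096, iterations=20):
    a = torch.randn(n, n, device="cuda")
    b = torch.randn(n, n, device="cuda")
    torch.matmul(a, b)               # warm-up run
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(iterations):
        torch.matmul(a, b)
    torch.cuda.synchronize()         # wait for all queued kernels to finish
    elapsed = time.perf_counter() - start

    flops = 2 * n**3 * iterations    # ~2*n^3 floating point ops per n x n matmul
    return flops / elapsed / 1e9

print(f"estimated {estimate_gflops():.0f} GFLOPS")
```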