That's where this gets weird: I have tested them, multiple times, and the drives are fully functional under Windows and Linux.
I had a couple of boots into Ubuntu where performance wasn't right, and using hpssacli to pull the drive details fixed it. Literally just querying the details. Zero malfunction under Windows.
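For reference, the query was along these lines. The slot number is hypothetical (check yours with `hpssacli ctrl all show`), and hpssacli syntax can vary a little between versions:

```shell
#!/bin/sh
# Hypothetical controller slot; adjust for your hardware.
SLOT=0

if command -v hpssacli >/dev/null 2>&1; then
    # Just reading the physical-drive details -- no configuration
    # change at all -- was enough to restore performance for that boot.
    hpssacli ctrl slot="$SLOT" pd all show detail
else
    echo "hpssacli not installed; nothing to query"
fi
```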
I did a ton of digging and found that the 845DC and the 840 EVO have very similar hardware, with the key difference being a non-active COM port on the consumer line.
Both nodes also have PNY and WD SSDs running alongside them that do not malfunction.
I have flashed the RAID controller, drive firmware, system board firmware, and iLO; swapped processors; swapped all the RAM (known goods); swapped the physical controller and the controller batteries. Both controllers are in HBA mode, including forced HBA at one point.
And the drives work fine in other devices. They are detected every time by zos, but sometimes as HDDs.
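When they show up wrong, the quickest way to see how the Linux kernel classified them is the standard sysfs rotational flag (nothing controller-specific; "sdb" below is a hypothetical device name):

```shell
#!/bin/sh
# ROTA=1 means the kernel treats the disk as a spinning HDD, 0 means SSD.
lsblk -d -o NAME,ROTA,MODEL 2>/dev/null || true

# Per-device check; "sdb" is a hypothetical device name.
rota=$(cat /sys/block/sdb/queue/rotational 2>/dev/null || echo unknown)
echo "sdb rotational flag: $rota"
```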
A relevant fact: the 410i leaves "fingerprints" of other drives on a slot if the next drive doesn't overwrite the value. I'm drawing that conclusion based on:
1.) Another fix I found was putting a different SSD brand in the slot and then swapping them back; the next boot was fine. But again, only for that boot.
2.) I then did this with a 3.5" 7200 RPM SAS drive. Suddenly I got write speeds of about 100 MB/s and terrible seek times, in the 4-5 ms range; these were the boots where pulling the details fixed it. I ran 2-3 cycles to make sure it repeated.
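The throughput numbers came from a crude sequential-write check; a minimal sketch of it (scratch path is hypothetical, and conv=fdatasync makes GNU dd include the flush in its reported time):

```shell
#!/bin/sh
# Write 64 MiB and report throughput. On the bad boots this sat around
# 100 MB/s on the SSDs instead of their normal sequential rate.
TESTFILE=/tmp/seqwrite-test.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
size=$(wc -c < "$TESTFILE")
echo "wrote $size bytes"
rm -f "$TESTFILE"
```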
- Yet another single-boot fix was wiping the entire controller configuration; the drives work again, but only for that boot.
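If anyone wants to script that wipe rather than do it from the option ROM, hpssacli can delete the array configuration. This is destructive, I'm sketching the subcommand from memory, and the slot number is hypothetical; verify against `hpssacli help` on your version first:

```shell
#!/bin/sh
# DESTRUCTIVE: removes every logical drive defined on the controller.
# Subcommand recalled from memory -- confirm with "hpssacli help" first.
SLOT=0   # hypothetical slot number

if command -v hpssacli >/dev/null 2>&1; then
    hpssacli ctrl slot="$SLOT" ld all delete forced
else
    echo "hpssacli not installed; skipping"
fi
```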
I've chased this gremlin pretty far. I can reliably fix it every time, but only for that boot. There has to be an answer here; there are too many variables available. Just need a bigger brain to stroll by.
Ubuntu is doing something that keeps them performing, and I'm booting straight from an install image: no drivers installed, non-persistent install. The only time Ubuntu fails to perform is when the SSD just replaced a SAS drive.