As I understand it, the problem is that plotting (the initialization process) requires a lot of temporary space. A ~100 GiB plot requires about 240 GiB of temporary space to generate; that's almost 260 GB in storage-marketing terms. Bear in mind that the peak space occupied is not the same as the total data written, and total data written is what costs you write endurance: plotting writes far more data over its lifetime than it ever holds at once, and that write volume is exactly what's causing the excessive wear.
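For anyone who wants to sanity-check the GiB/GB conversion, a quick Python sketch (the 240 GiB figure is the one quoted above):

```python
GIB = 1024 ** 3  # binary gibibyte, what filesystems report
GB = 1000 ** 3   # decimal gigabyte, what drive marketing uses

temp_space_gib = 240
print(f"{temp_space_gib} GiB = {temp_space_gib * GIB / GB:.1f} GB")
# -> 240 GiB = 257.7 GB, i.e. "almost 260 GB in storage terms"
```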
The solution isn't to use enterprise-grade, expensive SSDs (which do offer much higher endurance than consumer-grade drives). The best tool for the job is SDRAM: it can take this kind of write load all year round, and it's incredibly fast by comparison (as I understand it, the speed of the temporary storage is a bottleneck, at least with SSDs).

The second-best solution is volatile flash. Volatile means it doesn't retain information when powered off, just like SDRAM, and used that way it has much, much higher endurance, because most of the wear in flash-based SSDs is caused by making writes persistent (i.e. making the information survive a power loss). An example of this approach is the Optane DIMM (strictly 3D XPoint rather than NAND flash), which can be configured as volatile memory via Intel's Memory Mode (and it has other tricks up its sleeve). It's cheaper than RAM and faster than an SSD. You'd have to run a cost analysis, but as far as the technology is concerned, a flash-based SSD is a bad fit for this purpose: you're playing into the weakness of the technology, not its strength.
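To make the cost-analysis point concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it (drive price, TBW rating, data written per plot) is an assumed placeholder, not a measurement; plug in your own figures.

```python
# Rough SSD-wear cost per plot. All numbers below are assumed
# placeholders for illustration -- substitute real figures.

drive_price_usd = 100.0      # assumed: 1 TB consumer NVMe drive
drive_endurance_tbw = 600.0  # assumed: rated terabytes written (TBW)
writes_per_plot_tb = 1.6     # assumed: total data written per k=32 plot

plots_before_worn_out = drive_endurance_tbw / writes_per_plot_tb
wear_cost_per_plot = drive_price_usd / plots_before_worn_out

print(f"Plots before the drive hits its TBW rating: {plots_before_worn_out:.0f}")
print(f"Amortized wear cost per plot: ${wear_cost_per_plot:.2f}")
# With these placeholder numbers: ~375 plots, ~$0.27 per plot.
```

Run the same arithmetic for RAM or a volatile-mode Optane DIMM and the wear term effectively disappears; the cost is just the upfront hardware price.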
As a side note, a 1 TB drive will fit nine ~100 GiB (k=32) plots. Such a plot takes up about 101.4 GiB, which is almost 109 GB, so nine of them take up about 980 GB, leaving maybe 20 GB free.
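The same arithmetic as a quick Python check (101.4 GiB is the commonly cited k=32 plot size; real drives lose a little more to filesystem overhead):

```python
GIB = 1024 ** 3
TB = 1000 ** 4  # decimal terabyte, as drives are marketed

plot_bytes = 101.4 * GIB  # commonly cited k=32 plot size
drive_bytes = 1 * TB      # nominal 1 TB drive

plots = int(drive_bytes // plot_bytes)
leftover_gb = (drive_bytes - plots * plot_bytes) / 1000**3

print(f"{plots} plots fit, with about {leftover_gb:.0f} GB left over")
# -> 9 plots fit, with about 20 GB left over
```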