It seems to me that some (maybe most) of the pressure is actually external, coming from companies that might release something that dramatically increases "adoption" and transaction rates (and that the historical data on adoption rates and slumps is somewhat disconnected from their interest in a quick roll-out)?
It seems like the question actually becomes: what is our maximum acceptable cost (hardware capex plus bandwidth and power opex) of running a full node, both without hardware acceleration and with it (something which presumably "doesn't exist" yet)? Are we assuming that hardware acceleration for validation will become broadly available, and that the primary limiter will then be anonymous bandwidth?
Excuse my ignorance, but I imagine somebody must have already looked at validation times vs. block size on a few existing hardware platforms (at least 3 or 4 — say a MinnowBoard, an old laptop, and a modern desktop)? Is there an easy way to set up bitcoind or some other script to test this? (Happy to help.)
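One crude way to get such numbers without any special tooling: run `bitcoind -reindex` on each test machine (so timestamps reflect local validation work rather than network download) and mine its `debug.log` for `UpdateTip:` lines. A minimal sketch of the log-parsing side, assuming log lines of the form shown in the sample below — the exact format varies between Bitcoin Core versions, so the regex may need adjusting for your build:

```python
# Sketch: estimate block-validation throughput from a bitcoind debug.log
# captured during a -reindex. ASSUMPTION: "UpdateTip:" lines carry a
# space-separated timestamp and a height= field, as in the sample; the
# real format differs across Bitcoin Core versions.
import re
from datetime import datetime

UPDATETIP_RE = re.compile(
    r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) UpdateTip: .*height=(\d+)'
)

def validation_rate(log_lines):
    """Return (blocks, seconds, blocks_per_second) over the UpdateTip lines."""
    points = []
    for line in log_lines:
        m = UPDATETIP_RE.match(line)
        if m:
            ts = datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S')
            points.append((ts, int(m.group(2))))
    if len(points) < 2:
        return 0, 0.0, 0.0
    (t0, h0), (t1, h1) = points[0], points[-1]
    seconds = (t1 - t0).total_seconds()
    blocks = h1 - h0
    return blocks, seconds, (blocks / seconds if seconds else float('inf'))

# Hypothetical sample lines, for illustration only:
sample = [
    "2015-08-15 12:00:00 UpdateTip: new best=00000000000000000a height=370000",
    "2015-08-15 12:00:10 UpdateTip: new best=00000000000000000b height=370100",
]
blocks, secs, rate = validation_rate(sample)
print(blocks, secs, rate)  # → 100 10.0 10.0
```

Run the same reindex over the same block range on each platform and compare the resulting blocks/second; restricting the comparison to recent (fuller) blocks would give a rough validation-time vs. block-size curve per machine.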
Re Moore's law: yeah, some say stuff like 5nm may never happen. We're already using EUV with plasma light sources, reflective optics, and double-patterning... and in storage land switching to helium-filled drives. Things may slow a LOT over the next couple of decades, and I'd guess that a quadratic increase (in both storage and compute) probably isn't a safe assumption.