The main idea I was trying to describe was that the actual difficulty for a block could be adjusted according to how much the size of the proposed block differs from the average size of blocks in the previous difficulty period. Unless I'm being very dense atm, your gist is just about dynamically adjusting the block size?
I'll give a couple of numeric examples to clarify.
Assume the current difficulty was calculated to be 1000, and the average size of the blocks in the period used to calculate the difficulty was 500 kb.
Example 1:
I'm now attempting to find a new block with a size of 450 kb, which is 450/500 = 90% of the average, i.e. 10% smaller. That 10% deviation gets added on top of the difficulty: 1000 * 110% = 1100.
Example 2:
If I instead were trying to make a block of 10000 kb, that's 10000/500 = 20 times the average, i.e. a 1900% deviation, so the difficulty would be adjusted to 1000 * 2000% = 20000.
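Just to pin down the rule I'm using in those numbers, here's a rough Python sketch (the names are made up purely for illustration): the difficulty is scaled by one plus the block's relative deviation from the period's average size.

def adjusted_difficulty(base_difficulty, block_size_kb, avg_size_kb):
    # Scale the difficulty by 1 + the block's relative deviation
    # from the average block size of the previous difficulty period.
    deviation = abs(block_size_kb - avg_size_kb) / avg_size_kb
    return base_difficulty * (1 + deviation)

print(adjusted_difficulty(1000, 450, 500))    # Example 1 -> 1100.0
print(adjusted_difficulty(1000, 10000, 500))  # Example 2 -> 20000.0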
Why I find this interesting: in a possible future where the block reward is insignificant compared to transaction fees, miners would make bigger blocks as fees rise. A miner could include more transactions in a block as long as the fees are high enough to offset the reduced chance of actually finding the block. However, I now realize that there wouldn't be any downward pressure below the average size if fees shrink (at least with the particular numbers in my examples), since making a smaller-than-average block also increases the difficulty. Maybe this method is only useful on the upside, meaning blocks smaller than the average size don't get an adjusted difficulty at all (roughly the sketch below). I need to go for a walk and think this through :)
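For clarity, that upside-only variant would look roughly like this (again just a sketch, not thought through yet):

def adjusted_difficulty_upside_only(base_difficulty, block_size_kb, avg_size_kb):
    # Blocks at or below the average size keep the unadjusted difficulty;
    # only bigger-than-average blocks get penalised.
    if block_size_kb <= avg_size_kb:
        return base_difficulty
    deviation = (block_size_kb - avg_size_kb) / avg_size_kb
    return base_difficulty * (1 + deviation)

print(adjusted_difficulty_upside_only(1000, 450, 500))    # -> 1000 (no penalty)
print(adjusted_difficulty_upside_only(1000, 10000, 500))  # -> 20000.0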