I think there are a few different ways to go here.
One is to try to simplify the setup of all the components so everything gets installed together. That might be feasible in some quite restricted setups, but the installation instructions for Graphite look kind of terrifying.
Another is to export stats over regular TCP and make them public, so literally anyone can listen to the stats feed for any node. Then people who dig stats and graphs could independently build aggregators that give global network visibility, effectively crawling the P2P network for data. It'd have the advantages of zero setup for node operators and low resource requirements.
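To make that concrete, here's a rough Python sketch of what a node-side public stats feed could look like. Everything in it is made up for illustration: the port number (8334), the stat names, and the one-JSON-object-per-line format are arbitrary choices, not an existing protocol or anything in the codebase today.

    import json
    import socketserver
    import time

    STATS_PORT = 8334  # hypothetical port, chosen for illustration only

    def collect_stats():
        # In a real node these would be read from the node's internals;
        # these are placeholder values.
        return {
            "version": "0.8.1",
            "height": 230000,
            "connections": 8,
            "timestamp": int(time.time()),
        }

    class StatsHandler(socketserver.StreamRequestHandler):
        def handle(self):
            # Stream one JSON object per line until the client disconnects.
            try:
                while True:
                    line = json.dumps(collect_stats()) + "\n"
                    self.wfile.write(line.encode())
                    time.sleep(10)
            except (BrokenPipeError, ConnectionResetError):
                pass  # client went away

    if __name__ == "__main__":
        with socketserver.ThreadingTCPServer(("", STATS_PORT), StatsHandler) as srv:
            srv.serve_forever()

A nice property of something this dumb is that anyone can inspect a node's feed with nothing more than netcat, no special client needed.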
For what it's worth, although the environment is a bit different, the latter approach is what's used inside Google. Monitoring servers locate servers of interest via a discovery service, connect to them, and start streaming stats data into a database service that can be queried later to produce graphs.
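A toy version of the collector side of that pattern, under the same assumptions as the sketch above, might look like the following. The static host list stands in for a real discovery service and the in-memory dict stands in for the database; both are placeholders, not descriptions of any real system.

    import json
    import socket
    import threading
    from collections import defaultdict

    STATS_PORT = 8334                 # must match the node-side sketch
    DISCOVERED_NODES = ["127.0.0.1"]  # stand-in for a real discovery service
    timeseries = defaultdict(list)    # (host, stat) -> [(timestamp, value), ...]

    def stream_from(host):
        # Connect to one node's feed and append each sample to the store.
        with socket.create_connection((host, STATS_PORT)) as sock:
            for line in sock.makefile():
                sample = json.loads(line)
                ts = sample.pop("timestamp")
                for stat, value in sample.items():
                    timeseries[(host, stat)].append((ts, value))

    if __name__ == "__main__":
        for host in DISCOVERED_NODES:
            threading.Thread(target=stream_from, args=(host,), daemon=True).start()
        threading.Event().wait()  # collect forever; query `timeseries` elsewhere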
The stats are also run through various rules to generate alerts about problematic conditions. For example, if a subset of the network splits off, that might go unnoticed if node operators aren't paying attention and Matt's fork alert/emailing code isn't set up. But if there were a site crawling nodes and aggregating chain heights by version, that could trigger an alert to people who are paying attention.
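That kind of rule could be very simple: group the reported heights by version and alert when the groups diverge by more than some threshold. A sketch, where the input format and the threshold value are again just illustrative assumptions:

    from collections import defaultdict
    from statistics import median

    FORK_THRESHOLD = 6  # blocks; arbitrary illustrative value

    def check_for_fork(samples):
        """samples: iterable of (version, height) pairs from crawled nodes."""
        heights_by_version = defaultdict(list)
        for version, height in samples:
            heights_by_version[version].append(height)

        # Median per version group, so one lagging node doesn't cause noise.
        medians = {v: median(hs) for v, hs in heights_by_version.items()}
        if not medians:
            return None
        lo, hi = min(medians.values()), max(medians.values())
        if hi - lo > FORK_THRESHOLD:
            return "possible fork: version medians span %s..%s: %s" % (lo, hi, medians)
        return None

    if __name__ == "__main__":
        print(check_for_fork([("0.8.1", 230010), ("0.8.1", 230011),
                              ("0.7.2", 229990)]))

Running it on the sample data prints an alert, since the 0.7.2 nodes' median height trails the 0.8.1 nodes' by well over the threshold.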
I know from practical experience that monitoring and analysis tend to appeal more to certain types of people than others. So I quite like the "let anyone monitor" approach. However, it may not be appropriate in a P2P network; I haven't thought about it much.
Obviously I'm assuming none of the stats expose privacy-sensitive data.