As far as I can tell, load balancing for large installations can be achieved by offloading sensors to remote probes.
Now let's say that I have a remote installation that I'd like to monitor and I need 500 sensors at that installation.
I can monitor those sensors directly from the local probe on the core server, which consumes the CPU, RAM, and other resources for 500 sensors on that server. If I instead set up a remote probe at the remote installation and monitor the 500 sensors from that device, which sends its data back to the core, is the core then using only the resources required to monitor one sensor, or 500?
If I then duplicate that setup across 500 locations, is the core server using resources to monitor 500 sensors or 250,000 (500 locations monitoring 500 sensors each)?
I'm looking into monitoring a very large number of devices at many different locations.
Article Comments
So does the use of remote sensors actually offload some of the work from the core server?
Aug, 2014 - Permalink
No, they don't. Think of a remote probe as a tunnel into a different network: the core server simply sends its requests through that tunnel.
Aug, 2014 - Permalink
Calculations are always done by the core server. When you have 500 sensors on a remote probe, the core server has to do calculations for 500 sensors.
I recommend using multiple core servers, regardless of the sensor types you're using. Of course you could try it with one core server, but it really depends on the sensor types; I don't expect you'll be using only ping sensors?
Please be aware that if you have a 500-sensor license, you have 500 sensors across your whole PRTG infrastructure. For example, if you have 300 sensors on a remote probe, you have 200 left to create on another probe or on the core server.
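As a rough illustration (this is not PRTG's actual accounting code, just the arithmetic described above), sensor counts from every probe draw from one shared license pool, while the core server's calculation load scales with the total sensor count, no matter which probe collects the data:

```python
# Hypothetical sketch of the sensor accounting described above:
# all probes draw from one shared license pool, and the core server
# does the calculations for every sensor on every probe.

LICENSE_LIMIT = 500  # a 500-sensor license covers the whole installation


def remaining_license(sensors_per_probe):
    """Sensors left to create, given sensor counts per probe."""
    return LICENSE_LIMIT - sum(sensors_per_probe.values())


def core_server_load(sensors_per_probe):
    """The core calculates for the TOTAL sensor count across probes."""
    return sum(sensors_per_probe.values())


probes = {"remote probe A": 300, "local probe (core)": 0}
print(remaining_license(probes))   # 500 - 300 = 200 sensors left
print(core_server_load(probes))    # core still calculates for 300

# In the 500-locations scenario from the question, the core's
# calculation load is still the full total:
many = {f"location {i}": 500 for i in range(500)}
print(core_server_load(many))      # 250000
```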
Aug, 2014 - Permalink