We use auto scaling in Amazon (AWS) to scale out new machines (or scale back in) depending on CPU load. The machines that scale out receive a known IP address from a pool of IPs, so I know all the potential IPs. There is always one server running, but if CPU load is heavy it will scale out to 2, 3, or up to 6 machines.

If I add each machine as a device in PRTG, I'll see critical warnings because the machines aren't always up; typically only 1 or 2 will be, depending on current CPU load. I don't want to receive warnings/e-mails/texts once a scaled machine scales back in (turns off). I'm trying to figure out how to cleanly monitor machines that may or may not be present at a given moment: 2 machines may be up, but all 6 could be up for a while when the load is heavy.

I'm not sure whether some sort of scripting or a flag file could be used, or whether PRTG could be told to "pause" monitoring when a machine scales back in (turns off).
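For example, I was picturing something roughly along these lines: a minimal sketch that pauses or resumes a single device through PRTG's HTTP API pause.htm call. The server URL, device ID, and credentials below are placeholders, and I haven't tested this:

```python
import requests

# Placeholders: your PRTG core server and API credentials.
PRTG_URL = "https://prtg.example.com"
PRTG_AUTH = {"username": "monitoring-bot", "passhash": "0000000000"}

def set_device_paused(device_id: int, paused: bool, message: str = "instance scaled in") -> None:
    """Pause (action=0) or resume (action=1) a PRTG device via /api/pause.htm."""
    params = {"id": device_id, "action": 0 if paused else 1, **PRTG_AUTH}
    if paused:
        params["pausemsg"] = message
    resp = requests.get(f"{PRTG_URL}/api/pause.htm", params=params, timeout=10)
    resp.raise_for_status()

# Example: pause device 2310 when its instance scales in, resume it on scale-out.
# set_device_paused(2310, paused=True)
# set_device_paused(2310, paused=False)
```

Something (a shutdown script on the instance itself, or something on the AWS side) would then have to call that for the right device at scale-out/scale-in time.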

I'm sure others must have encountered this scenario while trying to monitor scaled instances.


Article Comments

Hello,

It turns out that, at the moment, PRTG does not have the ability to pause monitoring automatically for scaled instances.

Hope that clarifies your question/concern.


Apr 2021

I'm aware PRTG doesn't have a native way to pause monitoring automatically for scaled instances. I would think other customers using the product have run into the same scenario (scaled machines) I'm trying to resolve. It takes a bit of thinking outside the box to script or construct a solution, and I'm looking for people who have already done this to reply to the thread. The rough direction I'm exploring is sketched below.
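For what it's worth, the idea is a small scheduled AWS Lambda that compares the Auto Scaling group's in-service instances against the known IP pool and pauses or resumes the matching PRTG devices through the same pause.htm call as above. The group name, IP-to-device-ID map, and credentials are placeholders, so treat this as a sketch rather than something tested:

```python
import os

import boto3
import requests

# Placeholders: the Auto Scaling group name and the known IP pool mapped to
# PRTG device object IDs (the pool is fixed, so this map can be static).
ASG_NAME = os.environ.get("ASG_NAME", "web-asg")
IP_TO_PRTG_DEVICE = {
    "10.0.1.11": 2310,
    "10.0.1.12": 2311,
    # ... the rest of the pool, up to 6 entries
}
PRTG_URL = os.environ.get("PRTG_URL", "https://prtg.example.com")
PRTG_AUTH = {"username": os.environ["PRTG_USER"], "passhash": os.environ["PRTG_PASSHASH"]}

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

def set_device_paused(device_id, paused):
    """Pause (action=0) or resume (action=1) a PRTG device via /api/pause.htm."""
    params = {"id": device_id, "action": 0 if paused else 1, **PRTG_AUTH}
    if paused:
        params["pausemsg"] = "instance scaled in"
    requests.get(f"{PRTG_URL}/api/pause.htm", params=params, timeout=10).raise_for_status()

def handler(event, context):
    """Run on a schedule (e.g. every few minutes via an EventBridge rule):
    pause devices whose IPs have no in-service instance, resume the others."""
    groups = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[ASG_NAME])
    instance_ids = [
        i["InstanceId"]
        for g in groups["AutoScalingGroups"]
        for i in g["Instances"]
        if i["LifecycleState"] == "InService"
    ]

    active_ips = set()
    if instance_ids:
        for reservation in ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]:
            for inst in reservation["Instances"]:
                if "PrivateIpAddress" in inst:
                    active_ips.add(inst["PrivateIpAddress"])

    for ip, device_id in IP_TO_PRTG_DEVICE.items():
        set_device_paused(device_id, paused=ip not in active_ips)
```

Pausing an already-paused device (or resuming a running one) each cycle should be a no-op, so I'm not too worried about the reconciliation being blunt; the main downside is the few minutes of lag between a scale event and the pause/resume.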


Apr 2021

Hi, any update on this? I'm in the same situation. I want to monitor a pool of auto-scaled VMs and track performance for each one. Being agentless in that situation is a little tricky. Any thoughts?


Mar 2022

Hi Marcelo,

Unfortunately this is still not possible with PRTG.

Best regards,

Miguel Aikens


Mar 2022

Hi, any update on this, please? Same situation here...


Jul 2023