I updated to 13.1.2.1448 from 13.1.1.1182 today and am now having issues with the PRTG API. I have custom sensors/scripts that access historic data (xml/avg=540) that had no issues prior to updating. Now, however, some of the calls result in blanks in my script. So I pasted the URLs into Firefox and it was hit and miss when I hit refresh: sometimes the XML came right up; other times it said "Firefox can't find the file at (url)". I then tried the URLs in Internet Explorer and had the same issue, though the error message there was from PRTG, stating "Load limit exceeded / Please try again later!".
I don't know why there's an issue all of a sudden. They worked just fine prior to the update, and are kind of important to our company. Will running the 13.1.1.1182 setup exe downgrade PRTG, or will that cause problems?
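Until the limit question is settled, one way to keep a script working is to detect the "Load limit exceeded" answer and retry after a pause. This is only a sketch: the URL below (host, sensor id, credentials) is a placeholder, and the retry helper takes any fetch callable so you can plug in your own request code.

```python
# Sketch: retry a PRTG historic-data request when the server answers
# "Load limit exceeded". The URL values below are placeholders, not
# real host/credentials.
import time
import urllib.request

URL = ("https://prtg.example.com/api/historicdata.xml"
       "?id=1234&avg=540&sdate=2013-03-01-00-00-00"
       "&edate=2013-03-02-00-00-00&username=myuser&passhash=0000")

def fetch_with_retry(fetch, retries=5, delay=15):
    """Call fetch() until it returns XML instead of the load-limit error."""
    for attempt in range(retries):
        body = fetch()
        if "Load limit exceeded" not in body:
            return body
        time.sleep(delay)  # back off before the next attempt
    raise RuntimeError("still throttled after %d attempts" % retries)

# Real use would look like:
# xml = fetch_with_retry(lambda: urllib.request.urlopen(URL).read().decode())
```

The backoff delay of 15 seconds is arbitrary; with a 5-requests-per-minute limit, anything from 12 seconds upward keeps a single client under the threshold.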
Article Comments
Would downgrading be a possibility? At least until I get my sensors/scripts changed?
Mar, 2013 - Permalink
A rollback is always tricky, I'm afraid, but possible. With this minor version it's basically just running the old installer on your PRTG installation. But please keep in mind that it will be necessary to "downgrade" all Remote Probes manually (download and run the "Remote Probe Installer" on each Remote Probe). Also, if you are running a Cluster, that would complicate things a lot. In that case it would be necessary to shut down the Failover Node, run the downgrade installer on the Master Node, and then run it manually on the Failover Node as well.
If you'd like to do a rollback and need an installer for the old 13.1.1.1182 version, please get in contact with us via email to support@paessler.com and also forward us the Core Log.
Mar, 2013 - Permalink
Would it be possible to make the maximum number of API calls per minute a setting under "Setup" so that users can configure it themselves? You could default it to 5 and show a message telling users that changing the number can impact PRTG performance. 5 is just way too low for me/my needs. If I were to space the calls out to fit the limit, it'd take over 6 minutes for each of my sensors to complete. I can't be the only one affected by this change.
Mar, 2013 - Permalink
Sorry, but we really don't want to change this, or implement a setting for it. First, a setting for this would be quite 'complicated', as many users wouldn't know what it's actually for. That may sound plain, but it's a valid reason for keeping the number of switches and options as low as possible.
Secondly, it would open up the problem again, potentially bringing PRTG's web server into an overload situation, which is something we have to avoid by all means.
Mar, 2013 - Permalink
Hm.
Ok. "Many users wouldn't know what it's actually for." Try configuring 10 daily PDF reports, all scheduled for 0:00, and you will see PDFs containing the text "Load limit exceeded".
Reports are an often-used function.
The schedule does not support minutes, only hours. 10 reports means 10 hours.
Mar, 2013 - Permalink
Yes, this can happen. Please distribute the reports a bit, so that they are not all executed at the same time.
Mar, 2013 - Permalink
Of course, that was the first thing I thought of. But the schedule granularity is 1 hour, with no minutes. Could you consider introducing minutes in the scheduler?
Mar, 2013 - Permalink
Hi,
I am in the same situation as MattG - 5 API calls per minute is simply not enough for our needs. We are an MSP with thousands of sensors that need to be reported on. Making 5 calls per minute across thousands of sensors is impractical. I know you stated that you will not be adding a setting to change this limit; however, there should be a workaround for users who require more than 5 API calls a minute, otherwise this will be a massive issue for users with thousands of sensors.
Mar, 2013 - Permalink
I'm having the same problem too: when exporting to CSV with the CSV export API, it has a delay and is not working. I hope you can help us with this. My server ran these reports perfectly before, so I don't think I was overloading it, or at least not for more than a minute.
Apr, 2013 - Permalink
Hi,
I agree with Nav. 5 API calls per minute is definitely not enough. Our monitoring and reporting application uses the API intensively and will be seriously impacted by limiting the number of API calls to 5.
Please remove this limit in the next release.
Thanks, Thomas
Apr, 2013 - Permalink
We do hear you! We will keep the limit, but add a pipeline to it, so that requests "over the limit" are queued rather than blocked/refused. This will be available with the next stable release, hopefully within the next two weeks. Please bear with us.
Apr, 2013 - Permalink
The "Load limit exceeded" message was shown by PRTG versions 13.x.1-13.x.2 whenever more than 5 reporting or historic-data requests were sent to the web server in less than 1 minute.
Version 13.x.3 and later no longer display this message. Requests are now pipelined (everything over 5 requests per minute is delayed).
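For scripts that want to stay under the threshold rather than rely on server-side queuing, the 5-per-60-seconds rule can be mirrored client-side. Here is a minimal sliding-window limiter sketch; the class name and interface are illustrative, not part of PRTG.

```python
# Sketch of a client-side limiter that spaces requests so no more than
# max_calls fall into any `period`-second window, mirroring the
# server-side "5 requests per minute" limit.
import time
from collections import deque

class RateLimiter:
    def __init__(self, max_calls=5, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.stamps = deque()  # monotonic timestamps of recent calls

    def wait(self, now=None, sleep=time.sleep):
        """Block until another call is allowed, then record it."""
        now = time.monotonic() if now is None else now
        # drop timestamps that have left the window
        while self.stamps and now - self.stamps[0] >= self.period:
            self.stamps.popleft()
        if len(self.stamps) >= self.max_calls:
            pause = self.period - (now - self.stamps[0])
            sleep(pause)
            now += pause
            self.stamps.popleft()
        self.stamps.append(now)

# Usage: limiter = RateLimiter(); limiter.wait() before each API request.
```

The `now`/`sleep` parameters only exist so the pacing logic can be exercised without real waiting; in normal use you would just call `wait()` with no arguments.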
May, 2013 - Permalink
Is there any chance we will be able to disable this feature on our installs?
This is an absolute disaster for us.
Each night we need to make a couple thousand calls to the API to get metrics across our system. This only takes a minute or two to execute.
These metrics are extracted for two reasons - To make automated adaptive changes to how we divide up our service allocations - To determine accumulation of pay for performance payments to our employees based on the availability and performance of our different systems
We updated to version 13, and suddenly we were no longer able to manage these portions of our business. We had to use Veeam to restore our dedicated PRTG server, as we could no longer manage this portion of our business under version 13.
I did see the argument:
"it would open up the problem again, potentially bringing PRTGs Webserver into an overload situation. Which is something we have to avoid by all means."
but quite honestly, you don't need to worry about this for me. I am happy throttling myself to ensure I don't take down my own, paid-for and on-premises, PRTG instance.
If I am throttled to 5 calls per minute, it will take roughly 7 hours to get through the 2000 API calls we make each day. Not to mention we will have to rewrite all of our connectors to presumably wait hours for "pipelined" responses to requests.
This is a massive regression in the capabilities of your product, and it's absolutely devastating to us where we allowed ourselves to rely on Paessler and PRTG.
If there is any way we can get around this new limitation, please let us know.
Thank you, Chris
Jul, 2013 - Permalink
Chris, I am very sorry, but currently there are no plans for an option to disable the pipelining, nor are there any ways to bypass it.
Jul, 2013 - Permalink
Torsten, did Paessler change their mind in the last 9 months? Did you decide to remove pipelining again, or do you still keep it in the latest releases?
Best regards, Thomas
Apr, 2014 - Permalink
The pipelining is still implemented, as we still need to protect the web server. However, it is now transparent to the user and will no longer cause error responses in reports or historic-data requests.
Apr, 2014 - Permalink
Oh man, I came into work this morning and saw responses in this thread resurrected in my email. For a brief moment my heart skipped a beat thinking we could start using PRTG again as our main monitoring solution, but alas, it was just false hope.
I find it funny that this feature was added to "protect" PRTG for us users, but because of it, instead of our PRTG being protected, we had to abandon PRTG almost completely and move to Foglight at nearly 100x the cost!!! Even our annual maintenance for Foglight is about twenty times the one time fee we used to pay for PRTG. Yikes!
If you guys ever change your mind and re-enable us to use your product the way we need to (we accept the risk that if we abuse the API, we will slow down the core and jeopardize our scanning schedules), please be sure to post here and we'd be back in a heartbeat.
Apr, 2014 - Permalink
Chris, I talked to the developer and he says that if we removed the queuing, your API requests would not be faster; they would simply run in parallel instead of serially, which might even slow them down.
One other solution is:
- You could make a copy of the API endpoint that you use (e.g. historicdata_html.htm)
- remove the <#loadlimit> placeholder
- target your API calls to the new filename
This will remove the queuing, but we do not recommend/support this approach.
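The same caveat applies to the sketch below: this is the unsupported approach just described, shown mechanically. Since the real PRTG webroot location varies per install, the example uses a temporary directory as a stand-in, creating a dummy template just to demonstrate the copy-and-strip step.

```python
# Sketch of the unsupported workaround: copy the endpoint template and
# strip the <#loadlimit> placeholder. A temp directory stands in for
# the real PRTG webroot, and the template content here is a dummy.
import pathlib
import tempfile

webroot = pathlib.Path(tempfile.mkdtemp())
src = webroot / "historicdata_html.htm"
src.write_text("<html><#loadlimit><body>...</body></html>")

dst = webroot / "historicdata_nolimit.htm"  # new, unthrottled endpoint
dst.write_text(src.read_text().replace("<#loadlimit>", ""))

# API calls would then target .../historicdata_nolimit.htm instead of
# the original filename. Back up the original file first.
```

Keep in mind this copy would be overwritten or orphaned by the next PRTG update, so it needs to be redone after each upgrade.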
Apr, 2014 - Permalink
Dear Matt,
I'm very much afraid this is due to a change in PRTG's web server. Such API requests for historic data can put a huge load on PRTG's Core Server, and so the number of such API requests has been limited to 5 per minute. You would now need to delay your requests so that this limit is not hit.
best regards.
Mar, 2013 - Permalink