
Why does a recurring job trigger when the last execution is still running?



  • We have jobs scheduled on a recurring basis. If a job runs long, the next scheduled execution still triggers and interferes with the one already running. In our setup, it is not desirable for a job to trigger while its last execution is still running. Is this the expected behavior, and if so, is there a way to prevent it?

    Product: BuildMaster
    Version: 5.6.8



  • The triggers are "fire and forget," so they don't know whether an execution is already in progress.

    Depending on what the underlying issue is, you could use locks and Resource Pools; for locks, the syntax is with lock = !tokenName, and note that when tokenName is prefixed with !, the lock is held system-wide (see the first sketch below).

    As an extreme measure, you could always write a quick custom extension or PowerShell script that queries the BuildMaster DB for current executions (besides the one doing the querying, of course) and waits until the others complete, or something like that (see the second sketch below).
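
    For example, here's a minimal sketch of the lock approach in OtterScript; the token name deploy-lock is just a placeholder, and the leading ! makes the lock system-wide:

        # hypothetical token name; the leading ! makes the lock system-wide
        with lock = !deploy-lock
        {
            # only one execution at a time can enter this block; a second run
            # waits here until the first one releases the lock
            Log-Information "Running the long-running steps...";
        }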
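
    And a rough sketch of the polling approach, using the PSExec operation to run inline PowerShell. The server, database, table, and status names below are assumptions rather than the actual BuildMaster schema, so inspect your own instance first; Invoke-Sqlcmd also requires the SqlServer PowerShell module:

        PSExec >>
            # hypothetical query; check your BuildMaster database for the real
            # executions table and status values before relying on this
            $q = "SELECT COUNT(*) AS Cnt FROM Executions WHERE ExecutionStatus = 'Executing'"
            # wait while any execution other than this one is still running
            while ((Invoke-Sqlcmd -ServerInstance "localhost" -Database "BuildMaster" -Query $q).Cnt -gt 1) {
                Start-Sleep -Seconds 30
            }
        >>;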



  • Thanks, Tod, but shouldn't the release itself know that it has a current execution that hasn't finished? For example, what should happen if a release is currently executing and I try to create a new package for that release via the web interface? I would expect the new package to be prevented from executing, since there is already an executing instance. (Anecdotally, I believe that was the behavior prior to the 5.x release.)



  • It was always possible to create a build/package and deploy to a step/stage regardless of whether an execution was in progress. In fact, prior to v5, working directories were application-specific instead of execution-specific... which, as you can imagine, caused some odd behavior when deploying to different stages at the same time.

    There are many reasons not to prevent multiple executions, but the main one is that deploying to multiple targets in a pipeline stage is actually implemented using simultaneous executions.



  • We are talking about different situations. I understand that multiple targets could execute simultaneously (and that can be desirable), but those are part of a single "run" of a job. I'm talking about a scenario where a second "run" starts before the first "run" has finished.

    Say I click "Create Package" for a release. The release enters the Build stage (which is the first stage in this scenario). The Build stage takes 15 minutes. While it is building, I accidentally click "Create Package" again for the same release. I don't want this second click to start the build process again.

    This gets even worse if the release is deploying to an environment. A second instance of the same release trying to run at the same time as a prior instance could have all sorts of negative consequences.



  • The "Create Package" option was never blocked because of a deployment already happening to the Build environment/step/stage in any version that I can recall back to v2. If anything, there was a time where it cancelled the one that was already happening, but it never prevented anyone from clicking the button in the UI.

    I understand that it could have negative consequences; I'm just saying that it doesn't always, so we shouldn't disallow it outright.



  • Would you consider adding an option to prevent the second execution, since it could have negative consequences?



  • Definitely; this feature is slated for a redesign to bring it more in line with a "scheduled job" concept and to unify the implementation across our products (ProGet, Otter, and Hedgehog as well).

    A "don't run this job if an execution associated with it is currently running" option is how it could be implemented/described, I suppose.

