Collecting the 5-minute average every minute would be aggressive and probably not as helpful as you might like. Yes, the 5-minute average will change every minute (it's a rolling window - effectively a smoothed version of the 1-minute load average stat), but not by much, unless you see a huge spike in load in the last minute.
A better fit for your situation would be to collect the 1-minute load average (every minute), and then have your alert trigger only if that value (along with the other variables) stays over threshold for xx minutes.
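To make the "over threshold for xx minutes" idea concrete, here's a minimal sketch of that trigger logic. It assumes a Unix-style box where Python's os.getloadavg() is available, and the threshold and window values are made-up numbers just for illustration - in practice your monitoring tool would evaluate this condition for you rather than a standalone script.

```python
import os
import time
from collections import deque

# Hypothetical values - tune these to the box and the team's tolerance.
THRESHOLD = 4.0         # 1-minute load average considered "too busy"
SUSTAINED_MINUTES = 10  # how long it must stay high before anyone gets paged
POLL_SECONDS = 60       # collect the 1-minute value once per minute

recent = deque(maxlen=SUSTAINED_MINUTES)

while True:
    one_min_load = os.getloadavg()[0]   # returns (1-min, 5-min, 15-min)
    recent.append(one_min_load)

    # Alert only when we have a full window AND every sample is over threshold,
    # so a single one-minute spike never pages anyone.
    if len(recent) == SUSTAINED_MINUTES and all(v > THRESHOLD for v in recent):
        print(f"ALERT: 1-min load > {THRESHOLD} for {SUSTAINED_MINUTES} straight minutes")
        recent.clear()   # reset so we don't re-alert every minute afterwards

    time.sleep(POLL_SECONDS)
```

The key point is the reset-on-alert and the "every sample" check: short blips get absorbed, and only a sustained condition generates a ticket.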
The "problem" with that plan is that you are collecting stats every minute. I try to avoid doing that for all but the most critical of systems, because it's such a drain on poller resources. I've also found that collecting stats so often introduces a level of sensitivity (in terms of alerts) that support teams are unhappy to experience (meaning: it triggers too often, and by the time support gets to the system the problem has disappeared.
It comes down to what I affectionately call "the Prozac moment" - that point when the MANAGER of the system realizes it's not as rock-solid steady as they imagined. The system actually does have frequent spikes (and valleys); they just never noticed before because the metric collection wasn't that granular. Now that they have the data, it takes time to come to grips with it, and the first urge is to UN-ALERT ALL THE THINGS!!
After a while they realize that all systems behave this way, and they are willing to ratchet down the polling cycles and/or extend the trigger timing so that only the actionable issues come through.