Agile vs. monumental performance testing: my considerations

The WOPR7 theme, Agile Performance Testing (and some of the discussion around it), prompted several thoughts. It seems to me that my previous posts attacking the validation-test approach are actually driven by my agile, rather than monumental, context.

According to Martin Fowler, agile methods are adaptive rather than predictive, and people-oriented rather than process-oriented.

What I’m going to say is that performance testing is still seen by many practitioners as a process-oriented, predictive method:
1. Predict the load
2. Define the context in which software will work (hardware, software, data)
3. Emulate the load, find (predict) bottlenecks, and validate the pre-planned pass/fail criteria (see the sketch after this list)
4. Fix any issues and return to step 3 (more adaptive, but still process-oriented, methods have the option of changing the context and returning to step 2)
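
To make step 3 concrete, here is a minimal sketch of such a pre-planned pass/fail check. The URL, the user count, and the 2-second 95th-percentile threshold are all hypothetical placeholders, not values from any real project:

```python
# Minimal predictive load-test sketch: emulate a pre-planned load and
# check a pre-planned pass/fail criterion. All values are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/search"   # hypothetical system under test
USERS = 20                             # predicted concurrent users
REQUESTS_PER_USER = 50
P95_LIMIT_SECONDS = 2.0                # pre-planned pass/fail criterion

def user_session(_):
    # one emulated user issuing a fixed number of requests
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.time()
        urllib.request.urlopen(URL).read()
        latencies.append(time.time() - start)
    return latencies

with ThreadPoolExecutor(max_workers=USERS) as pool:
    all_latencies = [t for session in pool.map(user_session, range(USERS)) for t in session]

all_latencies.sort()
p95 = all_latencies[int(len(all_latencies) * 0.95) - 1]
print("95th percentile response time: %.3fs" % p95)
print("PASS" if p95 <= P95_LIMIT_SECONDS else "FAIL")
```
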
The way I see it, agile performance testing means that for any new feature added to the software we should analyze its impact on performance. Agile methodologies assume that the software should be ready and shippable at any time (or at least at the end of any iteration).

This should apply to non-functional requirements, shouldn’t it? It means the following:
1. The typical usage scenarios and even the user count may vary from iteration to iteration: "Often the most valuable features aren't at all obvious until customers have had a chance to play with the software."
2. Load testing should be done in parallel with (instead of after) functional testing
Moreover, the usual agile approach is to fix time and price and to allow the scope to vary in a controlled manner; this should apply to non-functional requirements as well. As far as I understand, this leads to:

3. The only aim of performance tests is to highlight possible “performance improvements”* so that each of them can be considered: compare the cost against the need (the impact on the final system’s performance). Sometimes you should ask the customer, but in most cases you can trust your project people. (A small sketch of this per-feature approach follows after the footnote below.)

* Yes, I mean that even obvious performance bugs like thread-safety issues or memory leaks should be weighed this way: sometimes the cost of fixing them is too high, and it is better to remove the feature that introduced the need for the improvement or to implement a workaround.
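
As an illustration of points 2 and 3, here is a sketch of how per-feature performance feedback could live next to the functional tests. The generate_report function and the 0.5-second budget are made up for the example; the check only flags a candidate “performance improvement” for discussion instead of failing the build:

```python
# Sketch: functional and performance feedback for the same feature in one
# test suite. generate_report() and the 0.5s budget are hypothetical.
import time
import unittest

def generate_report(rows):
    # stand-in for this iteration's new feature
    return sorted(rows)

class TestGenerateReport(unittest.TestCase):
    def test_report_is_sorted(self):
        # the usual functional check
        self.assertEqual(generate_report([3, 1, 2]), [1, 2, 3])

    def test_report_performance_feedback(self):
        # measured in the same suite, so performance feedback arrives
        # together with functional feedback instead of after it
        start = time.time()
        generate_report(list(range(100000, 0, -1)))
        elapsed = time.time() - start
        if elapsed > 0.5:
            print("Candidate performance improvement: generate_report took %.2fs" % elapsed)

if __name__ == "__main__":
    unittest.main()
```

Whether any flagged item is actually worth fixing is then the cost-versus-impact decision described in point 3.
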

So how about validation and real load?
Instead of validating system performance under a predicted load, we are evaluating and improving non-functional quality. At the end of the day, it still means we need to see what happens under actual load. However, thanks to the iterative approach and to the resulting high non-functional quality, the need to fix issues after this activity is so small that we could even accept the risk of testing in production under actual rather than predicted load.

Comments

Just want to add that, in my opinion, we should do all these steps with the real load and context in mind. You are checking the performance of a new feature? How many times will it be invoked? How many resources does it consume? What would the projections be for the production environment?
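
For a single feature, such a projection can be a back-of-the-envelope calculation. In the sketch below every number is a made-up assumption that you would replace with your own measurements and expected production usage:

```python
# Back-of-envelope projection from feature-level measurements to the
# production context. All numbers are made-up placeholders.
invocations_per_user_per_day = 40      # how often the feature is invoked
expected_users = 5000                  # expected production user population
peak_hour_share = 0.2                  # share of daily traffic in the peak hour
measured_cpu_seconds_per_call = 0.08   # measured on the test environment

calls_per_day = invocations_per_user_per_day * expected_users
peak_calls_per_hour = calls_per_day * peak_hour_share
cpu_seconds_in_peak_hour = peak_calls_per_hour * measured_cpu_seconds_per_call

print("Calls per day: %d" % calls_per_day)
print("Peak-hour calls: %d" % peak_calls_per_hour)
print("CPU cores needed at peak: %.1f" % (cpu_seconds_in_peak_hour / 3600.0))
```
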

If you don't have the exact usage and production environment, for example because you develop COTS software, you need to figure out some possible usages and environments. That, of course, is more difficult. But you need to have some idea about it if you want to talk about mapping feature performance to business value.