Model Workloads for Performance Testing: FIBLOTS

This is the third installment of a currently unknown number of posts about heuristics and mnemonics I find valuable when teaching and conducting performance testing.
For years, I have championed the use of production logs to create workload models for performance testing. During the same period, I've been researching and experimenting with methods to quickly create "good enough" workload models without empirical data that still increase the value of the performance tests. I recently realized that these two ideas are actually complementary, not mutually exclusive, and that with or without empirical usage data from production logs, I do the same thing: I FIB LOTS.
While the play on words makes this mnemonic particularly memorable, I'm not saying that I just make it up. Rather, the acronym represents the following guideword heuristics that have served me well over the years in deciding what to include in my workload models.
  • Frequent: Common application usage.
  • Intensive: Resource-hogging activities.
  • Business Critical: Even if these activities are both rare and not risky.
  • Legal: Stuff that will get you sued or not paid.
  • Obvious: Stuff that is likely to earn you bad press.
  • Technically Risky: New technologies, old technologies, places where it's failed before, previously under-tested areas.
  • Stakeholder Mandated: Don't argue with the boss (too much).
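The guidewords above work as a screening checklist: an activity earns a place in the workload model if it matches at least one of them. As a minimal sketch of that idea, here is a hypothetical Python example; the activity names, the tags assigned to them, and the `select_workload` helper are all invented for illustration, not part of any real tool.

```python
# Screening candidate activities against the FIBLOTS guidewords.
# The tags come from production logs when you have them, or from
# stakeholder interviews when you don't -- the selection logic is the same.

FIBLOTS = {
    "frequent", "intensive", "business_critical",
    "legal", "obvious", "technically_risky", "stakeholder_mandated",
}

# Hypothetical candidate activities, each tagged with the guidewords it matches.
candidates = {
    "login":            {"frequent", "obvious"},
    "search":           {"frequent", "intensive"},
    "month_end_report": {"business_critical", "intensive"},
    "audit_export":     {"legal"},
    "profile_update":   set(),  # matches no guideword: leave it out
}

def select_workload(candidates):
    """Keep any activity that matches at least one FIBLOTS guideword."""
    return {name: tags & FIBLOTS for name, tags in candidates.items()
            if tags & FIBLOTS}

if __name__ == "__main__":
    for name, tags in sorted(select_workload(candidates).items()):
        print(name, sorted(tags))
```

In this sketch an activity like `profile_update` drops out of the model entirely, while the rest would still need relative frequencies assigned before becoming a runnable workload.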
Scott Barber
President & Chief Technologist, PerfTestPlus, Inc.
Vice President & Executive Director, Association for Software Testing
"If you can see it in your mind...
     you will find it in your life."


It's true that production logs can be used to create workload models. But in situations where the application is being freshly built and hasn't been moved to production yet, what would be the strategy for building a workload model?

Your thoughts will be helpful.

I simply couldn't figure out how the answer wasn't obvious... I get it now.

I use FIBLOTS in both cases. The only difference is that if there are log files, I can "ask" some of those questions of the logs. If it's a brand new application, I "ask" a variety of stakeholders (and "season" their answers with a healthy serving of my own experience).


Scott Barber
CTO, PerfTestPlus
Executive Director, AST