I have this theory in mind that goal modeling helps in understanding the security requirements of a software system situated in an organization. There is an underlying, unproven theory which claims that goal modeling in general is useful for understanding stakeholders' actual requirements. It claims that by asking stakeholders "why" questions, analysts may reveal the real reasons stakeholders need a feature, and so build what is actually required, not what stakeholders guess might help.
This is neat and reasonable, but can I show that goal modeling actually works this way? What kind of evidence should I look for to show (not prove) that goal modeling leads to discovering actual requirements, or better-quality requirements? How can I objectively judge that those requirements are better and more realistic? Once I find a way to show evidence for the usefulness of goal modeling in requirements engineering, I'd like to extend it to security requirements engineering as well.
You may think it is straightforward to examine how well a goal model conveys information about system requirements compared to a textual description: run an experiment with two groups, one provided with a textual system description and one with a goal model of the same description. Ask the first group to read the text, teach the second group how to read the goal model, then quiz both groups about the system to check which one understood it better. Such a study has been recognized as invalid, doomed from the beginning, because the relationship between what we measure and the understandability and usefulness of the model is not direct. In such a study we actually measure the comprehension ability of individuals, the modeler's skill at expressing the description in an abstract model, each participant's prior knowledge of the model's domain, how well the model is graphically laid out, how well the syntax of the language is designed, whether the conceptual modeling language includes all necessary and sufficient concepts, and whether those concepts are mapped to proper visual symbols. Not to mention that designing the quiz questions is the most problematic part: what should we ask so the answers reflect an individual's understanding of a model or a text?
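To make the confounding concrete, here is a toy simulation (all effect sizes invented, purely hypothetical) where a participant's quiz score is a sum of several unobserved factors: the notation's real benefit, individual ability, domain knowledge, and the modeler's skill. The group-mean difference mixes all of these, so it cannot isolate the notation's effect.

```python
import random

random.seed(0)

def quiz_score(notation_benefit, ability, domain_knowledge, modeler_skill):
    # Observed score is a sum of confounded, unobserved factors plus noise.
    return notation_benefit + ability + domain_knowledge + modeler_skill + random.gauss(0, 1)

# Text group: no notation benefit, no modeler involved.
text_group = [quiz_score(0.0, random.gauss(0, 2), random.gauss(0, 2), 0.0)
              for _ in range(30)]

# Goal-model group: a small positive notation benefit (+0.5, invented),
# but also a modeler-skill factor that may be negative for a poor model.
goal_group = [quiz_score(0.5, random.gauss(0, 2), random.gauss(0, 2), random.gauss(-1, 1))
              for _ in range(30)]

mean = lambda xs: sum(xs) / len(xs)
diff = mean(goal_group) - mean(text_group)

# The observed difference blends the +0.5 notation effect with modeler skill
# and individual variation; it can even come out negative despite a real benefit.
print(round(diff, 2))
```

The point of the sketch is only that the measured outcome is a composite: without controlling or randomizing the other factors, the between-group difference says nothing specific about the modeling notation itself.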
I am looking for ways to reduce the impact of all these confounds and biases.