I recently heard a talk about how requirements engineering can help climate change prediction. I realized that formal methods people are quite optimistic about formulating such a complex system and using supercomputers to analyze the models, while practitioners worried that developing such models of the system is not feasible. I would grant that if we finally did the hard work of formulating every aspect into a formal model, then formal verification could answer many questions. But my point is that a “complete formal model that captures all aspects” is no longer a model: it is equivalent to the problem space itself, and it is not possible to have such a model. This argument actually holds for any modeling approach.
For example, I have seen works in which people try to develop a formal framework for modeling trust, trust requirements, and the impact of trust assumptions on requirements or design choices. But how can formality capture and express the interactions of social human agents in a trust chain? Many other aspects of software engineering (and perhaps of other disciplines) are not easy to formulate with formal techniques, since they are tightly coupled with human agents and social issues.
Narrowing down to my own research problems: I am trying to model security requirements, security protocols, and secure systems in contexts involving human agents with individual goals, machines, social dependencies, malicious behavior, attacks, etc. How deep should the analysis of these factors go? How deeply should I dig into the individual goals of an attacker to extract my security requirements? (The personalities of people in an organization may affect the security requirements it has: a butterfly effect.) How formally should I express them? Should I develop patterns of behavior? Is that even possible (do we have such a thing as a behavior pattern)? How abstract should the models be, so that they remain understandable yet amenable to analysis?
I’ll come back one day with robust answers.