Friday, March 28, 2008

How deep, formal, or abstract?

Seriously, I have no expertise to judge whether formal methods will work in software engineering or not. But I don’t like them, because most of the time I face problems that formal methods claim to model and verify while ignoring the complexity of the system and its surrounding factors. Here is the core of my objection: some situations are so complex that they are hard even to understand, let alone to sketch an abstract model of. Formally expressing such problems is then very close to impossible.

I recently heard a talk about how requirements engineering can help climate change prediction. I realized how optimistic formal methods people are about formulating such a complex system and analyzing the models on supercomputers, while the practitioners worried that developing such models of the system is simply not feasible. I will grant that if we finally did the hard work of formulating every aspect into a formal model, then formal verification could answer many questions. But my point is that a “complete formal model that captures all aspects” is no longer a model: it is equal to the problem space itself, and it is not possible to have such a model. This argument actually holds for any modeling approach.

For example, I have seen work in which people try to develop a formal framework for modeling trust, trust requirements, and the impact of trust assumptions on requirements or design choices. But how can formality capture and express the interactions of social human agents in a trust chain? Many other aspects of software engineering (and maybe of other disciplines) are not easy to formulate with formal techniques, since they are tightly coupled with human agents and social issues.
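To make the complaint concrete, here is a minimal, hypothetical sketch (the agents and the rule are mine, not taken from any of those frameworks): trust reduced to a yes/no relation between agents, with trust along a delegation chain derived by transitive closure. Everything that makes human trust interesting, such as context, degree, history, and the possibility of betrayal, is flattened away before any analysis even starts.

    # Hypothetical sketch: trust as a binary relation between agents,
    # chained trust derived by transitive closure.

    # Direct trust statements (invented for illustration).
    direct_trust = {
        ("alice", "bob"),    # Alice trusts Bob
        ("bob", "carol"),    # Bob trusts Carol
    }

    def trust_closure(edges):
        """If A trusts B and B trusts C, infer that A trusts C."""
        closure = set(edges)
        changed = True
        while changed:
            changed = False
            for (a, b) in list(closure):
                for (c, d) in list(closure):
                    if b == c and (a, d) not in closure:
                        closure.add((a, d))
                        changed = True
        return closure

    print(sorted(trust_closure(direct_trust)))
    # [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]

The model happily concludes that Alice trusts Carol. Whether she actually would is exactly the social question such a formalism cannot see.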

Narrowing down to my own research problems: I am trying to model security requirements, security protocols, and secure systems in a context of human agents with individual goals, machines, social dependencies, malicious behavior, attacks, and so on. How deep should the analysis of these factors go? How deep should I dig into the individual goals of an attacker to extract my security requirements? (The personalities of people in an organization may affect the security requirements they have: a butterfly effect.) How formally should I express them? Should I develop patterns of behavior, and is that even possible (is there such a thing as a behavior pattern)? How abstract should the models be so that they remain understandable yet amenable to analysis?

I’ll come back one day with robust answers.

2 comments:

Jorge Aranda said...

Nice post. I suggest you read "Mechanizing Proof", by Donald MacKenzie; it discusses many of these points in great detail.

Steve Easterbrook said...

My favourite definition of a "model" is "any incorrect description of some phenomena that you are interested in understanding". The point being that if the model is correct in every aspect then it is indistinguishable from the phenomena itself. But I can't get anyone in the software modeling community to take this definition seriously.

In addition to Jorge's suggestion, you might also enjoy reading Weinberg's "Introduction to General Systems Thinking"