Monday, March 10, 2008

Ambitious Science or Feasible Engineering

Work on secure protocols and approaches to modeling them is still in progress. However, a more abstract question draws my attention: will goal modeling really work for us? The idea is to develop a comprehensive model of the goals and intentions of actors, roles, and stakeholders; the tasks they need to achieve their goals; the resources they need; the dependencies of actors on each other; and finally the consequences of each task, goal, or dependency for the other goals of all actors. Suppose we had such a complicated chain of goals, dependencies, and consequences. Then this model could do magic. It could answer many questions: will a particular actor achieve his final goals? If not, why? Who causes the failure or success? But we assumed we can have such a goal model. When it comes to real human agents and their goals, it may be very ambitious to say we can develop such a model. Here is the problem: can goal modeling be developed into a feasible engineering process?
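To make the "magic" concrete, here is a minimal sketch of what such a model would compute: goals decomposed into subgoals, dependencies delegated to other actors, and satisfaction propagated up the chain to answer "will this actor achieve his goals, and if not, who caused the failure?" All actor and goal names are hypothetical, and this ignores everything that makes real goal modeling hard (soft goals, partial satisfaction, conflicts):

```python
# A toy goal model with AND-decomposition and cross-actor dependencies.
# Satisfaction propagates bottom-up; failed goals are collected in a trace
# so we can ask not just "does he succeed?" but "why not?".

class Goal:
    def __init__(self, name, subgoals=None, depends_on=None):
        self.name = name
        self.subgoals = subgoals or []      # AND-decomposition within one actor
        self.depends_on = depends_on or []  # goals delegated to other actors

    def satisfied(self, facts, trace):
        """A leaf goal holds iff it appears in `facts`; a decomposed goal
        needs all its subgoals and all its dependencies to hold."""
        children = self.subgoals + self.depends_on
        if not children:
            ok = self.name in facts
        else:
            ok = all(g.satisfied(facts, trace) for g in children)
        if not ok:
            trace.append(self.name)  # record each goal that fails
        return ok

# Hypothetical example: a researcher's goal depends on a librarian's task.
provide_sources = Goal("librarian: provide sources")
write_survey = Goal("researcher: write survey",
                    subgoals=[Goal("researcher: read papers")],
                    depends_on=[provide_sources])

trace = []
achieved = write_survey.satisfied({"researcher: read papers"}, trace)
print(achieved)  # False: the dependency on the librarian is unmet
print(trace)     # failed goals, innermost first
```

Even this toy version shows where the trouble starts: the model only works if someone has already enumerated every goal, subgoal, and dependency, which is exactly the assumption in question.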

How did this question come to my mind? I am developing models of knowledge processes in a hypothetical organization. I'd like to express who generates a body of knowledge, how it is stored, with whom it is shared, and "why". I tried goal-oriented notations to model the knowledge market relationships, and then I realized I was going deeper and deeper into the goals of the actors. I think one may never stop digging into the goals of a human agent for sharing or withholding a piece of knowledge (or anything else), and so one will never have the ideal comprehensive model.
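The regress is easy to make concrete: every answer to "why does this actor share this knowledge?" is itself a goal that invites another "why". A toy illustration (all goal names invented for the example):

```python
# Each "why?" about an actor's action yields a higher-level goal, which
# invites another "why?" -- the chain has no natural end, so the modeler
# must choose an arbitrary point to stop digging.
why_chain = [
    "share the design document",       # the observed action
    "help the new team member",        # why share it?
    "speed up the project",            # why help?
    "earn a good performance review",  # why speed it up?
    "be promoted",                     # why a good review?
    # ... and so on: wherever we stop is our choice, not the model's
]

for action, reason in zip(why_chain, why_chain[1:]):
    print(f"why '{action}'? -> to '{reason}'")
```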

1 comment:

Christian Muise said...

It seems like you want to construct a framework where the goals of all agents (human or otherwise) are known.

Is that really all that realistic? I would expect the most practical examples are when the goals of other agents are unknown to any other agent. They can reveal things, or behaviour modeling can be used to try to infer things, but global knowledge of what everyone is thinking seems unrealistic to me.