Tuesday, January 24, 2012

What are the main challenges in enterprise security?

While talking to a senior partner at one of the Big Four professional services firms, it was pointed out to me that a weblog that is not getting updated is better let go.
There was actually good advice in there, and it made me think of a series of short daily notes I was taking while preparing for job hunting in the area of enterprise security and risk:

In the last two months of studying and preparation, I have looked at very diverse changes happening in the security area. If I were asked in a job interview, "What are going to be the main challenges in enterprise security?", I would say that business models and enterprises are changing fast, and the main change drivers are mobility, cloud computing, and social networking. I will get back to those, but before that, I need to mention that organizations are not the only group changing. Attackers are getting seriously sophisticated (examples: the Anonymous hacking group, LulzSec, etc.). They are organized, goal-oriented, and not only opportunistic: they actually have exact plans and targets.

Those big changes I listed above are business enablers; they can help security, or they can open the doors of organizations to critical risks. The use of portable devices and mobile phones outside the traditional enterprise network perimeter is now the norm. Mobile applications have not yet faced huge security breaches, yet from what I have understood, mobile devices are already less secure (no physical security, no secure data storage, no malware protection, poor keyboards and thus shorter passwords showing up again, no multi-user support on a device, and easier phishing on mobile browsers). Web 2.0 applications and social networking are now part of many firms' daily routines, yet many enterprises have not actually thought about the security implications and threats of their employees chit-chatting on Facebook and updating work-related statuses. Cloud computing is exciting and promising, and it is going to help firms bounce services and computations onto someone else's machines, but have organizations thoroughly analyzed who is going to guarantee the confidentiality and integrity of the data and computations they bounce off to the cloud?

It seems a DMZ and nicely configured firewalls are not enough anymore. From what I understand, out there in the wild, Data Loss Prevention products, Security Information and Event Management tools, and Identity Management solutions are the hot areas of investment, beyond what is already being spent on traditional network security. To that must-have list, I think we need to add training and awareness; policy development, enforcement, management, and compliance; as well as routine vulnerability assessment, penetration testing, and risk assessment.

Friday, September 25, 2009

Why and how do stakeholders change their minds?

We all know requirements change, and there are classic reasons why that happens: changes in the environment, changes in management, changes in budgeting, new users being added to the stakeholders, and so on. On top of all that, users and customers change their minds about the system requirements. In the early development cycles they may show interest in some features; later, as soon as they see the real system implemented, they may change their minds and express interest in other features.

Well, the story comes from the psychology of the human mind. If I ask you to sort four available fruits in the order you would like to have them, you might say: orange, apple, banana, watermelon. Then, if I take watermelon out of the list of available fruits, you might change the order and say: well, if watermelon is not available any more, I would rather have apple, banana, and then orange! Why do preferences sometimes depend on what the available options are?

It is wrong to assume stakeholders have fixed preferences about the requirements. They have hidden criteria for evaluating the elements and requirements that we may not know about. In the case of the fruits, you may not know it, but someone may want to have watermelon at the end because it is juicy; once watermelon is not on the list, that person would rather have the other juicy fruit (the orange) at the end, and so changes the order for you! The same thing happens when dealing with software requirements: users show interest in some features while having hidden goals and criteria in their minds.
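As a toy illustration of this (the fruit "scores" and the "save the juiciest available fruit for last" rule below are entirely made up for the sketch, not taken from any study), a few lines of Python show how a hidden criterion makes the reported ranking depend on which options are offered:

    # Toy sketch of the fruit example: the stated ranking hides a criterion
    # ("save the juiciest available fruit for last"), so removing one option
    # reshuffles the rest. All scores are invented purely for illustration.
    TASTE = {"orange": 9, "apple": 7, "banana": 5, "watermelon": 6}      # tastier fruits eaten first
    JUICINESS = {"orange": 8, "apple": 4, "banana": 2, "watermelon": 10}

    def reported_ranking(available):
        """Order fruits by taste, but keep the juiciest available one for the end."""
        juiciest = max(available, key=lambda f: JUICINESS[f])
        rest = sorted((f for f in available if f != juiciest),
                      key=lambda f: TASTE[f], reverse=True)
        return rest + [juiciest]

    print(reported_ranking(["orange", "apple", "banana", "watermelon"]))
    # -> ['orange', 'apple', 'banana', 'watermelon']
    print(reported_ranking(["orange", "apple", "banana"]))
    # -> ['apple', 'banana', 'orange']   (same hidden criteria, a different ranking)

The stakeholder never changed their mind; the same hidden criteria simply produce different orderings over different sets of options.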

Well, now the question is: when asking stakeholders to rank the requirements, features, or alternative solutions, do we need to uncover the rationale behind their preferences? And if we do, does making the criteria of comparison and the rationale behind the preferences more explicit give us more accurate preferences?

Wednesday, May 6, 2009

Bruce Schneier at U of T

Today Bruce Schneier was at the University of Toronto as the keynote speaker of the IPSI symposium. I sneaked out of the rest of the symposium because of the huge excitement of meeting him face to face and talking to him. I need to write down the main points NOW:

He started with a lot of examples of how IT systems produce and collect massive amounts of data, which can be used for control and business purposes that ultimately violate privacy (he called the situation data pollution). One of his main points was that people do not own their data anymore, and they have no control over keeping their data secure and private (while in the past, privacy used to start with things you own, like your house, car, wallet, etc.). Especially as the price of storage keeps falling, the marginal value of storing all the data justifies keeping all the transaction data forever, selling it, and so on.

A very interesting point he made was that both privacy and security are balances. For example, attacks are not at zero, but they are at a level that society is OK with. Security is not the opposite of privacy; actually, privacy is part of security. When we are faced with identity-based security, that affects privacy.

Following the discussion that privacy is part of security, he mentioned that privacy is what protects us from the people in power, and that if there is any trade-off, it is between liberty and control, not privacy and security.

He reviewed other approaches to the problem, like mutual disclosure. But mutual disclosure does not work: when the parties are not at the same level of power, mutual disclosure still does not solve the privacy violation issue. His argument was that asking a police officer for the same information he asks of us does not prevent the privacy violation of our own personal data. Or, if you go to a doctor and he asks you to please take your clothes off, you can't say, "You take yours off first."

Finally, he mentioned two references for further reading:
1) The Science of Fear, a good book for understanding the psychology of security and privacy.
2) "Say Everything", about the different perspective of the new generation on privacy.


I had this great, great chance to hunt him down during the break. I was so shy, but I managed to pull myself together and told him very briefly, in two sentences, that based on his five-step security trade-off analysis method, I would like to develop a practical and qualitative method for making software security trade-offs in the early stages of development, when we do not have many numbers or exact measures. He told me to read this book, which might help me formalize the security decision analysis process:
"The New School of Information Security"

He even said that if it did not help, I could e-mail him for further help. I feel like I am walking on some far, far away clouds now!!

Thursday, November 6, 2008

How to show goal modeling is actually useful?

I have this theory in mind that goal modeling helps in understanding the security requirements of a software system situated in an organization. There is an underlying theory, which is not proven, but which claims that, in general, goal modeling is useful for understanding the actual requirements of stakeholders. This theory claims that by asking "why" questions of stakeholders, analysts may reveal the real reasons stakeholders need a feature and build what is actually required, not what stakeholders guess might help.
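As a minimal sketch of that idea (the goal names and the tiny data structure below are my own hypothetical illustration, not part of any goal-modeling notation or tool), a requested feature can be linked upward to the stakeholder goals it is supposed to serve by recording the answers to repeated "why" questions:

    # Minimal, hypothetical sketch: a requested feature is linked upward to the
    # stakeholder goal it serves, by recording answers to repeated "why" questions.
    # The example goals and feature are invented for illustration only.
    class Goal:
        def __init__(self, name, why=None):
            self.name = name      # what the stakeholder asks for or wants
            self.why = why        # the higher-level goal it contributes to

        def rationale_chain(self):
            """Walk the 'why' links up to the root goal."""
            node, chain = self, []
            while node is not None:
                chain.append(node.name)
                node = node.why
            return chain

    # "Why do you want a detailed audit log?" -> "to detect misuse"
    # "Why detect misuse?" -> "to keep customer data confidential"
    keep_data_safe = Goal("keep customer data confidential")
    detect_misuse = Goal("detect misuse of accounts", why=keep_data_safe)
    audit_log = Goal("feature request: detailed audit log", why=detect_misuse)

    print(" <- why? -- ".join(audit_log.rationale_chain()))

If the chain ends at "keep customer data confidential", an access-control or anomaly-detection mechanism might satisfy the real goal better than the audit log the stakeholder originally guessed at.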

This is neat and reasonable, but can I show that goal modeling actually works this way? What kind of evidence should I look for to show (not prove) that goal modeling leads to discovering the actual requirements (or better-quality requirements)? How can I objectively judge that those requirements are better and more realistic? Once I find a way to show evidence for the usefulness of goal modeling in requirements engineering, I would like to expand it to security requirements engineering as well.

You may think it is straightforward to examine how well a goal model conveys information about the system requirements compared to a textual description: run an experiment with two groups, one given a textual description of a system and the other a goal model of the same description. Ask the participants to read the text, or teach them how to read the goal model, and then quiz them about the system to check which group understood it better. Such a study has been recognized as invalid, doomed from the beginning, because the relationship between the understandability and usefulness of the model and what we actually measure is not direct. In such a study, we measure the understanding ability of individuals, the modeler's ability to express the description in an abstract model, the participants' prior knowledge of the model's domain, how well the model is graphically represented, how well the syntax of the language is designed, whether the conceptual modeling language includes all the necessary and sufficient concepts, and whether those concepts are mapped to proper visual symbols. No need to mention that the design of the questions we ask the individuals who read the model is the most problematic part: what should we ask to reflect an individual's understanding of a model or a text?

I am looking for ways to reduce the impact of all these biases.

Saturday, October 25, 2008

How do humans make security decisions?

With a long delay, I am back on track again. I am still repeating the same question: how can one resolve conflicts when we do not have exact measures of the conflicts?

But to be less boring and repetitive, I would like to discuss a little bit of what I learned from a great paper by Bruce Schneier, titled "The Psychology of Security".

He talks about security as a personal, subjective feeling which involves trade-offs with other requirements and needs. Security engineering people or cryptography specialists may not need, or may not feel the need, to look at security from this viewpoint. But as a software developer or designer, I hope users will actually use the security solutions or mechanisms enforced in the software, so the psychology of security is of interest and importance.

So the first interesting argument in this paper is about the difference between actual security and feeling secure. As humans, we may mislead ourselves and feel secure when no really secure solution has actually been applied, or we may live in total fear of unknown attacks. "The feeling and reality of security are certainly related to each other, but they're just as certainly not the same as each other."

Another discussion in this paper is about the way people decide on security. Schneier concludes that approaches for designing security systems and deciding on security trade-offs need to take the feeling and psychology of security into account. Humans do not evaluate security trade-offs mathematically, by examining the relative probabilities of different events. Instead, they use shortcuts, stereotypes, and biases, generally known as heuristics.

Empirical studies on trade-off decisions show that, contrary to utility theory, people have subjective values for gains and losses, which is described by prospect theory. I repeat an example from the paper here:

Subjects were divided into two groups. One group was given the choice of these two alternatives:

  • Alternative A: A sure gain of $500.
  • Alternative B: A 50% chance of gaining $1,000.

The other group was given the choice of:

  • Alternative C: A sure loss of $500.
  • Alternative D: A 50% chance of losing $1,000.

These two trade-offs aren't the same, but they're very similar. And traditional economics predicts that the difference doesn't make a difference. Traditional economics is based on something called "utility theory," which predicts that people make trade-offs based on a straightforward calculation of relative gains and losses. Alternatives A and B have the same expected utility: +$500. And alternatives C and D have the same expected utility: -$500. But experimental results contradict this. When faced with a gain, most people (84%) chose Alternative A (the sure gain) of $500 over Alternative B (the risky gain). But when faced with a loss, most people (70%) chose Alternative D (the risky loss) over Alternative C (the sure loss).


In addition, people's choices are affected by how a trade-off is framed. When a choice is framed as a gain, people tend to be risk-averse; in contrast, when the choice is framed as a loss, people tend to be risk-seeking. This is called the framing effect.
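To make the contrast concrete, here is a small sketch of my own (not Schneier's, and not from the paper): it compares plain expected utility with a prospect-theory-style value function. The exponent 0.88 and the loss-aversion factor 2.25 are the parameters commonly cited from Tversky and Kahneman's work, and probability weighting is deliberately left out to keep the example short:

    # Sketch comparing expected utility with a prospect-theory value function.
    # Parameters 0.88 and 2.25 follow Tversky & Kahneman's commonly cited fit;
    # probability weighting is omitted for brevity.
    def value(x, alpha=0.88, loss_aversion=2.25):
        """Subjective value of a gain or loss x relative to the reference point."""
        return x ** alpha if x >= 0 else -loss_aversion * ((-x) ** alpha)

    def prospect_value(gamble):
        """Probability-weighted subjective value of [(probability, amount), ...]."""
        return sum(p * value(x) for p, x in gamble)

    def expected_utility(gamble):
        return sum(p * x for p, x in gamble)

    gambles = {
        "A: sure gain of $500": [(1.0, 500)],
        "B: 50% chance of $1,000": [(0.5, 1000), (0.5, 0)],
        "C: sure loss of $500": [(1.0, -500)],
        "D: 50% chance of losing $1,000": [(0.5, -1000), (0.5, 0)],
    }

    for name, gamble in gambles.items():
        print(f"{name:32s} EU = {expected_utility(gamble):7.1f}"
              f"   prospect value = {prospect_value(gamble):7.1f}")

A and B tie on expected utility (+$500) but A scores higher on subjective value (risk aversion for gains); C and D tie on expected utility (-$500) but D scores higher (risk seeking for losses), matching the experimental pattern above.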

These are interesting psychological results. As humans, we have a completely different world in our minds; we keep mental budget accounts for security and privacy, and we make decisions in ways that machines would not.

Now, another question: should we incorporate these aspects into decision-making machines and algorithms?

Friday, September 19, 2008

Why security and privacy decisions are hard to make

I am trying to find a way of making security decisions, and by security decisions I mean decisions that involve trade-offs. We "always" trade something for security. We give up a little bit of privacy for more security, or a little bit of convenience. But the question is: how do we decide to give up that little bit?

Consider this situation: your school decides to install surveillance cameras in the hallways to control thefts. I kind of do not like it, because "they" can see me whenever I am going to the washroom or going to talk on my cell phone. You are OK with it, and ask why someone might think it is a problem that "they" know I am going to the washroom. Well, because it is an individual thing: privacy has different aspects and levels, and it is very personal. Anyway, they install the cameras, because not many people object. Then, a month later, they announce that they also need to install cameras inside the classrooms and grad student offices, because they can see thieves picking up the valuable stuff and jump in to arrest them, and because thieves who know the cameras are there won't dare to try to steal things. Now more people may object to the privacy violation, but still not everyone. In the extreme situation, they may decide to install cameras even in the washrooms. Probably then everyone would object that it is so against privacy and morality and blah.

Obviously, privacy is a personal matter, and so is security. The level at which I feel secure and private is completely different from that of others. It is true that at the extremes we all share the same objections, but we do not live in an extreme world. So, I ask my question again: how can we decide to install the cameras (or apply a security solution), considering the privacy violation, the costs, the increase in the level of security, the usability of that technology or tool, and so on?

And it is a hard problem.

First of all, if "they" decide only based on cost and level of security, they might end up installing cameras everywhere, as long as they believe the cost of the cameras is less than the cost of the thefts. Therefore, security decisions need to consider the opinions of the multiple individuals who benefit from the system.

Secondly, many factors need to be considered. But these factors (which come from multiple individuals) are hard to measure. Can you suggest a way of measuring the level of privacy violation caused by installing surveillance cameras inside washrooms? We are faced with many qualities that are inherently qualitative and subjective, and a major portion of research needs to be dedicated to finding ways to quantify them. Although there has always been a war between qualitative and quantitative reasoning, and people hiding behind numbers believe reasoning is limited solely to working with exact numbers, at the moment the only way I see to approach security decision making is qualitative reasoning.
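As a very rough sketch of what I mean by qualitative reasoning here (the options, the stakeholders, the labels, and the "worst objection decides" rule are all my own invention, not an established method), one could compare options using only ordinal labels collected from stakeholders:

    # Rough sketch: compare security options using only ordinal labels from
    # stakeholders, with a "worst objection decides" (maximin) rule.
    # Options, stakeholders, labels, and the rule itself are invented for illustration.
    LABELS = ["strongly object", "object", "neutral", "accept"]  # worst -> best
    RANK = {label: i for i, label in enumerate(LABELS)}

    opinions = {
        "cameras in hallways": {"student": "object", "admin": "accept", "staff": "neutral"},
        "cameras in offices": {"student": "strongly object", "admin": "accept", "staff": "object"},
        "cameras in washrooms": {"student": "strongly object", "admin": "object", "staff": "strongly object"},
    }

    def worst_reaction(views):
        """The most severe objection any stakeholder raised against an option."""
        return min(views.values(), key=lambda label: RANK[label])

    for option, views in opinions.items():
        print(f"{option:22s} worst reaction: {worst_reaction(views)}")

A maximin-style rule would pick the option whose worst reaction is least severe ("cameras in hallways" here), with no numbers involved at all, only an agreed ordering of labels.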

Third, security and risk analysis need to look at the psychology of fear, the feeling of being secure and safe, and the psychology of risk tolerance. As humans, we do very fast and interesting qualitative risk analysis that is somewhat beyond the numbers and the math, which in the field of economics is described by "prospect theory". I am going to read and report back here about all of this in the coming days.

Wednesday, May 21, 2008

Experiments on Self

Empirical studies have become fashionable in software engineering, and this makes sense. In the software engineering discipline, most of the time we do not develop an algorithm or a new solution for a problem. We are looking for ways to facilitate solution development, and those "ways" need to be somehow evaluated; that is where empirical studies play a significant role.

In these kinds of studies, to my knowledge, a researcher or a group of researchers performs experiments in a controlled environment to study a phenomenon. Sometimes the researchers go out into the real world to observe real software developers, interview them, and survey them to find out how they think, how they develop a system, how they manage development progress, and blah blah.

But, as far as I have seen, no one has ever dared to study things on himself or herself. This seems reasonable: if I am a researcher who has developed a notation, and then I use it and announce that it was useful, who is going to believe it? And that is reasonable: if I am a desperate PhD student, eager to publish positive results, I am so biased that I may feel the notation is working very well for me!

So, because of many ethical issues and biases, self-experiments are not used in software engineering empirical studies at all. But experimenting on oneself is not such a huge mistake in other disciplines, and in history we see many examples where biologists tried a vaccine on themselves. A very interesting example is the story of discovering that a bacterium, not stress or spicy food, causes stomach ulcers. Everyone disagreed with Dr. Warren and Dr. Marshall's theory (they won the Nobel Prize in 2005 for this discovery) because they did not believe bacteria could survive in stomach acid. So Dr. Marshall drank a Petri dish filled with Helicobacter pylori and developed gastritis, which was soon resolved with antibiotics.

So I started wondering: how can we run a self-experiment in empirical software engineering? How can I report on a development method that I followed myself, whose advantages and disadvantages I discovered, and how can I overcome the biases? Such a study is valuable because, by merely observing and interviewing a practitioner who employs a method, the researcher may not capture exact insights into the "method" under study.

On the other hand, a self-experiment, even if it is unbiased and more informative, is limited to a single report. It is a (sad) fact that the best person to describe a routine method is not the person who performs it. Finally, individual differences make a self-study less applicable.

That’s why I continue observing instead of drinking a Petri-dish filled with Helicobacter pylori

Sunday, April 20, 2008

When should we stop swimming against the current of the river?

These days, I am doing an extremely tedious job: looking into a number of models that students developed in a course (again using the i* notation, which is not the important part; the point is to study how modeling notations and processes are applied). I am looking through the assignments they developed with the i* notation to find the (syntactic) mistakes they made and to investigate why they made them. And the "why" part is actually the interesting part: did they break the syntax of the notation because they wanted to express something the notation does not support? Was it because of visualization limitations on the screen or paper? Or did they make the mistakes because the incorrect way of building the models is more intuitive?

If one goes over a large enough number of models developed in real-world practice, in academic papers, and in students' assignments, which is what our team is currently doing as a large group project, interesting patterns of mistakes emerge. If you realize that the syntax and even the semantics of your notation are being broken, unconsciously or purposefully, then you may need to change the notation in a way that is more acceptable and usable. On the other hand, there is a point at which you need to stop and decide that, beyond this line, the modelers and developers simply need more training and the notation is OK. Finding where that line is located is hard. The harder job is deciding whether and how the notation and the modeling process need to be changed based on the patterns of mistakes.
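As a sketch of the bookkeeping side of this (the mistake categories, suspected causes, and the threshold below are hypothetical, only there to make the idea concrete), one could tally each observed mistake against its suspected cause and flag the combinations that recur often enough to suggest changing the notation rather than retraining the modelers:

    # Hypothetical sketch: tally syntactic mistakes by (mistake type, suspected cause)
    # and flag recurring combinations that may point at the notation rather than the modeler.
    from collections import Counter

    # (mistake type, suspected cause) per observed mistake -- made-up sample data
    observed = [
        ("dependency link drawn backwards", "more intuitive that way"),
        ("dependency link drawn backwards", "more intuitive that way"),
        ("dependency link drawn backwards", "more intuitive that way"),
        ("missing actor boundary", "layout/visualization limits"),
        ("wrong contribution link type", "notation lacks the needed concept"),
        ("wrong contribution link type", "notation lacks the needed concept"),
    ]

    THRESHOLD = 2  # arbitrary cut-off for "this looks like a pattern"

    for (mistake, cause), count in Counter(observed).most_common():
        verdict = ("candidate for a notation change" if count >= THRESHOLD
                   else "probably a training issue")
        print(f"{count}x  {mistake} ({cause}) -> {verdict}")

Where exactly to put the threshold, and whether frequency alone is the right signal, is of course the hard part described above.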

So here we are, looking for common mistakes in one particular modeling notation (i*). I am wondering how we can generalize this to other modeling notations, such as UML, or even to programming languages, to find out how common mistakes can refine a notation.

Wednesday, April 2, 2008

How are models adopted in the real world?

Recently, I got access to some files from a maintenance company working for a very well-known and successful European communications company. The maintenance team tried to improve performance through some knowledge management practices, which basically meant reallocating some maintenance experts and putting them in dual positions for the sake of knowledge sharing. The knowledge management leader tried to explain why the new process works so well and why performance increased so dramatically. She used the i* modeling notation to show and reason about why the new knowledge dependencies work and how the maintenance knowledge is now distributed.

I am not going to dig into how i* (or any other modeling approach) worked for them. What is really interesting is how they adopted a modeling notation. The results I am now reading show that practitioners may ignore the syntax; they tend to model things with whatever syntax they feel comfortable with. So if the results of analyzing the model depend heavily on correct syntax, the modeling notation may simply fail. The other interesting observation I had is that practitioners use the high-level intuitions of a tool, notation, or technique instead of the details. This rings some bells again: syntax and details should be as simple as possible.

So in the i* models I received from industry, the syntax of the models is mostly wrong, but the practitioners have understood the high-level intuitions behind goal-oriented models. An interesting piece of research would be to compare such results across various modeling notations and several companies to draw a robust conclusion: how are models adopted in the real world?

Friday, March 28, 2008

How deep, formal, or abstract?

Seriously, I have no expertise to judge whether formal methods are going to work in software engineering or not. But I do not like formal methods, because most of the time I face problems that formal methods claim to model and verify while ignoring the complexity of the system and its factors. Here is why I do not like them: some complex situations are hard even to understand, hard even to sketch an abstract model of. Formally expressing such problems, then, is very close to impossible.

I heard a talk recently about how requirements engineering can help climate change prediction. I realized that formal methods people are very optimistic about formulating such a complex system and using supercomputers to analyze the models, while practitioners were worried that developing such models of the system is not feasible. I will grant that if we finally do the hard job and formulate all the aspects into a formal model, then formal verification can answer many questions. But my point is that a "complete formal model that captures all aspects" is no longer a model. It is equal to the problem space itself, and it is not possible to have such a model. This argument is actually valid for any modeling approach.

For example, I have seen work in which people try to develop a formal framework for modeling trust, trust requirements, and the impact of trust assumptions on requirements or design choices. But how can formality capture and express the interactions of social human agents in a trust chain? Many other aspects of software engineering (and maybe of other disciplines) are not easy to formulate with formal techniques, since they are tightly coupled with human agents and social issues.

So, narrowing down to my own research problems: I am trying to model security requirements, security protocols, and secure systems in a context that includes human agents with individual goals, machines, social dependencies, malicious behavior, attacks, etc. How deep should the analysis of these factors go? How deep should I dig into the individual goals of an attacker to extract my security requirements? (The personalities of the people in an organization may have an impact on the security requirements they have; a butterfly effect.) How formally should I express them? Should I develop patterns of behavior? Is that even possible (do we have such a thing as a behavior pattern)? How abstract should the models be, so that they remain understandable yet amenable to analysis?

I’ll come back one day, with robust answers