Many thanks for reading the post and taking the time to write a detailed comment. I’ll attempt to address the points you have made in roughly the same order that you have made them. I should also state that I’m not an expert in estimation so please feel free to correct my line of thinking.

First, my key assumption is that the estimates are made independently, so it does apply to at least some variants of the Delphi method (perhaps the pure form that you mention in your comment). However, as I have noted, the assumption of independence may well be invalid in many situations.

Second, my group consists of two estimators who make estimates independently. Therefore, the conjunctive probability (insofar as the model is concerned) is indeed given by the product of their individual estimates.

As you have stated in your excellent post, one needs two events in order to use Bayes Theorem: an independent event and an event that has (or is hypothesised to have) a dependency on the first event. In the case above, the first is the event that both estimators concur and the second is the event that they are both correct (or incorrect). The dependency in this case is not hypothesised, it is a fact – if they are both correct (or incorrect) they must concur. In effect this is a degenerate case of Bayes Theorem; one where the dependency is known.
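The degenerate case described above can be sketched numerically. This is a minimal illustration under assumed conventions: two independent estimators, each correct with probability 0.9, with “concur” taken to mean both are correct or both are incorrect (the simplistic case where wrong answers happen to agree):

```python
# Sketch of the two-estimator model (assumed numbers): each estimator
# is independently correct with probability p, and "concur" means
# both correct or both incorrect.
p = 0.9

p_both_correct = p * p                    # 0.81
p_both_wrong = (1 - p) * (1 - p)          # 0.01
p_concur = p_both_correct + p_both_wrong  # 0.82

# Bayes: P(both correct | they concur)
p_correct_given_concur = p_both_correct / p_concur
print(round(p_correct_given_concur, 3))   # 0.988
```

In other words, under these assumptions, observing that the two independent estimators concur raises the probability that they are both correct from 0.81 to roughly 0.99.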

Finally, I reiterate that my model is simplistic and have noted several caveats to this effect in my post.

Many thanks again for taking the time to read and comment.

Regards,

Kailash.

Regarding the posting: Bayes theorem, as given, does not apply to the situation described, for two reasons. First, Delphi, in its purest form, is not a group discussion or collaboration. In the Delphi method, each estimator works independently, without collaboration. Results are combined by any number of means: consensus, a vote, or an autocratic decision-maker deciding. Groups and teams, on the other hand, engaged in collaboration to solve a problem or make an estimate, engage in quite complex and subjective discussion, some of which is analytic and some of which is subject to any number of biases, most of which are not amenable to mathematical probabilities.

Second, the conjunctive probability of two estimators (0.9 × 0.9) does not model a group. It models a business rule that says both must agree to have an agreeable outcome. Groups don’t work that way: one person getting the right answer and convincing the decider is all you need.

To pose a Bayesian situation, three elements are needed: an ‘a priori’ estimate of an outcome (some say a guesstimate); an independent condition that affects the outcome; and observations (or forecasts) of the effects of the condition on the a priori estimate. Given two of these three, we can solve for the third.

So what do we have here? The independent condition is that the estimators, call them Tom and Harry, form a team, TH. Whether they form a team or not is like whether it will rain or not: it’s a condition that affects the outcome, but it is an action driven entirely independently of what happens after the team is formed. (By the way, rain affects mood, and mood affects decision making, so whether it rains or not could be another independent condition.)

What’s the a priori estimate, E? It’s 0.9, not 0.81. It’s the data we have before the condition kicks in. For the decision maker to be successful, only one estimator needs to be correct. But we hypothesise that the decision process might be more successful if Tom and Harry actually work together in some way as a group (call the probability that Tom and Harry form a group p(TH)).

What Bayesians can say is: p(E given TH) is the posterior improvement on the a priori p(E). In equation form: p(E given TH) = p(E) × p(TH given E) / p(TH), where p(E) = 0.9. That is, the updated posterior knowledge of E given there is a group TH equals the a priori knowledge of E (0.9) modified by the information we develop about TH. Notice that for the posterior group performance p(E given TH) to be better than the a priori performance (E = 0.9), we need p(TH given E)/p(TH) > 1. That is, if the group’s estimate is only E (no improvement), it is less likely that the group really formed and achieved synergy.

Now, like the rain, we should know p(TH). After all, it’s independent. If we don’t, we might be able to figure it out from observations, using the law of total probability: p(TH) = p(TH given E) × p(E) + p(TH given not E) × p(not E).
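The posterior update and the total-probability expansion above can be worked through numerically. The two conditional probabilities below are made-up illustrative values, not figures from this discussion:

```python
# Bayes update for p(E given TH), following the equations above.
p_E = 0.9                 # a priori estimate E
p_TH_given_E = 0.8        # assumed: chance the team forms when E holds
p_TH_given_not_E = 0.5    # assumed: chance the team forms when E does not hold

# Total probability: p(TH) = p(TH|E) p(E) + p(TH|not E) p(not E)
p_TH = p_TH_given_E * p_E + p_TH_given_not_E * (1 - p_E)

# Posterior: p(E|TH) = p(E) * p(TH|E) / p(TH)
p_E_given_TH = p_E * p_TH_given_E / p_TH
print(round(p_TH, 3), round(p_E_given_TH, 3))  # 0.77 0.935
```

With these assumed numbers, p(TH given E)/p(TH) = 0.8/0.77 > 1, so the posterior (about 0.935) improves on the a priori 0.9, consistent with the condition stated above.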

I’ve found that the easiest way to set up and work Bayesian problems is with a “Bayes Grid” (http://www.johngoodpasture.com/2010/08/our-friend-bayes-part-i.html).

Thanks for a detailed (and entertaining) example that raises some very good points.

I agree that if a group works collaboratively and if the members are open-minded then there is a reasonable chance that they will arrive at a better estimate than any individual would, *providing the group can distinguish between a good estimate and a bad one*. Having the right answer on the table is no guarantee that it will actually be recognised as the right answer. This is especially true if the estimators aren’t particularly competent. For instance, if the group uses majority opinion as the criterion they will get it wrong more often than they get it right. Using your example to illustrate: if two of the six estimators arrive at the same number, they have only a 15% chance of being right (as per the discussion in the post).
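The majority-vote point can be checked with a quick sketch. The setup is assumed for illustration (independent estimators, each correct with probability p; the p = 0.4 below is an illustrative value, not a figure from the post):

```python
from math import comb

# Probability that a strict majority of n independent estimators is
# correct, when each is correct with probability p (binomial tail).
def p_majority_correct(n, p):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With not-particularly-competent estimators (p < 0.5), the majority
# is wrong more often than right, and does worse than one estimator.
print(round(p_majority_correct(5, 0.4), 3))  # 0.317
```

So under independence, aggregating incompetent estimators by majority vote amplifies the error rather than cancelling it.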

Regards,

K.
