Tim: In case it helps our conversation, here are some observations...

Slide 3, bullet 1, sub-bullet 2 [Typo]: "Verificataion" should be "Verification".

Slide 3, bullet 1, sub-bullet 2 [Typo]: "(at Fairmont WVU)" should be "(at Fairmont, WV)".

Slide 11, boxes 1 and 2: function CO1 uses non-rounded values for CO1, and function EP1 uses non-rounded values for EP1. You may do this elsewhere, but to select tasks from the tasking tables you can only use the rounded values {1, 2, 3, 4, 5}.

Slide 11, box 3: function PR2 has the comment "// Performance" but it should be "// Process".

Slide 12, bullet 1, sub-bullet 2: I am not sure what A1b really means; can it be made clearer?

Slide 12, bullet 1, sub-bullet 3: A1c is a nice kudo, but is it an answer to Q1?

Slide 13, page bottom right: task 2.1 Reuse Analysis is selected if CO13, CO14, or CO15 has an "X", or if RA3 > 1 (or equivalently, RA3 >= 2). (A small sketch of these selection rules follows these notes.)

Slide 13, page bottom right: just to clarify, human rated means HS2 > 0 (or equivalently, HS2 >= 1).

Slide 15, upper left: notes 433 records. Note that if you have experiments that limit their attention to CO1 (already computed in the data set), EP1 (already computed in the data set), HS2 (if blank, assume zero), and RA3, then, since these elements directly select tasks, you can use all 500 rows of the data set. (Naturally, not all experiments lend themselves to this.)

Slide 17: a possible new grouping is to take EO - {X7} and see if anything significant changes.

Slide 17: are any of the other categorizations I sent you considered at all? They are not reported. That is, the "Profile" {SS, SU, ES, OP, HS} or the "Prime" {P1, P2, P3, P4}.

Slide 17, bullet 5: SILAP is supposed to say what is the right thing to do regardless of funding, rather than what is the cheapest thing we can get away with. Please expand this point.

Slide 21: clusters are combinations of factors (e.g. DT3 > 1 and CL3 <= 4 and US3 > 2 and EX3 <= 2)? Is the example exhaustive here? That is, does EX3 not form a cluster by itself? Do all clusters depend on DT3? We need to talk about decision tree learning. Recursively, apply the following algorithm: 1) find the most informative attribute (i.e. the one that most separates the different classes); 2) make it the root of the current tree; 3) split the data on each value of the root attribute; 4) apply the algorithm recursively to the data that falls down each split. Note that step 4 builds sub-trees for each branch of the most informative attribute. So, in answer to your question, DT3 is the attribute that most separates the clusters. (A short sketch of this recursion is included at the end of these notes.)
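On the Slide 13 selection rules, here is a minimal sketch of how I read them, assuming each record is a Python dict whose CO13/CO14/CO15 entries hold "X" when checked and whose RA3 and HS2 entries hold rounded numeric scores (a blank HS2 read as zero). The helper names are hypothetical, not from the slides.

def selects_reuse_analysis(record):
    # Task 2.1 Reuse Analysis: CO13, CO14, or CO15 marked "X", or RA3 >= 2.
    checked = any(record.get(key) == "X" for key in ("CO13", "CO14", "CO15"))
    return checked or record.get("RA3", 0) >= 2

def is_human_rated(record):
    # Human rated: HS2 > 0, with a blank HS2 read as zero.
    return (record.get("HS2") or 0) >= 1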
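On the Slide 21 question, here is a minimal ID3-style sketch of the recursion described above. It assumes rows are dicts of discrete attribute values with a "class" entry for the cluster label; the names rows, attributes, and "class" are illustrative only, and this is not necessarily the exact learner behind the slides.

from collections import Counter
from math import log2

def entropy(rows):
    # Class impurity of a set of rows.
    counts = Counter(row["class"] for row in rows)
    total = len(rows)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def most_informative(rows, attributes):
    # Step 1: pick the attribute whose split leaves the least class entropy behind.
    def remainder(attr):
        values = set(row[attr] for row in rows)
        rem = 0.0
        for value in values:
            subset = [row for row in rows if row[attr] == value]
            rem += (len(subset) / len(rows)) * entropy(subset)
        return rem
    return min(attributes, key=remainder)

def build_tree(rows, attributes):
    classes = set(row["class"] for row in rows)
    if len(classes) == 1 or not attributes:
        # Leaf: all rows agree on a class, or no attributes left to split on.
        return Counter(row["class"] for row in rows).most_common(1)[0][0]
    root = most_informative(rows, attributes)          # steps 1 and 2: choose the root
    subtrees = {}
    for value in set(row[root] for row in rows):       # step 3: one branch per value
        subset = [row for row in rows if row[root] == value]
        subtrees[value] = build_tree(subset, [a for a in attributes if a != root])  # step 4
    return {root: subtrees}

By construction, the attribute chosen at the top of such a tree is the one that most separates the classes, which is why DT3 appears in every cluster description.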