need data mining
- too much data
- local lessons
- need fast ways to learn local truths
- need ways to tell when a local truth is no longer useful
- need to talk to humans and to learners (so high-level symbolic descriptions, please)
- need to be able to initialize the theory with expectations (how am I going against the top-10 algorithms?)

theory of everything
- agents know what they know, know what they don't know, and know when they move from "know" to "don't know"
- not just prediction: not "what is" but "what to do next"
- technically, not classifiers/predictors but contrast sets: the differences between things

repeated result: m (column-summary sketch at the end of these notes)
- here = stats
- under = stats
- stats = if numeric, then n, sum, sumSq; else if discrete, then uniques
- from = row ids

- equal width, equal freq (binning sketch below)
- bore (best-or-rest ranking sketch below)
- supervised: variance-based, min-ent, min-var

learner3: simple bayes (not naive)

visualization
- bore
- nomograms

b-squared shrinkage

infogain feature selection (info-gain ranking sketch below)

instance selection
- data generation
- with noise added
- noise reduction
- show with knn and nb (noise demo sketch below)
- privacy

learner4: bam
- bam = var pruning + neighborhood ranking (no bagging)

learner5: prism
- prism + range ordering (covering-algorithm sketch below)

learner6: tar
- should run fast with the indexing of part 1

learner7: which

anomaly detection

belief revision

multi-utility optimization

importance sampling
- bagging, boosting, etc. are all "importance sampling" (bagging sketch below)
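
The "repeated result" above describes a per-column summary: numeric columns keep n, sum, sumSq; discrete columns keep counts of unique values; both remember which rows they came from. A minimal sketch of that structure (the names Num, Sym, fromRows, and add are mine, not from the notes):

```python
class Num:
    """Numeric column summary: count, sum, sum of squares, plus source row ids."""
    def __init__(self):
        self.n, self.sum, self.sumSq = 0, 0.0, 0.0
        self.fromRows = []                    # "from = row ids"

    def add(self, x, row_id):
        self.n += 1
        self.sum += x
        self.sumSq += x * x
        self.fromRows.append(row_id)

    def mean(self):
        return self.sum / self.n if self.n else 0.0

    def var(self):
        # population variance recovered from n, sum, sumSq
        return self.sumSq / self.n - self.mean() ** 2 if self.n else 0.0


class Sym:
    """Discrete column summary: counts of unique values, plus source row ids."""
    def __init__(self):
        self.n, self.uniques = 0, {}
        self.fromRows = []

    def add(self, x, row_id):
        self.n += 1
        self.uniques[x] = self.uniques.get(x, 0) + 1
        self.fromRows.append(row_id)
```

Keeping only n, sum, sumSq means the summary can be updated one row at a time without storing the raw numbers.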
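Equal-width and equal-frequency binning are the two unsupervised cut-point methods listed above. A sketch (function names are mine):

```python
def equal_width(values, bins=5):
    """Split [min, max] into `bins` intervals of the same width; return cut points."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1             # guard against a constant column
    return [lo + i * width for i in range(1, bins)]

def equal_freq(values, bins=5):
    """Cut points chosen so each bin holds (roughly) the same number of items."""
    xs = sorted(values)
    step = len(xs) / bins
    return [xs[int(i * step)] for i in range(1, bins)]

def discretize(x, cuts):
    """Return the index of the bin that x falls into."""
    for i, c in enumerate(cuts):
        if x < c:
            return i
    return len(cuts)
```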
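BORE is usually read as "best or rest": split the rows into a small "best" group and the "rest", then rank attribute values (ranges) by how much more often they appear in the best rows, often with the support-weighted score b*b/(b+r). The "b-squared shrinkage" note above may refer to that score, but that is my reading. A sketch under that assumption:

```python
from collections import Counter

def bore_rank(rows, classes, best_class):
    """Score each (column, value) pair by b*b/(b+r), where
    b = P(value | best rows) and r = P(value | rest rows)."""
    best = [r for r, c in zip(rows, classes) if c == best_class]
    rest = [r for r, c in zip(rows, classes) if c != best_class]
    scores = {}
    for col in range(len(rows[0])):
        b_counts = Counter(r[col] for r in best)
        r_counts = Counter(r[col] for r in rest)
        for v in set(b_counts) | set(r_counts):
            b = b_counts[v] / max(len(best), 1)
            r = r_counts[v] / max(len(rest), 1)
            if b > r:                          # keep only ranges that favor "best"
                scores[(col, v)] = b * b / (b + r)
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The same machinery doubles as a contrast-set report: the top-ranked ranges are the differences that most separate "best" from "rest".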
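InfoGain feature selection ranks discrete attributes by the expected drop in class entropy once the attribute is known: gain = H(class) - H(class | attribute). A self-contained sketch:

```python
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(column, labels):
    """H(class) - H(class | attribute): expected drop in class entropy."""
    n = len(labels)
    cond = 0.0
    for v in set(column):
        subset = [l for x, l in zip(column, labels) if x == v]
        cond += len(subset) / n * entropy(subset)
    return entropy(labels) - cond

def rank_features(rows, labels):
    """Return attribute indexes, highest information gain first."""
    gains = [(info_gain([r[i] for r in rows], labels), i)
             for i in range(len(rows[0]))]
    return [i for g, i in sorted(gains, reverse=True)]
```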
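One way to "show with knn" the effect of added noise: compare leave-one-out 1-NN accuracy on clean labels versus labels with some fraction flipped at random. The data below is synthetic and the helper names are mine; on well-separated blobs the noisy accuracy usually drops by roughly the flip rate:

```python
import math, random

random.seed(1)

def blob(cx, cy, label, n=50):
    """n points from a Gaussian blob centered at (cx, cy), all with one label."""
    return [((random.gauss(cx, 1), random.gauss(cy, 1)), label) for _ in range(n)]

def one_nn_accuracy(data):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    hits = 0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        nearest = min(rest, key=lambda p: math.dist(x, p[0]))
        hits += nearest[1] == y
    return hits / len(data)

def add_label_noise(data, rate=0.2):
    """Replace ~rate of the labels with a label drawn at random."""
    labels = list({y for _, y in data})
    return [(x, random.choice(labels)) if random.random() < rate else (x, y)
            for x, y in data]

data = blob(0, 0, "a") + blob(4, 4, "b")
print("clean:", one_nn_accuracy(data))
print("noisy:", one_nn_accuracy(add_label_noise(data)))
```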
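Learner5 "prism" is presumably Cendrowska's PRISM covering algorithm: for each class, grow a rule one attribute-value test at a time, always adding the test with the best accuracy p/t on the rows still covered, then delete the rows the finished rule covers and repeat. The "range ordering" extension in the note is not shown here. A sketch:

```python
def prism(rows, labels, target):
    """Learn rules (lists of (column, value) tests) that cover `target` rows."""
    data = [(r, l) for r, l in zip(rows, labels)]
    rules = []
    while any(l == target for _, l in data):
        covered, rule = data, []
        # grow one rule: keep adding the test with the highest accuracy p/t
        while any(l != target for _, l in covered):
            best, best_acc, best_p = None, -1, -1
            for col in range(len(rows[0])):
                if any(c == col for c, _ in rule):
                    continue                   # each column tested at most once
                for val in {r[col] for r, _ in covered}:
                    subset = [(r, l) for r, l in covered if r[col] == val]
                    p = sum(l == target for _, l in subset)
                    acc = p / len(subset)
                    if (acc, p) > (best_acc, best_p):
                        best, best_acc, best_p = (col, val), acc, p
            if best is None or best_p == 0:
                break                          # cannot refine this rule any further
            rule.append(best)
            covered = [(r, l) for r, l in covered if r[best[0]] == best[1]]
        rules.append(rule)
        # remove the rows this rule covers, then learn the next rule
        data = [(r, l) for r, l in data
                if not all(r[c] == v for c, v in rule)]
    return rules
```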
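The closing remark, that bagging, boosting, etc. are all "importance sampling", can be illustrated with the simplest case, bagging: each model is trained on a resampled view of the data and the ensemble votes. A sketch (here `learn` is any function that turns rows into a classifier callable):

```python
import random
from collections import Counter

def bag(train, learn, k=20):
    """Train k models, each on a bootstrap resample of the training data."""
    models = []
    for _ in range(k):
        sample = [random.choice(train) for _ in train]   # sample with replacement
        models.append(learn(sample))
    return models

def vote(models, x):
    """Majority vote of the ensemble on one example."""
    return Counter(m(x) for m in models).most_common(1)[0][0]
```

Boosting differs only in how the resampling weights are chosen: rows the current ensemble gets wrong are sampled more heavily, which is the "importance" part of the slogan.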