Write a learner that uses a bunch of cocomin variations and makes a combined estimate using a weighted average. Thinking about this, why not write a bagged version of all the learners in coseekmo and see how it works, rather than have them compete and use the winner exclusively?

For baggedCocomin (sketched below):
- Don't use a horizon cutoff, because every attribute should be considered.
- Vary the search (backward elimination, forward search).
- Vary the ordering (native, random, corr, dev, ...) // could do random multiple times here
- Vary the eval methods.

Another thought: this is really just bagging together a bunch of different attribute combinations, so we could try cocomost bagged for each attribute combination. If that works well, we could make a version that scales up to more attributes by using heuristics to decide which attributes not to consider (either ditch in all or keep in all), and then exhaustively bag the rest.

Another thought: in evaluation it would simplify things to evaluate against the strawman, so it would be either LC vs. New Method, or Basic Method (i.e. cocomin) vs. Complicated Method (i.e. bagged cocomin).
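A minimal sketch of the baggedCocomin idea, assuming each cocomin variant can be treated as a function from a training set to an estimator. The names `cocomin_variant`, `SEARCHES`, `ORDERINGS`, and `EVALS` are hypothetical placeholders, not COSEEKMO's actual API; the inverse-MMRE weighting is one assumed way to compute the weighted average.

```python
from itertools import product

# Hypothetical variant dimensions, mirroring the notes above.
SEARCHES  = ["backward_elim", "forward_search"]
ORDERINGS = ["native", "random", "corr", "dev"]
EVALS     = ["mmre", "pred25"]

def mmre(est, data):
    # Mean magnitude of relative error over (project, actual effort) pairs.
    return sum(abs(actual - est(p)) / actual for p, actual in data) / len(data)

def bagged_cocomin(train, cocomin_variant):
    # Train one bag member per (search, ordering, eval) combination, then
    # weight each member by its inverse training error so that variants that
    # fit the training data better dominate the combined estimate.
    members = []
    for search, order, evalf in product(SEARCHES, ORDERINGS, EVALS):
        est = cocomin_variant(train, search=search, ordering=order, evalf=evalf)
        members.append((1.0 / (mmre(est, train) + 1e-9), est))
    total = sum(w for w, _ in members)

    def estimate(project):
        # Combined estimate: weighted average over all bag members.
        return sum(w * e(project) for w, e in members) / total
    return estimate
```

Weighting by training error (rather than a plain mean) keeps the bag from being dragged down by variants that happen to fit the data badly, while still using every learner instead of crowning a single winner.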
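For the attribute-combination version, a sketch of the heuristic scale-up: pin attributes the heuristic says to keep in all bags, ditch the ones it says to drop from all bags, and exhaustively enumerate subsets of whatever is left. Using correlation with effort as the heuristic, and the 0.7 / 0.1 thresholds, are assumptions for illustration only.

```python
from itertools import combinations

def attribute_bags(attrs, corr, keep_at=0.7, ditch_at=0.1):
    # corr maps each attribute to its correlation with effort (assumed given).
    keep  = [a for a in attrs if abs(corr[a]) >= keep_at]   # keep in all bags
    ditch = [a for a in attrs if abs(corr[a]) <  ditch_at]  # ditch in all bags
    rest  = [a for a in attrs if a not in keep and a not in ditch]
    # Exhaustively bag the undecided attributes: 2^|rest| combinations.
    for r in range(len(rest) + 1):
        for combo in combinations(rest, r):
            yield keep + list(combo)
```

The exhaustive part still grows as 2^n, so the heuristic prefilter is what makes this workable once the attribute count climbs; each yielded subset would then get its own trained bag member.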
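And a sketch of the simplified evaluation: a straight head-to-head between exactly two methods (e.g. cocomin vs. baggedCocomin) using leave-one-out relative error, rather than a full tournament. `learn_a` and `learn_b` are hypothetical train-to-estimator functions; counting per-project wins is one assumed way to score the comparison.

```python
def head_to_head(data, learn_a, learn_b):
    # data is a list of (project, actual effort) pairs.
    wins_a = wins_b = ties = 0
    for i, (project, actual) in enumerate(data):
        train = data[:i] + data[i + 1:]              # leave one project out
        mre_a = abs(actual - learn_a(train)(project)) / actual
        mre_b = abs(actual - learn_b(train)(project)) / actual
        if   mre_a < mre_b: wins_a += 1
        elif mre_b < mre_a: wins_b += 1
        else:               ties   += 1
    return wins_a, wins_b, ties
```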