intro
	background (motivation)
		Model-based vs Model-lite reasoning
		Software quality optimization and effort estimation/reduction
		case-based reasoning, stopping at estimation
		need for simplicity, flexibility, and understanding
	current state of field
		COCOMO tuning variance
		Model dependency
		CBR and model-lite reasoning
	statement of thesis
	contribution (research questions)
		1.) Can W perform as well as model-based approaches?
		2.) Is W effective across a wide variety of datasets and goals?
		3.) Can we reduce data collection through discretization?
		4.) Are our recommendations reliable? (Stability)
	structure of this thesis
related work
	SEESAW
	standard CBR
design (implementation, optimization)
	contrast set learning
	W algorithm
	optimization (dropping KNN, discretization)
	???Meta-W (multi-runs, weighing recommendations by stability)
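The contrast set learning at the core of the design section can be sketched as ranking the (attribute, range) pairs that best separate desirable cases from the rest. A minimal illustration follows; the variable names, toy data, and the b²/(b+r) scoring heuristic are assumptions for illustration only, not the thesis's exact formulation:

```python
from collections import Counter

def rank_ranges(best, rest, top_k=3):
    """Contrast-set-style ranking: score each (attribute, value)
    range by how strongly it selects for 'best' cases over 'rest'.
    The score b^2/(b+r) is an assumed treatment-learning heuristic."""
    b_counts, r_counts = Counter(), Counter()
    for case in best:
        b_counts.update(case.items())   # count ranges seen in best cases
    for case in rest:
        r_counts.update(case.items())   # count ranges seen in rest cases
    scored = []
    for rng, b in b_counts.items():
        r = r_counts.get(rng, 0)
        scored.append((b * b / (b + r), rng))
    scored.sort(reverse=True)
    return [rng for _, rng in scored[:top_k]]

# toy usage: recommend ranges that appear mostly in low-effort projects
best = [{"tool": "high", "cplx": "low"}, {"tool": "high", "cplx": "nom"}]
rest = [{"tool": "low", "cplx": "high"}, {"tool": "low", "cplx": "nom"}]
print(rank_ranges(best, rest, top_k=1))  # -> [('tool', 'high')]
```

The top-ranked ranges become the recommended treatment; re-running this over multiple samples and weighing recommendations by how often they recur is the stability idea behind the tentative Meta-W item above.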
results (certification)
	W across multiple datasets
	W vs SEESAW (software quality optimization)
	W as an alternative to drastic changes
	Discretization's effect on performance
	Stability of W's recommendations (after optimization)
	???Meta-W performance
discussion (application)
	W as a flexible business tool
	W as a small statement on model-based vs model-lite
	CSL as a framework for reasoning
conclusion
	research questions
	future work