Paper on model vs case-based reasoning

Norvig: grudgingly acknowledges IBR, ignores "CBR"

pro: in our experience, users have found its justifications more satisfying than mathematics
simplicity, fewer assumptions, agnostic
anti-model: if local data doesn't fit the model
no model calibration

model-based pro: if you have the model but not the data, you can still use the model
models can show general trends, and extrapolate trends, nice abstractions
takes long to build the model, but once you have it you're set.
Experts can inject their own expertise (Data + intuition)

09nodata
	beam: local calibration
	model variance
	
	
runtimes: meh if data small


Section 2: general debate on model vs instance

Conclusion: comes back to section 2 with specific result points

======
Start with Shepperd:
Shepperd (why CBR is great): 3.1.2: these induced prediction systems are model-based or instance-based
FSS paragraph: implicit range selection
"History repeats itself, but not exactly"

3.4: uncertainty: main reason for not using model-based: slopes are insane (tunings)
     avoid our thing with variance

Calibration

COCOMO critique: main limiting factor...
=====

teak paper: world's shortest description of CBR. 
     GAC v3
     	 ABE: ABE0, (text for stealing)

=====
drastic code for seesaw: 
local search: see drastic paper for shortest description of seesaw
Selman and Mitchell, 1992, GSAT
smallest possible change for best possible error; half the time it does random mutation
seesaw: MaxWalkSat (conjunctive normal form)
	vector extreme ranges (highs/lows)
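The GSAT/WalkSAT notes above can be sketched as a generic loop. This is a minimal, hypothetical plain-WalkSAT over boolean CNF (integer-literal clauses are an assumption of this sketch); it is not SEESAW's MaxWalkSat variant, only the underlying idea: smallest possible change (one flip) for the best score, with a random mutation half the time.

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p_random=0.5):
    """WalkSAT-style local search sketch.

    clauses: list of clauses; each clause is a list of int literals,
             where literal i > 0 means var i is True, i < 0 means False.
    """
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

    def sat(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def score():
        return sum(sat(c) for c in clauses)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                     # all clauses satisfied
        clause = random.choice(unsat)
        if random.random() < p_random:
            v = abs(random.choice(clause))    # random mutation step
        else:
            # greedy step: try each one-variable flip, keep the best score
            best = None
            for lit in clause:
                var = abs(lit)
                assign[var] = not assign[var]
                s = score()
                assign[var] = not assign[var]
                if best is None or s > best[0]:
                    best = (s, var)
            v = best[1]
        assign[v] = not assign[v]
    return None                               # budget exhausted, no solution
```

MaxWalkSat generalizes this by attaching weights to clauses and maximizing total satisfied weight rather than requiring all clauses hold.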

value v2, shortest description of nova, others
=======

8-10 pages (10 max)

reuse has good quartiles:
      rowcolor, (colortbl package)
      wisp/var/andres/reuse/paper1/results2.tex
      \baselinestretch \scriptsize (make charts smaller)
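The chart-shrinking notes above can be sketched as follows; a minimal LaTeX fragment assuming the colortbl package (loaded via xcolor's table option), with placeholder entries rather than the real numbers from results2.tex:

```latex
\documentclass{article}
\usepackage[table]{xcolor}  % loads colortbl, which provides \rowcolor
\begin{document}
% \scriptsize shrinks the results chart, per the \baselinestretch note
{\scriptsize
\begin{tabular}{lrr}
\rowcolor{lightgray}
method & 1st quartile & 3rd quartile \\
reuse  & \ldots       & \ldots       \\
\end{tabular}}
\end{document}
```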
=====

start at results:

refs for seesaw:
seesaw comparison, as reported by Green and Menzies, ASE 2009 (me09i)
seesaw started at ASE 2007 with ours (me07g)
used by Orrego, "relative merits of reuse"
ours also: 2009 PROMISE: BFC paper

one thing to write: these papers haven't been faulted
as said at the beginning of the ASE paper: the previous work only tested against itself.

delta from last w paper:
auto stopping rule
multiple goals (optimizing effort can be bad)

abstract 5/28
paper 6/4

intro:
method overload: so many ways to build predictive models

talk about how machine learning research has been too successful: we're drowning in choice
     Are there inherent properties that let us pick one thing over another?
     	 is it possible to declare some better than others?
	    can't really fully answer, but here are some results
	    	  use model-based if your local data doesn't exist
     general -> particular

Research questions: conclusion answers the research questions
	 Ekrem's rhetorical device: ASE-2010-v3

results:
seesaw vs w
mean and variance; effort, defects, months
about 1.5 pages of result charts

One of the ongoing issues in effort estimation is model vs instance. While we have no statement on the general case, we offer the following comparison.

background: model vs CBR in general, end with 3 research questions


dataset size shouldn't matter: see 06 deviations

start out high-level and gentle: then descend into the technical


may 12-19
may 25-30