DEVELOPMENT NOTES
-----------------
16 Jan 2003	Paul Gettings

Testing with synthetic data, everything works just as it should, since
the waveform is clean.

Compute time is <1 min for a shot with 168 channels, so we can do _LOTS_
of runs if we need to.

With real field data, life is harder-
  Waveform is critical, and clipping in the data is an evil thing.
  Once the data starts clipping, everything goes straight to hell in
  the fits.  Why can't geophones and seismometers have infinite dynamic
  range?

  Length of the wavelet matters a lot.  Probably need 20+ ms of waveform
  for good-ish results.

  Since waveforms near the shot point are clipped (AAARRGGGG), we get
  better results by using an offset point and then picking the
  near-source points by hand....
  	Even adjacent near-source traces don't fit well, because the
  	clipping destroys too much information for the fitting.

  Amplitude falls off like 1/r^2, as expected for a homogeneous
  half-space, etc.
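The decay we expect, as a quick sketch (function and argument names here are made up for illustration, not from the code):

```python
def predicted_amplitude(a_ref, r_ref, r):
    """Scale an amplitude a_ref observed at offset r_ref to offset r,
    assuming amplitude falls off like 1/r^2 (homogeneous half-space)."""
    return a_ref * (r_ref / r) ** 2

# Doubling the offset should quarter the amplitude:
# predicted_amplitude(1.0, 10.0, 20.0) -> 0.25
```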

  Getting the expected velocity right is a really, really good thing.
  Limits on the max/min are less important, since they just window the
  data to cut compute time.

  Threshold needs to be set "high", so we don't keep bad picks.
    Threshold depends on weighting factors and data....

  Need to use all 3 terms: r, dt, da
  	Some waveforms fit nicely, but have large dt and/or da.
  	Still keep more weight on r, but don't ignore da or dt
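Roughly what I mean by combining all three terms (the exact functional form, weights, and names below are illustrative, not what's in the code; the da decay autoscaling by predicted amplitude is per the note further down):

```python
import math

def pick_score(r, dt, da, a_pred,
               w_r=1.0, w_dt=0.3, w_da=0.3, c_dt=6.0, c_da=0.5):
    """Weighted combination of the three fit terms.
    r      : waveform fit quality, normalized to [0, 1]
    dt, da : time (ms) and amplitude residuals
    a_pred : predicted amplitude -- the da decay autoscales with it
    Weights keep r dominant but don't ignore dt or da."""
    f_dt = math.exp(-abs(dt) / c_dt)
    f_da = math.exp(-abs(da) / (c_da * a_pred))
    return (w_r * r + w_dt * f_dt + w_da * f_da) / (w_r + w_dt + w_da)
```

A perfect fit (r = 1, zero residuals) scores 1.0; a nice waveform fit with large dt or da gets pulled down instead of slipping through.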

  This is not going to remove the human, but everyone expected that.

Each experiment, possibly each shot, will require tweaking velocities
and weights, I suspect.

Perhaps need to subset the data into a couple of chunks - using trace 64
as the known pick, we get good results between ~51 and 77, with
increasingly worse results as the offset from the known pick increases.
  So, cut the data into chunks of ~25-50 traces, pick a known in each
  chunk, and autopick.  Have to break the data apart for near-source
  shots anyway...
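Chunking sketch (chunk_size is a made-up knob, sitting in the ~25-50 range above):

```python
def chunk_traces(n_traces, chunk_size=32):
    """Split trace indices 0..n_traces-1 into chunks of ~chunk_size traces.
    Each chunk then gets one hand-picked known trace to anchor the autopick."""
    return [range(s, min(s + chunk_size, n_traces))
            for s in range(0, n_traces, chunk_size)]
```

For the 168-channel shot this gives 6 chunks, the last one short.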

      LATE 16 JAN 2003

NOTE: need to tweak the decay constants for da and dt:
  da constant now autoscales for each trace; the parameter is multiplied
  by the predicted amplitude.

  dt should be set so that dt <~ 3 ms is not penalized much, which we
  can do by setting the constant to ~6.  Then dt = 3 ==> fn is
  exp(-1/2), which is not too small.
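So the dt weighting is just (sketch; names made up):

```python
import math

def dt_weight(dt_ms, c=6.0):
    """Down-weight a pick by its time residual.  With c ~ 6,
    dt = 3 ms gives exp(-3/6) = exp(-1/2) ~ 0.61 -- not too small."""
    return math.exp(-abs(dt_ms) / c)
```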

Perhaps also look at doing 2+ rounds of fitting-
  take dt:
  	if dt < 0 and no acceptable fit, reset the end time to fit
  	time - 1 sample interval and try again
  	if dt > 0, reset the start time to fit time + 1 sample
  	interval and try again
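Sketch of that retry logic (the fit() interface here is hypothetical, just to show the window-shrinking idea):

```python
def refit_window(fit, t_start, t_end, dt_sample, threshold):
    """One retry round.  fit(t0, t1) is a hypothetical callable returning
    (pick_time, dt, score) for a search window [t0, t1]."""
    pick_t, dt, score = fit(t_start, t_end)
    if score < threshold:  # no acceptable fit: shrink window and retry
        if dt < 0:
            # fit came in early: reset end time to fit time - 1 sample
            t_end = pick_t - dt_sample
        else:
            # fit came in late: reset start time to fit time + 1 sample
            t_start = pick_t + dt_sample
        pick_t, dt, score = fit(t_start, t_end)
    return pick_t, score
```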

Also, could rebuild the wavelet with each fit, so we are always fitting
the wavelet of the last trace to the current trace.  This requires
re-ordering the data loop so we move from the known pick towards both
ends of the line....
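The re-ordered loop would visit traces something like this (sketch):

```python
def outward_order(n_traces, known):
    """Order trace indices so we walk from the known pick out toward
    both ends of the line, one step on each side at a time, so each
    trace can inherit the rebuilt wavelet of its just-fit neighbor."""
    order = []
    lo, hi = known - 1, known + 1
    while lo >= 0 or hi < n_traces:
        if hi < n_traces:
            order.append(hi)
            hi += 1
        if lo >= 0:
            order.append(lo)
            lo -= 1
    return order
```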

