One dimensional kCSD (week 3)

This week I am finishing the kCSD implementation that works with data from 1D measurements, moving the code to the main repository, and writing tests.

Some thoughts from the last days:

  1. I thought there were some problems with cross-validation in 1D pykCSD, because it was returning high regularization parameters for model data, where it should return very low values. My mentor, Daniel, suggested that my model sources or potentials could be wrong. And yeah [1], it turned out that I was indeed calculating the initial potentials incorrectly. The correct relation in 1D is:

    V(x) = 1/(2σ) ∫ [ √((x − x')² + R²) − |x − x'| ] C(x') dx'

    This is derived assuming cylindrical symmetry of the sources (R is the cylinder radius, σ the conductivity), and C(x') is the measured CSD.
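As a sanity check, the forward relation V(x) = 1/(2σ) ∫ [√((x − x')² + R²) − |x − x'|] C(x') dx' can be evaluated numerically for a model CSD. This is only a sketch of that integral on a grid; the function and parameter names are illustrative, not the pykCSD API.

```python
import numpy as np

def potential_1d(x_obs, x_src, csd, R=0.1, sigma=1.0):
    """Potentials at x_obs from a CSD sampled on the grid x_src,
    using the 1D forward relation under cylindrical symmetry.
    (Illustrative sketch, not the pykCSD implementation.)"""
    dx = x_src[1] - x_src[0]                      # source grid spacing
    d = np.abs(x_obs[:, None] - x_src[None, :])   # pairwise distances
    green = np.sqrt(d ** 2 + R ** 2) - d          # kernel of the integral
    return green @ csd * dx / (2.0 * sigma)
```

For a positive CSD bump, the computed potential is positive everywhere and largest near the bump, which is an easy property to eyeball when debugging model potentials.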

  2. What is the added value so far?

    A more general cross-validation scheme. The existing script used something like randomized leave-one-out cross-validation (which, confusingly, was described in its comments as k-fold cross-validation). My current version accepts any combination of indices (LeaveOneOut, KFold, ShuffleSplit, and so on), which can be obtained from the index generators in the scikit-learn cross_validation module.

  3. Looking more deeply into the 2D kCSD code, it turns out that the 1D and 2D implementations have more in common than I originally thought. If I understand it correctly, only the basis function management changes between the different dimensionalities. Everything else is based on kernel operations, which look the same in both 1D and 2D.
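This shared structure can be illustrated with a toy sketch: the kernel construction below is dimension-agnostic, and only the basis function plugged into it differs. (This is a simplification with made-up names; real kCSD builds kernels from basis *potentials* obtained through the forward model, not from the raw source basis as here.)

```python
import numpy as np

def gauss_1d(x, center, width):
    # 1D Gaussian source basis function
    return np.exp(-((x - center) ** 2) / (2 * width ** 2))

def gauss_2d(pos, center, width):
    # 2D Gaussian source basis function; pos has shape (..., 2)
    d2 = np.sum((pos - center) ** 2, axis=-1)
    return np.exp(-d2 / (2 * width ** 2))

def kernel_matrix(measure_pos, basis_centers, basis, width):
    # B[i, j] = j-th basis function evaluated at the i-th position;
    # the kernel K = B B^T has the same form in any dimensionality
    B = np.stack([basis(measure_pos, c, width) for c in basis_centers],
                 axis=1)
    return B @ B.T
```

The dimensionality lives entirely in `gauss_1d` versus `gauss_2d`; `kernel_matrix` and everything downstream of the kernel never need to know it.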

  [1] I think everything Daniel has told me so far has turned out to be true and helpful :P