These are chat archives for menpo/menpo

24th May 2014
Piotr Bar
@piotrbar
May 24 2014 21:04
ok, I'm here
James Booth
@jabooth
May 24 2014 21:04
hey
So the SDM actually just holds onto smaller Regressor objects
one Regressor per level
the actual regression is currently provided as a closure
you'll have to check, but I think we normally use mlr() by default
import numpy as np


def mlr(X, T):
    r"""
    Multivariate Linear Regression.

    Parameters
    ----------
    X : numpy.array
        The regression features used to create the coefficient matrix.
    T : numpy.array
        The shapes differential that denotes the dependent variable.

    Returns
    -------
    mlr_fitting : function/closure
        The closure of the regression method.
    """
    # solve the normal equations for the coefficient matrix R,
    # symmetrising X^T X first to guard against numerical asymmetry
    XX = np.dot(X.T, X)
    XX = (XX + XX.T) / 2
    XT = np.dot(X.T, T)
    R = np.linalg.solve(XX, XT)

    def mlr_fitting(x):
        # R is reachable only through this closure
        return np.dot(x, R)

    return mlr_fitting
the problem for you is that the mlr_fitting(x) function that is returned is a closure that enjoys access to the R variable
R is the matrix that you want
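(strictly, you could fish it out of the closure cells -- a quick CPython-only sketch, fragile and just to illustrate where R lives:

fit = mlr(X, T)
R = fit.__closure__[0].cell_contents  # the one captured free variable is R

but that's not the way I'd do it)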
James Booth
@jabooth
May 24 2014 21:09
I would suggest you change out this closure for a callable, so that you can get access to R afterwards
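something like this maybe (a rough sketch, untested -- the MLRRegression name is just mine):

import numpy as np


class MLRRegression(object):
    r"""Callable drop-in for mlr() that keeps hold of R."""

    def __init__(self, X, T):
        # identical maths to mlr(), but R is stored on the instance
        XX = np.dot(X.T, X)
        XX = (XX + XX.T) / 2
        XT = np.dot(X.T, T)
        self.R = np.linalg.solve(XX, XT)

    def __call__(self, x):
        # same behaviour as the old mlr_fitting(x) closure
        return np.dot(x, self.R)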
Piotr Bar
@piotrbar
May 24 2014 21:11
yeah, mlr is the default. ok I'll change it then and get a file with R matrices
James Booth
@jabooth
May 24 2014 21:12
class SDMTrainer(SDTrainer):
    def __init__(self, regression_type=mlr, regression_features=sparse_hog,
                 patch_shape=(16, 16), feature_type=None, n_levels=3,
                 downscale=1.5, pyramid_on_features=False, noise_std=0.04,
                 rotation=False, n_perturbations=10,
                 normalization_diagonal=None, interpolator='scipy'):
        # check regression features
        regression_features_list = self.check_regression_features(
            regression_features, n_levels)
        super(SDMTrainer, self).__init__(
            regression_type=regression_type,
            regression_features=regression_features_list,
            feature_type=feature_type, n_levels=n_levels, downscale=downscale,
            pyramid_on_features=pyramid_on_features, noise_std=noise_std,
            rotation=rotation, n_perturbations=n_perturbations,
            interpolator=interpolator)
        self.patch_shape = patch_shape
        self.normalization_diagonal = normalization_diagonal
        self.pyramid_on_features = pyramid_on_features
it's here you'll want to switch out mlr
should return a callable object
and on that object you can just store self.R
and access it after SDM training
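i.e. just pass the class in where mlr went (sketch, using the MLRRegression class from before):

trainer = SDMTrainer(regression_type=MLRRegression)
# train as usual -- each level's regressor will now expose .R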
save them out and you're good!
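e.g. once you've got hold of the trained regressors (a sketch -- it assumes you've collected the MLRRegression instances into a list, one per level; exactly where the trainer keeps its Regressor objects you'll have to check):

import numpy as np

# regressors: list of MLRRegression instances, one per pyramid level
np.savez('sdm_R_matrices.npz',
         **dict(('level_%d' % i, reg.R) for i, reg in enumerate(regressors)))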
Make sense?
Piotr Bar
@piotrbar
May 24 2014 21:13
ok, I understand
thanks a lot :)
James Booth
@jabooth
May 24 2014 21:14
np! Let me know how you get on :)
Piotr Bar
@piotrbar
May 24 2014 21:14
ok, I'll keep you posted