For all Mahalanobis metric learners, we should be able to reduce the dimension. For those that optimize the transformation matrix L, this can be done explicitly by setting the matrix at init to have shape (num_dims, n_features). For the others (that optimize the metric M), we could provide the user with a num_dims argument which could be set to:
a number k < n_features: in this case we would do the eigendecomposition of M and only keep the k components with the highest eigenvalues
similar to scikit-learn's PCA, it could also be a value between 0 and 1 (say, a threshold on the explained eigenvalue ratio), or even a string for some custom strategy (for instance the elbow rule); a sketch of this post-processing is given below
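A minimal sketch of what such post-processing could look like for the M-optimizing learners (the helper name `components_from_metric` and the exact handling of num_dims are hypothetical, not an existing metric-learn API):

```python
import numpy as np

def components_from_metric(M, num_dims=None):
    """Hypothetical helper: build a (num_dims, n_features) transformation L
    from a learned Mahalanobis matrix M by keeping the eigenvectors with the
    largest eigenvalues, so that L.T @ L approximates M."""
    # M is symmetric PSD, so use eigh; eigenvalues come back in ascending order
    eigvals, eigvecs = np.linalg.eigh(M)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]

    if num_dims is None:
        k = M.shape[0]  # keep all components
    elif isinstance(num_dims, float) and 0.0 < num_dims < 1.0:
        # PCA-like behaviour: keep enough components to reach this eigenvalue ratio
        ratio = np.cumsum(eigvals) / eigvals.sum()
        k = int(np.searchsorted(ratio, num_dims) + 1)
    else:
        k = int(num_dims)  # an explicit integer k < n_features

    # each row of L is sqrt(eigenvalue) * eigenvector
    L = np.sqrt(np.maximum(eigvals[:k], 0))[:, np.newaxis] * eigvecs[:, :k].T
    return L
```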
This is the current state in the package:
All metric learners that use transformer_from_metric (Covariance, LSML, MMC, and SDML) do not have a num_dims argument
All the others optimize L explicitly and have a num_dims argument (LFDA, MLKR, NCA, RCA), except LMNN, which could have one
Also, should we replace num_dims by n_components, as is the case for scikit-learn linear transformers? This is also what we did in this PR on NCA in scikit-learn: scikit-learn/scikit-learn#10058
This is also related to #124, since we should check that, in the case of a custom matrix used to initialize the explicit transformer, its shape is consistent with the desired dimension
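A minimal sketch of such a consistency check (the function name `_check_init_shape` and the error message are hypothetical, for illustration only):

```python
import numpy as np

def _check_init_shape(init, n_features, num_dims=None):
    """Hypothetical validation: ensure a user-supplied init matrix is
    compatible with the input dimension and the requested output dimension."""
    init = np.asarray(init)
    expected_rows = n_features if num_dims is None else num_dims
    if init.shape != (expected_rows, n_features):
        raise ValueError(
            "The init transformation has shape {}, but a shape of ({}, {}) is "
            "expected given num_dims={} and n_features={}.".format(
                init.shape, expected_rows, n_features, num_dims, n_features))
    return init
```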
We should definitely do this for all algorithms learning the transformation matrix L
For M I am not sure, because it would not change the learning algorithm: it is only post-processing the solution, the impact on the quality can be very large, and this is hard for the user to understand. A better way is to add trace regularization to encourage the learned M to be low rank (in which case one can safely ignore the eigenvectors corresponding to eigenvalues equal to 0). In this case one cannot choose num_dims explicitly, but only indirectly by varying the strength of the regularization; a sketch is given below.
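A hedged sketch of the post-processing this suggests, assuming M has already been learned with a trace-regularized objective of the form min_M loss(M) + reg * trace(M) (the objective and the helper below are illustrative, not a metric-learn implementation):

```python
import numpy as np

def low_rank_components(M, tol=1e-10):
    """Hypothetical helper: once trace regularization has driven some
    eigenvalues of M to (near) zero, discard the corresponding eigenvectors.
    The resulting rank gives the output dimension implicitly; it is controlled
    by the regularization strength rather than by an explicit num_dims."""
    eigvals, eigvecs = np.linalg.eigh(M)
    keep = eigvals > tol  # drop directions with (near-)zero eigenvalues
    L = np.sqrt(eigvals[keep])[:, np.newaxis] * eigvecs[:, keep].T
    return L  # shape (rank, n_features)
```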