Laundry List of Losses and Penalties #1

Open · 6 of 20 tasks
ahwillia opened this issue Jul 1, 2016 · 1 comment


ahwillia commented Jul 1, 2016

In our other comically long conversation (JuliaML/META#8 (comment)), we came up with the following types:

```julia
abstract Cost
    abstract Loss <: Cost
        abstract SupervisedLoss <: Loss
        abstract UnsupervisedLoss <: Loss
    abstract Penalty <: Cost
    abstract DiscountedFutureRewards <: Cost
```
  • Cost is a function f(X, y, c) where c can be anything
  • Loss is a function f(X, y, g(X)), so c is some function of the data
  • SupervisedLoss doesn't need X, so it's a function f(y, g(X))
  • UnsupervisedLoss doesn't need y, so it's a function f(X, g(X))
  • Penalty doesn't need X or y, so it's a function f(W), where W are some model params/weights
  • DiscountedFutureRewards... @tbreloff, can you edit this and fill it in? (A sketch of how a concrete loss plugs into this hierarchy follows this list.)
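
To make the contracts concrete, here's a minimal sketch of how one concrete loss could slot into the hierarchy (`MyL2DistLoss` and the `value` function here are hypothetical stand-ins, not a settled API):

```julia
abstract Cost
abstract Loss <: Cost
abstract SupervisedLoss <: Loss

# Hypothetical concrete type; the real L2DistLoss lives in Losses.jl.
immutable MyL2DistLoss <: SupervisedLoss end

# A SupervisedLoss never touches X directly: it only sees the
# target y and the prediction t = g(X).
value(l::MyL2DistLoss, y, t) = abs2(t - y)

value(MyL2DistLoss(), 1.0, 0.5)  # => 0.25
```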

IMPORTANT: everybody should edit this comment to check off Costs as they are done, and also to add new Costs.

This post is a work in progress...

Supervised Losses: (implemented in Losses.jl)

  • L1 error, L1DistLoss
  • L2 error, L2DistLoss
  • LP error, LPDistLoss
  • Logit distance, LogitDistLoss, L(y, t) = -ln(4 * exp(y - t) / (1 + exp(y - t))²)
  • Logistic loss, L(y, t) = log(1 + exp(-t * y))
  • Hinge loss
  • Multinomial logit, i.e. softmax
  • Poisson loss
  • Huber loss
  • Cosine distance loss
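
For quick reference, plain-function sketches of a few of these (my own shorthand, with y ∈ {-1, +1} for the margin-based ones; argument conventions in Losses.jl may differ):

```julia
# Margin-based losses: depend on the margin y * t, with y ∈ {-1, +1}.
logistic_loss(y, t) = log1p(exp(-y * t))   # log(1 + exp(-y*t))
hinge_loss(y, t)    = max(0, 1 - y * t)

# Distance-based loss: depends on the residual y - t; δ is the cutoff
# between the quadratic and linear regimes.
huber_loss(y, t, δ=1.0) = abs(y - t) <= δ ?
    0.5 * abs2(y - t) : δ * (abs(y - t) - 0.5 * δ)

logistic_loss(1, 0.8)  # ≈ 0.371
hinge_loss(1, 0.8)     # = 0.2
```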

Unsupervised Losses:

  • Density level detection loss?

Penalties: (implemented in ObjectiveFunctions perhaps?)

  • L2 penalty
  • L1 penalty
  • L0 penalty? (nonconvex)
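
Rough sketch of how these might evaluate on a weight vector w (function names are made up for illustration, not an ObjectiveFunctions API):

```julia
# Penalties act only on the model parameters w; λ is the strength.
l1_penalty(w, λ) = λ * sumabs(w)         # convex, promotes sparsity
l2_penalty(w, λ) = 0.5 * λ * sumabs2(w)  # convex, smooth shrinkage
l0_penalty(w, λ) = λ * countnz(w)        # nonconvex: counts nonzeros

w = [0.0, 1.5, -2.0]
l1_penalty(w, 0.1)  # ≈ 0.35
```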

Constraints: These are really just penalties: the penalty is "infinitely bad" when the constraint is violated (see the sketch after this list).

  • Box constraint (lower[i] < x[i] < upper[i] for all i)
  • Non-negative constraint
  • Unit length constraint
  • Orthogonality constraint (e.g. in PCA)... Not sure about this one.
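
A minimal sketch of that "infinitely bad" encoding, assuming a constraint evaluates to 0.0 when satisfied and Inf when violated (names are illustrative):

```julia
# Constraints as penalties: zero cost inside the feasible set, Inf outside.
box_penalty(x, lower, upper) =
    all(lower .<= x) && all(x .<= upper) ? 0.0 : Inf
nonneg_penalty(x)            = all(x .>= 0) ? 0.0 : Inf
unitlen_penalty(x, tol=1e-8) = abs(norm(x) - 1) <= tol ? 0.0 : Inf
```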

Combinations of Penalties: I would like an easy/automatic way of combining these instead of defining each combination by hand, but this may not be feasible. (One possibility is sketched after this list.)

  • Elastic Net
  • Non-negative constraint with L1 penalty (good for sparsity)
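
If penalties are plain functions (or callable types), the simplest automatic combination is just a sum; e.g. elastic net in terms of the hypothetical penalties sketched above:

```julia
# Elastic net = weighted L1 + L2; a `+` method on Penalty types
# could make such combinations automatic.
elastic_net(w, λ1, λ2) = l1_penalty(w, λ1) + l2_penalty(w, λ2)
```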

Discounted Future Rewards:

@tbreloff -- halp!

Some sources:


Evizero commented Jul 1, 2016

An example of an unsupervised loss is the density level detection (DLD) loss, L(x, t) = 1_{(-∞, 0)}((g(x) - p) · sign(t)), where t = f(x) and g is the unknown density. Obviously this loss cannot be computed because g is unknown, but that's what surrogate losses are for. Interestingly, a lot of surrogate losses for DLD are supervised losses, as you observed.

However, even then I think the types would be different to implement in practice, no? Here x is in general a matrix, while in the supervised case y wouldn't be; and in the cases where y would be a matrix (multivariate regression), it would be interpreted differently, I think.

All in all, I am not sure about this, as I haven't spent a lot of time dwelling on unsupervised losses. I simply wanted to keep the option open.

ahwillia changed the title from "Laundry List of" to "Laundry List of Losses and Penalties" on Jul 1, 2016