
BUG. LMNN loops when a bad update is calculated. #88

Closed
@all2187

Description


There seems to be a relatively large bug in the code for LMNN. In lines 157-164 we have:

# update step size
if delta_obj > 0:
  # we're getting worse... roll back!
  learn_rate /= 2.0
  df = df_old
  a1 = a1_old
  a2 = a2_old
  objective = objective_old

But within an iteration the objective is calculated as (lines 149-150):

objective = total_active * (1-reg)
objective += G.flatten().dot(L.T.dot(L).flatten())

Here, none of total_active, reg, G, or L depends on learn_rate. Since learn_rate is the only thing changed when the roll-back above occurs, the iteration following the roll-back produces exactly the same result as the one that triggered it. Hence, as soon as lines 157-164 are executed once, the algorithm just keeps halving the learning rate and recomputing the same values until the maximum iteration count is hit (as can be seen below):
[screenshot: iteration log showing the objective stuck at the same value while the learning rate halves each iteration until max_iter]
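The looping behavior can be reproduced with a stripped-down sketch (hypothetical names, not the actual metric-learn code): because the recomputed objective does not involve learn_rate, the roll-back branch fires on every subsequent iteration.

```python
# Hypothetical reproduction of the loop: the objective is recomputed from
# quantities that do not depend on learn_rate, so after one bad step the
# roll-back condition holds on every iteration until max_iter.
def run(max_iter=10):
    learn_rate = 1.0
    objective_old = 1.0
    rates = []
    for _ in range(max_iter):
        # stand-in for lines 149-150: none of the inputs involve learn_rate
        objective = 2.0
        delta_obj = objective - objective_old
        if delta_obj > 0:              # always true: same bad value each time
            learn_rate /= 2.0          # only learn_rate changes...
            objective = objective_old  # ...so the next pass repeats exactly
        rates.append(learn_rate)
    return rates

run(4)  # learn_rate halves every iteration: [0.5, 0.25, 0.125, 0.0625]
```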
The most likely fix is either to make the calculation of the new objective depend on the learning rate, or to also revert L to its previous value (i.e. the value it had at the start of the last iteration that did not trigger a roll-back).
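The second option can be sketched as follows (a minimal gradient-descent loop with hypothetical names, not the actual LMNN implementation): snapshotting and restoring the parameter matrix alongside the learning rate means the next step is taken from the last good point, so halving the rate actually changes the outcome.

```python
import numpy as np

# Minimal sketch of the proposed fix: on a bad step, roll back L as well
# as halving learn_rate, so the next gradient step differs from the one
# that triggered the roll-back.
def descend(objective, gradient, L, learn_rate=1.0, max_iter=100, tol=1e-9):
    obj = objective(L)
    for _ in range(max_iter):
        L_old, obj_old = L.copy(), obj    # snapshot the last good state
        L = L - learn_rate * gradient(L)  # take a step
        obj = objective(L)
        if obj > obj_old:                 # got worse: roll back everything
            learn_rate /= 2.0
            L, obj = L_old, obj_old       # restoring L avoids the loop the
                                          # issue describes
        else:
            learn_rate *= 1.01
        if learn_rate < tol:
            break
    return L, obj
```

With this change, a step that overshoots is retried from the same point with a smaller rate, and the objective genuinely decreases instead of being recomputed unchanged.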
