Add Losses #129


Merged: 27 commits, Nov 17, 2020
Changes from 4 commits

Commits (27)
c57a2e7  Merge pull request #3 from tensorflow/master (JimClarke5, Oct 8, 2020)
9cc2675  Initial checkin to rebase to Initialziers to pick up changes to ndarr… (JimClarke5, Oct 5, 2020)
2508f5e  Initial Checkin for losses (JimClarke5, Oct 8, 2020)
17e96b5  Fix reshape in sparseCategoricalCrossentropy() (JimClarke5, Oct 8, 2020)
ee1c48a  Apply various fixes to JavaDoc (JimClarke5, Oct 11, 2020)
287c96e  Change Tuple to LossTuple (JimClarke5, Oct 11, 2020)
642069c  Repair JavaDOx (JimClarke5, Oct 11, 2020)
249b651  Fixed AllAxis to hanlde dynamic shape when static shape rank is unknown. (JimClarke5, Oct 11, 2020)
794cfdc  change method name allAxis to allAxes (JimClarke5, Oct 11, 2020)
fb26c59  change private method binaryCrossentropy to binaryCrossentropyHelper (JimClarke5, Oct 13, 2020)
928ef06  Fixed squeezeOrExpandDimensions to make sure the updated labels, pred… (JimClarke5, Oct 13, 2020)
2bc54dd  Fix JavaDoc, (JimClarke5, Oct 27, 2020)
951443b  Fix unused imports and add @SuppressWarnings("unchecked") for casts. (JimClarke5, Oct 27, 2020)
ebac9e8  Add copyright (JimClarke5, Oct 29, 2020)
d8f3254  Add CastHelper and used that for all casts (JimClarke5, Oct 29, 2020)
02573b5  Fix JavaDoc, change snake case to camel case. (JimClarke5, Nov 9, 2020)
0bf49fe  Change class LossesImpl to LossesHelper (JimClarke5, Nov 11, 2020)
0eae9ee  Remove commented out JavaDoc (JimClarke5, Nov 12, 2020)
b211937  Changed method name from smoothLabelsBinaryX to smoothBinaryLabels, (JimClarke5, Nov 13, 2020)
3e0669e  Fixed JavaDoc for labelSmoothing (JimClarke5, Nov 13, 2020)
914f16f  Fixed JavaDoc to change label_smoothing to labelSmoothing. (JimClarke5, Nov 13, 2020)
7eefbb7  Fix formatting (JimClarke5, Nov 13, 2020)
b87ad16  replace label_smoothing with labelSmoothing. (JimClarke5, Nov 13, 2020)
c43cd21  Add copyright to test cases (JimClarke5, Nov 16, 2020)
4d9fd24  Fix copyright to attribute TensorFlow Authors. (JimClarke5, Nov 16, 2020)
d56d8d9  Fix typo on broadcast in JavaDoc (JimClarke5, Nov 16, 2020)
744e324  Fix typo on broadcast in JavaDoc (JimClarke5, Nov 16, 2020)
BinaryCrossentropy.java
@@ -0,0 +1,179 @@
package org.tensorflow.framework.losses;

import org.tensorflow.Operand;
import org.tensorflow.framework.losses.impl.LossesImpl;
import org.tensorflow.op.Ops;
import org.tensorflow.types.family.TNumber;

/**
* Computes the cross-entropy loss between true labels and predicted labels.
*
* <p>Use this cross-entropy loss when there are only two label classes (assumed to be 0 and 1). For
* each example, there should be a single floating-point value per prediction.
*
* <p>Standalone usage:
*
* <pre>
* Operand&lt;TFloat32&gt; labels =
* tf.constant(new float[][] {{0.f, 1.f}, {0.f, 0.f}});
* Operand&lt;TFloat32&gt; predictions =
* tf.constant(new float[][] {{0.6f, 0.4f}, {0.4f, 0.6f}});
* BinaryCrossentropy bce = new BinaryCrossentropy(tf);
* Operand&lt;TFloat32&gt; result = bce.call(labels, predictions);
* // produces 0.815
* </pre>
*
* <p>Calling with sample weight:
*
* <pre>
* Operand&lt;TFloat32&gt; sampleWeight = tf.constant(new float[] {1.f, 0.f});
* Operand&lt;TFloat32&gt; result = bce.call(labels, predictions, sampleWeight);
* // produces 0.458f
* </pre>
*
* <p>Using <code>SUM</code> reduction type:
*
* <pre>
* BinaryCrossentropy bce = new BinaryCrossentropy(tf, Reduction.SUM);
* Operand&lt;TFloat32&gt; result = bce.call(labels, predictions);
* // produces 1.630f
* </pre>
*
* <p>Using <code>NONE</code> reduction type:
*
* <pre>
* BinaryCrossentropy bce = new BinaryCrossentropy(tf, Reduction.NONE);
* Operand&lt;TFloat32&gt; result = bce.call(labels, predictions);
* // produces [0.916f, 0.714f]
* </pre>
*
*/
public class BinaryCrossentropy extends Loss {
public static final boolean FROM_LOGITS_DEFAULT = false;
public static final float LABEL_SMOOTHING_DEFAULT = 0.0f;
public static final Reduction REDUCTION_DEFAULT = Reduction.AUTO;

private final boolean fromLogits;
private final float labelSmoothing;

/**
* Creates a Binary Crossentropy Loss using {@link Class#getSimpleName()} as the loss name, {@link
* #FROM_LOGITS_DEFAULT} for fromLogits, {@link #LABEL_SMOOTHING_DEFAULT} for labelSmoothing and a
* Loss Reduction of {@link * Reduction#AUTO}
Contributor: Extraneous *

Contributor Author: Deleted

*
*
*
* @param tf the TensorFlow Ops
*/
public BinaryCrossentropy(Ops tf) {
this(tf, null, FROM_LOGITS_DEFAULT, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT);
}

/**
* Creates a Binary Crossentropy loss using {@link Class#getSimpleName()} as the loss name, {@link
* #FROM_LOGITS_DEFAULT} for fromLogits, and {@link #LABEL_SMOOTHING_DEFAULT} for labelSmoothing
*
* @param tf the TensorFlow Ops
* @param reduction Type of Reduction to apply to the loss.
*/
public BinaryCrossentropy(Ops tf, Reduction reduction) {
this(tf, null, FROM_LOGITS_DEFAULT, LABEL_SMOOTHING_DEFAULT, reduction);
}

/**
* Creates a Binary Crossentropy loss using {@link Class#getSimpleName()} as the loss name,
* labelSmoothing of {@link #LABEL_SMOOTHING_DEFAULT}, and a reduction of {@link #REDUCTION_DEFAULT}.
*
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
*/
public BinaryCrossentropy(Ops tf, boolean fromLogits) {
this(tf, null, fromLogits, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT);
}

/**
* Creates a Binary Crossentropy loss using labelSmoothing of {@link #LABEL_SMOOTHING_DEFAULT} a
* reduction of {@link #REDUCTION_DEFAULT}.
*
* @param tf the TensorFlow Ops
* @param name the name of the loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
*/
public BinaryCrossentropy(Ops tf, String name, boolean fromLogits) {
this(tf, name, fromLogits, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT);
}

/**
* Creates a Binary Crossentropy loss using {@link Class#getSimpleName()} as the loss name,
* and a reduction of {@link #REDUCTION_DEFAULT}.
*
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing A number in the range, [0, 1]. When 0, no smoothing occurs. When &gt; 0,
* compute the loss between the predicted labels and a smoothed version of the true labels,
* where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing
Collaborator: label_smoothing -> labelSmoothing, here and elsewhere in this file.

Contributor Author: OK

* correspond to heavier smoothing.
*/
public BinaryCrossentropy(Ops tf, boolean fromLogits, float labelSmoothing) {
this(tf, null, fromLogits, labelSmoothing, REDUCTION_DEFAULT);
}

/**
* Creates a Binary Crossentropy loss using a reduction of {@link #REDUCTION_DEFAULT}.
*
* @param tf the TensorFlow Ops
* @param name the name of the loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing A number in the range, [0, 1]. When 0, no smoothing occurs. When &gt; 0,
* compute the loss between the predicted labels and a smoothed version of the true labels,
* where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing
* correspond to heavier smoothing.
*/
public BinaryCrossentropy(Ops tf, String name, boolean fromLogits, float labelSmoothing) {
this(tf, name, fromLogits, labelSmoothing, REDUCTION_DEFAULT);
}

/**
* Creates a Binary Crossentropy loss
*
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing A number in the range, [0, 1]. When 0, no smoothing occurs. When &gt; 0,
* compute the loss between the predicted labels and a smoothed version of the true labels,
* where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing
* correspond to heavier smoothing.
* @param reduction Type of Reduction to apply to the loss.
*/
public BinaryCrossentropy(
Ops tf, boolean fromLogits, float labelSmoothing, Reduction reduction) {
this(tf, null, fromLogits, labelSmoothing, reduction);
}

/**
* Creates a Binary Crossentropy loss
*
* @param tf the TensorFlow Ops
* @param name the name of the loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing A number in the range, [0, 1]. When 0, no smoothing occurs. When &gt; 0,
* compute the loss between the predicted labels and a smoothed version of the true labels,
* where the smoothing squeezes the labels towards 0.5. Larger values of label_smoothing
* correspond to heavier smoothing.
* @param reduction Type of Reduction to apply to the loss.
*/
public BinaryCrossentropy(
Ops tf, String name, boolean fromLogits, float labelSmoothing, Reduction reduction) {
super(tf, name, reduction);
this.fromLogits = fromLogits;
this.labelSmoothing = labelSmoothing;
}

/** {@inheritDoc} */
@Override
public <T extends TNumber, U extends TNumber> Operand<T> call(
Operand<U> labels, Operand<T> predictions, Operand<T> sampleWeights) {
Operand<T> losses =
Losses.binaryCrossentropy(tf, labels, predictions, fromLogits, labelSmoothing);
return LossesImpl.computeWeightedLoss(tf, losses, getReduction(), sampleWeights);
Contributor: Inconsistency between accessing the superclass's tf directly and accessing its reduction via getReduction.

Contributor Author: Changed to getTF()

}
}
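For reference, the numbers in the class JavaDoc above can be reproduced with plain Java. This is a minimal sketch of the assumed arithmetic (not the PR's implementation, which lives in Losses.binaryCrossentropy and the weighted-loss helper): per-element BCE is -(y*ln(p) + (1-y)*ln(1-p)), averaged over the last axis per sample, then a weighted mean over the batch for the AUTO reduction.

```java
public class BceSketch {
    // Per-element binary cross-entropy, with clipping to avoid log(0).
    static double bce(double y, double p) {
        double eps = 1e-7;
        p = Math.min(Math.max(p, eps), 1 - eps);
        return -(y * Math.log(p) + (1 - y) * Math.log(1 - p));
    }

    // Mean BCE per sample, then sum of weighted per-sample losses / batch size.
    static double meanBce(double[][] labels, double[][] preds, double[] weights) {
        double total = 0;
        for (int i = 0; i < labels.length; i++) {
            double sample = 0;
            for (int j = 0; j < labels[i].length; j++) {
                sample += bce(labels[i][j], preds[i][j]);
            }
            total += (sample / labels[i].length) * weights[i];
        }
        return total / labels.length;
    }

    public static void main(String[] args) {
        double[][] labels = {{0, 1}, {0, 0}};
        double[][] preds = {{0.6, 0.4}, {0.4, 0.6}};
        System.out.printf("%.3f%n", meanBce(labels, preds, new double[] {1, 1})); // 0.815
        System.out.printf("%.3f%n", meanBce(labels, preds, new double[] {1, 0})); // 0.458
    }
}
```

The unweighted call reproduces the standalone-usage value 0.815, and the weights {1, 0} reproduce the sample-weight value 0.458.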
CategoricalCrossentropy.java
@@ -0,0 +1,219 @@
package org.tensorflow.framework.losses;

import org.tensorflow.Operand;
import org.tensorflow.framework.losses.impl.LossesImpl;
import org.tensorflow.op.Ops;
import org.tensorflow.types.family.TNumber;

/**
* Computes the crossentropy loss between the labels and predictions.
*
* <p>Use this crossentropy loss function when there are two or more label classes. We expect labels
* to be provided in a one_hot representation. If you want to provide labels as integers, please use
* {@link SparseCategoricalCrossentropy} loss. There should be <code># classes</code> floating point
* values per feature.
*
* <p>Standalone usage:
*
* <pre>
* Operand&lt;TFloat32&gt; labels =
* tf.constant(new float[][] {{0, 1, 0}, {0, 0, 1}});
* Operand&lt;TFloat32&gt; predictions =
* tf.constant(new float[][] {{0.05f, 0.95f, 0f}, {0.1f, 0.8f, 0.1f}});
* CategoricalCrossentropy cce = new CategoricalCrossentropy(tf);
* Operand&lt;TFloat32&gt; result = cce.call(labels, predictions);
* // produces 1.177
* </pre>
*
* <p>Calling with sample weight:
*
* <pre>
* Operand&lt;TFloat32&gt; sampleWeight = tf.constant(new float[] {0.3f, 0.7f});
* Operand&lt;TFloat32&gt; result = cce.call(labels, predictions, sampleWeight);
* // produces 0.814f
* </pre>
*
* <p>Using <code>SUM</code> reduction type:
*
* <pre>
* CategoricalCrossentropy cce = new CategoricalCrossentropy(tf, Reduction.SUM);
* Operand&lt;TFloat32&gt; result = cce.call(labels, predictions);
* // produces 2.354f
* </pre>
*
* <p>Using <code>NONE</code> reduction type:
*
* <pre>
* CategoricalCrossentropy cce =
* new CategoricalCrossentropy(tf, Reduction.NONE);
* Operand&lt;TFloat32&gt; result = cce.call(labels, predictions);
* // produces [0.0513f, 2.303f]
* </pre>
*/
public class CategoricalCrossentropy extends Loss {
public static final boolean FROM_LOGITS_DEFAULT = false;
public static final float LABEL_SMOOTHING_DEFAULT = 0.0f;
public static final Reduction REDUCTION_DEFAULT = Reduction.AUTO;
public static final int DEFAULT_AXIS = -1;

private final boolean fromLogits;
private final float labelSmoothing;
private final int axis;

/**
* Creates a categorical cross entropy Loss using {@link Class#getSimpleName()} as the loss name,
* {@link #FROM_LOGITS_DEFAULT} for fromLogits, {@link #LABEL_SMOOTHING_DEFAULT} for
* labelSmoothing, a Loss Reduction of {@link * Reduction#AUTO}, and an axis of {@link
Contributor: Extraneous *

Contributor Author: Removed all Extraneous @link *

* #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
*/
public CategoricalCrossentropy(Ops tf) {
this(tf, null, FROM_LOGITS_DEFAULT, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using {@link #FROM_LOGITS_DEFAULT} for fromLogits,
* {@link #LABEL_SMOOTHING_DEFAULT} for labelSmoothing, a Loss Reduction of {@link *
* Reduction#AUTO}, and an axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param name the name of this loss
*/
public CategoricalCrossentropy(Ops tf, String name) {
this(tf, name, FROM_LOGITS_DEFAULT, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using {@link Class#getSimpleName()} as the loss name,
* {@link #FROM_LOGITS_DEFAULT} for fromLogits, {@link #LABEL_SMOOTHING_DEFAULT} for
* labelSmoothing and an axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param reduction Type of Reduction to apply to loss.
*/
public CategoricalCrossentropy(Ops tf, Reduction reduction) {
this(tf, null, FROM_LOGITS_DEFAULT, LABEL_SMOOTHING_DEFAULT, reduction, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss {@link #FROM_LOGITS_DEFAULT} for fromLogits, {@link
* #LABEL_SMOOTHING_DEFAULT} for labelSmoothing, and an axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param name the name of this loss
* @param reduction Type of Reduction to apply to loss.
*/
public CategoricalCrossentropy(Ops tf, String name, Reduction reduction) {
this(tf, name, FROM_LOGITS_DEFAULT, LABEL_SMOOTHING_DEFAULT, reduction, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using {@link Class#getSimpleName()} as the loss name,
* {@link #LABEL_SMOOTHING_DEFAULT} for labelSmoothing, a Loss Reduction of {@link *
* Reduction#AUTO}, and an axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
*/
public CategoricalCrossentropy(Ops tf, boolean fromLogits) {
this(tf, null, fromLogits, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using {@link #LABEL_SMOOTHING_DEFAULT} for
* labelSmoothing, a Loss Reduction of {@link * Reduction#AUTO}, and a channel axis of {@link
* #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param name the name of this loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
*/
public CategoricalCrossentropy(Ops tf, String name, boolean fromLogits) {
this(tf, name, fromLogits, LABEL_SMOOTHING_DEFAULT, REDUCTION_DEFAULT, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using {@link Class#getSimpleName()} as the loss name,
* a Loss Reduction of {@link * Reduction#AUTO}, and a channel axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in [0, 1]. When 0, no smoothing occurs. When > 0, we compute the
Collaborator: Does labelSmoothing = 1.0 mean the true label distribution is set to 1/n? I'm not sure what "squeezing the values towards 0.5" means, because it would only be 0.5 in a binary problem.

Contributor Author (JimClarke5, Oct 25, 2020): Actually this is the comment for BinaryCrossentropy. It should be:

Float in <code>[0, 1]</code>. When <code>&gt; 0</code>, label values are smoothed, meaning the confidence on label values is relaxed, e.g. <code>label_smoothing=0.2</code> means that we will use a value of <code>0.1</code> for label <code>0</code> and <code>0.9</code> for label <code>1</code>.

I'll fix it.
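The smoothing arithmetic discussed in this thread can be sketched in plain Java (illustrative names, not the PR's API). The categorical form follows the Keras convention of squeezing labels toward the uniform 1/numClasses distribution, which also answers the 1/n question for labelSmoothing = 1.0:

```java
public class SmoothingSketch {
    // Binary smoothing: squeeze labels toward 0.5.
    static double smoothBinary(double label, double smoothing) {
        return label * (1 - smoothing) + 0.5 * smoothing;
    }

    // Categorical smoothing: squeeze toward the uniform 1/numClasses
    // distribution, so smoothing = 1.0 yields exactly 1/n for every class.
    static double smoothCategorical(double label, double smoothing, int numClasses) {
        return label * (1 - smoothing) + smoothing / numClasses;
    }

    public static void main(String[] args) {
        // labelSmoothing = 0.2: label 0 -> 0.1, label 1 -> 0.9
        System.out.printf("%.1f %.1f%n", smoothBinary(0, 0.2), smoothBinary(1, 0.2));
        // labelSmoothing = 1.0 over 3 classes: every class -> 1/3
        System.out.printf("%.4f%n", smoothCategorical(1, 1.0, 3));
    }
}
```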

* loss between the predicted labels and a smoothed version of the true labels, where the
* smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to
* heavier smoothing.
*/
public CategoricalCrossentropy(Ops tf, boolean fromLogits, float labelSmoothing) {
this(tf, null, fromLogits, labelSmoothing, REDUCTION_DEFAULT, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using a Loss Reduction of {@link * Reduction#AUTO},
* and a channel axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param name the name of this loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in [0, 1]. When 0, no smoothing occurs. When > 0, we compute the
* loss between the predicted labels and a smoothed version of the true labels, where the
* smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to
Collaborator: This one's still got the doc from BinaryCrossEntropy wrt label_smoothing. And it's snake_case.

Contributor Author: OK

* heavier smoothing.
*/
public CategoricalCrossentropy(Ops tf, String name, boolean fromLogits, float labelSmoothing) {
this(tf, name, fromLogits, labelSmoothing, REDUCTION_DEFAULT, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss using {@link Class#getSimpleName()} as the loss name
* and a channel axis of {@link #DEFAULT_AXIS}
*
* @param tf the TensorFlow Ops
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in [0, 1]. When 0, no smoothing occurs. When > 0, we compute the
* loss between the predicted labels and a smoothed version of the true labels, where the
Collaborator: Incorrect doc.

Contributor Author: OK

* smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to
* heavier smoothing.
* @param reduction Type of Reduction to apply to loss.
*/
public CategoricalCrossentropy(
Ops tf, boolean fromLogits, float labelSmoothing, Reduction reduction) {
this(tf, null, fromLogits, labelSmoothing, reduction, DEFAULT_AXIS);
}

/**
* Creates a categorical cross entropy Loss
*
* @param tf the TensorFlow Ops
* @param name the name of this loss
* @param fromLogits Whether to interpret predictions as a tensor of logit values
* @param labelSmoothing Float in [0, 1]. When 0, no smoothing occurs. When > 0, we compute the
* loss between the predicted labels and a smoothed version of the true labels, where the
* smoothing squeezes the labels towards 0.5. Larger values of label_smoothing correspond to
Collaborator: Incorrect doc.

Contributor Author: OK

* heavier smoothing.
* @param reduction Type of Reduction to apply to loss.
* @param axis The channels axis. <code>axis=-1</code> corresponds to data format 'Channels Last'
* and <code>axis=1</code> corresponds to data format 'Channels First'.
*/
public CategoricalCrossentropy(
Ops tf,
String name,
boolean fromLogits,
float labelSmoothing,
Reduction reduction,
int axis) {
super(tf, name, reduction);
this.fromLogits = fromLogits;
this.labelSmoothing = labelSmoothing;
this.axis = axis;
}

/** {@inheritDoc} */
@Override
public <T extends TNumber, U extends TNumber> Operand<T> call(
Operand<U> labels, Operand<T> predictions, Operand<T> sampleWeights) {
Operand<T> losses =
Losses.categoricalCrossentropy(tf, labels, predictions, fromLogits, labelSmoothing, axis);
return LossesImpl.computeWeightedLoss(tf, losses, getReduction(), sampleWeights);
Contributor: tf versus getReduction (but I'll stop mentioning these)

Contributor Author: Changed to getTF()

}
}
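As with the binary case, the JavaDoc numbers above can be reproduced with a plain-Java sketch of the assumed arithmetic (not the PR's implementation in Losses.categoricalCrossentropy): per-sample CCE over the class axis is -sum_j(y_j * ln(p_j)), and the reduction types combine the per-sample values.

```java
public class CceSketch {
    // Per-sample categorical cross-entropy, with clipping to avoid log(0).
    static double cce(double[] labels, double[] preds) {
        double eps = 1e-7;
        double loss = 0;
        for (int j = 0; j < labels.length; j++) {
            double p = Math.min(Math.max(preds[j], eps), 1 - eps);
            loss -= labels[j] * Math.log(p);
        }
        return loss;
    }

    public static void main(String[] args) {
        double[][] labels = {{0, 1, 0}, {0, 0, 1}};
        double[][] preds = {{0.05, 0.95, 0}, {0.1, 0.8, 0.1}};
        double first = cce(labels[0], preds[0]);  // -ln(0.95), about 0.0513
        double second = cce(labels[1], preds[1]); // -ln(0.1),  about 2.303
        System.out.printf("NONE: [%.4f, %.3f]%n", first, second);
        System.out.printf("AUTO: %.3f%n", (first + second) / 2); // 1.177
        System.out.printf("SUM:  %.3f%n", first + second);       // 2.354
    }
}
```

The per-sample values match the Reduction.NONE example, their mean matches the standalone-usage value 1.177, and their sum matches the Reduction.SUM value 2.354.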