
Add param to Diffing+bitmap to allow perceptualTolerance between pixels using Delta E 1994 #1


Closed
wants to merge 24 commits

Conversation


@JoelWhitney JoelWhitney commented Oct 22, 2024

Differences from swift-snapshot-testing

This is similar to what was implemented in swift-snapshot-testing, with some slight differences.

To match the established pattern of tolerance, this PR adds an optional perceptualTolerance parameter that lets you specify how perceptually different each pixel can be from the source pixel and still be considered a match. This parameter complements the existing tolerance parameter, which determines the percentage of pixels that must be considered matching in order to consider the whole image matching.

The default value of 0.0 means pixels must match perfectly (the existing equality check is performed). Any value greater than 0.0 and less than or equal to 1.0 is used when comparing each pixel, where 0.0 (0%) means no perceptual difference and 1.0 (100%) means completely opposite colors.

A suggested value of 0.01–0.02 roughly corresponds to the precision of the human eye.
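
The way the two parameters combine can be sketched as follows. This is a hypothetical helper for illustration, not the code in this PR; only the tolerance and perceptualTolerance semantics come from the description above:

```swift
// A pixel pair "matches" if its perceptual difference is within
// perceptualTolerance; the images match if the fraction of matching
// pixels is at least `tolerance`.
func imagesMatch(
    pixelDifferences: [Double],   // normalized Delta E (0...1) per pixel pair
    tolerance: Double,            // fraction of pixels that must match
    perceptualTolerance: Double   // allowed per-pixel perceptual difference
) -> Bool {
    let matching = pixelDifferences.filter { $0 <= perceptualTolerance }.count
    return Double(matching) / Double(pixelDifferences.count) >= tolerance
}
```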

Formula resource: http://www.brucelindbloom.com/index.html?Eqn_XYZ_to_Lab.html
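
For reference, the quantities involved, as defined on the linked page, are the XYZ → Lab conversion and the CIE94 (Delta E 1994) color difference with the graphic-arts weighting constants. A sketch in LaTeX:

```latex
% XYZ -> Lab, relative to reference white (X_n, Y_n, Z_n):
%   f(t) = t^{1/3}              if t > \epsilon
%        = (\kappa t + 16)/116  otherwise,
%   \epsilon = 216/24389, \kappa = 24389/27
L^{*} = 116\, f(Y/Y_n) - 16,\quad
a^{*} = 500\left(f(X/X_n) - f(Y/Y_n)\right),\quad
b^{*} = 200\left(f(Y/Y_n) - f(Z/Z_n)\right)

% CIE94 color difference, graphic-arts constants
% k_L = k_C = k_H = 1, K_1 = 0.045, K_2 = 0.015:
\Delta E^{*}_{94} = \sqrt{
  \left(\frac{\Delta L^{*}}{k_L S_L}\right)^{2}
+ \left(\frac{\Delta C^{*}_{ab}}{k_C S_C}\right)^{2}
+ \left(\frac{\Delta H^{*}_{ab}}{k_H S_H}\right)^{2}},
\quad S_L = 1,\; S_C = 1 + K_1 C^{*}_1,\; S_H = 1 + K_2 C^{*}_1
```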

| Perceptual tolerance value | Description |
| --- | --- |
| 0.0 | Must be exactly equal |
| ≤ 0.01 | Allows differences not perceptible by human eyes |
| ≤ 0.02 | Allows differences possibly perceptible through close observation |
| ≤ 0.1 | Allows differences perceptible at a glance |
| ≤ 0.5 | Allows differences when more similar than opposite |
| ≤ 1.0 | Allows any differences |
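
A minimal sketch of how such a per-pixel check can be implemented. This illustrates the technique, not the exact code in this PR; it assumes linear sRGB input, a D65 reference white, and that the 0–1 tolerance maps linearly onto the roughly 0–100 Delta E scale:

```swift
import Foundation

/// Lab color components.
struct Lab { var l, a, b: Double }

/// Convert linear sRGB components (0...1) to Lab via XYZ (D65 white point).
/// Matrix and constants follow brucelindbloom.com; gamma decoding of
/// nonlinear sRGB is assumed to have happened before this call.
func lab(fromLinearRGB r: Double, _ g: Double, _ b: Double) -> Lab {
    // Linear sRGB -> XYZ (D65)
    let x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    let y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    let z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    // XYZ -> Lab, normalized by the D65 reference white
    let (xn, yn, zn) = (0.95047, 1.0, 1.08883)
    func f(_ t: Double) -> Double {
        let epsilon = 216.0 / 24389.0, kappa = 24389.0 / 27.0
        return t > epsilon ? cbrt(t) : (kappa * t + 16) / 116
    }
    let (fx, fy, fz) = (f(x / xn), f(y / yn), f(z / zn))
    return Lab(l: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz))
}

/// CIE94 color difference (graphic-arts constants K1 = 0.045, K2 = 0.015).
func deltaE1994(_ lhs: Lab, _ rhs: Lab) -> Double {
    let dL = lhs.l - rhs.l
    let c1 = sqrt(lhs.a * lhs.a + lhs.b * lhs.b)
    let c2 = sqrt(rhs.a * rhs.a + rhs.b * rhs.b)
    let dC = c1 - c2
    let dA = lhs.a - rhs.a
    let dB = lhs.b - rhs.b
    // Delta H^2 = dA^2 + dB^2 - dC^2; clamp against negative rounding error.
    let dH2 = max(0, dA * dA + dB * dB - dC * dC)
    let sC = 1 + 0.045 * c1
    let sH = 1 + 0.015 * c1
    return sqrt(dL * dL + (dC / sC) * (dC / sC) + dH2 / (sH * sH))
}

/// Decides whether two pixels match under the given perceptual tolerance.
func pixelsMatch(_ a: Lab, _ b: Lab, perceptualTolerance: Double) -> Bool {
    perceptualTolerance == 0
        ? (a.l == b.l && a.a == b.a && a.b == b.b)  // existing exact check
        : deltaE1994(a, b) <= perceptualTolerance * 100
}
```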

Description of pointfreeco/swift-snapshot-testing#628:

Problem

The existing image-matching precision strategy is not good at differentiating between a significant difference in a relatively small portion of a snapshot and imperceptible differences across a large portion of the snapshot. For example, the snapshots below show that a 99.5% precision value fails an imperceptible background color change while allowing noticeable changes (text and color) to pass:

[Images: a reference snapshot; a failing snapshot with only an imperceptible background color difference; a passing snapshot with significant text and color changes]

Solution

This PR adds a new optional perceptualPrecision parameter for image snapshotting that determines how perceptually similar a pixel must be to its reference pixel to be considered a match. This parameter complements the existing precision parameter, which determines the percentage of pixels that must match in order to consider the whole image matching.
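
For reference, using the upstream parameter looks roughly like the sketch below; the exact swift-snapshot-testing signature may vary between versions:

```swift
import SnapshotTesting
import UIKit
import XCTest

final class ExampleTests: XCTestCase {
    func testView() {
        let view = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
        // precision: fraction of pixels that must match (existing parameter).
        // perceptualPrecision: how perceptually similar each pixel pair must
        // be, where 1.0 requires identical colors.
        assertSnapshot(
            matching: view,
            as: .image(precision: 0.995, perceptualPrecision: 0.98)
        )
    }
}
```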

This approach is similar to pointfreeco/swift-snapshot-testing#571 and pointfreeco/swift-snapshot-testing#580 but uses perceptual distance rather than the Euclidean distance between sRGB values. This is significant because the sRGB color space is not perceptually uniform: pairs of colors with the same Euclidean distance can have large perceptual differences. In the image below, the left and right colors of each row have the same Euclidean distance:

[Image: rows of color pairs that are equal Euclidean distances apart in sRGB yet differ widely in perceptual similarity]
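
To make the distinction concrete, the sketch below uses two illustrative color pairs (chosen for this example, not taken from the image above) that are the same Euclidean distance apart in sRGB. The green shift is far more visible than the blue shift because perceived luminance weights green much more heavily than blue:

```swift
import Foundation

struct RGB { var r, g, b: Double } // sRGB components in 0...1

func euclideanDistance(_ a: RGB, _ b: RGB) -> Double {
    sqrt(pow(a.r - b.r, 2) + pow(a.g - b.g, 2) + pow(a.b - b.b, 2))
}

let gray = RGB(r: 0.5, g: 0.5, b: 0.5)
let blueShift = RGB(r: 0.5, g: 0.5, b: 0.7)  // barely distinguishable from gray
let greenShift = RGB(r: 0.5, g: 0.7, b: 0.5) // clearly lighter and greener

// Identical Euclidean distances, very different perceptual distances --
// exactly the gap that a Delta E based comparison closes.
print(euclideanDistance(gray, blueShift))  // 0.2
print(euclideanDistance(gray, greenShift)) // 0.2
```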
