Random data within the snapshot is a huge problem: it can require a lot of mocking or manual post-processing of the actual test result before snapshot assertions become reliable. That works against our design goal of "making it super simple to get a lot of assertions for free".
A few points for the discussion:

- At what level should normalization be applied? Normalize the actual test result before serializing, normalize the serialized snapshot after serialization (string based), or leave the persisted data untouched and normalize "on the fly" during comparison?
- Building on that, should we provide reflection-based normalization for actual objects (see the first sketch below)?
- Should we provide structure-aware (JSON/XML) normalization for serialized string data, e.g. via XPath or JSON Path (see the second sketch below)?
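A rough sketch of what the reflection-based variant could look like; `ReflectionNormalizer` and `placeholderFor` are made-up names for this discussion, not existing API:

```java
import java.lang.reflect.Field;

public final class ReflectionNormalizer {

    // Hypothetical helper: overwrites the named fields on the actual object
    // with fixed placeholder values before it is serialized into the snapshot.
    // Note: getDeclaredField only looks at the exact class, not superclasses.
    public static void normalize(Object target, String... fieldNames)
            throws ReflectiveOperationException {
        for (String name : fieldNames) {
            Field field = target.getClass().getDeclaredField(name);
            field.setAccessible(true);
            field.set(target, placeholderFor(field.getType()));
        }
    }

    private static Object placeholderFor(Class<?> type) {
        if (type == String.class) return "[normalized]";
        if (type == long.class || type == Long.class) return 0L;
        return null; // UUIDs, dates, etc. could be handled the same way
    }
}
```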
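And a sketch of the string-based variant for JSON snapshots, using the Jayway JsonPath library; again, `JsonPathNormalizer` is just an illustrative name:

```java
import com.jayway.jsonpath.DocumentContext;
import com.jayway.jsonpath.JsonPath;

public final class JsonPathNormalizer {

    // Overwrites the values at the given JSON paths with a fixed marker,
    // so random ids/timestamps never reach the persisted snapshot.
    public static String normalize(String snapshotJson, String... paths) {
        DocumentContext ctx = JsonPath.parse(snapshotJson);
        for (String path : paths) {
            ctx.set(path, "[normalized]");
        }
        return ctx.jsonString();
    }
}
```

Usage would then look like `JsonPathNormalizer.normalize(json, "$.id", "$.createdAt")`, either applied to the actual result before persisting or to both sides "on the fly" during comparison.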
Either way, I still think that normalizing the data is an anti-pattern. As stated in the readme, you should design your code in a way that lets you provide mocks returning deterministic instead of random data.
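For example (a hypothetical `OrderService`, not taken from this library), injecting a `Clock` and an id `Supplier` keeps the snapshot stable without any normalization at all:

```java
import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;
import java.util.UUID;
import java.util.function.Supplier;

// The service receives its time and id sources as dependencies instead of
// calling Instant.now() or UUID.randomUUID() directly.
record Order(UUID id, Instant createdAt) {}

class OrderService {
    private final Clock clock;
    private final Supplier<UUID> idSupplier;

    OrderService(Clock clock, Supplier<UUID> idSupplier) {
        this.clock = clock;
        this.idSupplier = idSupplier;
    }

    Order createOrder() {
        return new Order(idSupplier.get(), Instant.now(clock));
    }
}

class OrderServiceSnapshotExample {
    public static void main(String[] args) {
        // Fixed values in the test make every snapshot run identical:
        OrderService service = new OrderService(
            Clock.fixed(Instant.parse("2020-01-01T00:00:00Z"), ZoneOffset.UTC),
            () -> UUID.fromString("00000000-0000-0000-0000-000000000000"));
        System.out.println(service.createOrder());
    }
}
```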