Allow marking tests Pending when they fail audits #22

Draft · wants to merge 1 commit into main
Conversation

@isen0011 commented on May 8, 2024

Works on #21. This is a draft, mostly to start the conversation about one way this could work.

This doesn't completely fix the problem identified in #21, but it is a step in that direction. The idea is that, when the configuration setting `accessibility_audit_skip_on_error` is set to true and an `axe` violation is detected, the test is marked as `skipped` with the `axe` output as the skip message, rather than generating an error.
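
As a rough sketch of that behavior (assuming a Minitest-based system test; the audit helper and the shape of the violation output are hypothetical stand-ins, and only the `accessibility_audit_skip_on_error` name comes from this PR):

```ruby
# Hypothetical audit hook, run after each page interaction.
def audit_accessibility!
  violations = run_axe_audit # hypothetical: returns an array of axe violation messages

  return if violations.empty?

  message = violations.join("\n")

  if accessibility_audit_skip_on_error
    skip message  # mark the test Pending, with the axe output as the reason
  else
    flunk message # current behavior: fail the test
  end
end
```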

There is a problem with this approach, which is the same problem identified in issue #7: as soon as a test is marked skipped or failed, the rest of that test is skipped. This means that if a test has both failing application logic and failing accessibility validation, only the first failure to occur is reported. For a system test, since any interaction with the page triggers the accessibility audits, the test will most likely be marked as skipped and the application logic failure will be hidden.

This can be solved by running the tests a second time with `accessibility_audit_enabled = false` and comparing the results, but that does require two runs through the test suite.
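
For example, the second run could be wired up by toggling the setting from an environment variable (the variable name and the exact configuration surface here are illustrative; only `accessibility_audit_enabled` comes from this PR):

```ruby
# test/application_system_test_case.rb
class ApplicationSystemTestCase < ActionDispatch::SystemTestCase
  driven_by :selenium, using: :headless_chrome

  # Disable the audits when AUDIT_ACCESSIBILITY=false, so the suite can be
  # run a second time to surface application logic failures hidden by skips.
  self.accessibility_audit_enabled = ENV["AUDIT_ACCESSIBILITY"] != "false"
end
```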

@seanpdoyle (Contributor) commented
Thank you so much for opening #21 along with this PR. We really appreciate your contributions.

At the moment, I think the difficulty you're experiencing is signaling a larger need. I think the project would benefit from the concept of a "reporter" for controlling whether or not (or when) to raise failures for violations.

I'm imagining something like RSpec's configuration to control "failing fast". I'd characterize the current behavior as failing fast. To work toward deferred failures, I'm imagining a "reporter" with at least two hooks: 1) an "on violation" hook and 2) an "on completion" hook.
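
Sketching that seam (every name here is a hypothetical placeholder), the interface might be as small as:

```ruby
class ViolationReporter
  # Hook 1: called as each axe violation is detected, while the test runs.
  def on_violation(violation)
  end

  # Hook 2: called once the audited interaction (or test) finishes, with
  # every violation collected along the way.
  def on_completion(test, violations)
  end
end
```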

A backwards-compatible reporter could flunk the test immediately in its "on violation" hook, then do nothing in its "on completion" hook.
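
Against the hypothetical `ViolationReporter` sketched above, that backwards-compatible reporter might be as small as:

```ruby
class FailFastReporter < ViolationReporter
  # Flunk immediately, preserving today's fail-fast behavior.
  def on_violation(violation)
    raise Minitest::Assertion, violation.to_s
  end

  # Nothing left to do: the failure has already been raised.
  def on_completion(test, violations)
  end
end
```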

A reporter that behaves like you've described might collect the violation into an internal data structure (maybe as simple as an Array), then fail the test and output the descriptions in its "on completion" hook.
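
And the deferred reporter described there might look like this (again, purely a sketch against the hypothetical interface above):

```ruby
class DeferredReporter < ViolationReporter
  def initialize
    @violations = []
  end

  # Collect each violation, but let the test keep running.
  def on_violation(violation)
    @violations << violation
  end

  # Fail once at the end, with every violation's description in the output.
  def on_completion(test, _violations)
    test.flunk @violations.map(&:to_s).join("\n\n") if @violations.any?
  end
end
```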

A third reporter might not fail the test "on completion", but could instead output a report to a file, or do something else entirely.

I've pushed up a `violation-reporter` branch to explore where the seams might be to insert a reporter. It isn't merge-ready, but it might provide some insight.
