Make it easier to share workflow steps with other repositories #26

Open
ScottTodd opened this issue Sep 12, 2024 · 0 comments
Labels
quality of life 😊 Nice things are nice; let's have some

ScottTodd commented Sep 12, 2024

Forking this issue from nod-ai/SHARK-TestSuite#288.

This workflow, https://github.com/iree-org/iree-test-suites/blob/main/.github/workflows/test_onnx_ops.yml, duplicates code found in https://github.com/iree-org/iree/blob/main/.github/workflows/pkgci_test_onnx.yml.

Most of the duplication is boilerplate and keeping it synced across commits is a bit of a chore. See https://docs.github.com/en/actions/using-workflows/avoiding-duplication for the options available to us.
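One of the options on that page is a reusable workflow defined here with `on: workflow_call`, which downstream repositories could then call. A minimal sketch of what the callee side might look like (the input names, suite path, and flag names below are assumptions, not an existing interface):

    # Hypothetical .github/workflows/test_onnx.yml in iree-test-suites,
    # made callable from other repositories via workflow_call.
    on:
      workflow_call:
        inputs:
          pytest_flags:
            type: string
            required: false
            default: ""
          config_file:
            type: string
            required: true

    jobs:
      test_onnx:
        runs-on: ubuntu-latest
        steps:
          - name: Check out test suite repository
            uses: actions/checkout@v4
            with:
              repository: iree-org/iree-test-suites
          - name: Run onnx tests
            run: |
              pytest onnx_ops/ ${{ inputs.pytest_flags }} \
                --config-files ${{ inputs.config_file }}

The caller side would then reduce to a `uses:` plus a `with:` block, as sketched below.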

I'm imagining something downstream like

jobs:
  test_onnx:
    steps:
      - name: Check out external TestSuite repository
        uses: actions/checkout@v4
        with:
          repository: iree-org/iree-test-suites
          ref: v1
          path: iree-test-suites
          submodules: false
          lfs: false
      - name: Run onnx tests
        uses: iree-test-suites/.github/workflows/test_onnx.yml
        with:
          pytest_flags: ...
          config_file: ...

or

test_onnx:
  uses: iree-org/test-suite@v1
  with:
    pytest_flags: ...
    config_file: ...    

where the workflow would include running tests, checking for diffs in the config file and uploading the new file(s), etc.


New thoughts since filing that:

For local and CI usage, an entry point script (could be pytest) that runs all test suites here would be helpful.

Some things that change from run to run or machine to machine:

  • Available and/or selected hardware and APIs to use (GPUs connected / visible / requested, use Vulkan, use CUDA, use ROCm, use CPU, etc.)
  • Test expectations (which should be passing vs which are known failing, are deviations from that okay or should they fail the run?)
  • How to shard, parallelize, log, etc.
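The entry point script idea could be a thin wrapper that turns those per-run choices into a pytest invocation. A minimal sketch, where every flag name, suite path, and file name is hypothetical:

```python
# Hypothetical entry point for running the test suites locally or in CI.
# Selects suites and a device/API, then builds the pytest argument list.
import argparse


def build_pytest_args(suites, device, expected_failures_file=None):
    """Translate high-level run options into a pytest argument list."""
    args = list(suites)  # e.g. ["onnx_ops/", "onnx_models/"]
    # Pass the selected device through to the tests as a pytest option.
    args += ["--device", device]
    if expected_failures_file:
        # Tests listed here are known failures; deviations from the
        # expectations would fail the run.
        args += ["--expected-failures", expected_failures_file]
    return args


def main(argv=None):
    parser = argparse.ArgumentParser(description="Run IREE test suites")
    parser.add_argument("--suites", nargs="+", default=["onnx_ops/"])
    parser.add_argument("--device", default="cpu",
                        choices=["cpu", "vulkan", "cuda", "rocm"])
    parser.add_argument("--expected-failures", default=None)
    opts = parser.parse_args(argv)
    pytest_args = build_pytest_args(opts.suites, opts.device,
                                    opts.expected_failures)
    # A real entry point would call pytest.main(pytest_args) here.
    return pytest_args


if __name__ == "__main__":
    print(main())
```

Sharding and parallelization could then be layered on through additional options mapped to pytest plugins, without each caller reimplementing the plumbing.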