Sometimes, in order to test a PR, we release it by commenting out this line, after which we install the released package via e.g. `pnpm create fuels@pr-number` or `pnpm add fuels@pr-number` and then perform manual steps to verify the PR fixes the bug at hand. The problem is that this is a manual process: we have no testing infrastructure in our SDK via which we can emulate these manual steps in a test and guarantee protection against regressions.
A way to fully emulate these manual steps could be something like this:

1. Create a temporary folder (e.g. `/tmp/fuels/unique-id/`).
2. Run `pnpm init` in it (or `bun init`, `npm init` - depending on what we're testing for).
3. Run `pnpm link` to link the local `fuels` package that's under test into the temporary `package.json`, thereby simulating a `pnpm add fuels@pr-number`.
4. Test the `fuels` package however you see fit; e.g. if the problem is with the CLI, set up the test folder's contents to reproduce the bug and run the appropriate CLI package.
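As a rough sketch, the setup steps above could look like the following. The helper names (`buildSetupCommands`, `setupTempPackage`) and the `fuelsPath` parameter are assumptions for illustration, not existing SDK APIs:

```typescript
import { execSync } from "node:child_process";
import { mkdtempSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

type PackageManager = "pnpm" | "npm" | "bun";

// Pure helper: builds the shell command for each setup step, so the plan can
// be unit-tested without actually invoking a package manager.
export function buildSetupCommands(pm: PackageManager, fuelsPath: string): string[] {
  const initFlags = pm === "pnpm" ? "" : " -y"; // pnpm init is already non-interactive
  return [
    `${pm} init${initFlags}`,
    // Links the local fuels package under test, simulating `pnpm add fuels@pr-number`.
    `${pm} link ${fuelsPath}`,
  ];
}

// Executes the steps from the issue: temp dir -> init -> link.
// `fuelsPath` would point at the local fuels workspace package under test.
export function setupTempPackage(pm: PackageManager, fuelsPath: string): string {
  const tempDir = mkdtempSync(join(tmpdir(), "fuels-")); // e.g. /tmp/fuels-XXXXXX
  for (const cmd of buildSetupCommands(pm, fuelsPath)) {
    execSync(cmd, { cwd: tempDir, stdio: "inherit" });
  }
  return tempDir;
}
```

Keeping command construction separate from execution also lets a unit test assert on the plan without needing the package managers installed on CI.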
All the testing would be done via `spawn(commandUnderTest, { detached: true, cwd: tempDirRootPath })`, which guarantees that the process is not part of the vitest test-runner process. We'd also have to clean those processes up after each test finishes, because a detached process isn't killed when its initiator process (vitest, in our case) exits.
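A minimal sketch of the spawn-and-cleanup idea, assuming a POSIX platform (where `detached: true` puts the child in its own process group, killable via a negative PID); the helper names are hypothetical:

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Processes started during a test; drained by cleanupDetached in afterEach.
export const children: ChildProcess[] = [];

// Runs a CLI command under test outside the vitest process tree.
export function runDetached(
  commandUnderTest: string,
  args: string[],
  tempDirRootPath: string
): ChildProcess {
  const child = spawn(commandUnderTest, args, {
    detached: true, // child gets its own process group and can outlive vitest
    cwd: tempDirRootPath,
    stdio: "ignore",
  });
  children.push(child);
  return child;
}

// Must be called in afterEach/afterAll: a detached process is NOT killed
// when its initiator (vitest) exits, so we kill its whole process group.
export function cleanupDetached(): void {
  for (const child of children) {
    if (child.pid !== undefined && child.exitCode === null) {
      try {
        process.kill(-child.pid, "SIGTERM"); // negative pid -> process group
      } catch {
        // process already exited between the check and the kill
      }
    }
  }
  children.length = 0;
}
```

Killing the process group (rather than just the child PID) matters because CLI commands like `fuels dev` may themselves fork subprocesses.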
An example implementing the above can be found here.
We already have some infrastructure around testing the CLI, like `bootstrapProject`, so we can integrate that as well. Also, there might be no need to run some commands like `pnpm link` or `pnpm init` - maybe we can do some copy/pastes instead, which would be faster. It's a thing to investigate.
Perhaps I am missing something, but I am trying to envision a use case for this.
From my experience, whenever we utilize the pr-release workflow, it's often to use the subsequently released npm package in one of the TS-SDK ecosystem repos (such as connectors or fuel-wallet), so the tests are done there; any breakages should be detected in those repos' respective testing workflows.
As for any potential regressions that such a change may introduce, why wouldn't our current test workflows suffice? I would imagine any change someone wants to introduce would be accompanied by some kind of test (whether unit or integration).
It may be useful to give an example of such an instance to provide more clarity here.
@maschad you can see an example of the need for this in #3508. Essentially, whenever we have issues related to package managers, we'd need these kinds of tests, since we'd have control over the package manager and could emulate the actual problem. Also, if we have problems with fuels CLI processes themselves and the way they're run/killed/etc., this would be the way to test it. If we had this infra, we might have written the tests in #3503 differently, etc.
Besides that, IMO it would generally be good to write our fuels CLI tests this way, because then we'd fully emulate our users' behavior. Currently our fuels CLI tests run the commands under test (fuels dev/init/build/...) as part of the vitest process, which is okay, but doing it like this would be better.