Inconsistent (Python) Unittest result reporting #21913
Comments
Thanks for creating this issue! It looks like you may be using an old version of VS Code; the latest stable release is 1.81.1. Please try upgrading to the latest version and checking whether this issue remains. Happy Coding!
Oops, that was a typo. I am using VSCode v1.81.1
Hi @mhanner, any chance you have logs for this issue? For your logs, can you first set your log level to trace? I am realizing now that I'm unsure of where the bug occurred, and the logs will give me a look at the payload sent back from the unittest run. Thanks
@eleanorjboyd, which log file(s) should I include? Where can I find the log file(s)?
Hi, sorry about that: your "python" log file. To get it, just go to the "Output" panel in the workbench (where the terminal is) and then select Python from the dropdown on the right. Thanks!
Attached the trace output file: 21913_trace_output.txt
Hi! I am seeing an error in your logs from the extension. Do you have any settings on your device that would interrupt or block socket connections? We use sockets to transfer information between the extension and a shell python process. Thanks
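For illustration only, a minimal sketch of the socket pattern described above; this is not the extension's actual protocol, and the port selection, payload shape, and field names are assumptions made up for the example:

```python
# Sketch: one side listens on a local socket (standing in for the extension),
# the other side (standing in for the shell python process) connects back and
# sends the test-run payload as JSON. Both ends live in one script purely to
# keep the example runnable.
import json
import socket
import threading

def listener(server: socket.socket) -> None:
    conn, _ = server.accept()
    with conn:
        data = conn.recv(65536)
        print("extension side received:", json.loads(data.decode("utf-8")))

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # pick any free local port
server.listen(1)
t = threading.Thread(target=listener, args=(server,))
t.start()

# "shell python process" side: connect back and send results as JSON
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", server.getsockname()[1]))
client.sendall(json.dumps({"status": "failure", "tests_run": 1}).encode("utf-8"))
client.close()
t.join()
server.close()
```

A firewall or security tool that blocks loopback connections would make the `connect` call here fail, which is the kind of interference being asked about.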
Hi. I do have a firewall, but I am unaware of any setting that would block socket connections. FWIW, I do see a response from the server. I added a test to that sample and it ran fine. As I mentioned, this issue occurs when directories and files are created inside the test.
Hm, interesting. If you are getting some response then that is not the problem. Could you send me that test file that creates directories and files during the test? I can run it on my machine and then make the fix. Thanks
Here is the sample test case that fails:
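A hedged reconstruction of such a test, based on the traceback shown later in this issue (SimpleTest.test_create in simple_unittest.py, creating directories and files and then calling self.assertFalse(True)); the real attachment may differ:

```python
# Reconstruction, not the original attachment: create a directory and a file
# inside the test, then deliberately fail so result reporting can be checked.
import os
import shutil
import tempfile
import unittest

class SimpleTest(unittest.TestCase):
    def test_create(self):
        workdir = tempfile.mkdtemp(prefix="simple_unittest_")
        os.makedirs(os.path.join(workdir, "subdir"), exist_ok=True)
        with open(os.path.join(workdir, "subdir", "data.txt"), "w") as fh:
            fh.write("hello")
        shutil.rmtree(workdir)
        # deliberate failure, matching the AssertionError in the report below
        self.assertFalse(True)

if __name__ == "__main__":
    unittest.main(verbosity=2)
```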
Run test_create from the UI. I get the following test results ~ 80% of the time:
I expect the following test results 100% of the time:
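Presumably something like the following, judging from the 'Total number' summary format shown later in this report; this is a reconstruction of the expected summary for one failing test, not the original output:

```text
Total number of tests expected to run: 1
Total number of tests run: 1
Total number of tests passed: 0
Total number of tests failed: 1
Total number of tests failed with errors: 0
Total number of tests skipped: 0
Finished running tests!
```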
Hi @mhanner, I am seeing this test run the same in the command line as in the test explorer. I see in your logs that you are not using the new experiment. Can you try again and see if you can get it using the new experiment? You should see lines in your logs which signal the use of the new one. If you see one of those in your logs and it still isn't working, please send your logs again and I can take a look. Thanks
Hi @eleanorjboyd, I had opted out of the new experiment (settings.json: "python.experiments.optOutFrom": ["pythonTestAdapter"]), which I came across via https://github.com//issues/21648 when researching this issue. I uncommented that line, and it appears to be working. The UI is consistent; however, the TEST RESULTS window no longer prints the 'Total number' metrics (too bad).
Unfortunately, all my tests that use the version module fail. Running the test: there appears to be a clash with the version module, which my app requires. Running the test in Debug mode: import error due to the clash.
Hi @mhanner, sorry for the delay. Could you opt into the experiment and let me know what happens? We are switching it soon to get rid of the old code and make the rewrite the only experience, so I want to get this resolved for when that happens. Thanks!
Because we have not heard back with the information we requested, we are closing this issue for now. If you are able to provide the info later on, then we will be happy to re-open this issue to pick up where we left off. Happy Coding!
Does this issue occur when all extensions are disabled?: Yes/No
see: sys_info.txt and enabled_extensions.txt
Steps to Reproduce:
Expected Result:
The UI and all related unittest output should report that the test failed.
Observed Results:
a. The UI shows that the test ran successfully: see the green checkmark in the attached screenshots.
b. The Python Test Log's TEST RESULTS console shows that the test was expected to run; however, it reports that the test was neither run nor failed as expected: see the TEST_RESULT screenshot.
Running tests (unittest): c:\Users\hannerm\Documents\test\simple_unittest.py
Running tests: c:\Users\hannerm\Documents\test\simple_unittest.py::SimpleTest::test_create
Total number of tests expected to run: 1
Total number of tests run: 0
Total number of tests passed: 0
Total number of tests failed: 0
Total number of tests failed with errors: 0
Total number of tests skipped: 0
Finished running tests!
c. The Python Test Log's OUTPUT console reports that the test was run and failed as expected: see the OUTPUT screenshot.
test_create (simple_unittest.SimpleTest) ... FAIL
NoneType: None
======================================================================
FAIL: test_create (simple_unittest.SimpleTest)
Traceback (most recent call last):
File "c:\Users\hannerm\Documents\Mitel\src\python\mitel\mji\test\simple_unittest.py", line 62, in test_create
self.assertFalse(True)
AssertionError: True is not false
Ran 1 test in 0.005s
FAILED (failures=1)
This is an intermittent problem and occurs most frequently on tests that create files and directories.
FWIW, python -m unittest -v -f test/*.py will always run the tests.
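For completeness, a roughly equivalent programmatic invocation using unittest's discovery API; the "test" directory name and "*.py" pattern are taken from the command above, and this is only a sketch for cross-checking results outside the extension:

```python
# Discover and run the tests the same way `python -m unittest -v -f test/*.py`
# does, then print the counts that the extension's summary should mirror.
import unittest

if __name__ == "__main__":
    suite = unittest.TestLoader().discover("test", pattern="*.py")
    runner = unittest.TextTestRunner(verbosity=2, failfast=True)
    result = runner.run(suite)
    print(f"ran={result.testsRun} failures={len(result.failures)} errors={len(result.errors)}")
```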
simple_unittest.py.txt
sys_info.txt
enabled_extensions.txt