Inconsistent (Python) Unittest result reporting #21913

Closed
mhanner opened this issue Aug 31, 2023 · 14 comments
Labels: area-testing, info-needed, triage-needed


mhanner commented Aug 31, 2023

Does this issue occur when all extensions are disabled?: Yes/No

  • VS Code Version: v1.21.1
  • OS Version: Windows 10 Enterprise

see: sys_info.txt and enabled_extensions.txt

Steps to Reproduce:

  1. Run the attached Python unittest in vscode.

Expected Result:
The UI and all related unittest output should report that the test failed.

Observed Results:
a. The UI shows the test ran successfully (see the green checkmark in the attached screenshots).
b. The Python Test Log's TEST RESULTS console shows that one test was expected to run, yet reports that no test ran, passed, or failed (see the TEST_RESULT screenshot):
Running tests (unittest): c:\Users\hannerm\Documents\test\simple_unittest.py
Running tests: c:\Users\hannerm\Documents\test\simple_unittest.py::SimpleTest::test_create
Total number of tests expected to run: 1
Total number of tests run: 0
Total number of tests passed: 0
Total number of tests failed: 0
Total number of tests failed with errors: 0
Total number of tests skipped: 0

Finished running tests!

c. The Python Test Log's OUTPUT console reports that the test was run and failed as expected (see the OUTPUT screenshot):
test_create (simple_unittest.SimpleTest) ... FAIL

NoneType: None

======================================================================
FAIL: test_create (simple_unittest.SimpleTest)

Traceback (most recent call last):
File "c:\Users\hannerm\Documents\Mitel\src\python\mitel\mji\test\simple_unittest.py", line 62, in test_create
self.assertFalse(True)
AssertionError: True is not false


Ran 1 test in 0.005s

FAILED (failures=1)

This is an intermittent problem and occurs most frequently on tests that create files and directories.

FWIW, python -m unittest -v -f test/*.py will always run the tests.

simple_unittest.py.txt
OUTPUT
TEST_RESULTS
sys_info.txt
enabled_extensions.txt
vscode_version

vscodenpa commented

Thanks for creating this issue! It looks like you may be using an old version of VS Code, the latest stable release is 1.81.1. Please try upgrading to the latest version and checking whether this issue remains.

Happy Coding!


mhanner commented Aug 31, 2023

Oops, that was a typo. I am using VS Code v1.81.1.

eleanorjboyd transferred this issue from microsoft/vscode Sep 5, 2023
github-actions bot added the triage-needed label Sep 5, 2023
eleanorjboyd (Member) commented

Hi @mhanner, any chance you have logs for this issue? Before collecting them, can you set your log level to trace via the Developer: Set Log Level command in the Command Palette?

I am realizing now that I'm unsure of where the bug occurred, and the logs will give me a look at the payload sent back from the unittest run. Thanks


mhanner commented Dec 5, 2023

@eleanorjboyd, which log file(s) should I include, and where can I find them?

eleanorjboyd (Member) commented

Hi, sorry about that: your "Python" log file. To get it, go to the Output panel in the workbench (where the terminal is) and select Python from the dropdown on the right. Thanks!


mhanner commented Dec 5, 2023

21913_trace_output.txt
21913_trace_output_cmdline.txt

Attached are the trace output files, collected as follows:

  1. Developer: Set Log Level = Trace
  2. Clear the Python output pane
  3. Run the test via the GUI
  4. Copy the Python output pane to 21913_trace_output.txt
  5. Clear the Python output pane
  6. Run the test via the command line (python -m unittest ...)
  7. Copy the Python output pane to 21913_trace_output_cmdline.txt

eleanorjboyd (Member) commented

Hi! In your logs from the extension I am seeing the following: server socket error Error: read ECONNRESET.
This error typically indicates a network-related issue where a connection was unexpectedly closed by the remote peer or some intermediary network equipment.

Do you have any settings on your device that would interrupt or not allow socket connections? We use sockets to transfer information between the extension and a shell python process.
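
For context, the hand-off works roughly like this (a minimal sketch; the helper name and payload shape are assumptions, not the extension's actual protocol):

```python
import json
import socket

def send_result(port: int, payload: dict) -> None:
    # Connect back to the listener on localhost and send one JSON payload.
    # If the listener tears the connection down mid-transfer, the peer
    # sees an ECONNRESET like the one in the logs above.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(json.dumps(payload).encode("utf-8"))

# Hypothetical usage, mirroring the result-port style of invocation:
# send_result(56171, {"test": "SimpleTest.test_create", "outcome": "failed"})
```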

Thanks


mhanner commented Dec 7, 2023

Hi. I do have a firewall, but I am unaware of any setting that would block socket connections.
How can I get additional info on the socket error (e.g., server URL, port, etc.)?

FWIW, I see the server socket error Error: read ECONNRESET in the logs for any tests I run.

I added the following test to the sample:

    def test_assert_true(self):
        self.assertTrue(True)

This test ran fine, and the server socket error Error: read ECONNRESET still appears in the log.

As I mentioned, this issue occurs when directories and files are created inside the test.

eleanorjboyd (Member) commented

Hm, interesting. If you are getting some response, then that is not the problem.

Could you send me that test file that creates directories and files during the test? I can run it on my machine and then make the fix. Thanks


mhanner commented Dec 7, 2023

Here is the sample test case that fails:

import csv
import os
import shutil
import time
import unittest
from pathlib import Path

def create_csvfile(filename, data, headers=None):
    if headers:
        data = f"{','.join(headers)}\n{data}"

    basename = os.path.basename(filename)
    relative_name = os.path.join(TEST_DIR, basename)
    relative_name, _ = os.path.splitext(relative_name)
    timestamp = int(time.time())
    relative_name = f'{relative_name}_{timestamp}.csv'

    lines = data.split('\n')
    lines = [line.strip() for line in lines if line.strip()]

    # Write the data to the CSV file
    with open(relative_name, 'w', newline='') as csvfile:
        csv_writer = csv.writer(csvfile, delimiter=',', quotechar='"', quoting=csv.QUOTE_MINIMAL)
        for line in lines:
            values = [value.strip() for value in line.split(',')]
            csv_writer.writerow(values)

    if not csvfile.closed:
        raise Exception( f'{relative_name} not closed.')
    return Path(relative_name)



TEST_DIR = 'test_dir'
class SimpleTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        if not os.path.exists(TEST_DIR):
            os.makedirs(TEST_DIR)
        assert os.path.exists(TEST_DIR), f"{TEST_DIR} should exist"

    @classmethod
    def tearDownClass(cls):
        if os.path.exists(TEST_DIR):
            shutil.rmtree(TEST_DIR)
        assert (not os.path.exists(TEST_DIR)), f"{TEST_DIR} should not exist"

    def tearDown(self):
        if os.path.isdir(TEST_DIR):
            files = list(Path(TEST_DIR).glob("*.csv"))
            for file in files:
                os.remove(str(file))
        assert len(list(Path(TEST_DIR).glob("*.csv"))) == 0, f"{TEST_DIR} should be empty"

    def test_create(self):
        data = "A, B, C, D"
        csv_file = create_csvfile("myfile.csv", data, data.split(','))
        self.assertTrue(os.path.isfile(csv_file))
        self.assertFalse(True)

Run test_create from the UI.

I get the following test results ~ 80% of the time:

Running tests (unittest): c:\Users\hannerm\Documents\mji\test\simple_unittest.py::SimpleTest::test_create
Running tests: c:\Users\hannerm\Documents\mji\test\simple_unittest.py::SimpleTest::test_create
Total number of tests expected to run: 1
Total number of tests run: 0
Total number of tests passed: 0
Total number of tests failed: 0
Total number of tests failed with errors: 0
Total number of tests skipped: 0

I expect the following test results 100% of the time:

Total number of tests expected to run: 1
Total number of tests run: 1
Total number of tests passed: 0
Total number of tests failed: 1
Total number of tests failed with errors: 0
Total number of tests skipped: 0
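
For what it's worth, the same selection can also be driven programmatically (a sketch, assuming the file above is saved as simple_unittest.py in the current directory), which makes it easy to confirm that plain unittest counts the run correctly:

```python
import unittest

# Load exactly the test the UI runs, then run it with a text runner.
suite = unittest.defaultTestLoader.loadTestsFromName(
    "simple_unittest.SimpleTest.test_create"
)
result = unittest.TextTestRunner(verbosity=2).run(suite)

# Expected here: run=1 failures=1 errors=0
print(f"run={result.testsRun} failures={len(result.failures)} errors={len(result.errors)}")
```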

eleanorjboyd (Member) commented

Hi @mhanner, I am seeing this test run the same in the command line as in the test explorer.

I see in your logs: 2023-12-05 14:40:58.531 [info] > .\.venv\Scripts\python.exe ~\.vscode\extensions\ms-python.python-2023.20.0\pythonFiles\visualstudio_py_testlauncher.py --us=./test --up=*test.py --uvInt=2 --result-port=56171 -tsimple_unittest.SimpleTest.test_create --testFile=.\test\simple_unittest.py. The path .vscode\extensions\ms-python.python-2023.20.0\pythonFiles\visualstudio_py_testlauncher.py is the old test logic, not the new rewrite we have. The rewrite should be enabled for all users; do you have any settings opting out of experimentation? (See more about experiments here: https://github.com/microsoft/vscode-python/wiki/AB-Experiments.) Please check on this, or you can try setting "python.experiments.optInto": ["pythonTestAdapter"] explicitly in your user settings.json.
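
For example, a minimal user settings fragment (merge into any existing settings; VS Code's settings file accepts comments):

```jsonc
{
  // Explicitly opt into the rewritten Python test adapter.
  "python.experiments.optInto": ["pythonTestAdapter"]
}
```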

Can you try again and see if you can get it using the new experiment? You should see one of the following in your logs, signaling that the new adapter is in use: [info] Experiment 'All' is active, Running discovery for unittest using the new test adapter, or something like Running unittests for workspace /Users/eleanorboyd/testingFiles/v2/workspace with arguments: /Users/eleanorboyd/.vscode-insiders/extensions/ms-python.python-2023.23.13391009/pythonFiles/unittestadapter/execution.py,--udiscovery,-v,-s,.,-p,*test*.py, where the file referenced is pythonFiles/unittestadapter/execution.py.

If you see one of those in your logs and it still isn't working, please send your logs again and I can take a look. Thanks


mhanner commented Dec 8, 2023

Hi @eleanorjboyd,

I had opted out of the new experiment in settings.json ("python.experiments.optOutFrom": ["pythonTestAdapter"]), a workaround I came across in https://github.com//issues/21648 while researching this issue.

I commented out that line, and it appears to be working: the UI is now consistent. However, the TEST RESULTS window no longer prints the 'Total number' metrics (too bad).

test_create (simple_unittest.SimpleTest) ... ok  
----------------------------------------------------------------------  
Ran 1 test in 0.006s  
OK  
Finished running tests!  

Unfortunately, all my tests that use packaging.version now fail. Furthermore, running the tests and debugging them generate different errors.

Running the test: there appears to be a clash with the version module. My app requires packaging==21.3; it looks like the test framework is pulling in a different version.
21913_run_log.txt

Running the test in Debug mode: import error due to from test.mji_utils import *.
21913_debug_log.txt
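
A quick way to pinpoint the clash (an illustrative diagnostic, to be dropped into the failing test module) is to print which packaging actually gets imported and from where:

```python
# Report the version and on-disk location of the 'packaging' module
# that the test process actually imports.
import packaging
print(packaging.__version__, packaging.__file__)
```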

eleanorjboyd (Member) commented

Hi @mhanner, sorry for the delay. Could you opt into the experiment and let me know what happens? We are switching it soon to get rid of the old code and make the rewrite the only experience so I want to get this resolved for when that happens. Thanks!

github-actions bot commented

Because we have not heard back with the information we requested, we are closing this issue for now. If you are able to provide the info later on, then we will be happy to re-open this issue to pick up where we left off.

Happy Coding!

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jul 29, 2024
github-actions bot locked as resolved and limited conversation to collaborators Aug 30, 2024