# publish-test-results

Parses test result files (TRX, NUnit XML, JUnit, Cucumber JSON, Playwright JSON, Robot Framework XML, CTRF JSON) and publishes them to an Azure DevOps Test Run, linking results back to Test Cases either directly by TC ID (when available in the result file) or by AutomatedTestName matching.
## Usage

```bash
ado-sync publish-test-results \
  --testResult results/test-results.trx \
  --runName "CI run #42"

# Multiple result files
ado-sync publish-test-results \
  --testResult results/unit.trx \
  --testResult results/integration.xml \
  --testResultFormat junit

# Dry run — parse and summarise without publishing
ado-sync publish-test-results --testResult results/test.trx --dry-run

# Associate with a build
ado-sync publish-test-results \
  --testResult results/test.trx \
  --buildId 12345

# Publish to a specific planned suite/configuration
ado-sync publish-test-results \
  --testResult results/test.trx \
  --testPlan "Smoke Plan" \
  --testSuite "BDD" \
  --testConfiguration "Windows 10"
```
## Options

| Option | Description |
|---|---|
| `--testResult <path>` | Path to a result file. Repeatable. |
| `--testResultFormat <format>` | `trx` · `nunitXml` · `junit` · `cucumberJson` · `playwrightJson` · `ctrfJson`. Auto-detected when omitted. |
| `--attachmentsFolder <path>` | Folder to scan for screenshots/videos/logs to attach to test results. |
| `--runName <name>` | Name for the Test Run in Azure DevOps. Defaults to `ado-sync <ISO timestamp>`. |
| `--buildId <id>` | Build ID to associate with the Test Run. |
| `--testConfiguration <nameOrId>` | Azure test configuration name or numeric ID for the published run. |
| `--testSuite <nameOrId>` | Azure test suite (name or ID) for planned run publication. Enables TC linkage. |
| `--testPlan <nameOrId>` | Azure test plan (name or ID). Used with `--testSuite`. Falls back to `testPlan.id` from config. |
| `--dry-run` | Parse results and print a summary without creating a run in Azure. |
| `--create-issues-on-failure` | File GitHub Issues or ADO Bugs for each failed test after publishing. |
| `--issue-provider <github\|ado>` | Issue provider. Default: `github`. |
| `--github-repo <owner/repo>` | GitHub repository to file issues in. |
| `--github-token <token>` | GitHub token. Supports `$ENV_VAR` references. |
| `--bug-threshold <percent>` | If more than this % of tests fail, one environment-failure issue is filed instead of per-test issues. Default: 20. |
| `--max-issues <n>` | Hard cap on issues filed per run. Default: 50. |
| `--analyze-failures` | Use AI to analyse each failed test and post a root-cause + suggestion comment on the Azure test result. |
| `--ai-provider <provider>` | AI provider for failure analysis: `ollama`, `openai`, `anthropic`, or `docker`. |
| `--ai-model <model>` | Model name (e.g. `gpt-4o-mini`, `claude-haiku-4-5-20251001`, `gemma-4-e4b-it`). |
| `--ai-url <url>` | Base URL for Ollama or an OpenAI-compatible endpoint. |
| `--ai-key <key>` | API key. Supports `$ENV_VAR` references. |
| `--config-override` | Override config values (repeatable, same as other commands). |
## Planned runs (TC linkage)

Azure DevOps silently ignores `testCase.id` on unplanned test runs. To link published results to Test Cases in the Test Plans UI, you must create a planned run that uses test points.

Pass `--testPlan` and `--testSuite` to enable planned-run mode:

```bash
ado-sync publish-test-results \
  --testPlan 32953 \
  --testSuite 32954 \
  --testResult results/junit.xml
```

How it works:

- ado-sync resolves the test suite's test points (each point links a Test Case to a configuration).
- A planned run is created with those point IDs — ADO pre-populates result slots linked to each TC.
- Parsed results are matched to TCs by the `tc` property/tag in the result file.
- Matched results are patched with outcome, duration, and error message.

Results without a TC ID are skipped with a warning (the run still succeeds). Test Cases without a matching result keep the default "Active" outcome.
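The point-matching step can be sketched as follows. This is illustrative only, not ado-sync's actual code; the field names (`testCaseId`, `tc`, `outcome`) are assumptions.

```python
def match_results_to_points(points, results):
    """Pair each parsed result with the pre-created test point for its TC ID.
    Sketch only: field names are illustrative, not ado-sync's internal model."""
    by_tc = {p["testCaseId"]: p for p in points}
    matched, skipped = [], []
    for r in results:
        point = by_tc.get(r.get("tc"))
        if point is None:
            skipped.append(r)  # no TC ID or no matching point: warn only
        else:
            matched.append({
                "testPoint": {"id": point["id"]},
                "outcome": r["outcome"],
            })
    return matched, skipped
```

Skipped results correspond to the warning case above: the run is still published, but that result cannot be bound to a point.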
### Config equivalent

```yaml
publishTestResults:
  testSuite:
    id: 32954          # or name: "My Suite"
    testPlan: "32953"  # or plan name
  testConfiguration:
    id: 1              # optional — filter points by configuration
```
## AI failure analysis

When `--analyze-failures` is set, ado-sync calls the configured AI provider for each failed test result and posts a comment directly on the Azure DevOps test result containing:

- **Root cause** — a concise one-line explanation of why the test failed
- **Suggestion** — a concrete fix recommendation

The comment appears in Azure DevOps under the test result's Comments tab, alongside the error message and stack trace.
### CLI examples

```bash
# Analyse failures with OpenAI (gpt-4o-mini by default)
ado-sync publish-test-results \
  --testResult results/test.trx \
  --analyze-failures \
  --ai-provider openai \
  --ai-key $OPENAI_API_KEY

# Analyse with Claude (Haiku is fast and cost-effective)
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider anthropic \
  --ai-model claude-haiku-4-5-20251001 \
  --ai-key $ANTHROPIC_API_KEY

# Analyse with a local Ollama server (no cloud cost)
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --analyze-failures \
  --ai-provider ollama \
  --ai-model gemma-4-e4b-it

# Analyse with Docker Model Runner (local, OpenAI-compatible, no API key)
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --analyze-failures \
  --ai-provider docker \
  --ai-model ai/llama3.2
```
### Config-based (no CLI flags needed)

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "apiKey": "$ANTHROPIC_API_KEY",
      "analyzeFailures": true
    }
  }
}
```
With this in place, every publish-test-results run automatically analyses failures — no extra CLI flags needed.
### Supported providers
| Provider | Flag value | Notes |
|---|---|---|
| OpenAI | openai | Default model: gpt-4o-mini. Works with any OpenAI-compatible endpoint via --ai-url. |
| Anthropic | anthropic | Default model: claude-haiku-4-5-20251001. Fast and cost-effective. |
| Ollama | ollama | Default model: gemma-4-e4b-it. Runs locally — no cloud cost or data egress. |
| Docker Model Runner | docker | Default endpoint: http://localhost:12434/engines/llama.cpp/v1. Default model: ai/llama3.2. OpenAI-compatible local inference via Docker Desktop. No API key required. |
The `heuristic` and `local` (node-llama-cpp) providers are not supported for failure analysis — they are suited for step generation, not conversational reasoning.
## Supported formats
| Format | Extension | Auto-detected | TC ID in file? | Attachments extracted |
|---|---|---|---|---|
| TRX (MSTest / SpecFlow / VSTest) | .trx | Yes (<TestRun> root) | Yes — via [TestProperty("tc","ID")] | <Output><StdOut> + <Output><ResultFiles> |
| NUnit XML (native) | .xml | Yes (<test-run> root) | Yes — via [Property("tc","ID")] | <output> + <attachments> |
| JUnit XML | .xml | Yes (`<testsuites>` / `<testsuite>` root) | Optional — via `<property name="tc" value="ID"/>` | `<system-out>`, `<system-err>`, `[[ATTACHMENT\|path]]` (Playwright) |
| Cucumber JSON | .json | Yes (JSON array, Cucumber format) | Yes — via @tc:ID tag on scenario | step.embeddings[] (base64 screenshots/video) |
| Playwright JSON | .json | Yes (JSON object with suites key) | Yes — via test.annotations[{ type: 'tc', description: 'ID' }] (preferred) or @tc:ID in test title | test.results[].attachments[] (screenshots, videos, traces) |
| Robot Framework XML | output.xml | Yes (<robot> root element) | Yes — via <tags><tag>tc:ID</tag></tags> | — |
| CTRF JSON | .json | Yes (results.tests array) | Yes — via tags: ["@tc:ID"] or @tc:ID in test name | attachments[].path files, stdout/stderr arrays |
> **NUnit via TRX:** when NUnit tests are run through the VSTest adapter (`--logger trx`), `[Property]` values are not included in the TRX output. Use `--logger "nunit3;LogFileName=results.xml"` to get the native NUnit XML format, which does include property values.
> **TRX `<ResultFiles>` nesting:** in TRX format, `<ResultFiles>` is a child of `<Output>`, not a direct child of `<UnitTestResult>`. ado-sync reads from `UnitTestResult > Output > ResultFiles > ResultFile` — paths are resolved relative to the result file's directory.
> **Attachment paths:** all file paths embedded in result files (`<ResultFile path="...">` in TRX, `<filePath>` in NUnit XML, `[[ATTACHMENT|path]]` in JUnit, `attachments[].path` in Playwright JSON) are resolved relative to the result file's directory, not the working directory. Ensure screenshots and other artifacts stay in the same folder hierarchy your test runner produces.
> **Automated vs planned runs:** ado-sync creates standalone automated runs without a test plan association. Do not add `plan.id` to the run model — doing so makes Azure DevOps treat the run as "planned", requiring `testPointId` and `testCaseRevision` for every result (which ado-sync doesn't provide). TC linking is done via `testCase.id` on individual results, which works for automated runs without a plan association.
> **Valid attachment types:** Azure DevOps only accepts `GeneralAttachment` and `ConsoleLog` as `attachmentType` values. ado-sync maps screenshots, images, and binary files to `GeneralAttachment`, and plain text and log files to `ConsoleLog`. Other type names (e.g. `Screenshot`, `Log`, `VideoLog`) will cause a 400 error.
## How TC linking works

Results are linked to Azure Test Cases in priority order:

1. **TC ID from file (preferred)** — when the result file contains a TC ID (`[TestProperty]`, `[Property]`, `<property name="tc">`, an `@tc:` tag, or a Playwright `test.annotations[{ type: 'tc' }]` annotation), the result is posted with `testCase.id` set directly. This is robust to class/method renames.
2. **AutomatedTestName matching (fallback)** — when no TC ID is found, the result is posted with `automatedTestName` set to the fully-qualified method name. Azure DevOps links it to a TC whose AutomatedTestName field matches. Requires `sync.markAutomated: true` on push.
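The priority order can be expressed as a small resolver. This is a sketch under assumed field names (`tcId`, `title`, `fullyQualifiedName` are illustrative, not ado-sync's internal schema):

```python
import re

def resolve_tc_link(result):
    """Prefer an explicit TC ID from the result file; fall back to
    AutomatedTestName matching. Field names here are hypothetical."""
    tc = result.get("tcId")  # from [TestProperty], <property name="tc">, annotations...
    if tc is None and result.get("title"):
        m = re.search(r"@tc:(\d+)", result["title"])  # @tc:ID tag in the title
        if m:
            tc = int(m.group(1))
    if tc is not None:
        return {"testCase": {"id": tc}}  # direct link, robust to renames
    # Fallback: Azure matches this against the TC's AutomatedTestName field
    return {"automatedTestName": result["fullyQualifiedName"]}
```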
## Per-framework guide

### C# MSTest

```bash
dotnet test --logger "trx;LogFileName=results.trx"
ado-sync publish-test-results --testResult results/results.trx
```

TC IDs are read from `[TestProperty("tc","ID")]` embedded in the TRX — no extra config needed.

### C# NUnit

```bash
# Use native NUnit XML (includes [Property] values)
dotnet test --logger "nunit3;LogFileName=results.xml"
ado-sync publish-test-results --testResult results/results.xml

# TRX via VSTest adapter (TC IDs NOT included — uses AutomatedTestName matching)
dotnet test --logger "trx;LogFileName=results.trx"
ado-sync publish-test-results --testResult results/results.trx
```
### C# SpecFlow

SpecFlow generates TRX output via the VSTest adapter. ado-sync reads TC IDs from the Gherkin `@tc:ID` tag, which SpecFlow's runner embeds into the TRX as `[TestProperty("tc","ID")]`.

```bash
# 1. Push feature files to create TCs — @tc:ID is written back into .feature files
ado-sync push

# 2. Run SpecFlow tests (generates TRX)
dotnet test --logger "trx;LogFileName=results.trx"

# 3. Publish results
ado-sync publish-test-results --testResult results/results.trx
```

SpecFlow uses `local.type: gherkin` (same as Cucumber). TC IDs are `@tc:ID` Gherkin tags, which SpecFlow embeds into the TRX as TestProperty values automatically.
### Java JUnit 4 / JUnit 5 (Maven Surefire)

Maven Surefire generates JUnit XML with `classname` = FQCN and `name` = method name. ado-sync builds `automatedTestName` as `FQCN.methodName` on push, which matches the `classname.name` format in the JUnit XML automatically.

```bash
# Run tests (Surefire writes target/surefire-reports/TEST-*.xml)
mvn test

# Publish — TC linking uses AutomatedTestName matching
ado-sync publish-test-results \
  --testResult "target/surefire-reports/TEST-*.xml" \
  --testResultFormat junit
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

Optional — write TC IDs into JUnit XML for direct linking (more reliable): add a JUnit 5 extension or JUnit 4 rule that reads the `@Tag("tc:ID")` / `// @tc:ID` value and calls `recordProperty` to embed it into the XML. With Surefire, test properties are written as `<property>` elements inside each `<testcase>`.
### Java TestNG

TestNG's Surefire reporter generates the same JUnit XML format. Same commands as JUnit above.

```bash
mvn test   # or: ./gradlew test
ado-sync publish-test-results \
  --testResult "target/surefire-reports/TEST-*.xml" \
  --testResultFormat junit
```
### Python pytest

```bash
# Run tests and generate JUnit XML
pytest --junitxml=results/junit.xml

# Publish — uses AutomatedTestName matching (classname.name from JUnit XML)
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

Optional — embed TC IDs into JUnit XML for direct linking. Add the following to your `conftest.py`:

```python
# conftest.py
def pytest_runtest_makereport(item, call):
    """Write @pytest.mark.tc(N) as a JUnit XML property for ado-sync to pick up."""
    for marker in item.iter_markers("tc"):
        if marker.args:
            item.user_properties.append(("tc", str(marker.args[0])))
```

With this hook, pytest writes:

```xml
<testcase name="test_foo" classname="tests.module.TestClass">
  <properties>
    <property name="tc" value="1041"/>
  </properties>
</testcase>
```
ado-sync will extract the tc property and link the result directly to TC 1041, without needing AutomatedTestName matching.
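On the consuming side, pulling those properties back out of the JUnit XML is straightforward. A minimal sketch (not ado-sync's actual parser) using the standard library:

```python
import xml.etree.ElementTree as ET

def tc_ids_from_junit(xml_text):
    """Map each <testcase> name to the value of its <property name="tc">,
    if present. Illustrative sketch, not ado-sync internals."""
    root = ET.fromstring(xml_text)
    out = {}
    for case in root.iter("testcase"):
        for prop in case.iter("property"):
            if prop.get("name") == "tc":
                out[case.get("name")] = prop.get("value")
    return out
```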
### JavaScript / TypeScript — Jest

Install `jest-junit`:

```bash
npm install --save-dev jest-junit
```

Run tests:

```bash
JEST_JUNIT_OUTPUT_DIR=results JEST_JUNIT_OUTPUT_NAME=junit.xml \
  npx jest --reporters=default --reporters=jest-junit
```

Publish:

```bash
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

> **TC linking for Jest:** jest-junit does not embed TC IDs in the XML, so linking uses AutomatedTestName matching. Set `sync.markAutomated: true` and ensure the `JEST_JUNIT_CLASSNAME` and `JEST_JUNIT_TITLE` env vars match the `automatedTestName` format stored in the TC (`{fileBasename} > {describe} > {testTitle}`). Set these env vars to align the format:

```bash
JEST_JUNIT_CLASSNAME="{classname}"  # default: suite hierarchy
JEST_JUNIT_TITLE="{title}"          # default: test title
```
### JavaScript / TypeScript — WebdriverIO

WebdriverIO supports JUnit XML via `@wdio/junit-reporter`:

```bash
# Install (if not already present)
npm install --save-dev @wdio/junit-reporter
```

Add to `wdio.conf.ts`:

```ts
reporters: [['junit', { outputDir: './results', outputFileFormat: () => 'junit.xml' }]]
```

Run tests:

```bash
npx wdio run wdio.conf.ts
```

Publish:

```bash
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```
### Gherkin / Cucumber (JS)

Cucumber JS with Selenium captures screenshots/videos as base64 embeddings inside each step. These are extracted automatically and attached to the test result in Azure DevOps.

```bash
# Run with Cucumber JSON reporter (includes step embeddings)
npx cucumber-js --format json:results/cucumber.json

# Publish — TC IDs from @tc:ID tags, screenshots from embeddings
ado-sync publish-test-results --testResult results/cucumber.json
```

TC IDs from `@tc:12345` tags are extracted directly. Screenshots embedded by Selenium/WebDriver hooks are uploaded automatically.
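For reference, decoding such step embeddings from the Cucumber JSON amounts to one base64 decode per embedding. A sketch, assuming the `mime_type`/`data` keys of the Cucumber JSON embedding format:

```python
import base64

def extract_embeddings(scenario):
    """Decode base64 step embeddings into (mime_type, bytes) pairs.
    Illustrative sketch, not ado-sync's extractor."""
    files = []
    for step in scenario.get("steps", []):
        for emb in step.get("embeddings", []):
            files.append((emb["mime_type"], base64.b64decode(emb["data"])))
    return files
```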
### Playwright

Playwright supports two result formats — both extract attachments (screenshots, videos, traces).

**Option A — Playwright JSON reporter** (recommended, includes all attachments):

```bash
# playwright.config.ts:
# reporter: [['json', { outputFile: 'results/playwright.json' }]]
npx playwright test
ado-sync publish-test-results --testResult results/playwright.json
```

Screenshots on failure, videos, and trace files are uploaded automatically from the test-results/ folder referenced in the JSON.

**Option B — JUnit XML reporter** (for CI systems that need JUnit format):

```bash
# playwright.config.ts:
# reporter: [['junit', { outputFile: 'results/junit.xml' }]]
npx playwright test
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

Playwright embeds `[[ATTACHMENT|path]]` markers in `<system-out>` — ado-sync reads these and uploads the referenced files (screenshots, videos).
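Extracting those markers is essentially a one-line regex. A sketch (not ado-sync's parser):

```python
import re

# [[ATTACHMENT|path]] markers as Playwright writes them into <system-out>
ATTACHMENT_RE = re.compile(r"\[\[ATTACHMENT\|(.+?)\]\]")

def attachment_paths(system_out):
    """Collect the file paths referenced by Playwright attachment markers."""
    return ATTACHMENT_RE.findall(system_out)
```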
Using `--attachmentsFolder` for extra files:

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder test-results
```
### TestCafe

TestCafe requires the `testcafe-reporter-junit` package to produce JUnit XML:

```bash
npm install --save-dev testcafe-reporter-junit

# Run tests with JUnit reporter
npx testcafe chrome tests/ --reporter junit:results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses AutomatedTestName matching — set `sync.markAutomated: true` on push. The `automatedTestName` format stored in the TC is `{fileBasename} > {fixture} > {testTitle}`, which matches the `suiteName.testName` format produced by testcafe-reporter-junit.

> **TC IDs are not embedded in JUnit output:** TestCafe's `test.meta('tc', 'N')` metadata is not written into the JUnit XML. Linking relies on AutomatedTestName matching only.
### Cypress

Cypress has a built-in JUnit reporter via Mocha:

```bash
# Run tests with JUnit reporter (outputs one file per spec by default)
npx cypress run \
  --reporter junit \
  --reporter-options "mochaFile=results/junit-[hash].xml"

# Publish all result files
ado-sync publish-test-results \
  --testResult "results/junit-*.xml" \
  --testResultFormat junit
```

TC linking uses AutomatedTestName matching — set `sync.markAutomated: true` on push.

> **JUnit classname format:** by default Cypress sets `classname` to the spec file path and `name` to the test title. Ensure your `automatedTestName` format matches by setting `reporterOptions: { suiteTitleSeparatedBy: ' > ' }` in `cypress.config.js`.
### Detox (React Native)

Detox uses Jest as its runner — use `jest-junit` the same way as Jest:

```bash
npm install --save-dev jest-junit

# Run Detox tests
JEST_JUNIT_OUTPUT_DIR=results JEST_JUNIT_OUTPUT_NAME=junit.xml \
  npx detox test --configuration ios.sim.release

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses AutomatedTestName matching — set `sync.markAutomated: true` on push.
### XCUITest (iOS / macOS)

Export results from Xcode's `.xcresult` bundle to JUnit XML using `xcresulttool`:

```bash
# Run tests and save result bundle
xcodebuild test \
  -project MyApp.xcodeproj \
  -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone 15' \
  -resultBundlePath TestResults.xcresult

# Export JUnit XML
xcrun xcresulttool get --path TestResults.xcresult --format junit > results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses AutomatedTestName matching — set `sync.markAutomated: true` on push. The `automatedTestName` format is `{className}/{funcTestName}` (e.g. `LoginUITests/testValidCredentialsNavigateToInventory`).

> **Attachments:** `xcresulttool` does not embed screenshots in the JUnit export. Use `--attachmentsFolder` to attach screenshot files produced by your test hooks separately.
### Espresso (Android)

Gradle's `connectedAndroidTest` task writes JUnit XML to `app/build/outputs/androidTest-results/connected/`:

```bash
# Run instrumented tests
./gradlew connectedAndroidTest

# Publish results
ado-sync publish-test-results \
  --testResult "app/build/outputs/androidTest-results/connected/TEST-*.xml" \
  --testResultFormat junit
```

TC linking uses AutomatedTestName matching — set `sync.markAutomated: true` on push. The `automatedTestName` format is `{packageName}.{ClassName}.{methodName}`.
### Robot Framework

Robot Framework writes test results to `output.xml` by default. ado-sync auto-detects this format from the `<robot>` root element.

```bash
# Run Robot Framework tests (generates output.xml)
robot --outputdir results tests/

# Publish results — TC IDs from [Tags] tc:N values
ado-sync publish-test-results --testResult results/output.xml
```

TC IDs are extracted directly from the `<tags>` element in output.xml — the same `tc:N` tag written back by `ado-sync push`. No `--testResultFormat` flag is needed; the format is auto-detected.

```bash
# Custom output file location
robot --outputdir results --output my-results.xml tests/
ado-sync publish-test-results --testResult results/my-results.xml
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

> **TC linking for Robot:** output.xml includes `tc:N` tags in `<tags>` elements, which ado-sync uses for direct TC linking. If a test has no `tc:N` tag, it falls back to AutomatedTestName matching using the suite.test name path (e.g. `SuiteName.Test Case Name`).
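Reading those tags back out of output.xml can be sketched with the standard library. Illustrative only (not ado-sync's parser), and it assumes the `<test>/<tags>/<tag>` layout of recent Robot Framework output:

```python
import xml.etree.ElementTree as ET

def robot_tc_ids(output_xml):
    """Map each Robot test name to its tc:N tag, if present."""
    root = ET.fromstring(output_xml)
    ids = {}
    for test in root.iter("test"):
        for tag in test.iter("tag"):
            if tag.text and tag.text.startswith("tc:"):
                ids[test.get("name")] = int(tag.text[3:])
    return ids
```</suite></robot>')
assert robot_tc_ids(xml) == {"Login": 77}
</test>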
### CTRF (Common Test Report Format)

CTRF is a framework-agnostic JSON report format supported by reporters for Playwright, Cypress, Jest, k6, and many others. ado-sync auto-detects CTRF from the `results.tests` array structure.

```bash
# Example: Playwright with CTRF reporter
npm install --save-dev playwright-ctrf-json-reporter
# playwright.config.ts:
# reporter: [['playwright-ctrf-json-reporter', { outputFile: 'results/ctrf.json' }]]
npx playwright test
ado-sync publish-test-results --testResult results/ctrf.json

# Example: Jest with CTRF reporter
npm install --save-dev jest-ctrf-json-reporter
# jest.config.ts:
# reporters: [['jest-ctrf-json-reporter', { outputFile: 'results/ctrf.json' }]]
npx jest
ado-sync publish-test-results --testResult results/ctrf.json
```

TC IDs are extracted from the `tags` array (e.g. `["@tc:1234", "@smoke"]`) or, as a fallback, from `@tc:ID` in the test name. stdout/stderr arrays and `attachments[].path` files are uploaded automatically.

> **Status mapping:** CTRF `passed` → Passed, `failed` → Failed, `skipped`/`pending`/`other` → NotExecuted.
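The tags-first, name-fallback TC ID extraction can be sketched as follows (illustrative only, not ado-sync's extractor):

```python
import re

def ctrf_tc_id(test):
    """TC ID from a CTRF test entry: the tags array first, then an
    @tc:ID marker in the test name. Returns None when neither exists."""
    for tag in test.get("tags", []):
        m = re.fullmatch(r"@tc:(\d+)", tag)
        if m:
            return int(m.group(1))
    m = re.search(r"@tc:(\d+)", test.get("name", ""))
    return int(m.group(1)) if m else None
```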
### Flutter

Flutter can produce JUnit XML via the built-in `--reporter junit` flag or the `flutter_test_junit` package:

```bash
# Option A — built-in reporter (Flutter ≥ 3.7)
flutter test --reporter junit > results/junit.xml

# Option B — flutter_test_junit package
flutter pub add --dev flutter_test_junit
dart run flutter_test_junit:main > results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses AutomatedTestName matching — set `sync.markAutomated: true` on push.
| Framework | Result format | TC ID in file | Attachments uploaded | Live-tested |
|---|---|---|---|---|
| C# MSTest | TRX | ✅ [TestProperty("tc","ID")] | <Output><StdOut> + <Output><ResultFiles> files | ✅ |
| C# NUnit | NUnit XML | ✅ [Property("tc","ID")] | <output> text + <attachments><filePath> files | ✅ |
| C# SpecFlow | TRX | ✅ @tc:ID → [TestProperty] | <Output><StdOut> + <Output><ResultFiles> files | ✅ |
| Java JUnit 4/5 | JUnit XML | ⚠️ optional <property name="tc"> | <system-out>, <system-err> | ✅ |
| Java TestNG | JUnit XML | ⚠️ optional <property name="tc"> | <system-out>, <system-err> | ✅ |
| Python pytest | JUnit XML | ⚠️ optional (conftest.py hook) | <system-out>, <system-err> | ✅ |
| Jest | JUnit XML | ⚠️ optional <property name="tc"> | <system-out>, <system-err> | ✅ |
| WebdriverIO / Jasmine | JUnit XML | ⚠️ optional <property name="tc"> | <system-out>, <system-err> | ✅ |
| Cucumber JS | Cucumber JSON | ✅ @tc:ID tag | step.embeddings[] (base64 screenshots/video) | ✅ |
| Playwright | Playwright JSON | ✅ native annotation: { type: 'tc', description: 'ID' }; or @tc:ID in test title | Files from attachments[].path (screenshots, videos, traces) | ✅ |
| Playwright | JUnit XML | ⚠️ @tc:ID in test title only (no annotation in JUnit format) | [[ATTACHMENT\|path]] referenced files | ✅ |
| TestCafe | JUnit XML | ❌ AutomatedTestName matching only | <system-out>, <system-err> | |
| Cypress | JUnit XML | ❌ AutomatedTestName matching only | <system-out>, <system-err> | |
| Detox | JUnit XML | ❌ AutomatedTestName matching only | <system-out>, <system-err> | |
| XCUITest | JUnit XML | ❌ AutomatedTestName matching only | none (use --attachmentsFolder) | |
| Espresso | JUnit XML | ❌ AutomatedTestName matching only | <system-out>, <system-err> | |
| Flutter | JUnit XML | ❌ AutomatedTestName matching only | <system-out>, <system-err> | |
| Robot Framework | Robot XML (output.xml) | ✅ tc:N in <tags> | — | |
| CTRF (any framework) | CTRF JSON | ✅ tags: ["@tc:ID"] or @tc:ID in name | attachments[].path files + stdout/stderr | |
## Attachments
ado-sync uploads screenshots, videos, and logs from test results to the corresponding Azure DevOps test result entry. Attachments appear in the Azure Test Plans UI under each result.
### What is extracted automatically per format
| Format | Extracted automatically |
|---|---|
| TRX | <Output><StdOut> → console log; <Output><ResultFiles><ResultFile path="..."> → files on disk |
| NUnit XML | <output> → console log; <attachments><attachment><filePath> → files on disk |
| JUnit XML | <system-out> → log; <system-err> → log; [[ATTACHMENT\|path]] → Playwright files |
| Cucumber JSON | step.embeddings[] → base64-encoded screenshots/video |
| Playwright JSON | results[].attachments[].path → files on disk (screenshots, videos, traces) |
| CTRF JSON | tests[].attachments[].path → files on disk; tests[].stdout[] / tests[].stderr[] → console logs |
Note: All file paths are resolved relative to the result file's directory, not the process working directory. This matches how test runners (Playwright, MSTest, NUnit) write relative paths in their output.
> **Attachment types:** Azure DevOps accepts only `GeneralAttachment` (images, videos, binary files) and `ConsoleLog` (text, logs) as attachment type values. ado-sync maps all file types to one of these two automatically.
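That two-bucket mapping can be sketched as follows. The exact extension list ado-sync uses is an assumption here; only the two return values are fixed by the Azure DevOps API:

```python
import os

# Assumed text/log extensions (illustrative, not ado-sync's actual list)
TEXT_EXTS = {".txt", ".log", ".out"}

def attachment_type(filename):
    """Map a file to one of the two attachmentType values ADO accepts."""
    ext = os.path.splitext(filename)[1].lower()
    return "ConsoleLog" if ext in TEXT_EXTS else "GeneralAttachment"
```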
### `--attachmentsFolder` — folder-based attachment upload

For any framework, point ado-sync at a folder containing screenshots and videos:

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder test-results/screenshots
```

Files are matched to individual test results by looking for the test method name in the filename (case-insensitive). For example, `addItemAndCheckout_failed.png` is matched to `com.example.CheckoutTests.addItemAndCheckout`. Unmatched files are attached at the test run level.
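A sketch of that filename matching (the longest-match tiebreak is an assumption, not documented ado-sync behaviour):

```python
def match_attachment(filename, test_names):
    """Case-insensitive 'method name appears in filename' match.
    Returns None for unmatched files (attached at run level instead)."""
    lower = filename.lower()
    hits = [t for t in test_names if t.split(".")[-1].lower() in lower]
    return max(hits, key=len) if hits else None
```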
### Config-based attachment folder

```json
{
  "publishTestResults": {
    "attachments": {
      "folder": "test-results/screenshots",
      "include": ["**/*.png", "**/*.mp4"],
      "matchByTestName": true
    },
    "publishAttachmentsForPassingTests": "files"
  }
}
```
### publishAttachmentsForPassingTests

Controls how attachments are handled for passing tests (failing tests always get all attachments):

| Value | Behaviour |
|---|---|
| `"none"` (default) | No attachments uploaded for passing tests |
| `"files"` | Screenshots and videos uploaded; console logs skipped |
| `"all"` | All attachments, including logs, uploaded for passing tests |
### Framework-specific setup

**C# MSTest** — attach files from test code:

```csharp
TestContext.AddResultFile("screenshots/mytest.png");
```

**C# NUnit** — attach files:

```csharp
TestContext.AddAttachment("screenshots/mytest.png");
```

**Java (JUnit/TestNG)** — capture a Selenium screenshot and write its path to system-out:

```java
// In an @AfterMethod / @After hook:
File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
// Surefire writes <system-out> to JUnit XML — log the path:
System.out.println("Screenshot: " + screenshot.getAbsolutePath());
// Or use --attachmentsFolder to pick up the file directly
```

**Python pytest** — capture a screenshot via conftest.py:

```python
# conftest.py
@pytest.fixture(autouse=True)
def screenshot_on_failure(request, driver):
    yield
    if request.node.rep_call.failed:
        driver.save_screenshot(f"screenshots/{request.node.name}.png")
```

Then publish with:

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder screenshots
```

**Cucumber JS** — embed a screenshot in a step hook:

```js
// hooks.js
After(async function({ pickle, result }) {
  if (result.status === Status.FAILED) {
    const screenshot = await driver.takeScreenshot();
    this.attach(Buffer.from(screenshot, 'base64'), 'image/png');
  }
});
```

Screenshots are embedded as base64 in the Cucumber JSON and uploaded automatically.

**Playwright** — screenshots and videos are automatic when configured in playwright.config.ts:

```ts
use: {
  screenshot: 'only-on-failure',
  video: 'retain-on-failure',
  trace: 'on-first-retry',
}
```

Use the Playwright JSON reporter — attachments are uploaded automatically.
## Outcome mapping

| Source outcome | Azure outcome |
|---|---|
| `passed` / `pass` / `success` | Passed |
| `failed` / `fail` / `failure` / `error` | Failed |
| `skipped` / `ignored` / `pending` / `notExecuted` | NotExecuted |
| `inconclusive` | Inconclusive (or override with `treatInconclusiveAs`) |
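The mapping table as a function. A sketch: the fallback for unrecognised outcome strings is an assumption, not documented behaviour:

```python
OUTCOME_MAP = {
    **dict.fromkeys(["passed", "pass", "success"], "Passed"),
    **dict.fromkeys(["failed", "fail", "failure", "error"], "Failed"),
    **dict.fromkeys(["skipped", "ignored", "pending", "notexecuted"], "NotExecuted"),
}

def map_outcome(raw, treat_inconclusive_as=None):
    """treat_inconclusive_as mirrors the treatInconclusiveAs config override.
    Bucketing unknown outcomes as NotExecuted is an assumption."""
    key = raw.lower()
    if key == "inconclusive":
        return treat_inconclusive_as or "Inconclusive"
    return OUTCOME_MAP.get(key, "NotExecuted")
```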
## Configuration

Results can also be configured in the config file under `publishTestResults`:

```json
{
  "publishTestResults": {
    "testResult": {
      "sources": [
        { "value": "results/unit.trx", "format": "trx" },
        { "value": "results/integration.xml", "format": "junit" }
      ]
    },
    "treatInconclusiveAs": "Failed",
    "testRunSettings": {
      "name": "My CI Run",
      "comment": "Automated sync run",
      "runType": "Automated"
    },
    "testResultSettings": {
      "comment": "Published by ado-sync"
    },
    "testConfiguration": {
      "name": "Default"
    }
  }
}
```
### publishTestResults fields

| Field | Description |
|---|---|
| `testResult.sources` | Array of `{ value, format }` objects. `value` is a path relative to the config dir. |
| `treatInconclusiveAs` | Override the Inconclusive outcome, e.g. `"Failed"` or `"NotExecuted"`. |
| `flakyTestOutcome` | How to handle flaky tests: `"lastAttemptOutcome"` (default) · `"firstAttemptOutcome"` · `"worstOutcome"`. |
| `testConfiguration.name` | Name of the Azure test configuration to associate. |
| `testConfiguration.id` | ID of the Azure test configuration. |
| `testSuite.name` | Name of the Azure test suite to publish against. Requires every result to resolve to a test case ID. |
| `testSuite.id` | ID of the Azure test suite to publish against. |
| `testSuite.testPlan` | Optional test plan name or ID used when resolving the target suite. Defaults to `testPlan.id` from the main config. |
| `testRunSettings.name` | Name for the Test Run. |
| `testRunSettings.comment` | Comment attached to the Test Run. |
| `testRunSettings.runType` | `"Automated"` (default) · `"Manual"`. |
| `testResultSettings.comment` | Comment applied to every test result. |
| `publishAttachmentsForPassingTests` | `"none"` (default) · `"files"` · `"all"`. |
When testSuite is configured, ado-sync creates a planned run and binds each published result to the matching test point in that suite. If a suite contains multiple configurations for the same test case, set testConfiguration.id or testConfiguration.name so the target point can be resolved unambiguously.
## Creating issues on failure

`--create-issues-on-failure` automatically files a GitHub Issue or ADO Bug for each failed test after the run is published. Multiple guards prevent flooding your tracker when the environment is the problem rather than individual tests.

### Guard logic (applied in order)

```
failures > threshold% of total?
├─ YES → 1 environment-failure issue, stop
└─ NO → cluster by error signature
   ├─ cluster size > 1 → 1 issue per cluster (lists affected test names)
   └─ cluster size = 1 → 1 issue per TC (up to maxIssues cap)
      └─ cap hit? → 1 overflow summary issue
```
| Guard | Default | Description |
|---|---|---|
| Failure-rate threshold | 20% | Above this, one env-failure issue is filed instead of per-test |
| Error clustering | enabled | Tests with the same error message are grouped into one issue |
| Hard cap | 50 | No more than this many issues per run; one overflow summary when exceeded |
| Dedup | enabled | Skip if an open issue already exists for the same TC (GitHub: by tc:ID label; ADO: by title) |
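The guard cascade can be sketched as follows. Illustrative only, not ado-sync source; `failures` is assumed to be a list of `(test_name, error_signature)` pairs, and dedup against already-open issues is omitted:

```python
from collections import Counter

def plan_issues(failures, total, threshold=20, max_issues=50):
    """Return (label, affected_tests) pairs per the guard cascade."""
    # Guard 1: failure-rate threshold -> single environment-failure issue
    if total and 100 * len(failures) / total > threshold:
        return [("environment-failure", [t for t, _ in failures])]
    # Guard 2: cluster by error signature
    issues = []
    clusters = Counter(sig for _, sig in failures)
    for sig, size in clusters.items():
        tests = [t for t, s in failures if s == sig]
        if size > 1:
            issues.append((sig, tests))              # one issue per cluster
        else:
            issues.extend((sig, [t]) for t in tests)  # one issue per TC
    # Guard 3: hard cap with one overflow summary
    if len(issues) > max_issues:
        issues = issues[:max_issues] + [("overflow-summary", [])]
    return issues
```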
### GitHub Issues (recommended)

```bash
ado-sync publish-test-results \
  --testResult results/ctrf.json \
  --create-issues-on-failure \
  --github-repo myorg/myrepo \
  --github-token $GITHUB_TOKEN
```

Each issue is labelled `test-failure` and `tc:{ID}` (when a TC ID is available). The issue body contains the error message, stack trace, ADO TC link, and run URL — everything a healer agent needs to propose a fix PR.
### ADO Bugs

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --create-issues-on-failure \
  --issue-provider ado
```

ADO Bugs are created as Bug work items in the same project. The Repro Steps field is populated with the error details. When a TC ID is known, a TestedBy relation is added linking the Bug to the Test Case.
### Config-based setup

```json
{
  "publishTestResults": {
    "createIssuesOnFailure": {
      "provider": "github",
      "repo": "myorg/myrepo",
      "token": "$GITHUB_TOKEN",
      "labels": ["test-failure", "automated"],
      "threshold": 20,
      "maxIssues": 50,
      "clusterByError": true,
      "dedupByTestCase": true
    }
  }
}
```

CLI flags override the config values when both are present.
### MCP tool: create_issue

The `create_issue` MCP tool lets healer agents file a single issue directly:

```js
create_issue({
  title: "[FAILED] Login with valid credentials",
  body: "Error: Expected 200 but got 401\n\nStack: ...",
  provider: "github",
  githubRepo: "myorg/myrepo",
  githubToken: "$GITHUB_TOKEN",
  testCaseId: 1234
})
```

It returns the issue URL immediately, which the agent can embed in its fix PR.
## Output

```
ado-sync publish-test-results

Config: ado-sync.json
Total results: 42
  38 passed   3 failed   1 other

Run ID: 9876
URL: https://dev.azure.com/my-org/MyProject/_testManagement/runs?runId=9876
```