All test reports are stored in docs/
for easy access and web publishing.
Open in your browser:
# On Linux/WSL
xdg-open docs/test-results.html
# On macOS
open docs/test-results.html
# Or simply copy the path and open in browser
echo "file://$PWD/docs/test-results.html"
# Or view live on GitHub Pages
echo "https://dzivkovi.github.io/neo4j-for-surveillance-poc-3/test-results.html"
Features:
View the SVG file:
# Open in default image viewer
xdg-open docs/benchmark-histogram-eval-queries.svg
# Or in browser
echo "file://$PWD/docs/benchmark-histogram-eval-queries.svg"
# Or view live on GitHub Pages
echo "https://dzivkovi.github.io/neo4j-for-surveillance-poc-3/benchmark-histogram-eval-queries.svg"
Shows:
Generate fresh reports:
# Set correct environment
export DATASET=bigdata
# Generate test report and benchmark histogram (web-ready)
pytest tests/test_eval_queries.py::test_eval_functional \
--html=docs/test-results.html --self-contained-html
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only --benchmark-histogram=docs/benchmark-histogram
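As a quick sanity check (file names assumed from the commands above; pytest-benchmark appends a per-group suffix to the histogram prefix), confirm the web-ready artifacts actually landed in docs/:
# Verify the generated artifacts exist and are fresh
ls -l docs/test-results.html docs/benchmark-histogram-*.svg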
# Basic self-contained HTML report
pytest tests/test_eval_queries.py --html=report.html --self-contained-html
# HTML report with custom styling
pytest tests/test_eval_queries.py --html=report.html --css=custom.css
# Machine-readable JSON report (pytest-json-report plugin)
pytest tests/test_eval_queries.py --json-report --json-report-file=report.json
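For a quick summary without opening the JSON file, a small sketch using jq (assuming the default pytest-json-report schema, where totals live under a top-level summary object):
# Print pass/fail totals from the JSON report
jq '.summary' report.json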
# Detailed benchmark statistics, sorted by mean
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only \
--benchmark-columns=min,max,mean,stddev,median,iqr,outliers,rounds \
--benchmark-sort=mean
# Run 1
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only \
--benchmark-json=benchmark-1.json
# Run 2 (after changes)
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only \
--benchmark-json=benchmark-2.json
# Compare
pytest-benchmark compare benchmark-1.json benchmark-2.json
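For CI, pytest-benchmark can also do the comparison itself and fail the run on a regression. A sketch, assuming a baseline was saved earlier with --benchmark-save or --benchmark-autosave (referenced here by its run ID, e.g. 0001):
# Fail if mean runtime regresses by more than 10% vs. the saved baseline
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only \
--benchmark-compare=0001 \
--benchmark-compare-fail=mean:10%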
# Save benchmark results as JSON
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only \
--benchmark-json=benchmark.json
# Then convert to CSV
pytest-benchmark compare benchmark.json --csv=results.csv
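To eyeball the CSV directly in the terminal (plain coreutils, nothing project-specific):
# Pretty-print the CSV as aligned columns
column -t -s, results.csv | head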
# Stop at the first failure (add --lf to re-run only previously failed tests)
pytest tests/test_eval_queries.py --html=failures.html --self-contained-html -x
# By marker
pytest tests/test_eval_queries.py -m "not slow" --html=fast-tests.html
# In your test (pytest-html's extra fixture collects report attachments)
from pytest_html import extras

def test_example(extra):
    extra.append(extras.text("Custom text"))
    extra.append(extras.url("https://example.com"))
    extra.append(extras.image("screenshot.png"))
# Cap the time spent benchmarking each test at 2 seconds
# (note: --benchmark-max-time bounds the measurement loop; it does not fail slow tests)
pytest tests/test_eval_queries.py --benchmark-only \
--benchmark-max-time=2.0
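If the goal really is to fail any test that exceeds a wall-clock limit, the pytest-timeout plugin is the usual tool (an extra dependency, not assumed to be installed here):
# Hard limit: abort any test that runs longer than 2 seconds
pip install pytest-timeout
pytest tests/test_eval_queries.py --timeout=2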
# Save with timestamp
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only \
--benchmark-save=run-$(date +%Y%m%d-%H%M%S)
# List all saved benchmarks
pytest-benchmark list
# Compare saved runs (pass run IDs or globs to narrow the comparison)
pytest-benchmark compare
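If you'd rather not name each run by hand, --benchmark-autosave (also noted in the flags list below) stores every run under .benchmarks/ automatically, and the same compare workflow applies:
# Auto-save this run, then compare against earlier saved runs
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only --benchmark-autosave
pytest-benchmark compare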
⚠️ Critical: After any test changes (adding, removing, or moving tests), update all report files:
# 1. Update documentation counts
python scripts/update_counts.py
# 2. Regenerate test results HTML
pytest tests/test_eval_queries.py::test_eval_functional \
--html=docs/test-results.html --self-contained-html
# 3. Regenerate performance histogram (REQUIRED for GitHub Pages)
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only --benchmark-histogram=docs/benchmark-histogram
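To make this three-step refresh harder to forget, you could wrap it in a small script (scripts/refresh-reports.sh is a hypothetical name, not an existing file in the repo):
#!/bin/bash
# scripts/refresh-reports.sh - regenerate all published report artifacts
set -euo pipefail
python scripts/update_counts.py
pytest tests/test_eval_queries.py::test_eval_functional \
--html=docs/test-results.html --self-contained-html
pytest tests/test_eval_queries.py::test_eval_performance \
--benchmark-only --benchmark-histogram=docs/benchmark-histogram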
Why This Matters:
Useful flags:
- --json-report for machine-readable output
- --html with --tb=short for concise tracebacks
- --benchmark-autosave to automatically save results
- --benchmark-histogram for visual performance graphs

Create a simple dashboard:
#!/bin/bash
# run-dashboard.sh
# Run tests
pytest tests/test_eval_queries.py \
--html=dashboard.html \
--self-contained-html \
--benchmark-only \
--benchmark-histogram=perf
# Open in browser
xdg-open dashboard.html
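The xdg-open call is Linux-only; a small cross-platform tweak, following the same Linux/macOS split shown at the top of this page:
# Open the dashboard with whichever opener the OS provides
if command -v xdg-open >/dev/null; then
  xdg-open dashboard.html
else
  open dashboard.html
fi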