# Contributing

Contributions welcome. This page covers everything from first setup to opening a PR that gets merged fast.
## Dev Environment Setup

### 1. Clone and install (editable)

```bash
git clone https://github.com/vishal8shah/code-review-council
cd code-review-council
pip install -e .
```

An editable install means your local changes take effect immediately; no reinstall is needed.
### 2. Verify

All tests should pass on a clean checkout. If they don't, open an issue before doing anything else.
### 3. API keys for integration tests

Unit tests are fully mocked and run without any API keys. Integration tests require at least one provider key.

**Use cheap models for local integration testing.** Set reviewer models to a cheaper provider/model in `.council.toml` for local integration runs. The generated GitHub workflows are Gemini-pinned, but local runs can use any LiteLLM-supported provider you have budget for.
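For example, a local-only override might look like the sketch below. The model string is illustrative; substitute any LiteLLM-supported provider/model you have access to.

```toml
# .council.toml — local integration-test override (illustrative model name)
[[reviewers]]
id = "secops"
name = "SecOps"
enabled = true
model = "gemini/gemini-2.0-flash"   # cheaper than the CI-pinned model
prompt = "prompts/secops.md"
```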
## Tests

### Running tests

```bash
# All tests (unit only, no API calls)
pytest -q

# With integration tests (requires API keys)
pytest -q --integration

# Specific test file
pytest tests/test_chair.py -v

# With coverage
pytest --cov=council --cov-report=term-missing
```
### Test categories

| Category | Location | Requires Keys? | What's Covered |
|---|---|---|---|
| Unit | `tests/unit/` | No | Pipeline stages, parsing, validation logic |
| Integration | `tests/integration/` | Yes | End-to-end review runs against real models |
| Fixtures | `tests/fixtures/` | No | Sample diffs, ReviewPacks, mock verdicts |
### Test expectations for PRs
- New features must include unit tests
- Bug fixes must include a regression test that fails before the fix and passes after
- New reviewer personas must include at least one unit test covering output schema compliance
- Coverage should not decrease on the changed module
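As a sketch of the regression-test shape described above (the function and field names here are hypothetical, not the project's real API):

```python
# Illustrative regression test: reproduce the buggy input, then assert
# the fixed behaviour. Names are hypothetical stand-ins.
def dismiss_speculative_findings(findings):
    # Behaviour under test: findings without an evidence chain are dropped.
    return [f for f in findings if f.get("evidence")]

def test_speculative_finding_is_dismissed():
    findings = [
        {"summary": "possible SQLi", "evidence": None},            # speculative
        {"summary": "hardcoded key", "evidence": "config.py:12"},  # evidenced
    ]
    kept = dismiss_speculative_findings(findings)
    assert len(kept) == 1
    assert kept[0]["summary"] == "hardcoded key"

test_speculative_finding_is_dismissed()
```

Before the fix, a test like this should fail; after the fix, it should pass and stay in the suite as a regression guard.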
## Adding a Reviewer Persona

This is the most common contribution type. Here's the full walkthrough.

### Step 1 – Write the prompt

Create a prompt file in `prompts/`:

```text
prompts/
  secops.md      ← existing
  qa.md          ← existing
  architect.md   ← existing
  docs.md        ← existing
  yourpersona.md ← new
```
Your prompt must:

- Define the reviewer's domain clearly
- Require file + line evidence for every finding
- Reject speculative findings (no pattern-match-only blockers)
- Produce output compatible with the `ReviewerFinding` schema
### Step 2 – Register in .council.toml

```toml
[[reviewers]]
id = "yourpersona"
name = "Your Persona"
enabled = true
model = "gemini/gemini-3-pro-preview"
prompt = "prompts/yourpersona.md"
```
### Step 3 – Verify schema compatibility

The reviewer output must deserialise cleanly into `ReviewerFinding`. Run:
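The real `ReviewerFinding` model lives in the codebase; as an illustrative sketch of what a schema check does (the field names below are hypothetical):

```python
import json
from dataclasses import dataclass

# Hypothetical stand-in for the real ReviewerFinding model; the actual
# field names live in the council codebase.
@dataclass
class ReviewerFinding:
    file: str
    line: int
    severity: str
    summary: str

def parse_finding(raw: str) -> ReviewerFinding:
    """Deserialise one finding, failing loudly on missing keys."""
    data = json.loads(raw)
    return ReviewerFinding(
        file=data["file"],
        line=int(data["line"]),
        severity=data["severity"],
        summary=data["summary"],
    )

sample = '{"file": "app.py", "line": 42, "severity": "high", "summary": "Unvalidated input"}'
finding = parse_finding(sample)
assert finding.line == 42
```

A missing or misnamed key raises immediately, which is exactly what this verification step is meant to catch before CI does.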
### Step 4 – Add tests

- One unit test with a mock diff → mock LLM response → assert parsed finding structure
- One test asserting the reviewer rejects a finding that has no evidence chain
- One test for the degraded mode path (reviewer returns malformed output)
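The degraded-mode test might look like the following sketch (the handler name and return shape are hypothetical; the project's real parsing lives in the council codebase):

```python
import json

# Hypothetical degraded-mode handler: malformed reviewer output should
# yield an empty finding list plus a degraded flag, not an exception.
def parse_reviewer_output(raw):
    try:
        findings = json.loads(raw)
        if not isinstance(findings, list):
            raise ValueError("expected a JSON list of findings")
        return findings, False
    except (json.JSONDecodeError, ValueError):
        return [], True  # degraded mode: no findings, flag set

def test_malformed_output_degrades():
    findings, degraded = parse_reviewer_output("not json at all")
    assert findings == []
    assert degraded is True

test_malformed_output_degrades()
```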
### Step 5 – Open a PR

See the PR standards below.

**Naming convention.** Reviewer IDs in `.council.toml` should be lowercase, single-word (e.g. `secops`, `qa`, `perf`, `accessibility`). The Chair refers to reviewers by their ID in the verdict output.
## Opening a PR

### PR checklist

- [ ] Editable install works cleanly (`pip install -e .`)
- [ ] All unit tests pass (`pytest -q`)
- [ ] New behaviour has test coverage
- [ ] `.council.toml` changes are documented in the PR description
- [ ] Prompt changes include a before/after example in the PR description
- [ ] No hardcoded API keys, tokens, or secrets anywhere in the diff
### What makes a PR get merged fast
| Signal | Why it matters |
|---|---|
| Clear scope | Reviewers understand what changed and why in 30 seconds |
| Test evidence | No guessing whether the fix actually works |
| Small diff | Easier to review, faster to merge |
| Linked issue | Context is captured, not buried in Slack |
| No scope creep | One PR, one concern |
### PR title format

Titles follow the Conventional Commits style, `type(scope): description`. Examples:

```text
feat(reviewer): add performance reviewer persona
fix(chair): dismiss speculative injection findings without exploit chain
docs(faq): add cost reduction lever table
test(gate-zero): add regression for empty diff false positive
```
## Opening an Issue

### Bug reports

Include:

- What you ran (exact command)
- What you expected
- What happened (paste the relevant output or `council-report.json` fields)
- Your `.council.toml` reviewer/model config (redact keys)
### Feature requests

Describe the use case first, not the implementation. A good feature request looks like:

"When reviewing a large PR, I want Council to flag which files were skipped due to token budget so I know what wasn't covered."

Not:

"Add a `--show-skipped` flag."
### Known good issue labels

| Label | Meaning |
|---|---|
| `good first issue` | Self-contained, well-scoped, good for new contributors |
| `prompt-tuning` | Improvements to reviewer or Chair prompts |
| `false-positive` | A finding that should have been dismissed |
| `false-negative` | A real issue Council missed |