Pull request reviews are bottlenecks. A diff lands, sits in a queue for hours, and the author context-switches back and forth. You can automate a meaningful first pass with Claude — not to replace human review, but to catch obvious issues instantly and draft the boilerplate so reviewers focus on architecture.
This guide shows four production-ready GitHub Actions workflows: automated PR review, PR description generation, test failure summaries, and release note generation.
## How it works
GitHub Actions run a Node.js script that:
- Reads the git diff or relevant context
- Sends it to Claude via the Anthropic SDK
- Posts the result as a PR comment using the GitHub API
No Claude Code CLI required — just the @anthropic-ai/sdk npm package and a secret.
## Setup
Add your Anthropic API key to your repository's secrets:
Settings → Secrets and variables → Actions → New repository secret
Name: ANTHROPIC_API_KEY
Install the SDK once in a shared step or use npx inline. For a dedicated workflow, commit a minimal package.json in .github/scripts/:
```json
{
  "type": "module",
  "dependencies": {
    "@anthropic-ai/sdk": "^0.39.0"
  }
}
```

## Workflow 1: Automated PR review
Triggers on every PR open or push. Sends the diff to Claude and posts a review comment.
```yaml
# .github/workflows/pr-review.yml
name: Claude PR Review
on:
  pull_request:
    types: [opened, synchronize]
permissions:
  pull-requests: write
  contents: read
jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install SDK
        run: npm install @anthropic-ai/sdk
      - name: Run Claude review
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          REPO: ${{ github.repository }}
          BASE_SHA: ${{ github.event.pull_request.base.sha }}
          HEAD_SHA: ${{ github.event.pull_request.head.sha }}
        run: node .github/scripts/pr-review.mjs
```

The script:

```js
// .github/scripts/pr-review.mjs
import Anthropic from '@anthropic-ai/sdk'
import { execSync } from 'child_process'

const client = new Anthropic()

// Get the diff — limit to 8000 chars to control cost
const diff = execSync(
  `git diff ${process.env.BASE_SHA} ${process.env.HEAD_SHA} -- '*.ts' '*.tsx' '*.js' '*.jsx'`
).toString().slice(0, 8000)

if (!diff.trim()) {
  console.log('No TS/JS changes to review.')
  process.exit(0)
}

const message = await client.messages.create({
  model: 'claude-haiku-4-5-20251001', // fast + cheap for CI
  max_tokens: 1024,
  messages: [
    {
      role: 'user',
      content: `You are a senior TypeScript engineer reviewing a pull request.
Review the following diff and provide concise feedback.
Focus on: bugs, type safety issues, missing error handling, security concerns, and obvious performance problems.
Skip style/formatting comments — we have linters for that.
Be direct. Use markdown bullet points. Max 10 items.
\`\`\`diff
${diff}
\`\`\``,
    },
  ],
})

const review = message.content[0].text

// Post as PR comment via GitHub REST API
const response = await fetch(
  `https://api.github.com/repos/${process.env.REPO}/issues/${process.env.PR_NUMBER}/comments`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      body: `## Claude Code Review\n\n${review}\n\n---\n*Automated review — always verify before merging.*`,
    }),
  }
)

if (!response.ok) {
  console.error('Failed to post comment:', await response.text())
  process.exit(1)
}
console.log('Review posted.')
```

Key decisions:
- claude-haiku-4-5-20251001 for CI — it's fast (~2s) and costs a fraction of Sonnet. Use Sonnet for deeper security audits.
- Truncate the diff at 8000 characters to avoid runaway costs on large PRs. A well-scoped PR rarely exceeds this.
- Filter to TS/JS files — reviewing lock-file changes is useless.
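A hard slice(0, 8000) can cut the diff mid-hunk, leaving the model a dangling fragment to review. A sketch of a gentler variant that truncates at the last complete hunk boundary instead (truncateDiff is a hypothetical helper, not part of the scripts above):

```javascript
// Truncate a unified diff at the last complete hunk that fits in maxChars.
// Hunk headers look like "@@ -1,4 +1,6 @@"; we cut just before the last
// partial hunk. Falls back to a hard slice if no boundary is in range.
function truncateDiff(diff, maxChars = 8000) {
  if (diff.length <= maxChars) return diff
  const head = diff.slice(0, maxChars)
  const lastHunk = head.lastIndexOf('\n@@')
  return lastHunk > 0 ? head.slice(0, lastHunk) : head
}
```

Dropping a whole trailing hunk costs a little coverage but keeps every hunk the model sees syntactically complete.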
## Workflow 2: Auto-generate PR descriptions
Developers often write minimal PR descriptions. This workflow generates a structured description from the diff and the commit messages.
```yaml
# .github/workflows/pr-description.yml
name: Generate PR Description
on:
  pull_request:
    types: [opened]
permissions:
  pull-requests: write
  contents: read
jobs:
  describe:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install SDK
        run: npm install @anthropic-ai/sdk
      - name: Generate description
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          PR_TITLE: ${{ github.event.pull_request.title }}
          REPO: ${{ github.repository }}
          BASE_SHA: ${{ github.event.pull_request.base.sha }}
          HEAD_SHA: ${{ github.event.pull_request.head.sha }}
        run: node .github/scripts/pr-description.mjs
```

```js
// .github/scripts/pr-description.mjs
import Anthropic from '@anthropic-ai/sdk'
import { execSync } from 'child_process'

const client = new Anthropic()

const diff = execSync(
  `git diff ${process.env.BASE_SHA} ${process.env.HEAD_SHA}`
).toString().slice(0, 6000)

const commits = execSync(
  `git log ${process.env.BASE_SHA}..${process.env.HEAD_SHA} --pretty=format:"%s"`
).toString()

const message = await client.messages.create({
  model: 'claude-haiku-4-5-20251001',
  max_tokens: 800,
  messages: [
    {
      role: 'user',
      content: `Generate a concise GitHub pull request description for this PR.
PR title: ${process.env.PR_TITLE}
Commit messages:
${commits}
Diff (truncated):
\`\`\`diff
${diff}
\`\`\`
Format the description with these sections:
## What changed
(2-4 bullets)
## Why
(1-2 sentences)
## How to test
(numbered steps if applicable)
Keep it factual. Do not pad with filler phrases.`,
    },
  ],
})

const body = message.content[0].text

// Update the PR body via GitHub API
await fetch(
  `https://api.github.com/repos/${process.env.REPO}/pulls/${process.env.PR_NUMBER}`,
  {
    method: 'PATCH',
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ body }),
  }
)
console.log('PR description updated.')
```

This workflow only runs on opened (not synchronize), so it doesn't overwrite manual edits on subsequent pushes.
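Even on opened, the event fires after GitHub applies any PR template, so the body may already contain text the author wrote. A guard you could run before the PATCH, assuming you fetch the current body first (shouldGenerateDescription and its 50-character threshold are illustrative assumptions, not part of the script above):

```javascript
// Skip generation when the author already wrote a real description.
// HTML comments (common in PR templates) don't count as content.
// The 50-char threshold is an arbitrary assumption; tune it for your templates.
function shouldGenerateDescription(existingBody) {
  if (!existingBody) return true
  const meaningful = existingBody.replace(/<!--[\s\S]*?-->/g, '').trim()
  return meaningful.length < 50
}
```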
## Workflow 3: Summarize test failures
When tests fail in CI, Claude reads the raw output and posts a plain-English summary — what broke, what the error likely means, and where to look.
```yaml
# .github/workflows/test-summary.yml
name: Tests with Claude Summary
on:
  push:
    branches: [main, develop]
  pull_request:
permissions:
  pull-requests: write
  contents: read
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - name: Run tests
        id: tests
        # Explicit bash enables pipefail, so the pipe through tee
        # preserves npm test's exit code and the step actually fails
        shell: bash
        run: npm test -- --reporter=verbose 2>&1 | tee test-output.txt
        continue-on-error: true
      - name: Install SDK
        if: steps.tests.outcome == 'failure'
        run: npm install @anthropic-ai/sdk
      - name: Summarize failures
        if: steps.tests.outcome == 'failure'
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR_NUMBER: ${{ github.event.pull_request.number }}
          REPO: ${{ github.repository }}
        run: node .github/scripts/test-summary.mjs
      - name: Fail the job
        if: steps.tests.outcome == 'failure'
        run: exit 1
```

```js
// .github/scripts/test-summary.mjs
import Anthropic from '@anthropic-ai/sdk'
import { readFileSync } from 'fs'

const client = new Anthropic()

const testOutput = readFileSync('test-output.txt', 'utf-8').slice(-5000) // last 5000 chars

const message = await client.messages.create({
  model: 'claude-haiku-4-5-20251001',
  max_tokens: 600,
  messages: [
    {
      role: 'user',
      content: `A CI test run just failed. Summarize what broke in plain English for a developer.
Include:
- Which tests failed (test names)
- What the error messages mean in plain terms
- The most likely root cause
- What file or function to look at first
Keep it under 200 words. No markdown headers needed.
Test output:
${testOutput}`,
    },
  ],
})

const summary = message.content[0].text

if (process.env.PR_NUMBER) {
  await fetch(
    `https://api.github.com/repos/${process.env.REPO}/issues/${process.env.PR_NUMBER}/comments`,
    {
      method: 'POST',
      headers: {
        Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        body: `## Test Failure Summary\n\n${summary}`,
      }),
    }
  )
}
console.log('Summary posted.')
```

## Workflow 4: Release notes from commits
When you push a tag matching v*, this workflow generates a changelog from the commits since the previous tag and creates a GitHub release.
```yaml
# .github/workflows/release-notes.yml
name: Generate Release Notes
on:
  push:
    tags:
      - 'v*'
permissions:
  contents: write
jobs:
  release-notes:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - name: Install SDK
        run: npm install @anthropic-ai/sdk
      - name: Generate and create release
        env:
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          REPO: ${{ github.repository }}
          TAG: ${{ github.ref_name }}
        run: node .github/scripts/release-notes.mjs
```

```js
// .github/scripts/release-notes.mjs
import Anthropic from '@anthropic-ai/sdk'
import { execSync } from 'child_process'

const client = new Anthropic()

// Get all commits since the previous tag
const previousTag = execSync('git tag --sort=-creatordate | sed -n 2p').toString().trim()
const range = previousTag ? `${previousTag}..HEAD` : 'HEAD'
const commits = execSync(
  `git log ${range} --pretty=format:"%h %s" --no-merges`
).toString()

if (!commits.trim()) {
  console.log('No commits to summarize.')
  process.exit(0)
}

const message = await client.messages.create({
  model: 'claude-sonnet-4-6', // better quality for public release notes
  max_tokens: 1000,
  messages: [
    {
      role: 'user',
      content: `Generate user-facing release notes for version ${process.env.TAG}.
Commits:
${commits}
Format:
## What's new
(features, grouped by area)
## Bug fixes
(only if applicable)
## Breaking changes
(only if applicable, be explicit)
Write for developers consuming this library/app. Skip internal refactors and chore commits. Use present tense ("Add", "Fix", "Remove").`,
    },
  ],
})

const notes = message.content[0].text

// Create a GitHub release
await fetch(
  `https://api.github.com/repos/${process.env.REPO}/releases`,
  {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      tag_name: process.env.TAG,
      name: process.env.TAG,
      body: notes,
      draft: false,
      prerelease: process.env.TAG.includes('-'),
    }),
  }
)
console.log(`Release ${process.env.TAG} created.`)
```

Note this one uses claude-sonnet-4-6 — release notes are public-facing and worth the extra quality and cost.
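The sed one-liner that finds the previous tag is the fiddliest part of this script. If you'd rather keep that logic in JS where it's unit-testable, a sketch (previousTag here is a hypothetical replacement for the sed call, operating on the raw stdout of git tag --sort=-creatordate):

```javascript
// Given the stdout of `git tag --sort=-creatordate` (newest first),
// return the tag before the most recent one, or null when there is none.
function previousTag(tagListOutput) {
  const tags = tagListOutput.split('\n').map((t) => t.trim()).filter(Boolean)
  return tags.length >= 2 ? tags[1] : null
}
```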
## Cost management
| Workflow | Model | Avg cost/run |
|---|---|---|
| PR review | Haiku | ~$0.001 |
| PR description | Haiku | ~$0.001 |
| Test summary | Haiku | ~$0.001 |
| Release notes | Sonnet | ~$0.01 |
For a team with 50 PRs/week, the PR review workflow costs roughly $0.05–0.10/week. Use Haiku everywhere you can and reserve Sonnet for the workflows where quality matters — release notes, security audits, or architecture reviews.
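These estimates follow from simple arithmetic: roughly 4 characters per token, multiplied by the per-million-token price. A sketch of the calculation (the prices in the example are illustrative assumptions; check current pricing before relying on the numbers):

```javascript
// Rough per-run cost: ~4 chars per token for English text and code.
// pricePerMTokIn / pricePerMTokOut are USD per million tokens; the values
// in the usage example below are illustrative assumptions, not list prices.
function estimateCostUSD(inputChars, outputTokens, pricePerMTokIn, pricePerMTokOut) {
  const inputTokens = inputChars / 4
  return (inputTokens * pricePerMTokIn + outputTokens * pricePerMTokOut) / 1e6
}
```

For example, an 8000-character diff (~2000 input tokens) with a ~500-token review at an assumed $1/MTok in and $5/MTok out comes to about $0.0045 per run, the same order of magnitude as the table.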
To cap cost exposure, always truncate your diff:
```js
const diff = execSync('git diff ...').toString().slice(0, 8000)
```

You can also skip the workflow entirely for trivial changes:
```yaml
- name: Check diff size
  id: diff-check
  run: |
    SIZE=$(git diff $BASE_SHA $HEAD_SHA | wc -c)
    echo "size=$SIZE" >> $GITHUB_OUTPUT
- name: Run Claude review
  if: steps.diff-check.outputs.size > 200
  run: node .github/scripts/pr-review.mjs
```

## Avoiding duplicate comments
If you run the PR review on synchronize, you'll pile up comments on every push. Two approaches:
Delete the previous comment before posting a new one:
```js
// Find and delete existing Claude review comments
const commentsRes = await fetch(
  `https://api.github.com/repos/${process.env.REPO}/issues/${process.env.PR_NUMBER}/comments`,
  { headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` } }
)
const comments = await commentsRes.json()

for (const comment of comments) {
  if (comment.body.startsWith('## Claude Code Review')) {
    await fetch(
      `https://api.github.com/repos/${process.env.REPO}/issues/comments/${comment.id}`,
      {
        method: 'DELETE',
        headers: { Authorization: `Bearer ${process.env.GITHUB_TOKEN}` },
      }
    )
  }
}
```

Or only run on opened and skip synchronize entirely for the description/review workflows.
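A third option is to update the existing comment in place (PATCH to /repos/{owner}/{repo}/issues/comments/{id}) instead of deleting and reposting, which keeps the comment's position in the thread. Either way you need to find the old comment first; that matching logic is pure and easy to test (findPreviousReview is a hypothetical helper, not part of the scripts above):

```javascript
// Return the id of the bot's earlier review comment, or null if none exists.
// Matches on the fixed heading the review script always emits.
function findPreviousReview(comments, marker = '## Claude Code Review') {
  const match = comments.find((c) => c.body && c.body.startsWith(marker))
  return match ? match.id : null
}
```

With the id in hand, a PATCH with the new body replaces the old review; if findPreviousReview returns null, fall back to POSTing a fresh comment.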
## Using Claude Code CLI in Actions (alternative)
If you prefer the Claude Code CLI directly, it supports non-interactive mode:
```yaml
- name: Install Claude Code
  run: npm install -g @anthropic-ai/claude-code
- name: Review PR
  env:
    ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
  run: |
    DIFF=$(git diff $BASE_SHA $HEAD_SHA | head -c 8000)
    claude -p "Review this diff for bugs and type safety issues: $DIFF" > review.txt
    cat review.txt
```

The -p flag (print) runs Claude Code in non-interactive mode and exits. This is simpler but gives you less control over the output format compared to calling the API directly.
## What to automate and what not to
Good candidates for Claude in CI:
- First-pass code review (catches obvious issues instantly)
- PR description generation (saves author time)
- Test failure summaries (reduces debugging time)
- Release notes (nobody likes writing changelogs)
- Documentation updates for changed public APIs
Avoid automating:
- Architectural decisions — Claude doesn't know your product constraints
- Security sign-off — treat Claude's security feedback as a hint, not a verdict
- Auto-merging PRs — always keep a human in the final approval loop
## Next steps
Pair these workflows with:
- Claude Code hooks to run similar checks locally before you even push
- Claude API complete guide for advanced features like tool use and caching
- Claude Code subagents when you need to run multiple analysis tasks in parallel on the same codebase