How to Pass a Pentest Retest the First Time (Step-by-Step)
Sonali Sood
Founding GTM, CodeAnt AI
A penetration test produces a report. The report lists findings. Engineering remediates the findings. The security team marks them closed. The compliance auditor receives a report showing all findings resolved. Everyone moves on.
Six months later, a different penetration testing firm runs the same test. Eight of the twelve previously "closed" findings are still exploitable.
This is not hypothetical. It is the statistical reality of penetration test remediation at scale. Studies across thousands of penetration test engagements consistently show that 40–60% of findings that engineering teams mark as remediated fail independent retest verification. The finding was patched in one environment but not another. The root cause was addressed in one code path but not the others. The fix worked for the specific payload in the original report but fails for minor variations. The remediation was applied to staging but never deployed to production.
Retest methodology (how penetration testing firms verify that remediations actually work) is the least discussed and most consequential phase of the engagement. A penetration test without a rigorous retest is a vulnerability list, not a security assurance. And the distinction matters enormously to compliance auditors, who increasingly require evidence not just that findings were identified but that remediations were independently verified.
This guide covers the complete retest methodology:
- what a retest actually verifies
- how to structure remediation evidence
- what auditors look for
- why findings fail retests even when developers are confident the fix is correct
- how to close findings correctly the first time
What a Retest is and What it Isn't
A penetration test retest is a structured verification engagement in which the penetration testing team re-executes the specific exploit techniques used to confirm each original finding, against the production environment where the remediation has been applied, to verify that the remediation successfully prevents exploitation.
Three elements of that definition matter:
Re-executes specific exploit techniques: The retest uses the exact attack vector documented in the original finding. If the original finding was an IDOR via GET /api/users/{id} with another user's ID, the retest sends that exact request pattern. It also tests variations: other user IDs, other endpoints following the same pattern, and adjacent attack surface the original finding revealed.
Against the production environment: Not staging. Not a QA environment. The environment where the remediation needs to hold. This is the most common retest failure point: remediations applied to non-production environments that never make it to production.
Where the remediation has been applied: The retest is not a new penetration test. It's targeted verification of specific fixes. The scope is the original finding set plus variation testing around each finding.
What a Retest is Not
The most dangerous misunderstanding: organizations that treat a successful retest as evidence that the application is secure. A retest verifies that the specific findings from the original engagement are remediated. It says nothing about vulnerabilities that the original engagement didn't find, new code introduced since the original test, or attack surface that was out of scope.
Why Remediations Fail Retest: The Complete Taxonomy
Understanding why remediations fail is the prerequisite for structuring remediations that pass. There are six distinct failure categories:
Failure Mode 1: Incomplete Fix: Root Cause Not Addressed
The most common failure. The developer fixes the specific instance of the vulnerability reported but not the underlying pattern that causes it.
```python
# Original finding: IDOR on GET /api/invoices/{id}
# The endpoint returned any invoice by ID without an ownership check

# WRONG remediation — fixes only the reported endpoint:
@app.route('/api/invoices/<int:invoice_id>')
@login_required
def get_invoice(invoice_id):
    invoice = Invoice.query.get(invoice_id)
    # Added after finding:
    if invoice.user_id != current_user.id:
        abort(403)
    return jsonify(invoice.to_dict())

# Retest result: ORIGINAL FINDING PASSES — this specific endpoint is fixed.
# But variation testing finds:
#   GET    /api/invoices/{id}/pdf        ← Same IDOR, different endpoint
#   GET    /api/invoices/{id}/line-items ← Same IDOR, sub-resource
#   GET    /api/invoices/{id}/history    ← Same IDOR, history endpoint
#   PUT    /api/invoices/{id}            ← Same IDOR on update
#   DELETE /api/invoices/{id}            ← Same IDOR on delete
# The fix addressed the symptom (one endpoint), not the cause
# (missing ownership filter pattern across all Invoice operations).

# CORRECT remediation — fixes the root cause:
class InvoiceQuery:
    @staticmethod
    def get_for_current_user(invoice_id):
        """ALWAYS filter by current user — impossible to use without ownership check."""
        return Invoice.query.filter_by(
            id=invoice_id, user_id=current_user.id
        ).first_or_404()

# All Invoice endpoints now use InvoiceQuery.get_for_current_user()
# Root cause addressed — not just the reported symptom
```
Failure Mode 2: Environment Mismatch: Fix Not in Production
```bash
# Verification checklist the developer forgot:
#
# Developer actions taken:
# [✓] Applied fix in feature branch
# [✓] Merged to main
# [✓] Deployed to staging
# [✓] Tested on staging — fix works
# [✗] Deployed to production  ← MISSING
#
# How this happens:
# - Deployment to production requires separate approval (developer doesn't have it)
# - Production deployment is scheduled for next release cycle (2 weeks away)
# - CI/CD pipeline has a manual gate before production that wasn't triggered
# - Hotfix process requires security sign-off that wasn't initiated
# - Infrastructure-as-code change was merged but terraform apply wasn't run
# - Docker image was rebuilt but Kubernetes deployment wasn't updated
#
# Retest protocol: ALWAYS verify in production.
# The retest agreement must specify: retest against the production environment.
# Check deployment timestamps: when was this version last deployed to prod?

# Verification before requesting retest:
git log --oneline -5
# Confirm the fix commit is present

kubectl get deployment myapp -o jsonpath='{.spec.template.spec.containers[0].image}'
# Confirm the image tag in production matches the image containing the fix

curl https://api.company.com/api/version
# Confirm the application reports the expected version
```
Failure Mode 3: Fix Bypassed by Variation
The fix addresses the exact payload in the original report but fails for variations that achieve the same outcome:
```text
# Original finding — CORS misconfiguration

# Original test request:
GET /api/v1/users/profile HTTP/1.1
Origin: https://evil.attacker.com

# Original response (vulnerable):
Access-Control-Allow-Origin: https://evil.attacker.com

# Developer's fix: explicitly check for "attacker" in origin and reject
if 'attacker' in origin:
    reject()
else:
    allow(origin)

# Retest of original payload — PASSES:
GET /api/v1/users/profile HTTP/1.1
Origin: https://evil.attacker.com
→ 403 Forbidden (fix works for this specific origin)

# Retest variation — FAILS:
GET /api/v1/users/profile HTTP/1.1
Origin: https://evil.pwned.com   ← Different domain, no "attacker"
→ Access-Control-Allow-Origin: https://evil.pwned.com
```
```text
# More variation examples that catch incomplete remediations:

# SQL injection fix variation testing:
#   Original payload:  ' OR '1'='1
#   Fixed against:     ' OR '1'='1
#   Fails against:     ' OR 1=1--
#                      '; WAITFOR DELAY '0:0:5'--
#                      ' UNION SELECT NULL--
#                      admin'--

# JWT bypass fix variation testing:
#   Original test:     alg:none with empty signature
#   Fixed against:     alg:none
#   Fails against:     alg:None (capitalized)
#                      alg:NONE (all caps)
#                      alg:nOnE (mixed case)

# Path traversal fix variation testing:
#   Original payload:  ../../../etc/passwd
#   Fixed against:     ../../../etc/passwd
#   Fails against:     ..%2F..%2F..%2Fetc%2Fpasswd (URL encoded)
#                      ..%252F..%252Fetc%252Fpasswd (double encoded)
#                      ..%c0%af..%c0%afetc%c0%afpasswd (overlong UTF-8)
```
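Variant lists like these translate directly into a table-driven checker: send each payload and treat anything other than a 4xx rejection as a failure. A stdlib-only sketch, with a stubbed send() standing in for the real HTTP client (the stub deliberately blocks only the original payload, modeling the naive fix this section describes):

```python
# Table of payload variants per vulnerability class (from the lists above).
VARIANTS = {
    "sql_injection": [
        "' OR '1'='1",            # original payload
        "' OR 1=1--",
        "' UNION SELECT NULL--",
        "admin'--",
    ],
    "path_traversal": [
        "../../../etc/passwd",    # assumed already fixed in this scenario? No:
        "..%2F..%2F..%2Fetc%2Fpasswd",   # URL encoded
        "..%252F..%252Fetc%252Fpasswd",  # double encoded
    ],
}

def send(payload: str) -> int:
    """Stub for the real HTTP request. Models a naive fix that rejects
    only the exact payload from the original report."""
    return 403 if payload == "' OR '1'='1" else 200

def run_variation_tests(send_fn) -> dict:
    """Returns {payload: passed}; passed means the app rejected the variant."""
    results = {}
    for vuln_class, payloads in VARIANTS.items():
        for p in payloads:
            results[p] = send_fn(p) in (400, 401, 403)
    return results

results = run_variation_tests(send)
survivors = [p for p, ok in results.items() if not ok]
# survivors is non-empty: the naive fix passes the original payload
# but fails every variant — exactly what a retest will report.
```

Swapping the stub for a real client (and a real fix) turns this into a pre-retest self-check the engineering team can run before requesting verification.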
Failure Mode 4: Fix Introduces New Vulnerability
Less common but well documented: the remediation itself introduces a different vulnerability:
```python
# Original finding: SQL injection via unsanitized user input

# WRONG fix: input sanitization (security through sanitization)
def get_user(username):
    # Developer sanitizes by removing special characters
    safe_username = re.sub(r"['\";]", "", username)
    query = f"SELECT * FROM users WHERE username = '{safe_username}'"
    return db.execute(query)

# This "fixes" the SQL injection (partially) but:
# 1. Still vulnerable to some SQL injection variants (comments, UNION, etc.)
# 2. Breaks legitimate usernames with apostrophes ("O'Brien")
# 3. Introduces a second vulnerability: if the sanitized username is logged,
#    the original (unsanitized) input might be stored/displayed elsewhere

# CORRECT fix: parameterized queries (actual fix)
def get_user(username):
    query = "SELECT * FROM users WHERE username = %s"
    return db.execute(query, (username,))  # Database handles escaping

# The wrong fix also creates a different attack surface:
# What if the sanitization is applied inconsistently?
# What if there's a code path that bypasses the sanitization?
```
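The difference between the two fixes can be demonstrated with an in-memory database. A sketch using the stdlib sqlite3 module (which uses ? placeholders rather than the %s style above): the sanitizing version mangles a legitimate apostrophe name, while the parameterized version handles it untouched.

```python
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('O''Brien', 'member')")  # stores O'Brien

def get_user_sanitized(username):
    # The "wrong fix": strip quotes/semicolons, then interpolate.
    safe = re.sub(r"['\";]", "", username)
    return conn.execute(
        f"SELECT * FROM users WHERE username = '{safe}'"
    ).fetchone()

def get_user_parameterized(username):
    # The real fix: the driver binds the value; sqlite3 uses ? placeholders.
    return conn.execute(
        "SELECT * FROM users WHERE username = ?", (username,)
    ).fetchone()

print(get_user_sanitized("O'Brien"))      # None — the apostrophe was stripped
print(get_user_parameterized("O'Brien"))  # ('O'Brien', 'member')
```

The broken lookup for legitimate users is exactly the kind of collateral damage that tempts a later developer to loosen the sanitizer, reopening the original finding.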
Failure Mode 5: Fix Applied in Code But Not in Infrastructure
```yaml
# Original finding: Spring Boot Actuator /actuator/env exposed publicly

# Developer's code fix (WRONG approach for an infrastructure issue):
# Added @PreAuthorize to the actuator endpoints in application config
management:
  endpoints:
    web:
      exposure:
        include: "env,health,info"

# But the cloud provider's load balancer / API gateway had:
#   /actuator/* → ALLOW (no authentication at the gateway level)
# The application-level auth works for direct requests,
# but the gateway/WAF passes through /actuator/* before auth runs.
#
# Retest result: /actuator/env still returns 200 with all environment
# variables, because the gateway routes around the application's auth
# middleware.

# CORRECT fix: application config + infrastructure change

# Step 1: Application config — restrict endpoints
management:
  endpoints:
    web:
      exposure:
        include: "health,info"  # Only safe endpoints
server:
  port: 8081  # Internal port only

# Step 2: Infrastructure config — block at the load balancer level
# AWS ALB rule: deny /actuator/* except /actuator/health, /actuator/info
# Or: actuator port 8081 not exposed in the security group
```
Failure Mode 6: Time-Based Conditions: Fix Not Persistent
```python
# Original finding: race condition on wallet withdrawal
# Fixed with: database-level SELECT FOR UPDATE locking
# But the fix was applied inside a transaction that has
# auto-commit set to True in the ORM configuration.

# WRONG fix — locking doesn't persist across auto-commit transactions
def withdraw_funds(user_id, amount):
    with db.session() as session:
        # SELECT FOR UPDATE acquires the lock
        wallet = session.query(Wallet).filter_by(
            user_id=user_id
        ).with_for_update().first()
        if wallet.balance >= amount:
            wallet.balance -= amount
            session.commit()  # Auto-commit releases the lock
            # Another transaction can now also pass the balance check
            # before this commit is visible

# CORRECT fix — lock must be held across the entire check-and-update:
def withdraw_funds(user_id, amount):
    with db.session.begin():  # Explicit transaction boundary
        wallet = db.session.query(Wallet).filter_by(
            user_id=user_id
        ).with_for_update().first()  # Lock acquired
        if wallet.balance >= amount:
            wallet.balance -= amount
            # commit() called implicitly at end of `with` block
            # Lock held until commit
        else:
            raise InsufficientFundsError()
```
The Retest Process: What Actually Happens
A retest is not a repeat of the original penetration test. It is a targeted verification exercise: the testing firm returns to each original finding, applies the same technique that confirmed the vulnerability, and determines whether the remediation holds.
The process only works if the right conditions are in place before it starts.
Pre-Retest Requirements
Four things need to be confirmed before the retest clock starts.
Deployment confirmation: The production environment must be updated with every remediation, not just the ones that were easy to fix. The deployment timestamp must be after the original finding date. Document the version string or commit hash. If any finding is missing from the deployment, the retest scope needs to reflect that explicitly: partial retests produce partial evidence, and auditors notice.
Remediation documentation: For each finding, the testing firm needs to know what specifically changed. For code changes: which files, which functions, and what was different. For configuration changes: what the old value was and what the new value is. For infrastructure changes: which cloud resources were modified and how. A pull request or change ticket number for each remediation gives the tester a paper trail and gives the auditor traceability.
Scope agreement: The retest window needs to be agreed in advance: start date, end date, and the same environment as the original test. The same credentials and accounts used in the original test need to be available. A point of contact should be reachable during the retest window for questions. An emergency escalation contact should be designated in case a new critical finding surfaces during retesting.
Documentation package: The tester needs three things: the original finding report with all finding IDs, remediation notes per finding describing what was done, and deployment verification showing evidence that the fixes actually reached production.
Without this package, the retest becomes guesswork. The tester is trying to verify remediations they cannot trace to specific changes. Evidence produced under those conditions will not satisfy a Type II auditor.
What the Retest Covers
The retest is scoped to original findings only. The testing firm re-executes the specific technique that confirmed each vulnerability and documents the outcome: fixed, partially fixed, not fixed, or no longer applicable.
Fixed means the exact attack vector that produced the original finding no longer works. The tester documents what they attempted, what the application returned, and why that constitutes verification.
Partially fixed means the specific instance was addressed but a variant of the same vulnerability class remains. This produces a new finding, not a closed one.
Not fixed is self-explanatory. The remediation either was not deployed to production, was incorrectly implemented, or addressed a symptom rather than the root cause.
No longer applicable covers cases where the underlying feature or endpoint was removed rather than fixed. This is a valid remediation method, just document it explicitly.
The Retest Report
The retest report is a separate deliverable from the original penetration test report. It should contain, for every original finding:
| Field | What it contains |
| --- | --- |
| Finding ID | Matches the ID from the original report |
| Original status | Severity and description from the original test |
| Retest date | When verification was performed |
| Technique used | What the tester did to verify |
| Outcome | Fixed, partially fixed, not fixed, or removed |
| Evidence | Screenshot, response, or log confirming the outcome |
| New finding (if any) | Assigned a new ID if a variant was discovered |
This document is what the SOC 2 auditor reads when they ask how you know the remediations worked. It needs to exist as a standalone report, not as an appendix to the original.
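Those rows are mechanical enough to generate from the retest results themselves. A sketch (the field names mirror the report fields above; the input dict shape is an assumption for illustration):

```python
# Report columns, in the order they appear in the retest report.
FIELDS = ["finding_id", "original_status", "retest_date",
          "technique_used", "outcome", "evidence", "new_finding"]

def report_row(result: dict) -> str:
    """Render one retest result as a Markdown table row, in FIELDS order.
    Missing fields (e.g. no new finding) render as 'n/a'."""
    return "| " + " | ".join(str(result.get(f, "n/a")) for f in FIELDS) + " |"

row = report_row({
    "finding_id": "PT-2024-007",                 # hypothetical finding ID
    "original_status": "High: IDOR on /api/invoices/{id}",
    "retest_date": "2024-09-12",
    "technique_used": "Replayed original request with foreign invoice ID",
    "outcome": "Fixed",
    "evidence": "HTTP 403 response captured",
})
print(row)
```

Generating rows from structured results rather than writing them by hand keeps the finding IDs traceable from the original report through to the auditor's evidence package.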
The Retest Execution Protocol
For each original finding, the retest follows a structured protocol:
class RetestProtocol:
"""
Structured retest execution for each finding.
"""defretest_finding(self,finding: dict,target_env: str) -> dict:
"""
Execute retest for a single finding.
Returns: retest result with pass/fail/partial determination
"""result = {'finding_id': finding['id'],'finding_title': finding['title'],'original_cvss': finding['cvss'],'retest_date': datetime.now().isoformat(),'target_environment': target_env,'steps_executed': [],'status': None,# 'REMEDIATED' | 'PARTIALLY_REMEDIATED' | 'NOT_REMEDIATED''new_cvss': None,'notes': []}# Step 1: Reproduce original exploitoriginal_exploit_result = self.execute_original_exploit(finding)result['steps_executed'].append({'step': 'original_exploit_reproduction','description': 'Execute exact attack vector from original finding','payload': finding['exploit']['payload'],'expected_outcome': '403/401/400 (vulnerability fixed)','actual_outcome': original_exploit_result['status_code'],'passed': original_exploit_result['status_code']in[401,403,400]})ifnotresult['steps_executed'][-1]['passed']:
# Original exploit still works — finding NOT remediatedresult['status'] = 'NOT_REMEDIATED'result['new_cvss'] = finding['cvss']# Unchangedresult['notes'].append('Original exploit vector still active')returnresult# Step 2: Variation testing — test related attack vectorsvariation_results = []forvariationinself.generate_variations(finding):
var_result = self.execute_variation(variation,finding)variation_results.append({'variation': variation['name'],'payload': variation['payload'],'passed': var_result['status_code']in[401,403,400],'actual_outcome': var_result['status_code']})failed_variations = [vforvinvariation_resultsifnotv['passed']]result['steps_executed'].extend(variation_results)# Step 3: Root cause verificationroot_cause_verified = self.verify_root_cause_fix(finding)result['steps_executed'].append({'step': 'root_cause_verification','description': 'Verify the underlying vulnerability class is addressed','passed': root_cause_verified})# Determine overall statusifnotfailed_variationsandroot_cause_verified:
result['status'] = 'REMEDIATED'result['new_cvss'] = 0.0result['notes'].append('All exploit vectors confirmed fixed')eliffailed_variationsandnotroot_cause_verified:
result['status'] = 'NOT_REMEDIATED'result['new_cvss'] = finding['cvss']result['notes'].append(f'Variation testing failed: {[v["variation"]forvinfailed_variations]}')else:
# Original fixed but variations fail — partial remediationresult['status'] = 'PARTIALLY_REMEDIATED'# Adjust CVSS based on remaining attack surfaceresult['new_cvss'] = self.calculate_adjusted_cvss(finding,failed_variations)result['notes'].append('Original vector fixed but related vulnerabilities remain')returnresultdefgenerate_variations(self,finding: dict) -> list:
"""Generate variation test cases based on finding type"""finding_type = finding['type']variations = []iffinding_type == 'IDOR':
# Test adjacent endpoints following same patternbase_endpoint = finding['exploit']['endpoint']variations = [{'name': 'sub_resource','payload': f"{base_endpoint}/details"},{'name': 'list_endpoint','payload': base_endpoint.replace('{id}','').rstrip('/')},{'name': 'update_method','method': 'PUT','payload': base_endpoint},{'name': 'delete_method','method': 'DELETE','payload': base_endpoint},]eliffinding_type == 'SQL_INJECTION':
# Test payload variationsoriginal_payload = finding['exploit']['payload']variations = [{'name': 'comment_style','payload': "' OR 1=1--"},{'name': 'union_based','payload': "' UNION SELECT NULL--"},{'name': 'time_based','payload': "'; WAITFOR DELAY '0:0:5'--"},{'name': 'stacked_query','payload': "'; DROP TABLE--"},{'name': 'double_encoded','payload': original_payload.replace("'","%2527")},]eliffinding_type == 'CORS_MISCONFIGURATION':
variations = [{'name': 'different_attacker_domain','origin': '<https://different-attacker.net>'},{'name': 'null_origin','origin': 'null'},{'name': 'subdomain_bypass','origin': f"<https://evil>.{finding['target_domain']}"},{'name': 'http_protocol','origin': finding['allowed_origin'].replace('https://','http://')},]returnvariations
What Auditors Actually Want: Compliance Evidence Requirements
Each compliance framework asks the same underlying question (did you find vulnerabilities, fix them, and prove the fixes worked?) but phrases it differently and weighs the evidence differently. Here is what each framework actually requires.
SOC 2 Type II
SOC 2 Type II auditors are evaluating whether controls operated effectively over the audit period, not just whether they were designed correctly. For penetration testing, three TSC controls generate the most evidence requirements.
CC7.1: Security events monitored: This is the primary control where penetration testing evidence lives. The complete evidence package for CC7.1:
| Document | What it must contain |
| --- | --- |
| Original penetration test report | Dated, signed by the testing firm |
| Findings list | CVSS scores for every finding |
| Remediation tracking document | Finding, fix applied, who deployed it, date deployed |
| Retest report | Signed by the testing firm, confirming each remediation |
| Timeline | Finding date, remediation date, retest date, in sequence |
The timeline is what auditors use to verify SLA compliance. If your policy says critical findings are remediated within 14 days, the timeline must show that. A gap between the remediation date and the retest date that exceeds your stated policy is a finding in itself.
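The timeline check itself is simple enough to automate. A minimal sketch, assuming illustrative field names and SLA values (your policy document defines the real ones):

```python
from datetime import date

# Illustrative SLA policy in days, keyed by severity (assumed values;
# your own policy document is the source of truth)
SLA_DAYS = {'CRITICAL': 7, 'HIGH': 14, 'MEDIUM': 30, 'LOW': 90}

def check_sla(finding: dict) -> dict:
    """Verify the dates are in sequence and the fix landed within SLA."""
    found = date.fromisoformat(finding['found'])
    fixed = date.fromisoformat(finding['remediated'])
    retested = date.fromisoformat(finding['retested'])
    days_to_fix = (fixed - found).days
    return {
        'id': finding['id'],
        'in_sequence': found <= fixed <= retested,
        'days_to_fix': days_to_fix,
        'within_sla': days_to_fix <= SLA_DAYS[finding['severity']],
    }

print(check_sla({
    'id': 'FIND-2026-001', 'severity': 'HIGH',
    'found': '2026-02-15', 'remediated': '2026-02-21', 'retested': '2026-03-01',
}))
```

Running a check like this across the tracking log before fieldwork surfaces any SLA gap while you can still document it, rather than letting the auditor find it first.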
CC6.1: Logical access restrictions: For any access control finding (IDOR, auth bypass, privilege escalation), auditors want three things beyond the retest report: code review evidence showing the specific fix, deployment evidence such as a Git commit or CI/CD pipeline log, and per-finding retest confirmation. A general retest that says "access controls are improved" does not satisfy CC6.1. Each finding needs its own verification trail.
CC8.1: Changes controlled: Every remediation is a change to your application or infrastructure. For each one: a change ticket or pull request showing who approved the fix, QA sign-off on the fix, and a deployment record showing when it went to production, deployed by whom, and to which environment.
The three auditor questions that trip people up:
"How do you know the remediation is deployed in production?" - The answer is a production deployment log with timestamp and version, not a statement from the developer.
"How do you verify the fix is correct, not just that the test passes?" - Root cause analysis documentation plus a description of the retest methodology. The auditor wants to see that the fix addressed the root cause, not just the symptom.
"What is your remediation SLA for critical findings?" - A policy document plus evidence that this specific engagement met that SLA. If the policy says 14 days and the tracking log shows 30 days, that gap is worse than having no SLA.
PCI-DSS v4.0
PCI-DSS has the most prescriptive penetration testing requirements of any compliance framework. In v4.0 these live under Requirement 11.4 (they were numbered 11.3.x in v3.2.1):
11.4.2: Internal penetration test at least annually.
11.4.3: External penetration test at least annually, and after any significant infrastructure or application change.
11.4.4: Exploitable vulnerabilities found during penetration testing are corrected, and testing is repeated to verify the corrections.
The phrase "repeated to verify" makes the retest explicit and non-optional. Unlike SOC 2, where the retest requirement emerges from auditor judgment, PCI-DSS states it directly.
Evidence required for PCI-DSS:
| Document | Requirement |
| --- | --- |
| Penetration test report | Scope, methodology, and all findings |
| Remediation evidence | Per finding |
| Retest report | Covering every original finding |
| QSA review | Qualified Security Assessor must review retest results |
Two things that fail in PCI-DSS assessments that would pass in SOC 2:
Closing a finding as "accepted risk" requires QSA approval. You cannot unilaterally accept a PCI finding the way you can document a risk acceptance for SOC 2.
Critical findings must be remediated before the next test window, not just documented. There is no grace period.
The most common PCI failure: remediating in a test environment, documenting the fix, and presenting that as evidence. The QSA's response will be: show me the fix is live in the cardholder data environment.
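One way to catch this before the QSA does is to compare the version actually running in the cardholder data environment against the version that contains the fix. A minimal sketch, assuming a simple semantic versioning scheme (the version strings are illustrative):

```python
# Sketch: confirm the remediation build is what is actually running in the
# target environment before presenting it as evidence. The version scheme
# and values are assumptions for illustration.

def fix_is_live(deployed_version: str, fix_version: str) -> bool:
    """True if the running version already includes the remediation."""
    def parse(v: str) -> tuple:
        return tuple(int(part) for part in v.lstrip('v').split('.'))
    return parse(deployed_version) >= parse(fix_version)

print(fix_is_live('v2.14.1', 'v2.14.1'))  # fix is deployed
print(fix_is_live('v2.13.9', 'v2.14.1'))  # production lags the fix
```

The comparison is deliberately against the production deployment record, not a developer's statement that the fix "went out."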
ISO 27001:2022
ISO 27001 addresses penetration testing primarily through control A.8.8, Management of Technical Vulnerabilities. The evidence requirements are less prescriptive than PCI-DSS but more process-oriented than SOC 2: auditors want to see a mature vulnerability management process, not just a completed test.
Required documentation:
| Document | Purpose |
| --- | --- |
| Vulnerability management policy | Defines SLAs and process ownership |
| Penetration test report | Primary test evidence |
| Risk assessment per finding | CVSS score plus business context |
| Remediation action record | Who fixed it, what they did, when |
| Retest evidence | Internal or external verification acceptable |
| Exception documentation | Formal record for any accepted risk |
The ISO auditor questions that require preparation:
"What is your defined SLA for critical vulnerability remediation?" - This needs a policy document answer, not an ad hoc one.
"Show me evidence this critical finding was remediated within your defined SLA." - The remediation action record tied to a date, compared against your policy.
"Who is accountable for vulnerability remediation in your organization?" - A named role or individual in your policy. Not a team. Not a department.
"How do you verify remediations are effective?" - Your verification methodology, whether that is an external retest, an internal review, or both. ISO 27001 accepts internal verification in some cases where PCI-DSS and SOC 2 Type II generally do not.
The Common Thread
Every framework is asking the same three questions, phrased differently.
First: did you find the vulnerabilities? The penetration test report answers this.
Second: did you fix them? The remediation documentation answers this.
Third: how do you know the fixes worked? The retest report answers this.
The organizations that pass audits without exceptions are not the ones with the fewest vulnerabilities. They are the ones whose documentation answers all three questions clearly, with traceable evidence, for every finding.
Building the Remediation Evidence Package
The Per-Finding Evidence Template
For each finding in the original report, the remediation evidence package should contain:
# Finding Remediation Evidence
## Finding Reference
- **Finding ID:** FIND-2026-001
- **Title:** IDOR — Authenticated Cross-User Document Access
- **Original CVSS:** 8.3 (High)
- **Original Discovery Date:** 2026-02-15
- **Remediation Target Date:** 2026-02-22 (7-day SLA for High findings)
## Root Cause Analysis
The `/api/v1/documents/{id}` endpoint retrieved documents by ID without
verifying that the document belongs to the authenticated user. The ORM
query was: `Document.query.get(doc_id)` with no ownership filter.
## Remediation Applied
**What changed:**
- File: `app/api/documents/views.py`
- Function: `get_document()` (line 47)
- Before: `doc = Document.query.get(doc_id)`
- After: `doc = Document.query.filter_by(id=doc_id, user_id=current_user.id).first_or_404()`
**Scope of fix:**
- Applied the same ownership filter pattern to all Document endpoints:
GET /api/v1/documents/{id}
PUT /api/v1/documents/{id}
DELETE /api/v1/documents/{id}
GET /api/v1/documents/{id}/versions
- Implemented DocumentQuery.for_current_user() helper to prevent recurrence
**Pull Request:** <https://github.com/company/app/pull/1842>
**Code Review Approval:** Approved by Jane Smith (2026-02-20)
**Test Coverage:** Unit tests added in test_documents.py lines 120-145
## Deployment Evidence
- **Environment:** Production (api.company.com)
- **Deployment Date:** 2026-02-21 14:33 UTC
- **Deployed By:** CI/CD pipeline (trigger: merge to main)
- **Build ID:** build-20260221-1433
- **Version:** v2.14.1
- **Deployment Log:** <https://ci.company.com/builds/20260221-1433>
## Self-Test Results
Engineer self-test performed 2026-02-21 16:00:
GET /api/v1/documents/[other_user_doc_id] → 404 Not Found ✓
GET /api/v1/documents/[own_doc_id] → 200 OK ✓
## Retest Status
**Awaiting independent retest**
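A completeness check over a package like the one above can be automated before anything goes to the testing firm or the auditor. A minimal sketch; the section keys simply mirror the template headings and are illustrative:

```python
# Sketch: verify a per-finding evidence record carries every section of the
# template before it is submitted. Keys mirror the template headings above.
REQUIRED_SECTIONS = [
    'finding_reference', 'root_cause_analysis', 'remediation_applied',
    'deployment_evidence', 'self_test_results', 'retest_status',
]

def missing_sections(evidence: dict) -> list:
    """Return the template sections that are absent or empty."""
    return [s for s in REQUIRED_SECTIONS if not evidence.get(s)]

record = {
    'finding_reference': 'FIND-2026-001',
    'root_cause_analysis': 'Missing ownership filter on document lookup',
    'remediation_applied': 'Ownership filter applied to all Document endpoints',
    'deployment_evidence': 'build-20260221-1433 deployed to production',
    'self_test_results': 'Cross-user access now returns 404',
}
print(missing_sections(record))  # → ['retest_status']
```

An empty list means the record is audit-ready; anything else names exactly what is still owed.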
The Retest Report Structure
The retest report is a standalone document, not an appendix to the original test, not a summary email. It is the primary evidence auditors use to verify that your remediation program actually worked.
What the Report Contains
Executive Summary: The opening section answers the questions an auditor or executive asks first: when was the original test, when was the retest, how many findings were confirmed, and how many were fixed. The status breakdown should be a simple table (counts of Remediated, Partially Remediated, and Not Remediated findings) with a CVSS delta showing the aggregate risk reduction from the original test to the post-remediation state.
Retest Methodology: A brief section confirming the environment tested was production, the approach taken (reproduce the original technique plus variation testing to catch incomplete fixes), tools used, and any limitations. If any finding could not be retested due to scope changes or unavailable components, that needs to be documented here with an explanation. Untestable findings are not automatically closed.
Per-Finding Results: This is the core of the report. For every finding from the original test:
| Field | What it contains |
| --- | --- |
| Finding ID | Matches original report exactly |
| Title | Same as original |
| Original CVSS | Score from original test |
| Retest status | Remediated, Partially Remediated, or Not Remediated |
| New CVSS | 0.0 if fully remediated, adjusted score if partial |
| Evidence | What was tested, what the application returned |
| Residual risk | For partial remediations only, what remains |
Every finding needs an entry. A finding missing from the retest report is a finding the auditor will treat as unresolved.
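That reconciliation is worth automating: compare the finding IDs in the original report against those in the retest report and treat anything missing as unresolved. A minimal sketch with illustrative IDs:

```python
# Sketch: any original finding ID absent from the retest report should be
# flagged as unresolved, exactly as an auditor would treat it.
def unaccounted_findings(original_ids: list, retest_ids: list) -> list:
    """Finding IDs present in the original report but missing from the retest."""
    return sorted(set(original_ids) - set(retest_ids))

original = ['FIND-2026-001', 'FIND-2026-002', 'FIND-2026-003']
retested = ['FIND-2026-001', 'FIND-2026-003']
print(unaccounted_findings(original, retested))  # → ['FIND-2026-002']
```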
CVSS Delta Summary: A single-page section showing the aggregate original risk score, the post-remediation score, and the percentage reduction. This is the trend data that Type II renewals depend on year over year. If your CVSS delta is improving each cycle, that story should be easy to tell from this section.
Recommendations: For partially remediated findings: the specific additional steps required to reach full closure. For unresolved findings: prioritized remediation guidance. For the overall program: any process observations from the retest that would reduce future failure rates.
Why the Format Matters
Auditors have seen every variation of retest documentation. A well-structured report moves through fieldwork quickly. A poorly structured one (missing finding IDs, no CVSS comparison, vague evidence descriptions) generates follow-up requests and creates the impression that the testing program is not mature.
The format described above maps directly to the auditor questions in Part 7. Every field in the per-finding section exists because an auditor will ask for exactly that information.
CVSS Delta Reporting: Quantifying Risk Reduction
One of the most valuable outputs of a professional retest is CVSS delta reporting, the quantifiable change in risk posture from the original test to the retest:
```python
def calculate_cvss_delta_report(original_findings: list, retest_results: list) -> dict:
    """
    Calculate the CVSS delta between original test and retest.
    Provides quantifiable evidence of risk reduction for compliance.
    """
    # Build lookups by finding ID
    original_by_id = {f['id']: f for f in original_findings}
    retest_by_id = {r['finding_id']: r for r in retest_results}

    # Calculate aggregate scores
    original_total_cvss = sum(f['cvss'] for f in original_findings)
    remediated_cvss = 0
    partially_remediated_cvss = 0
    not_remediated_cvss = 0
    finding_details = []

    for finding_id, original in original_by_id.items():
        retest = retest_by_id.get(finding_id, {})
        status = retest.get('status', 'NOT_RETESTED')
        if status == 'REMEDIATED':
            new_cvss = 0.0
            remediated_cvss += original['cvss']
        elif status == 'PARTIALLY_REMEDIATED':
            new_cvss = retest.get('new_cvss', original['cvss'] * 0.5)
            partially_remediated_cvss += new_cvss
        else:
            new_cvss = original['cvss']
            not_remediated_cvss += new_cvss
        finding_details.append({
            'id': finding_id,
            'title': original['title'],
            'original_cvss': original['cvss'],
            'original_severity': original['severity'],
            'status': status,
            'new_cvss': new_cvss,
            'cvss_delta': original['cvss'] - new_cvss,
            'risk_eliminated': status == 'REMEDIATED'
        })

    post_retest_total = partially_remediated_cvss + not_remediated_cvss
    risk_reduction_pct = ((original_total_cvss - post_retest_total) /
                          original_total_cvss * 100) if original_total_cvss > 0 else 0

    return {
        'original_test_date': original_findings[0].get('test_date'),
        'retest_date': retest_results[0].get('retest_date') if retest_results else None,
        'finding_counts': {
            'original': len(original_findings),
            'remediated': sum(1 for r in retest_results
                              if r['status'] == 'REMEDIATED'),
            'partially_remediated': sum(1 for r in retest_results
                                        if r['status'] == 'PARTIALLY_REMEDIATED'),
            'not_remediated': sum(1 for r in retest_results
                                  if r['status'] == 'NOT_REMEDIATED'),
        },
        'cvss_aggregate': {
            'original_total': round(original_total_cvss, 1),
            'post_retest_total': round(post_retest_total, 1),
            'risk_eliminated': round(original_total_cvss - post_retest_total, 1),
            'risk_reduction_percentage': round(risk_reduction_pct, 1)
        },
        'severity_breakdown': {
            'critical': {
                'original': sum(1 for f in original_findings
                                if f['severity'] == 'CRITICAL'),
                'remaining': sum(1 for r in retest_results
                                 if r['status'] != 'REMEDIATED'
                                 and original_by_id[r['finding_id']]['severity'] == 'CRITICAL')
            },
            'high': {
                'original': sum(1 for f in original_findings
                                if f['severity'] == 'HIGH'),
                'remaining': sum(1 for r in retest_results
                                 if r['status'] != 'REMEDIATED'
                                 and original_by_id[r['finding_id']]['severity'] == 'HIGH')
            },
        },
        'findings': finding_details,
        'compliance_summary': {
            'pci_dss_compliant': not_remediated_cvss == 0 and partially_remediated_cvss == 0,
            'soc2_evidence_complete': len(finding_details) == len(original_findings),
            'all_critical_remediated': all(
                r['status'] == 'REMEDIATED' for r in retest_results
                if original_by_id[r['finding_id']]['severity'] == 'CRITICAL')
        }
    }

# Example output:
"""
CVSS Delta Report
=================
Original Test Date: 2026-02-15
Retest Date: 2026-03-01

Finding Summary:
  Total original findings: 12
  Fully remediated: 9
  Partially remediated: 2
  Not remediated: 1

CVSS Aggregate:
  Original total CVSS: 67.4
  Post-retest total CVSS: 11.2
  Risk eliminated: 56.2 (83.4%)

Severity Breakdown:
  Critical: 2 original → 0 remaining ✓
  High: 4 original → 1 remaining ✗
  Medium: 6 original → 2 remaining ✗

Compliance Status:
  PCI-DSS: NOT YET COMPLIANT (1 high finding not remediated)
  SOC 2: Evidence package complete for 9 remediated findings
  Critical findings: All remediated ✓
"""
```
```python
def calculate_cvss_delta_report(original_findings: list, retest_results: list) -> dict:
    """
    Calculate the CVSS delta between original test and retest.
    Provides quantifiable evidence of risk reduction for compliance.
    """
    # Build lookups by finding ID
    original_by_id = {f['id']: f for f in original_findings}
    retest_by_id = {r['finding_id']: r for r in retest_results}

    # Calculate aggregate scores
    original_total_cvss = sum(f['cvss'] for f in original_findings)
    remediated_cvss = 0
    partially_remediated_cvss = 0
    not_remediated_cvss = 0
    finding_details = []

    for finding_id, original in original_by_id.items():
        retest = retest_by_id.get(finding_id, {})
        status = retest.get('status', 'NOT_RETESTED')

        if status == 'REMEDIATED':
            new_cvss = 0.0
            remediated_cvss += original['cvss']
        elif status == 'PARTIALLY_REMEDIATED':
            new_cvss = retest.get('new_cvss', original['cvss'] * 0.5)
            partially_remediated_cvss += new_cvss
        else:
            new_cvss = original['cvss']
            not_remediated_cvss += new_cvss

        finding_details.append({
            'id': finding_id,
            'title': original['title'],
            'original_cvss': original['cvss'],
            'original_severity': original['severity'],
            'status': status,
            'new_cvss': new_cvss,
            'cvss_delta': original['cvss'] - new_cvss,
            'risk_eliminated': status == 'REMEDIATED'
        })

    post_retest_total = partially_remediated_cvss + not_remediated_cvss
    risk_reduction_pct = ((original_total_cvss - post_retest_total) /
                          original_total_cvss * 100) if original_total_cvss > 0 else 0

    return {
        'original_test_date': original_findings[0].get('test_date'),
        'retest_date': retest_results[0].get('retest_date') if retest_results else None,
        'finding_counts': {
            'original': len(original_findings),
            'remediated': sum(1 for r in retest_results if r['status'] == 'REMEDIATED'),
            'partially_remediated': sum(1 for r in retest_results
                                        if r['status'] == 'PARTIALLY_REMEDIATED'),
            'not_remediated': sum(1 for r in retest_results
                                  if r['status'] == 'NOT_REMEDIATED'),
        },
        'cvss_aggregate': {
            'original_total': round(original_total_cvss, 1),
            'post_retest_total': round(post_retest_total, 1),
            'risk_eliminated': round(original_total_cvss - post_retest_total, 1),
            'risk_reduction_percentage': round(risk_reduction_pct, 1)
        },
        'severity_breakdown': {
            'critical': {
                'original': sum(1 for f in original_findings if f['severity'] == 'CRITICAL'),
                'remaining': sum(1 for r in retest_results
                                 if r['status'] != 'REMEDIATED'
                                 and original_by_id[r['finding_id']]['severity'] == 'CRITICAL')
            },
            'high': {
                'original': sum(1 for f in original_findings if f['severity'] == 'HIGH'),
                'remaining': sum(1 for r in retest_results
                                 if r['status'] != 'REMEDIATED'
                                 and original_by_id[r['finding_id']]['severity'] == 'HIGH')
            },
        },
        'findings': finding_details,
        'compliance_summary': {
            'pci_dss_compliant': not_remediated_cvss == 0 and partially_remediated_cvss == 0,
            'soc2_evidence_complete': len(finding_details) == len(original_findings),
            'all_critical_remediated': all(
                r['status'] == 'REMEDIATED' for r in retest_results
                if original_by_id[r['finding_id']]['severity'] == 'CRITICAL'
            )
        }
    }

# Example output:
"""
CVSS Delta Report
=================
Original Test Date: 2026-02-15
Retest Date: 2026-03-01

Finding Summary:
  Total original findings: 12
  Fully remediated: 9
  Partially remediated: 2
  Not remediated: 1

CVSS Aggregate:
  Original total CVSS: 67.4
  Post-retest total CVSS: 11.2
  Risk eliminated: 56.2 (83.4%)

Severity Breakdown:
  Critical: 2 original → 0 remaining ✓
  High: 4 original → 1 remaining ✗
  Medium: 6 original → 2 remaining ✗

Compliance Status:
  PCI-DSS: NOT YET COMPLIANT (1 high finding not remediated)
  SOC 2: Evidence package complete for 9 remediated findings
  Critical findings: All remediated ✓
"""
```
How to Fix Findings to Pass Retest the First Time
The most valuable investment an engineering team can make is ensuring that remediations pass retest on the first attempt. This requires three practices:
Practice 1: Fix Root Cause, Not Symptom
Every finding in a pentest report documents a specific instance of a vulnerability class. The remediation must address the vulnerability class, not just the specific instance:
```python
# Mental model for root cause vs symptom:

# Finding: "SQL injection on /api/users/search?name=X"
#   Symptom fix:    Add input sanitization to this endpoint      ← FAILS variation testing
#   Root cause fix: Use parameterized queries everywhere         ← PASSES variation testing

# Finding: "IDOR on GET /api/orders/{id}"
#   Symptom fix:    Add ownership check to GET /api/orders/{id}  ← FAILS variation
#   Root cause fix: Implement ownership-enforced query manager for all Order operations

# Finding: "JWT accepts alg:none"
#   Symptom fix:    Check for "none" string in algorithm         ← FAILS case variation
#   Root cause fix: Strict algorithm allowlist ['HS256'] only    ← PASSES all variations

# Finding: "hardcoded AWS key in JavaScript bundle"
#   Symptom fix:    Remove the key from that file                ← FAILS if pattern repeated elsewhere
#   Root cause fix: Remove all AWS SDK usage from frontend + implement presigned URLs
#                   + add CI/CD secret scanning to prevent recurrence
```
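The SQL injection row above can be made concrete with a short, self-contained sketch. This is illustrative only (an in-memory SQLite table and hypothetical `search_*` helpers, not code from any report): the concatenated query is exploitable by the classic `' OR '1'='1` payload, while the parameterized version treats the same payload as literal data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice'), ('bob')")

def search_vulnerable(name):
    # Symptom-level thinking: query built by string concatenation
    return conn.execute(
        "SELECT name FROM users WHERE name = '" + name + "'"
    ).fetchall()

def search_parameterized(name):
    # Root-cause fix: the driver binds input strictly as data
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "' OR '1'='1"
print(search_vulnerable(payload))     # every row comes back: the injection works
print(search_parameterized(payload))  # no rows: the payload matched as a literal string
```

The root-cause fix is not "block this payload" but "make the entire query construction pattern immune", which is exactly what variation testing probes for.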
Practice 2: Test Variations Before Requesting Retest
Before marking a finding as remediated and requesting retest, the engineering team should self-test the variations the retest will use:
```python
# Self-test checklist template for common finding types:
SELF_TEST_CHECKLISTS = {
    'IDOR': [
        'Test original endpoint with another user\'s ID → expect 403/404',
        'Test all HTTP methods on the same endpoint (GET, PUT, DELETE)',
        'Test all sub-resources of the same object',
        'Test the list endpoint for the same resource type',
        'Test with IDs from different tenants (if multi-tenant)',
        'Test with numeric IDs adjacent to own IDs (±1, ±10)',
    ],
    'SQL_INJECTION': [
        'Test with original payload → expect safe response',
        'Test with single quote payload → expect safe response',
        'Test with comment-based payload (--) → expect safe response',
        'Test with UNION-based payload → expect safe response',
        'Test with time-based payload (SLEEP/WAITFOR) → timing should be normal',
        'Test with all input fields on the same endpoint',
        'Test with same input pattern on adjacent endpoints',
    ],
    'CORS_MISCONFIGURATION': [
        'Test with original attacker origin → expect no ACAO header or 403',
        'Test with different attacker domain → expect same result',
        'Test with null origin → expect no reflected null',
        'Test with HTTP variant of trusted domain → expect rejection',
        'Test subdomain variations of trusted domain',
        'Confirm legitimate origins still work (don\'t break production)',
    ],
    'AUTHENTICATION_BYPASS': [
        'Test original bypass technique → expect 401/403',
        'Test endpoint without any credentials → expect 401',
        'Test with expired credentials → expect 401',
        'Test with invalid token signature → expect 401',
        'Test all HTTP methods without credentials',
        'Test adjacent endpoints in same namespace',
    ],
    'SECRETS_IN_BUNDLE': [
        'Download and search current production bundle → no secrets found',
        'Search all JS files, not just main bundle (chunk files too)',
        'Search source maps if still deployed',
        'Verify secret has been rotated (test old secret doesn\'t work)',
        'Scan git history for the same secret (ensure it\'s removed)',
        'Run gitleaks on current codebase',
    ],
}
```
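A lightweight way to operationalize these checklists is to record a pass/fail result per item and gate the retest request on all items passing. A minimal sketch (the function and result shape are illustrative, not part of any standard tooling):

```python
def ready_for_retest(results):
    """results: list of (checklist_item, passed) tuples from self-testing.

    Returns (ready, failing_items). Request retest only when ready is True.
    """
    failures = [item for item, passed in results if not passed]
    return (len(failures) == 0, failures)

# Example: IDOR self-test where one variation still fails
results = [
    ("Test original endpoint with another user's ID → expect 403/404", True),
    ("Test all HTTP methods on the same endpoint (GET, PUT, DELETE)", False),
]
ok, failures = ready_for_retest(results)
print(ok)        # False: one variation still fails, do not request retest yet
print(failures)  # the items to fix before the retest request
```

The point is procedural, not technical: a failed self-test item is exactly the variation the retester will try, so catching it here is what makes a first-attempt pass likely.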
Practice 3: Verify Production Deployment Explicitly
```bash
#!/bin/bash
# production_deployment_verification.sh
# Run this BEFORE requesting retest to confirm fix is in production

FINDING_ID=$1
FIX_COMMIT=$2
PRODUCTION_URL=$3
PRODUCTION_API_KEY=$4

echo "=== Production Deployment Verification ==="
echo "Finding: $FINDING_ID"
echo "Fix Commit: $FIX_COMMIT"

# Step 1: Get current deployed version
DEPLOYED_VERSION=$(curl -s "$PRODUCTION_URL/api/version" \
  -H "Authorization: Bearer $PRODUCTION_API_KEY" | \
  jq -r '.version')
echo "Currently deployed version: $DEPLOYED_VERSION"

# Step 2: Check if the fix commit is included in the deployed version
# (Requires git to be available and the repo to be cloned)
IS_INCLUDED=$(git merge-base --is-ancestor "$FIX_COMMIT" HEAD && echo "YES" || echo "NO")
echo "Fix commit included in deployed build: $IS_INCLUDED"

# Step 3: Check application health
HEALTH_STATUS=$(curl -s "$PRODUCTION_URL/api/health" | jq -r '.status')
echo "Application health: $HEALTH_STATUS"

# Step 4: Run specific finding self-test
echo ""
echo "=== Self-Test Results ==="

case $FINDING_ID in
  "FIND-2026-001")  # IDOR finding
    echo "Testing IDOR remediation..."
    # Test with another user's resource ID (use known test account IDs)
    RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
      "$PRODUCTION_URL/api/v1/documents/[other_user_doc_id]" \
      -H "Authorization: Bearer $PRODUCTION_API_KEY")
    if [ "$RESPONSE" = "404" ] || [ "$RESPONSE" = "403" ]; then
      echo "✓ IDOR remediated — got $RESPONSE (expected 403 or 404)"
    else
      echo "✗ IDOR NOT remediated — got $RESPONSE (expected 403 or 404)"
    fi
    ;;
esac

echo ""
echo "Verification complete. If all checks pass, request retest."
```
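The ancestor check in Step 2 is the piece teams most often skip. Its behavior can be demonstrated in a throwaway repository (names and commit messages here are illustrative): `git merge-base --is-ancestor A B` exits 0 exactly when commit A is reachable from B, i.e. when the fix is contained in the build's history.

```bash
set -e
tmpdir=$(mktemp -d)
cd "$tmpdir"
git init -q demo && cd demo

# Simulate the fix commit, then later release work on top of it
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "fix: IDOR ownership check"
FIX=$(git rev-parse HEAD)
git -c user.name=dev -c user.email=dev@example.com \
  commit -q --allow-empty -m "later release work"

# Same check the verification script runs
IS_INCLUDED=$(git merge-base --is-ancestor "$FIX" HEAD && echo "YES" || echo "NO")
echo "Fix commit included: $IS_INCLUDED"
```

If the fix sits on an unmerged branch, the same check prints NO, which is exactly the "fixed in a branch, never deployed" failure mode described earlier.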
The Retest Report That Closes the Audit Loop
A complete retest report for compliance purposes includes:
SECTION 1: RETEST SCOPE AND METHODOLOGY
Test window: [dates]
Testing firm: CodeAnt AI
Original test reference: [report ID and date]
Environment: Production (<https://api.company.com>)
Version deployed: v2.14.1 (deployed 2026-02-21)
Methodology: For each original finding:
1. Reproduced original exploit technique
2. Executed variation testing (documented per finding)
3. Verified root cause resolution (not just symptom fix)
4. Documented evidence for compliance use
SECTION 2: FINDING-BY-FINDING RESULTS
FIND-2026-001: IDOR — Cross-User Document Access
Original CVSS: 8.3 (High) | Retest Status: REMEDIATED | New CVSS: 0.0
Evidence:
Original exploit: GET /api/v1/documents/[other_user_id] → 200 OK (original)
Retest result: GET /api/v1/documents/[other_user_id] → 404 Not Found ✓
Variation 1: PUT /api/v1/documents/[other_user_id] → 404 Not Found ✓
Variation 2: DELETE /api/v1/documents/[other_user_id] → 404 Not Found ✓
Variation 3: GET /api/v1/documents/[other_user_id]/versions → 404 ✓
Root cause: DocumentQuery.for_current_user() helper verified in code
All Document operations confirmed to use ownership filter
[continues for each finding...]
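Before a finding entry goes into Section 2, it is worth checking mechanically that it carries every field the template expects, since a single missing root-cause note can stall an audit. A hedged sketch (the field names mirror the template above but are otherwise illustrative):

```python
# Fields an auditor-ready evidence entry should carry (illustrative set)
REQUIRED_FIELDS = {"id", "original_cvss", "status", "original_exploit",
                   "retest_result", "variations", "root_cause_note"}

def evidence_gaps(entry: dict) -> set:
    """Return the required fields missing from an evidence entry."""
    return REQUIRED_FIELDS - entry.keys()

entry = {
    "id": "FIND-2026-001",
    "original_cvss": 8.3,
    "status": "REMEDIATED",
    "original_exploit": "GET /api/v1/documents/[other_user_id] → 200 OK",
    "retest_result": "GET /api/v1/documents/[other_user_id] → 404 Not Found",
    "variations": ["PUT → 404", "DELETE → 404", "GET /versions → 404"],
}
print(sorted(evidence_gaps(entry)))  # ['root_cause_note']: entry is not yet complete
```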
The Report is the Beginning, Not the End
A penetration test report is not a security certification. It is a point-in-time assessment of specific vulnerabilities that existed on a specific date, against a specific version of a specific application. Its value to the organization, and to auditors evaluating the organization's security posture, is determined almost entirely by what happens after the report is delivered.
The remediation process is where security improvements actually happen. The retest is where those improvements are verified as real rather than documented as intentions. The compliance evidence package is where that verification becomes auditable proof that the organization's security controls actually work.
Most penetration testing engagements deliver the report and consider the engagement complete. The remediation, the variation testing, the production deployment verification, the CVSS delta calculation, the compliance evidence packaging: all of that work is left to the organization's security team, frequently without the technical context needed to do it correctly.
The result is exactly the scenario this guide opened with: findings marked closed that aren't closed, compliance evidence that doesn't withstand auditor scrutiny, and a second penetration test that finds eight of the twelve "fixed" findings from the first test still exploitable.
CodeAnt AI's engagement model includes retest as a standard component, not an upsell, not an optional add-on. The engagement isn't complete until findings are verified remediated. CVSS delta documentation is included in the retest report. And the 48-hour escalation SLA for critical findings means that if a critical vulnerability is confirmed, the remediation-to-verification cycle starts within days, not quarters.