How Scoring Works
MCPRadar's security scanner runs 13 detection rules across 3 categories: security, performance, and quality. Here's exactly what each check does and how your score is calculated.
MCPRadar v1 · Last updated March 2026
Score Calculation
Every MCP server starts with a perfect score of 100/100, and each finding deducts points based on its severity. To prevent unfair over-penalization for repeated architectural issues, deductions are capped per check category (see Fair Scoring: Category Caps below) — if 10 filesystem tools share the same issue, you're penalized for that one architectural decision, not 10 times over.
Severity tiers, from largest to smallest deduction:
- Severe security vulnerabilities that could lead to system compromise or data breach
- Significant issues that expose sensitive data or risk infrastructure security
- Performance degradation or quality issues that may impact user experience
- Minor optimizations and best-practice recommendations
Grade Thresholds
Fair Scoring: Category Caps
To prevent unfair over-penalization for repeated architectural issues, deductions are capped per check category. For example, if you have 9 filesystem tools with unconstrained paths, you're deducted a maximum of 20 points total (not 135 points for 9 separate findings).
Caps ensure servers with consistent architectural patterns aren't unfairly penalized compared to servers with diverse issues.
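The arithmetic above can be sketched as follows. The 15-points-per-finding figure is inferred from the 9 × 15 = 135 example, and the 20-point cap comes from the same example; the full deduction table isn't reproduced here, so treat both values as illustrative assumptions:

```python
CATEGORY_CAP = 20  # assumed per-category cap, from the 20-point example above

def score(findings):
    """findings: list of (check_category, deduction) pairs.

    Deductions within each check category are summed, capped at
    CATEGORY_CAP, and subtracted from the perfect score of 100.
    """
    per_category = {}
    for category, deduction in findings:
        per_category[category] = per_category.get(category, 0) + deduction
    total = sum(min(d, CATEGORY_CAP) for d in per_category.values())
    return max(0, 100 - total)

# Nine identical unconstrained-path findings at an assumed 15 points each:
# capped at 20, so the server scores 80 rather than 100 - 135.
print(score([("unconstrained_path", 15)] * 9))  # → 80
```

Note that the cap applies per category, so a server with findings spread across many different checks can still lose more points overall than one repeating a single pattern.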
Security Checks (7)
Static analysis of tool definitions to detect injection risks, data exposure, and command execution vulnerabilities
Performance Checks (3)
Dynamic analysis of tool execution times and connection behavior
Quality Checks (3)
Best practices and documentation quality
Frequently Asked Questions
Why did my score change between scans?
Performance checks (P1, P2) are dynamic — they analyze actual tool execution times. If you re-scan after calling different tools or after server load changes, execution times may vary. Security checks (S1-S4) are static and won't change unless you modify tool definitions.
Can I get a perfect 100/100 score?
Yes! A 100/100 score means zero findings across all 13 checks. This indicates your MCP server follows security best practices, performs well, and has clear documentation.
What if I disagree with a finding?
Check the evidence shown in the finding card — it shows exactly what triggered the check. If you believe it's a false positive:
- Verify the check criteria in this documentation
- Review the evidence to understand the detection
- If genuinely incorrect, open a GitHub issue with details
Does MCPRadar store my scan results?
No. All scans run locally in your browser and through MCPRadar's proxy, which retains nothing once the request completes. Your tool definitions, scan results, and MCP server details are never persisted or sent to external analytics services.
How does risk scoring work for Agent Debug Mode?
Risk scoring is separate from security scanning. It's used in Agent Debug Mode to decide which tool calls should pause for approval. Risk levels (HIGH/MEDIUM/LOW) are calculated based on:
- Tool name (the strongest signal, weighted most heavily)
- Description summary (only the first 200 characters, excluding parameter details)
- Context awareness (keywords appearing in explanatory text are ignored)
Example: A tool named "sequentialthinking" with "execute" in the description → LOW risk (the word appears in explanatory text, not as the tool's actual function). But "execute_command" → HIGH risk (name contains execution keyword).
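A minimal sketch of a heuristic like this is shown below. The keyword list, the sentence-position test standing in for context-awareness, and the MEDIUM fallback are all assumptions — MCPRadar's actual rules are not published in this document:

```python
# Assumed keyword set; MCPRadar's real list is not documented here.
RISK_KEYWORDS = {"execute", "delete", "write", "shell", "command"}

def tool_risk(name: str, description: str) -> str:
    # Tool name is the strongest signal: an execution keyword in the
    # name itself yields HIGH risk.
    if set(name.lower().replace("-", "_").split("_")) & RISK_KEYWORDS:
        return "HIGH"
    # Only the first 200 characters of the description are considered.
    summary = description[:200].lower()
    # Crude stand-in for context-awareness: a keyword counts only when
    # it opens a sentence (i.e. reads as the tool's action), not when
    # it appears midway through explanatory text.
    for sentence in summary.split("."):
        words = sentence.strip().split()
        if words and words[0] in RISK_KEYWORDS:
            return "MEDIUM"
    return "LOW"

print(tool_risk("execute_command", "Run a shell command"))  # → HIGH
print(tool_risk("sequentialthinking",
                "A tool that helps you execute structured thinking."))  # → LOW
```

In this sketch, "execute" buried inside the sequentialthinking description is ignored, while the same word in a tool name immediately escalates to HIGH — mirroring the example above.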