Verification & Ratings

How mog.md automatically verifies every uploaded package and computes the mog rating.

How verification works

Every package published to mog.md goes through an automated multi-agent verification pipeline before it becomes available to buyers. This happens as soon as a seller uploads a release — no package is ever published without passing verification.

The pipeline runs three specialized checks in parallel:

  1. File validation — confirms the package has the required structure, valid mog.yaml, no forbidden file types, no path traversal tricks, and is within size limits.
  2. Security scanning — checks all text files for secrets, prompt injection attempts, obfuscation techniques, and suspicious URLs. An LLM-powered second pass provides a deeper classification of each file's content.
  3. Quality analysis — scores the package on how well-written, complete, and useful it is, using both structural heuristics and an LLM review of the README and skill instructions.
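Conceptually, the pipeline can be sketched as below. This is a minimal illustration of running the three checks in parallel and combining their results; the function names and return shapes are hypothetical, not mog.md's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical check functions; names and return shapes are illustrative.
def validate_files(pkg):
    return {"check": "files", "passed": True}

def scan_security(pkg):
    return {"check": "security", "passed": True}

def analyze_quality(pkg):
    return {"check": "quality", "passed": True, "score": 4.2}

def verify(pkg):
    """Run the three specialized checks in parallel; publish only if all pass."""
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(check, pkg)
                   for check in (validate_files, scan_security, analyze_quality)]
        results = [f.result() for f in futures]
    # A real pipeline would also distinguish "flagged" (uncertain) outcomes.
    return "passed" if all(r["passed"] for r in results) else "failed"
```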

Outcomes

  • Passed: Package cleared all checks and is published.
  • Failed: Package contains a critical issue (missing required files, detected secrets, confirmed malicious content). The seller is notified with a reason.
  • Flagged: Package is unusual enough to warrant human review before publishing. This can happen if the security agent is uncertain or the quality score is very low. Sellers are notified; typical review time is under 48 hours.

What we check for

We check for the following general categories. We deliberately do not publish the exact patterns and thresholds we use — doing so would make them easier to circumvent.

  • File integrity: required files present, valid YAML schema, no executable or unexpected file types
  • Secrets: API keys, tokens, and private credentials accidentally included in content
  • Prompt injection: instructions designed to override or manipulate AI agent behavior
  • Social engineering: content that tries to get an AI agent to take harmful actions
  • Obfuscation: encoded or hidden content that obscures the package's true purpose
  • Quality: completeness, clarity, and usefulness of the skill instructions
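To make the secrets category concrete, here is a generic illustration of the kind of pattern matching a secrets scan performs. These regexes are deliberately simplistic textbook examples, not the unpublished patterns mog.md actually uses:

```python
import re

# Illustrative patterns only; real scanners use far more rules than these.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                        # AWS access key ID shape
    re.compile(r"-----BEGIN (RSA |EC )?PRIVATE KEY-----"),  # PEM private key header
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),            # generic api_key=... assignment
]

def find_secrets(text: str) -> list[str]:
    """Return substrings that look like leaked credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(m.group(0) for m in pattern.finditer(text))
    return hits
```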

The mog rating

Every published package gets a mog rating — a score from 0.0 to 5.0 that gives buyers and agents a quick signal of how good a package is.

How it's computed

The mog rating combines four weighted signals:

  • Quality analysis (40%): The automated quality agent's score, based on README structure, instruction clarity, specificity, and formatting.
  • User reviews (35%): The average of star ratings (1–5) left by verified buyers. New packages start with a neutral prior of 3.0, smoothed toward the true average as reviews accumulate.
  • Install velocity (15%): How many times the package has been installed in the last 30 days, log-scaled so big numbers don't dominate.
  • Completeness (10%): Whether all optional metadata is filled in: long description, multiple targets, tags, license, install map.
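The weighting above can be sketched as follows. Only the four weights and the 3.0 review prior come from the description; the smoothing constant, the install cap, and the exact component formulas are assumptions for illustration:

```python
import math

WEIGHTS = {"quality": 0.40, "reviews": 0.35, "velocity": 0.15, "completeness": 0.10}

def review_component(ratings: list[float], prior: float = 3.0, prior_weight: int = 5) -> float:
    """Start at the 3.0 prior and converge to the true average as reviews
    accumulate. prior_weight is an assumed smoothing constant."""
    return (prior * prior_weight + sum(ratings)) / (prior_weight + len(ratings))

def velocity_component(installs_30d: int, cap: int = 10_000) -> float:
    """Log-scale 30-day installs onto 0-5 so big numbers don't dominate (cap assumed)."""
    return 5.0 * min(math.log1p(installs_30d) / math.log1p(cap), 1.0)

def mog_rating(quality: float, ratings: list[float],
               installs_30d: int, completeness: float) -> float:
    components = {
        "quality": quality,
        "reviews": review_component(ratings),
        "velocity": velocity_component(installs_30d),
        "completeness": completeness,
    }
    return round(sum(WEIGHTS[k] * v for k, v in components.items()), 1)
```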

The rating is recomputed nightly for all packages, so it reflects recent install trends and new reviews automatically.

Rating scale

  • 4.5 – 5.0: Excellent — well-written, widely used, highly reviewed
  • 3.5 – 4.4: Good — solid package worth using
  • 2.5 – 3.4: Average — functional but may have room to improve
  • 1.0 – 2.4: Below average — consider reading reviews before installing
  • 0.0 – 0.9: Poor — significant quality issues
  • No rating: Package is new or hasn't been reviewed yet

How to improve your rating

The most impactful things you can do:

  1. Write a detailed README.md — include what the skill does, when to use it, and usage examples with code blocks.
  2. Write specific instructions — your skill instructions should be concrete and actionable, not vague. Tell the agent exactly what to do and when.
  3. Support multiple targets — packages that work across Cursor, Claude Code, Codex, and Generic get a higher completeness score.
  4. Fill all metadata fields — complete your mog.yaml with a long description, all applicable targets, license, and install map.
  5. Add tags to your listing to improve discoverability.
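Putting items 3 and 4 together, a fully filled-out mog.yaml might look roughly like this. The field names and install-map paths below are illustrative guesses based on the metadata listed above, not a documented schema:

```yaml
# Hypothetical mog.yaml; field names are illustrative, not an official schema.
name: package-slug
description: One-line summary of the skill
long_description: >
  What the skill does, when to use it, and any caveats.
targets: [cursor, claude-code, codex, generic]
tags: [testing, ci, automation]
license: MIT
install_map:
  cursor: .cursor/rules/
  claude-code: .claude/skills/
```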

Flagged packages

If your package is flagged for review, you will receive an email notification. While under review, the package is not publicly visible.

Common reasons packages are flagged:

  • The security scanner found content that looks unusual but isn't definitively malicious (e.g., instructions that resemble prompt injection patterns but may be legitimate)
  • The quality score was very low on first submission

Review timeline: We aim to complete manual reviews within 48 hours. If your package is approved, it will be published automatically and you will be notified.

If you believe a flag is incorrect: Reply to the notification email with context about why the flagged content is legitimate. Our team will re-review.

For agents

If you're an AI agent making autonomous purchasing decisions, use the mogRating field from the API to inform your choices:

GET /v1/listings/vendor/package-slug
 
{
  "mogRating": "4.2",
  "mogRatingBreakdown": {
    "qualityComponent": 4.5,
    "userRatingComponent": 3.8,
    "installVelocityComponent": 4.1,
    "completenessComponent": 5.0
  },
  "installCount": 1250,
  ...
}

Recommended thresholds:

  • mogRating >= 3.5 — generally safe and useful for autonomous installation
  • mogRating >= 4.0 — high confidence in quality and safety
  • mogRating < 2.0 — consider requiring human approval before installing

The mogRatingBreakdown shows the contribution of each component, which can help you decide whether to trust a rating for your specific use case (e.g., if install velocity matters less for your workflow, weight qualityComponent more heavily in your logic).
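An agent-side policy built on these thresholds might look like this sketch. The field names match the example response above; the decision labels and the re-weighting scheme are assumptions. Note that mogRating arrives as a string in the example response, so it is parsed before comparison:

```python
def install_decision(listing: dict,
                     min_autonomous: float = 3.5,
                     max_unreviewed: float = 2.0) -> str:
    """Apply the recommended thresholds to a parsed listing response."""
    rating = float(listing["mogRating"])  # example response returns a string
    if rating < max_unreviewed:
        return "require-human-approval"
    if rating >= min_autonomous:
        return "install"
    return "review-first"

def reweight(breakdown: dict, weights: dict) -> float:
    """Recombine mogRatingBreakdown with your own weights, e.g. to
    down-weight install velocity for your workflow."""
    total = sum(weights.values())
    return sum(breakdown[k] * w for k, w in weights.items()) / total
```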
