How We Test AI Tools
Every review on RawPickAI follows the same rigorous process. Here's the framework behind each score we publish.
Step 1: Discovery & setup
Before testing begins, we research the tool's background — who built it, when it launched, what problem it claims to solve, and who its competitors are. Then we sign up. Always through the regular signup flow, never through a press account or special reviewer access.
We document the onboarding experience: How long does it take to go from signup to first useful output? Is there a learning curve? Do you need a tutorial, or is it intuitive enough to figure out on your own?
Step 2: Hands-on testing
For quick tools like AI writing assistants, we spend 15-20 minutes running specific test prompts — the same ones we use across competitors. For complex tools like code editors or video generators, we'll spend an hour or more working on a real task.
For writing and content tools, we test with three standard prompts: a blog introduction, a product description, and a cold email.
For image generation tools, we run five prompts ranging from simple to complex, including India-specific visual contexts.
For code assistants, we test autocomplete accuracy on a real Python project, ask for a function refactoring, and try debugging a known issue. (A sketch of how this battery is organized follows below.)
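Concretely, the prompt battery might be organized like this. The category structure mirrors the list above, while the individual prompt texts are illustrative placeholders rather than our exact test prompts.

```python
# Standard prompt battery sketch. Prompt texts are placeholders, not the
# exact prompts we run.
TEST_PROMPTS = {
    "writing": [
        "Write an introduction for a blog post about remote work.",
        "Write a product description for budget wireless earbuds.",
        "Write a cold email pitching a web design service.",
    ],
    "image": [  # five prompts in practice, from simple to complex
        "A red apple on a white table, studio lighting",
        "A crowded Mumbai railway platform at dusk, cinematic style",
    ],
    "code": [
        "Autocomplete inside a real Python project",
        "Refactor this function for readability",
        "Debug a known failing test",
    ],
}

# Running the identical battery against every competitor keeps scores comparable.
for category, prompts in TEST_PROMPTS.items():
    print(f"{category}: {len(prompts)} prompt(s)")
```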
Step 3: Pricing analysis
We break down every pricing tier, including free plans and trial periods. We show pricing in both USD and INR at current exchange rates.
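As a rough sketch, a pricing tier might be rendered like this. The exchange rate and the tier names and prices below are placeholders for illustration; in practice we use the current rate at publish time.

```python
# Dual-currency price display sketch. The rate and the tiers shown are
# placeholders for illustration only.
USD_TO_INR = 83.0  # assumed exchange rate, not a live value

def price_line(tier: str, usd_per_month: float) -> str:
    """Format one pricing tier in both USD and INR."""
    inr = usd_per_month * USD_TO_INR
    return f"{tier}: ${usd_per_month:.2f}/mo (~₹{inr:,.0f}/mo)"

for tier, usd in [("Free", 0), ("Pro", 20), ("Team", 30)]:
    print(price_line(tier, usd))
# Free: $0.00/mo (~₹0/mo)
# Pro: $20.00/mo (~₹1,660/mo)
# Team: $30.00/mo (~₹2,490/mo)
```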
Step 4: Scoring
Every tool is rated across five weighted dimensions, each scored from 0 to 100 based on hands-on testing. The weighted average becomes the overall score, which we display on each review in the familiar X/5 format (an overall 90/100, for example, displays as 4.5/5). A worked sketch of the math follows the list below.
Ease of use: How quickly can someone new get productive?
Output quality: Does the tool produce results you'd actually use?
Value for money: What do you get relative to what you pay?
Feature depth: Does the tool offer meaningful features beyond the basics?
Free tier: How usable is the free plan?
A note on precision: we calibrate scores in broad buckets (70 = competent, 80 = strong, 90 = exceptional) rather than as fine-grained absolute numbers. This mirrors how a single reviewer actually evaluates tools: you can tell the difference between a 70 and an 85, but not between an 85 and an 87.
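Here's a minimal sketch of that math in Python. The weights shown are illustrative placeholders, not the exact weights we use; the 0-100 scale and the /5 conversion match the process described above.

```python
# Weighted-average scoring sketch. WEIGHTS values are illustrative
# placeholders, not RawPickAI's actual weights.
WEIGHTS = {
    "ease_of_use": 0.20,      # how quickly someone new gets productive
    "output_quality": 0.30,   # results you'd actually use
    "value_for_money": 0.20,  # what you get relative to what you pay
    "feature_depth": 0.15,    # meaningful features beyond the basics
    "free_tier": 0.15,        # how usable the free plan is
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension scores, each on a 0-100 scale."""
    assert set(scores) == set(WEIGHTS), "score every dimension exactly once"
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

def as_stars(score_100: float) -> float:
    """Convert a 0-100 overall score to the displayed X/5 format."""
    return round(score_100 / 20, 1)

example = {
    "ease_of_use": 85,
    "output_quality": 90,
    "value_for_money": 95,
    "feature_depth": 80,
    "free_tier": 90,
}
total = overall_score(example)
print(f"{total:.1f}/100 -> {as_stars(total)}/5")  # 88.5/100 -> 4.4/5
```

Because every dimension sits on the same 0-100 scale, adjusting a weight shifts the overall score without touching any per-dimension judgment.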
What we don't do
We don't accept payment for reviews. Our affiliate relationships are separate from our editorial process.
We don't review tools we haven't used. If we can't sign up and test it ourselves, it doesn't get a review page.
We don't copy other reviews. Every observation comes from our own testing.
Questions?
Reach out at hello@rawpickai.com or through our contact page.