Our Methodology
Full transparency on how we analyze and score your SaaS idea. No black box: verifiable data and open sources.
1. Data Sources
We collect data from 7+ sources for a comprehensive analysis.
Reddit
Analysis of discussions, pain signals, frustrations and feature requests in relevant subreddits.
HackerNews
Stories and technical comments to measure interest from the tech community.
GitHub
Competing repos, stars, activity and technical ecosystem around the topic.
ProductHunt
Similar launched products, votes, engagement and competitor positioning.
Google Trends
Search trends via SerpApi to measure market interest over time.
Perplexity AI
In-depth market analysis, estimated size and competitive context.
AI Analysis (Claude)
Semantic analysis of conversations, pain point extraction, signal classification and adversarial critique.
2. Scoring Categories
Each idea is evaluated across 8 independent categories, scored from 0 to 100.
Pain Signals (pain)
Intensity of expressed problems, willingness-to-pay, pain points vs feature requests ratio, deep conversation analysis.
Competition (competition)
Number of competitors, market saturation, quality of existing solutions and differentiation opportunities.
Revenue Potential (revenue)
Monetization signals, competitor pricing and viable business models.
Market Size (market)
TAM/SAM/SOM estimation, discussion volume and potential audience.
Sentiment (sentiment)
Overall tone of discussions around the topic.
Trend (trend)
Evolution of interest over time via Google Trends and recent discussion volume.
Tech Stack (tech)
Technical ecosystem, maturity of available tools and development complexity.
Devil's Advocate (adversarial)
AI adversarial critique: critical risks, investor objections, potential failure reasons. This safeguard prevents false positives.
3. Overall Score Calculation
The final score combines all categories in a balanced manner.
Weighted Average per Category
Each category is calculated as a weighted average of its internal metrics (each metric has its own weight).
Overall Score = Average of Categories
The final score is the arithmetic mean of all categories. Each category carries equal weight to avoid bias.
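The two-step calculation above can be sketched as follows. The metric names and weights are illustrative assumptions, not the actual ones used in production:

```python
# Minimal sketch of the scoring scheme: weighted average inside each
# category, then a plain arithmetic mean across categories.
# Metric names and weights below are hypothetical examples.

def category_score(metrics: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of a category's internal metrics (each 0-100)."""
    total_weight = sum(weights[m] for m in metrics)
    return sum(metrics[m] * weights[m] for m in metrics) / total_weight

def overall_score(categories: dict[str, float]) -> float:
    """Arithmetic mean of category scores: every category weighs equally."""
    return sum(categories.values()) / len(categories)

# Example: a hypothetical Pain Signals category.
pain = category_score(
    {"intensity": 80, "willingness_to_pay": 60, "pain_vs_feature_ratio": 70},
    {"intensity": 0.5, "willingness_to_pay": 0.3, "pain_vs_feature_ratio": 0.2},
)
# pain = 80*0.5 + 60*0.3 + 70*0.2 = 72.0
```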
Verdict
- Excellent: 80+/100
- Good: 60-79/100
- Average: 40-59/100
- Weak: 0-39/100
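The verdict bands above map directly to a threshold lookup, sketched here:

```python
def verdict(score: float) -> str:
    """Map a 0-100 overall score to its verdict band."""
    if score >= 80:
        return "Excellent"
    if score >= 60:
        return "Good"
    if score >= 40:
        return "Average"
    return "Weak"
```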
4. What Sets Us Apart
Unlike competing tools that rely solely on AI-generated text, our analysis is grounded in collected, verifiable data.
Verifiable Sources
Every signal is traced back to its source (Reddit link, HN thread, GitHub repo). You can verify it yourself.
No Hallucination
The AI analyzes real collected data. It does not generate fake competitors or made-up statistics.
Built-in Devil's Advocate
Our adversarial AI actively identifies reasons why your idea could fail, a safeguard most tools skip.
Fresh Data
Data is collected in real time at the moment of your analysis, not cached or outdated.
Test It on Your Idea
Describe your idea and get a viability score based on this methodology.
Analyze my idea →