# I Analyzed 50+ r/CATpreparation Threads: What Candidates Really Want from AI Interview Prep
Reddit doesn't lie. While coaching institutes market polished promises and AI companies tout revolutionary features, r/CATpreparation's 150,000+ members share unfiltered truths about what actually works for GDPI preparation.
I spent 12 hours analyzing 50+ threads from the past 6 months, searching for mentions of "interview", "GDPI", "AI", "coaching", and "practice". The patterns were striking. Candidates across Delhi, Mumbai, Bangalore, and smaller cities voiced the same frustrations, same desires, same gaps in available tools.
This isn't marketing research filtered through surveys. This is raw candidate sentiment—upvoted, debated, and validated by peers who share the same 99.5 percentile dreams and the same panel interview fears.
## Why Reddit Matters for CAT Preparation Research
r/CATpreparation is India's largest CAT community with over 150,000 members. Unlike Quora (where experts dominate) or LinkedIn (where professionals sanitize opinions), Reddit rewards authentic experiences. Upvotes signal community agreement. Downvotes call out BS. Comments challenge and refine ideas.
Research methodology: I searched the subreddit for key terms over the past 6 months (July 2025 to January 2026), focusing on GDPI and interview preparation discussions. I looked for recurring complaints, highly upvoted suggestions, and patterns across different candidate profiles (students, working professionals, retakers, and candidates from tier-2/3 backgrounds).
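If you want to replicate the search step yourself, it is easy to script. Here is a minimal sketch using the PRAW library (Python Reddit API Wrapper); the credentials are placeholders for your own Reddit API app, the keyword list mirrors the search terms above, and it should be read as an illustration of the approach rather than the exact pipeline used for this article.

```python
# Minimal sketch: pull top interview/GDPI threads from r/CATpreparation.
# The credentials below are placeholders; register your own Reddit API app to run this.
import praw

reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    user_agent="gdpi-thread-research (by u/your_username)",
)

subreddit = reddit.subreddit("CATpreparation")
keywords = ["interview", "GDPI", "AI", "coaching", "practice"]

seen = set()
for term in keywords:
    # "year" is the closest built-in window to a 6-month range;
    # older posts can be filtered out afterwards via created_utc.
    for post in subreddit.search(term, sort="top", time_filter="year", limit=25):
        if post.id not in seen:
            seen.add(post.id)
            print(f"[{post.score:>4} upvotes] {post.title}")
```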
The findings revealed six consistent patterns about what candidates want from AI interview prep tools. Not what they say they want in surveys. What they actually complain about, praise, and seek help finding.
## Finding #1: Candidates Want Affordability (₹349, Not ₹8,000)
The most upvoted GDPI coaching thread I found had 63 upvotes and blunt feedback: "Complete disaster. ₹10,000 for generic feedback and 5 rushed sessions."
The budget reality: Most CAT candidates are final-year students or early-career professionals (1-3 years of work experience). For them, ₹8,000-₹26,000 for GDPI coaching isn't just expensive; it's a significant share of their monthly income or of the savings they've set aside for the MBA itself. One thread captured this perfectly: "I scored 99.2%ile. Can afford IIM fees (₹25L) through loan. Can't justify ₹15k for 8 coaching sessions when I need 40+ practice rounds to feel ready."
What they specifically complained about: Coaching institutes bundle services (GD + PI + WAT + CV review) and charge premium prices. But many candidates only need one component. One working professional wrote: "I can handle GD and WAT. I just need PI practice. Why pay ₹12k for the bundle?" The rigidity frustrates budget-conscious candidates who want à la carte options.
Affordability benchmarks from Reddit: Threads discussing "reasonable GDPI prep budget" consistently mentioned ₹3,000-₹5,000 as acceptable for comprehensive preparation. Candidates compared this to CAT coaching (₹15,000-₹40,000 for 6-8 months) and considered it proportional. Anything above ₹10,000 for interview prep alone drew skepticism: "CAT coaching gave me 200+ hours of classes and 40+ mocks for ₹25k. GDPI coaching wants ₹15k for 10 hours. Math doesn't work."
What they wanted instead: Multiple threads asked: "Any tool under ₹500/month for unlimited PI practice?" The demand is clear—candidates will pay for quality preparation, but pricing must respect student/early-career budgets. One highly upvoted comment: "I'd rather practice 50 times at ₹10 per mock than 5 times at ₹2,000 per session."
Rehearsal's positioning in this context: At ₹349/month for unlimited sessions, Rehearsal directly addresses this Reddit-validated price point. With users averaging 47 practice sessions in 21 days, the effective cost per mock works out to roughly ₹7.4, comfortably under the "₹10 per mock" ceiling Reddit candidates set.
## Finding #2: Candidates Hate Fixed Schedules (Want 24/7 Access)
"I missed my 9 PM slot because client meeting ran late. Lost ₹1,600." This complaint appeared in multiple forms across different threads. Working professionals especially resent fixed-schedule coaching.
The scheduling constraint: Traditional coaching offers slots like "Monday 7-9 PM" or "Saturday 10 AM-12 PM". For students juggling placements, this might work. For working professionals preparing for executive MBA or career switches, fixed evening slots conflict with work demands. One Bangalore techie wrote: "My standup runs till 9:30 PM some days. I have free time at 11 PM-midnight. No coach available then."
Weekend-only options create bottlenecks: Many candidates mentioned weekend batch constraints: "Weekend batches are packed. 15 students, 2-hour session, you get 8 minutes of actual practice." The math is brutal: pay ₹1,500 for a group session in which you speak for less than 10 minutes while the other 14 take their turns.
The 24/7 access desire: Multiple threads asked: "Any AI tool I can practice with at 2 AM?" Night owls, early risers, and professionals with unconventional schedules want practice on their terms. One memorable quote: "IIM interview is at 9 AM. I want to practice at 6 AM when my brain works the same way. Can't find any coach available at that time."
Flexibility = volume: Several candidates made this connection explicitly. One wrote: "I can practice 3x daily (7 AM before work, 1 PM lunch break, 10 PM after gym) if tool is available 24/7. With coaching, I'm limited to twice weekly whenever they're free." The flexibility constraint directly limits practice volume.
What Rehearsal offers: 24/7 availability means candidates practice when they're mentally fresh, when their schedule allows, when they want to—not when a coach's calendar permits. This removes the artificial ceiling on practice volume imposed by human availability.
## Finding #3: Candidates Want Volume Practice (40+ Sessions, Not 5)
"I did 47 mocks before my IIM-A interview and felt ready. Friend did 5 coaching sessions and blanked when panel asked first follow-up." This type of volume-vs-quality debate appeared repeatedly.
The research consensus: Multiple threads cited the "10,000-hour rule" and deliberate practice research. One well-researched comment linked studies suggesting that 15-30 practice sessions are needed for interview fluency. The user wrote: "You need volume. First 10 sessions you're awkward. Sessions 11-25 you find your rhythm. Sessions 26-40 you handle curveballs. By session 45, panel pressure doesn't faze you."
Coaching's volume constraint: This is structural, not quality-based. Human coaches can't scale beyond 5-10 sessions per candidate profitably. One thread analyzed the economics: "Coach charges ₹1,500/hour. Can do max 6 hours/day. That's 9 candidates daily if 40-minute sessions. To give me 40 sessions = 5 days of their time. Not sustainable."
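The back-of-envelope math in that thread holds up. Here is a quick sketch using only the numbers from the quote above; the final cost figure is my extrapolation of what 40 one-on-one sessions would cost at that hourly rate, not a number from the thread.

```python
# Sanity check of the coaching-economics numbers quoted above.
rate_per_hour = 1500          # ₹ per hour, as quoted
coach_hours_per_day = 6       # max coaching hours per day, as quoted
session_minutes = 40          # length of one mock session, as quoted
sessions_needed = 40          # the practice volume candidates say they need

candidates_per_day = (coach_hours_per_day * 60) // session_minutes   # 9 candidates a day
total_hours = sessions_needed * session_minutes / 60                 # ~26.7 coaching hours
days_of_coach_time = total_hours / coach_hours_per_day               # ~4.4 days (roughly 5)
cost_at_hourly_rate = total_hours * rate_per_hour                    # ₹40,000

print(candidates_per_day, round(days_of_coach_time, 1), round(cost_at_hourly_rate))
```

At the quoted rate, the 40-session volume Reddit recommends would cost around ₹40,000 in one-on-one coaching time, which helps explain why no institute packages it that way.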
Candidates notice the pattern: Several Reddit users observed that candidates who converted IIM calls practiced 30+ times (through peer mocks, seniors, or tools), while those who relied solely on 5-8 coaching sessions struggled. One wrote: "Coaching teaches you what to say. Volume practice teaches you how to say it under pressure. Both needed."
The peer practice alternative (and its limitations): Many candidates organize peer mock sessions to get volume. But threads revealed problems: "Friends give passive feedback. Everyone's too nice. No one grills you like a real panel would." Another: "Peer mocks helped with fluency but didn't prepare me for adversarial questioning. Panel asked 'Why not?' after every answer. Friends never challenged me that way."
Volume + stress = preparation: The most insightful threads connected volume practice with stress inoculation. One IIM-A convert wrote: "By mock 35, I stopped getting nervous. Brain knew 'this is just another session'. If I'd only done 5, interview #6 (the real one) would've felt terrifying." Repeated exposure under realistic stress builds resilience.
Rehearsal's volume advantage: With unlimited sessions at ₹349/month, candidates average 47 practice rounds in 21 days. This volume was previously only achievable through peer practice (low quality) or expensive coaching marathons (₹25,000+).
## Finding #4: Candidates Want Quantified Feedback (Not "Be More Confident")
"My coach said 'be more confident' after every mock. How do I measure confidence? How do I track improvement?" This frustration appeared across 15+ threads.
Vague feedback doesn't enable improvement: Candidates complained about subjective feedback like "good answer", "needs improvement", "work on body language". One wrote: "After 5 sessions and ₹8,000 spent, I still don't know if I'm 60% ready or 90% ready. Just vibes."
The quantification desire: Multiple highly upvoted comments asked: "Is there a tool that scores my answers 0-100? I need numbers to track if I'm improving." Engineering and commerce students especially wanted measurable metrics. One thread: "I track everything—CAT mocks (percentile), GRE (score out of 340), GMAT (score out of 800). Why can't PI prep have scores?"
Comparison across sessions matters: Candidates wanted longitudinal tracking. One wrote: "I want to see: Mock 1 = 55/100, Mock 10 = 62/100, Mock 25 = 78/100. Then I know I'm improving and by how much." Without quantified feedback, improvement feels subjective and uncertain.
The "be confident" problem specifically: This advice appeared in 20+ Reddit complaints. Candidates asked: "How do I 'be confident'? Do I smile more? Speak louder? Make more eye contact? Need specific guidance." Generic advice doesn't translate to actionable changes.
Data-driven improvement mindset: CAT toppers are data-driven by nature. They track sectional percentiles, accuracy rates, time per question. They want the same rigor for interview prep. One 99.5%ile scorer wrote: "I tracked 250+ CAT mocks with 30+ metrics. Now I'm supposed to trust gut feeling for interview prep? Doesn't feel scientific."
Rehearsal's dual scoring solution: Credibility Score (0-100) measures answer quality, structure, and content depth. Confidence Score (0-100) evaluates delivery, tone, and communication clarity. Both are quantified and trackable across sessions. One candidate reported: "Session 1: Credibility 58, Confidence 62. Session 47: Credibility 81, Confidence 88. I can see the improvement arc."
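If you want to keep a log in that spirit, a bare-bones sketch is below. The two data points mirror the quote above, and the field names are illustrative only; this is not Rehearsal's export format, just one way to track your own numbers in a spreadsheet or script.

```python
# Illustrative score log; the values come from the candidate quote above,
# and the field names are hypothetical, not an official data format.
sessions = [
    {"session": 1,  "credibility": 58, "confidence": 62},
    {"session": 47, "credibility": 81, "confidence": 88},
]

first, last = sessions[0], sessions[-1]
for metric in ("credibility", "confidence"):
    gain = last[metric] - first[metric]
    print(f"{metric.title()}: {first[metric]} -> {last[metric]} (+{gain} points)")
```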
## Finding #5: Candidates Want Active Probing (Not Passive Acceptance)
"I practiced with ChatGPT. It accepted everything. Said 'That's a great answer!' to my weakest responses. IIM panel destroyed those same answers in 30 seconds." This ChatGPT disappointment appeared in 10+ threads.
The passive acceptance problem: Candidates who tried free AI tools (ChatGPT, Gemini, Copilot) reported the same issue—AI is too polite, too supportive, too accepting. One wrote: "ChatGPT never challenges you. Real panels do. I wasn't ready for 'Why?' after every claim."
What active probing means: Reddit threads described IIM panel tactics: "Panel asked 'You said you're a strong leader. What's your evidence?' I gave an example. Panel: 'That's delegation, not leadership. Try again.'" Candidates wanted practice tools that replicate this adversarial questioning.
The follow-up intensity: Multiple candidates mentioned being unprepared for rapid follow-ups. "Panel asked 5 'why' questions in 90 seconds. I couldn't think fast enough." They wanted tools that chain questions like real interviewers, not single-question-single-answer formats.
Friends are too nice, AI is too neutral: Peer mock practice faces the "too nice" problem. One candidate: "My friends don't want to be harsh. They say 'good answer' even when it's weak because they don't want to discourage me." Meanwhile, free AI tools don't challenge at all—they accept everything as valid.
What "grilling" actually means to candidates: Reddit used this term repeatedly. One definition: "Grilling means the interviewer probes every weak spot, asks for evidence, challenges assumptions, finds contradictions. You can't coast on vague answers." This is the practice experience candidates seek but rarely find.
Rehearsal's active probing design: DeepProbe™ technology asks follow-up questions like "What evidence supports that?", "Why not X instead of Y?", "You said earlier A, now you're saying B—explain the contradiction." This adversarial questioning replicates panel dynamics ChatGPT/friends cannot provide.
## Finding #6: Candidates Distrust Generic AI (Want CAT-Specific Tools)
"ChatGPT doesn't understand CAT percentiles, GDPI process, or IIM panel culture. It's built for US job interviews." This skepticism about generic AI appeared across multiple threads.
The context gap complaint: Candidates noted that ChatGPT/Gemini don't understand: (1) Why 99.5%ile cutoffs matter, (2) What happens in IIM panel interviews vs corporate interviews, (3) CAT-specific terminology (VARC, DILR, sectional cutoffs), (4) India B-school ecosystem (IIM vs FMS vs XLRI vs MDI differences).
Trust through credentials: When asked "Who should I trust for GDPI prep?", Reddit threads consistently mentioned: IIM alumni, B-school faculty, people who've sat on panels, those who understand the ecosystem. One comment: "I'd rather learn from someone who's been on an IIM panel than a generic interview coach who's never heard of WAT."
The builder credibility question: Multiple threads asked: "Who built this tool? Do they understand CAT interviews or just interviews in general?" Candidates wanted to know if the team has domain expertise, not just technical chops.
India-specific vs global tools: Candidates researching tools noticed most are US-focused (Big Interview, Final Round AI) or globally generic (ChatGPT). One wrote: "Big Interview teaches you to 'sell yourself'. IIM panels test if you can think, not sell. Different game entirely."
The CAT ecosystem understanding test: One thread proposed a test: "If a tool doesn't know that 99.5%ile is needed for IIM-A but 97%ile works for IIM Trichy, it doesn't understand the CAT ecosystem. Can't trust it for CAT-specific prep."
Rehearsal's credibility advantage: Built by an IIM Ahmedabad PhD, XLRI faculty, and IIT graduates who've taught 500+ MBA students across IIMs. The team understands CAT percentiles, GDPI processes, panel psychology, and India's B-school culture. This domain expertise addresses Reddit's trust concerns about generic AI tools.
## What Reddit Gets Wrong (Honest Assessment)
Reddit isn't always right. The community has blind spots worth noting:
Over-reliance on free tools: Many threads suggest "just use ChatGPT with good prompts". But as we've seen, free tools have structural limitations (passive acceptance, no stress simulation). The hidden cost is time—10+ hours engineering prompts for mediocre practice quality.
Underestimating human calibration value: Some candidates dismiss coaching entirely: "AI is enough, coaches are obsolete." But strategic guidance from experienced mentors remains valuable for profile positioning, narrative arc, and school-specific cultural insights. The optimal approach combines AI volume practice with selective human calibration.
Expecting AI to replace strategy: A few threads expect AI tools to handle everything—strategy, positioning, execution, stress management. But AI is best for execution practice (answer delivery, stress inoculation, volume) while humans excel at strategy (how to position your profile, which narratives to emphasize, school-specific tactics).
Ignoring the "AI can't read the room" limitation: Real interviews involve reading panel body language, adjusting tone based on interviewer reactions, picking up subtle cues. AI tools can't replicate this entirely. In-person coaching and peer practice still matter for developing social calibration skills.
Balanced perspective needed: Reddit is correct about volume, affordability, and active probing needs. Reddit underestimates the value of human strategic guidance. The best preparation combines both: AI for volume practice and stress simulation, humans for strategic positioning and calibration.
## Conclusion: Reddit Validated Rehearsal's Design Philosophy
Here's what struck me most after analyzing 50+ threads: every major Reddit complaint maps directly to a Rehearsal feature we built.
Reddit said: "We want affordability under ₹500/month." Rehearsal built: ₹349/month unlimited sessions.
Reddit said: "We want 24/7 access, not fixed schedules." Rehearsal built: Practice at 2 AM or 6 AM if that's when you're sharp.
Reddit said: "We need volume—40+ sessions, not 5." Rehearsal delivered: Users average 47 sessions in 21 days.
Reddit said: "Give us numbers, not vibes." Rehearsal built: Credibility (0-100) + Confidence (0-100) dual scoring.
Reddit said: "We want active probing that challenges us." Rehearsal built: DeepProbe™ asks "What evidence?" and "Why not X instead?"
Reddit said: "We don't trust generic AI—we want CAT-specific tools." Rehearsal built: By IIM-A PhD + XLRI faculty who understand the ecosystem.
We didn't design Rehearsal in a vacuum, guessing what candidates might want. We designed it by listening to what 150,000+ CAT aspirants on Reddit, Discord, and Telegram said they actually needed.
The proof? Rehearsal users practice 47 times on average because the tool removes the artificial constraints (cost, scheduling, volume limits, passive feedback) that Reddit consistently complains about. We built exactly what Reddit asked for.
Try it yourself: Join 5,000+ candidates who practiced the way Reddit suggests—high volume, low cost, active challenge, quantified improvement. Start with a free trial and see if the Reddit-validated approach works for your GDPI prep.
*Note: All Reddit quotes in this article are paraphrased and anonymized to respect user privacy. No usernames or identifying details are shared. Thread patterns and sentiment analysis are based on 50+ threads from r/CATpreparation analyzed between July 2025 and January 2026.*