AI for Automation
2026-03-23 · AI learning · Gemini · job interview · LeetCode · AI tutor

He used AI to cram for a Google interview in 7 days — and got invited on-site

A telecom developer who failed algorithms for 10 years used Gemini as a private tutor, solved 34 LeetCode problems in one week, and earned a Google on-site interview.


A software developer with over a decade of experience just showed that AI can teach you what years of traditional study couldn't. Dominik Rudnik — a telecom and gamedev engineer — used Google's Gemini Pro as a private tutor to prepare for a Google technical interview in just 7 days. He solved 34 LeetCode problems (including 18 Medium and 1 Hard), and despite making mistakes under pressure, Google invited him for on-site interviews.

The story, shared on his personal blog, hit the Hacker News front page and sparked intense debate about whether AI-assisted learning genuinely works — or just creates an illusion of understanding.

[Image: Google Gemini AI interface used as a private tutor for interview preparation]

10 years of failure, 7 days of AI tutoring

Rudnik describes himself as a practical engineer — telecom routing, message processing, and hobby game development. His approach to data structures was pragmatic: flat arrays, simple maps, and "for hard problems, SQLite." Classical algorithms and LeetCode-style challenges? He'd been failing at them since primary school programming contests.

Then Google's recruiter called about a year-old application he'd forgotten. He had one week, a day job, and a fundamental knowledge gap.

His three rules for AI-assisted learning:

1. The AI must never output code — only conceptual hints and real-world metaphors
2. He must write every solution himself, in his own coding style and variable names
3. Each problem gets a strict timebox — 10 to 30 minutes max, then move on

The method that actually worked

On Day 1, Rudnik fed Gemini Pro his CV, Google's prep materials, and a simple instruction: "Act like a human private teacher fully committed to teaching me new concepts I'm not aware of." He asked for conceptual explanations using analogies from other domains — not textbook definitions.

The result surprised him. Problems like "Best Time to Buy and Sell Stock" and "Valid Anagram" — which had seemed impenetrable as abstract math — suddenly made sense when the AI described them using real-world metaphors.
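The article doesn't reproduce his solutions, but "Valid Anagram" is a good illustration of how a real-world metaphor maps onto code. A minimal sketch (the Scrabble-tile analogy is our illustration, not necessarily the one Gemini used):

```python
from collections import Counter

def is_valid_anagram(s: str, t: str) -> bool:
    """Picture each word as a bag of Scrabble tiles: two words
    are anagrams exactly when both bags hold the same tiles in
    the same quantities -- so just compare the letter counts."""
    return Counter(s) == Counter(t)

print(is_valid_anagram("listen", "silent"))  # True
print(is_valid_anagram("hello", "world"))    # False
```

Once the problem is framed as "compare two bags of tiles," the abstract definition stops mattering — which is exactly the effect Rudnik describes.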

Day 2 was the marathon: 9 hours of focused learning, covering linked lists, binary trees, graphs, and breadth-first search (a method for exploring data by checking neighbors first, like solving a maze level by level). He discovered that tree problems felt like something he'd already done — traversing game UI elements. Graphs reminded him of his daily telecom work.
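The "maze level by level" intuition translates almost directly into code. A minimal breadth-first search sketch (the toy graph here is ours, for illustration):

```python
from collections import deque

def bfs_order(graph: dict, start):
    """Explore a graph like flooding a maze: visit everything one
    step from the start, then two steps, and so on. A queue keeps
    the 'frontier' in level order."""
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()          # take the oldest frontier node
        order.append(node)
        for neighbor in graph[node]:
            if neighbor not in visited:  # never re-enter a visited room
                visited.add(neighbor)
                queue.append(neighbor)
    return order

maze = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(bfs_order(maze, "A"))  # ['A', 'B', 'C', 'D']
```

Swap the queue for a stack and the same skeleton becomes depth-first search — one reason tree and graph traversal felt familiar from his UI-walking code.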

By Day 3, he switched to Medium-difficulty problems without a compiler — forcing himself to write code purely from memory, the way a real interview works. His key discovery: "Easy problems are often the hardest because they introduce entirely new concepts. Medium problems are just trickier versions of the easy ones."

What happened at the interview

The technical assessment was a hybrid of graph traversal (navigating connected data points) and binary search (a technique for quickly finding items in sorted data by repeatedly cutting the search space in half). Rudnik liked it immediately — it felt like designing game mechanics for an RTS game.

He nailed the first part. Then, under time pressure, his working memory dumped the iterative binary search syntax. He knew the logic but couldn't write the code.
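The article doesn't show his code, but the standard iterative binary search is short — and its boundary conditions (`<=` vs `<`, `mid + 1` vs `mid`) are exactly the details that tend to evaporate under pressure. A minimal sketch:

```python
def binary_search(sorted_items, target):
    """Find target in a sorted list by repeatedly halving the
    search space. Returns the index, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:                      # <= : a one-item range still counts
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1                 # discard the left half, mid included
        else:
            hi = mid - 1                 # discard the right half, mid included
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))  # -1
```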

He fell back on verbalization — explaining his approach out loud to the interviewer and reconstructing the algorithm step by step, even though his written code contained mistakes. It wasn't clean, but the thinking was sound.

The outcome: Google's recruiter said his "code debuggability" needed work — corporate language for "your code had bugs." But they saw enough to invite him for two on-site technical interviews. For someone who couldn't solve even the simplest LeetCode problem one week earlier, this was a breakthrough.

Why Hacker News is arguing about this

The post triggered a heated debate. Some commenters questioned whether AI tutoring creates genuine understanding or just an illusion of learning. Others pointed out that algorithmic interviews themselves are flawed — one commenter noted they're "100% gamed by now" with remote LeetCode rounds.

But the most thoughtful responses focused on the learning method itself. Rudnik's approach — forcing the AI to explain without code, then writing solutions in his own style — mirrors what educational researchers call "desirable difficulty." The AI didn't do the work for him. It translated concepts into language his brain could process.

As Rudnik himself reflected: "I am still amazed at how an LLM helped me understand a problem space I had been trying to grasp for over 10 years. It highlights how crucial a private tutor is for understanding concepts that aren't just a bucket of facts."

Try this approach yourself

You don't need a Google interview to use this method. The core technique works for learning anything — from Excel formulas to music theory. Here's Rudnik's recipe adapted for any AI chat (ChatGPT, Claude, Gemini):

# Paste this into any AI chat to start a tutoring session:

Act as a private tutor fully committed to teaching me [TOPIC].
Do NOT output any code or direct answers.
Only provide conceptual hints, real-world analogies, and attack strategies.
If a metaphor from a different domain makes the concept clearer, use it.
After each concept, give me a practice exercise to try myself.
I will show you my attempt, and you will judge it and suggest improvements.

The three rules that made it work:

  • No copy-paste: Write everything yourself, in your own words and style
  • Strict timeboxes: 10-30 minutes per concept, then move on regardless
  • Explain it back: If you can teach the concept to a friend, you understand it

