{"id":356,"date":"2025-06-25T19:37:27","date_gmt":"2025-06-25T15:37:27","guid":{"rendered":"https:\/\/redmonk.com\/kholterhoff\/?p=356"},"modified":"2025-07-02T02:46:48","modified_gmt":"2025-07-01T22:46:48","slug":"do-ai-code-review-tools-work-or-just-pretend","status":"publish","type":"post","link":"https:\/\/redmonk.com\/kholterhoff\/2025\/06\/25\/do-ai-code-review-tools-work-or-just-pretend\/","title":{"rendered":"Do AI Code Review Tools Work, or Just Pretend?"},"content":{"rendered":"<p><img decoding=\"async\" class=\"alignnone size-full wp-image-357\" src=\"http:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer.png\" alt=\"\" width=\"100%\" height=\"1024\" srcset=\"https:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer.png 1536w, https:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer-300x200.png 300w, https:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer-1024x683.png 1024w, https:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer-768x512.png 768w, https:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer-480x320.png 480w, https:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/robot_code_reviewer-941x627.png 941w\" sizes=\"(max-width: 1536px) 100vw, 1536px\" \/><\/p>\n<p>In the prehistoric times of 2021, a go-to aspirational use case for AI code assistants was code review. Check out this <a href=\"https:\/\/x.com\/marius\/status\/1409901812514115585\">tweet<\/a> from Marius Eriksen, software engineer at Meta. Today, AI code review tools are here and plentiful. Examples include CodeRabbit, Qodo\u2019s PR-agent, Greptile, GitHub Copilot code review, Ellipsis, Korbit, Kodus, CodePeer, Codelantis, Bito, Graphite, LinearB, Swimm, Gemini Code Review, and CodeAnt AI. While their promise is significant, whether these tools truly lighten the code review burden or just confidently wing it remains hotly debated among developers. 
So let&#8217;s take a moment to discuss this facet of a much larger subject sitting at the intersection of \u201c<a href=\"https:\/\/redmonk.com\/kholterhoff\/2025\/02\/18\/ai-agents-and-the-ceos\/\">AI Agents and the CEOs<\/a>\u201d (or all business leaders, really) and what \u201c<a href=\"https:\/\/redmonk.com\/kholterhoff\/2024\/11\/18\/top-10-things-developers-want-from-their-ai-code-assistants-in-2024\/\">Developers Want from their AI Code Assistants<\/a>.\u201d<\/p>\n<blockquote class=\"twitter-tweet\" data-width=\"500\" data-dnt=\"true\">\n<p lang=\"en\" dir=\"ltr\">I&#39;d be much more interested in an AI code review system. Do we have enough (public) training data for that? <a href=\"https:\/\/t.co\/e4cEmV9V9n\">https:\/\/t.co\/e4cEmV9V9n<\/a><\/p>\n<p>&mdash; marius eriksen (@marius) <a href=\"https:\/\/twitter.com\/marius\/status\/1409901812514115585?ref_src=twsrc%5Etfw\">June 29, 2021<\/a><\/p><\/blockquote>\n<p><script async src=\"https:\/\/platform.twitter.com\/widgets.js\" charset=\"utf-8\"><\/script><\/p>\n<p>Code review is a necessary pain point and historical bottleneck in the SDLC. It\u2019s time-consuming, sometimes disruptive, and skipping it is risky. AI vendors promise tools that never get tired, never go on vacation, and can review code in minutes. The appeal is obvious. Who wouldn\u2019t want to catch more bugs faster and free human engineers for the creative parts of writing business logic? Of course, more skeptical developers note that such tools existed long before they were graced with AI\u2019s gloss (and VC funding); back then, they were just called linters. In fact, many so-called AI code review tools rely heavily on established techniques for catching violations that static linters have handled since the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Lint_(software)\">70s<\/a>. 
Moreover, there is significant overlap between AI code review tools and code analysis tools like Sourcegraph\u2019s semantic indexing and analysis tooling, Sonar\u2019s SonarQube, and JetBrains Qodana, particularly in how and why developers use them. Still, the notion that AI code review could ensure quality without overburdening senior devs remains seductive.<\/p>\n<p>&nbsp;<\/p>\n<h2>Who\u2019s Who<\/h2>\n<p>Of the AI code review tools available on the market, several stand out for piquing developer interest. CodeRabbit has earned the most <a href=\"https:\/\/www.reddit.com\/r\/ExperiencedDevs\/comments\/1grd2d9\/comment\/mppj3cf\/\">buzz<\/a> among the developers I\u2019ve spoken to personally, and also seems to be gaining community traction more broadly. In a very interesting podcast <a href=\"https:\/\/softwareengineeringdaily.com\/wp-content\/uploads\/2025\/06\/SED1844-CodeRabbit.txt\">conversation<\/a> with <a href=\"https:\/\/www.linkedin.com\/in\/harjotsgill\/\">Harjot Gill<\/a>, CEO of CodeRabbit, I was struck by the intensive agentic demands of this type of workload. According to Gill, CodeRabbit\u2019s AI doesn\u2019t just make a one-pass judgment. It plans a task graph of subtasks (security checks, style conformance, bug risk analysis, etc.), spawning sub-agents as needed. Some tasks are predetermined pipeline stages; others the AI plans on the fly. Crucially, the agent is encouraged to \u201cfollow multiple chains of thought\u201d:<\/p>\n<blockquote><p>You want to let the AI follow multiple chains of thoughts, and then, some of them could lead to a dead end, but that&#8217;s fine. Maybe four out of five, doors were closed, but one of the doors leads to some interesting insight.<\/p><\/blockquote>\n<p>This methodical approach trades a bit of extra compute time for thoroughness. 
Since pull request reviews run in CI\/CD and are latency-insensitive, CodeRabbit is willing to dedicate the time to be thorough rather than fast.<\/p>\n<p>Another notable AI code review tool is GitHub Copilot code review, which <a href=\"https:\/\/github.blog\/changelog\/2025-04-04-copilot-code-review-now-generally-available\/\">GAed<\/a> in April. It automatically reviews PR diffs to suggest changes or flag issues. There are currently <a href=\"https:\/\/docs.github.com\/en\/copilot\/using-github-copilot\/code-review\/using-copilot-code-review\">two types<\/a> of Copilot code review: Review selection (users can highlight code and ask for an initial review using VS Code) and Review changes (users can request a deeper review of all their changes in VS Code and on the GitHub website). While reviews of this newly GAed version are sparse, and some early users of the beta were <a href=\"https:\/\/www.reddit.com\/r\/ExperiencedDevs\/comments\/1grd2d9\/comment\/lx5fo7i\/\">unimpressed<\/a>, Copilot\u2019s strength is its tight integration with developers\u2019 existing CI\/CD workflows and its ease of use. In the most recent episode of <a href=\"https:\/\/front-end-fire.com\/episodes\/101\/\">Frontend Fire<\/a>, hosts <a href=\"https:\/\/www.linkedin.com\/in\/jherr\/\">Jack Herrington<\/a>, <a href=\"https:\/\/www.linkedin.com\/in\/paigeniedringhaus\/\">Paige Niedringhaus<\/a>, and <a href=\"https:\/\/www.linkedin.com\/in\/tjvantoll\/\">TJ VanToll<\/a> expressed skepticism about AI code review tools writ large (and Devin specifically), but acknowledged their willingness to give Copilot code review a try since it\u2019s already built into their GitHub dashboard.<\/p>\n<p>Some AI code review tools have found a niche in security. 
In 2021, Amazon <a href=\"https:\/\/aws.amazon.com\/blogs\/aws\/codeguru-reviewer-secrets-detector-identify-hardcoded-secrets\/\">announced<\/a> a secrets detector for CodeGuru, a tool that \u201chelps you improve code quality and automate code reviews by scanning and profiling your Java and Python applications\u201d and whose \u201cnew detectors use machine learning (ML) to identify hardcoded secrets as part of your code review process.\u201d CodeGuru has since been rebranded <a href=\"https:\/\/aws.amazon.com\/codeguru\/\">Amazon CodeGuru Security<\/a>. Snyk is well-known in the AppSec world for scanning dependencies and container images, but with the 2020 <a href=\"https:\/\/snyk.io\/blog\/accelerating-developer-first-vision-with-deepcode\/\">acquisition<\/a> of DeepCode, it jumped into the AI code analysis space. Snyk Code leverages DeepCode\u2019s capabilities under the hood to provide AI-driven static analysis with a security bent. Other vendors marketing AI-assisted code review specifically for security include HackerOne Code, Turingmind, and Codacy. What interests me about the positioning of these security-focused AI code reviewers is the suggestion that AI can improve not only vibe-generated code, but also code written by fallible humans. For this use case, machines surpass humans. Security-focused AI code review tools shine at enforcing consistency, ensuring that teams follow agreed-upon standards.<\/p>\n<p>In summary, the competitive landscape breaks down like this: Some tools provide general code review assistance, and will live or die based on how well they perform. Others specialize (security scanners, style enforcers) and hope to \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Crossing_the_Chasm\">cross the chasm<\/a>\u201d through targeted adoption. 
And Big Tech (GitHub, Amazon) is embedding AI reviewers directly into platforms for convenience at the cost of some flexibility.<\/p>\n<p>&nbsp;<\/p>\n<h2>Developers\u2019 Verdict: From Skeptical to Real Skeptical<\/h2>\n<p><img decoding=\"async\" class=\"alignnone size-full wp-image-360\" src=\"http:\/\/redmonk.com\/kholterhoff\/files\/2025\/06\/reignofrireeasy.gif\" alt=\"\" width=\"100%\" height=\"215\" \/><br \/>\nIf you think engineers have strong opinions on tabs vs spaces, wait until you ask them about AI code review tools. The sentiment ranges from cautious optimism to \u201cburn it with fire.\u201d Let&#8217;s start with the good. Many engineering leaders report having overall positive experiences with these tools. For instance, I spoke with <a href=\"https:\/\/www.linkedin.com\/in\/jonfreedman\/\">Jon Freedman<\/a>, CTO at Echios, who explains:<\/p>\n<blockquote><p>I&#8217;m using Greptile with both the start-ups I work with, $30 per dev per month is worth it at a small scale. You can also go the open source route and just pay for your AI token usage (https:\/\/github.com\/qodo-ai\/pr-agent).<\/p>\n<p>If you&#8217;re not doing pair programming and merging direct to your main branch hard-core trunk-based style it&#8217;s a no-brainer to turn these reviews on, you can still resolve anything flagged that&#8217;s noise.<\/p><\/blockquote>\n<p>Others remain skeptical. Online you can find many folks like <a href=\"https:\/\/www.linkedin.com\/in\/jessesquires\/\">Jesse Squires<\/a>, an iOS and macOS developer, with <a href=\"https:\/\/www.jessesquires.com\/blog\/2025\/03\/04\/ai-code-review\/\">objections<\/a>:<\/p>\n<blockquote><p>I work on a team that has enabled an AI code review tool. And so far, I am unimpressed. Every single time, the code review comments the AI bot leaves on my pull requests are not just wrong, but laughably wrong. 
When its suggestions are not completely fucking incorrect, they make no sense at all.<\/p><\/blockquote>\n<p>A common refrain I encountered in this research is that AI can\u2019t truly grok a team\u2019s specific project context. As one Redditor <a href=\"https:\/\/www.reddit.com\/r\/AskProgramming\/comments\/1g0bfbn\/comment\/lr7ncl3\">put it<\/a>:<\/p>\n<blockquote><p>My experience with coding AI is that they tend to have tolerable general programming knowledge, but tend to be utterly incapable of understanding the context of your project. This means they are far below the capabilities of a solid programmer working on the project. Interacting with them is therefore a waste of time when you&#8217;re good at what you&#8217;re trying to do.<\/p><\/blockquote>\n<p>The issue of context has long been core to the success of AI code assistant tools. Context is King, so, unsurprisingly, many players in the AI code review market claim that it is context that sets them apart from the competition. <a href=\"https:\/\/www.ycombinator.com\/companies\/greptile\">Greptile<\/a>, for instance, markets itself as an \u201cAI code reviewer with complete context of your codebase.\u201d <a href=\"https:\/\/www.linkedin.com\/in\/edvaldofreitas\">Edvaldo Freitas<\/a>, Head of Growth at Kodus, also <a href=\"https:\/\/news.ycombinator.com\/item?id=43268587\">points to<\/a> context as necessary for the success of these review tools:<\/p>\n<blockquote><p>So, the problem with a lot of tools is that they don\u2019t really get the full context of the code. They either suggest things that aren\u2019t a priority or don\u2019t understand the team\u2019s patterns.<\/p><\/blockquote>\n<p>Another issue that haunts AI code review tools, and is a constant complaint among the devs who use them, is wading through false positives and hallucinations (Squires\u2019s \u201ccompletely fucking incorrect\u201d). 
Many devs share anecdotes of AIs hallucinating problems that don\u2019t exist, or suggesting bizarre changes. <a href=\"https:\/\/www.linkedin.com\/in\/chris-zuber-455346141\/\">Chris Zuber<\/a>, a fullstack web developer, <a href=\"https:\/\/www.reddit.com\/r\/softwaredevelopment\/comments\/1foq3io\/comment\/lp3jvvd\">complains<\/a> on Reddit:<\/p>\n<blockquote><p>I&#8217;ve had nothing but horrible experiences with AI code review. It suffers from hallucinations, outdated info, insufficient memory\/context, etc. It just makes everything up, ignores explicit instructions, gives some utterly bloated and useless response, and tends to dwell on some BS it invented and end up conflating the actual code with whatever garbage it comes up with.<\/p>\n<p>Maybe it&#8217;s fine for reviewing boilerplate, but&#8230; If you&#8217;re the author of some library or if you&#8217;re doing anything remotely complex, it&#8217;s just infuriating and a waste of time.<\/p><\/blockquote>\n<p>Vendors recognize this issue and are eager to overcome it. Many offer prompt guidelines and documentation, or else suggest that users spend time at the outset customizing the AI\u2019s rules to match their team\u2019s priorities. All of these strategies are intended to improve the tool\u2019s success by avoiding irrelevant or incorrect suggestions. In fact, developers complain that these hallucinations, combined with laziness and inexpertise within teams, make for catastrophe. 
As one Redditor, speaking of Codacy specifically, <a href=\"https:\/\/www.reddit.com\/r\/programming\/comments\/131r7ow\/comment\/ji21omh\/\">notes<\/a>, sprinkling in AI fairy dust with features like AI-generated fixes for static analysis findings can be a double-edged sword because:<\/p>\n<blockquote><p>Fixing static analysis is like 90+% boring mechanical code changes to adhere to a stricter style, and 10% noticing that you&#8217;re doing something dumb and the tool just shoved it in your face.<\/p>\n<p>Junior developers submit what I call &#8220;make the tool shut up&#8221; pull requests. The kind of stuff where they take legitimately smelly code and make it actively worse, but satisfy the analysis rule in the process. Or just mark the dumb thing as intentional.<\/p>\n<p>I&#8217;d need to see good samples from this tool&#8217;s testing corpus of interesting static analysis findings and the AI generated fixes, otherwise I&#8217;m instantly writing it off as garbage.<\/p><\/blockquote>\n<p>The stereotypes of junior devs and vibe coders who settle for appeasing the tool, rather than actually addressing the underlying issue, abound in conversations among practitioners debating the merits of AI code reviewers. Users more interested in how to \u201cmake the linter shut up\u201d than in security or performance don\u2019t benefit from these tools today, and can even make more work for engineering teams tasked with managing them by making code worse or introducing new problems. Fair enough, but what interests me most from my research into these very desirable, but still imperfect, tools is just this sort of outward-looking reflection. 
Let me explain what I mean.<\/p>\n<p>These automated reviewers force teams<span style=\"font-weight: 400;\">\u2014and the engineering leaders most often tasked with reviewing PRs\u2014<\/span>to take a hard look in the mirror, and the result has been deep, earnest practitioner self-reflection about code review\u2019s role and purpose in successful teams. Some engineers note that loss of human knowledge-sharing is a subtle cost of relying on AI for reviews. Code review isn\u2019t just about finding bugs; it\u2019s also senior devs teaching juniors, team members learning parts of the codebase they don\u2019t normally touch, ensuring a shared understanding of design decisions, and sometimes challenging architectural choices. As one Redditor <a href=\"https:\/\/www.reddit.com\/r\/AskProgramming\/comments\/1g0bfbn\/comment\/lr8p74a\/\">explains<\/a>:<\/p>\n<blockquote><p>AI code reviews, even if they worked perfectly, would mean missing out on one of the big benefits of code reviews &#8211; which is that it helps spread knowledge to other team members.<\/p><\/blockquote>\n<p>When a human reviews code, they\u2019re not only checking for correctness. This collaborative process builds shared ownership and raises the collective expertise of the team. Code review is where the hard work of programming actually occurs. If AI were to replace that entirely, even if it did so flawlessly, it would strip away some of the most valuable, albeit high-level, outcomes of the review process.<\/p>\n<p>At the end of the day, code review is as much about humans collaborating as it is about benefitting organizations and engineering teams. 
As Teaganne Finn and Amanda Downie at IBM <a href=\"https:\/\/www.ibm.com\/think\/insights\/ai-code-review\">sum up<\/a>, these tools can boost: \u201cefficiency, consistency, error detection, [and] enhanced learning.\u201d There can be no doubt that AI is rapidly improving and will play an increasingly important role in the SDLC, but for the foreseeable future it will continue to work best with both human and organizational intelligence at the helm.<\/p>\n<p><b>Disclaimer: <\/b>AWS, IBM, Google, and Microsoft (GitHub) are RedMonk clients.<\/p>\n<p><strong>Update 6 June 2025:<\/strong> I received some excellent feedback from readers and decided to add a positive quote to the &#8220;Developers&#8217; Verdict&#8221; section.<br \/>\n<sub>Header image created by ChatGPT 4o<\/sub><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the prehistoric times of 2021, a go-to aspirational use case for AI code assistants was code review. Check out this tweet from Marius Eriksen, software engineer at Meta. Today, AI code review tools are here and plentiful. 
Examples include CodeRabbit, Qodo\u2019s PR-agent, Greptile, GitHub Copilot code review, Ellipsis, Korbit, Kodus, CodePeer, Codelantis, Bito, Graphite,<\/p>\n","protected":false},"author":50,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"spay_email":"","footnotes":""},"categories":[32,27],"tags":[],"class_list":["post-356","post","type-post","status-publish","format-standard","hentry","category-ai","category-qa"],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/posts\/356","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/users\/50"}],"replies":[{"embeddable":true,"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/comments?post=356"}],"version-history":[{"count":0,"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/posts\/356\/revisions"}],"wp:attachment":[{"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/media?parent=356"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/categories?post=356"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/redmonk.com\/kholterhoff\/wp-json\/wp\/v2\/tags?post=356"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}