Benchmarks illustrate a model’s capabilities in areas like coding and reasoning. The table below reflects each model’s performance across various domains, with data on agentic coding, math, reasoning, and tool use.
| Benchmark | Claude 4 Opus | Claude 4 Sonnet | GPT-4o | Gemini 2.5 Pro |
|---|---|---|---|---|
| HumanEval (Code Gen) | Not Available | Not Available | 74.8% | 75.6% |
| GPQA (Graduate Reasoning) | 83.3% | 83.8% | 83.3% | 83.0% |
| MMLU (World Knowledge) | 88.8% | 86.5% | 88.7% | 88.6% |
| AIME 2025 (Math) | 90.0% | 85.0% | 88.9% | 83.0% |
| SWE-bench (Agentic Coding) | 72.5% | 72.7% | 69.1% | 63.2% |
| TAU-bench (Tool Use) | 81.4% | 80.5% | 70.4% | Not Available |
| Terminal-bench (Coding) | 43.2% | 35.5% | 30.2% | 25.3% |
| MMMU (Visual Reasoning) | 76.5% | 74.4% | 82.9% | 79.6% |
As the table shows, Claude 4 generally excels in coding, GPT-4o in reasoning, and Gemini 2.5 Pro offers strong, balanced performance across different modalities. For more information, please visit here.
Overall Analysis
Here’s what we’ve learned about these advanced models, based on the above points of comparison:
- Claude 4 excels in coding, math, and tool use, but it is also the most expensive of the three.
- GPT-4o excels at reasoning and multimodal support, handling different input formats, which makes it a great choice for more advanced and complex assistants.
- Meanwhile, Gemini 2.5 Pro offers strong, balanced performance with the largest context window and the most cost-effective pricing.
Claude 4 vs GPT-4o vs Gemini 2.5 Pro: Coding Capabilities
Now we will compare the code-writing capabilities of Claude 4, GPT-4o, and Gemini 2.5 Pro. To do that, we will give the same prompt to all three models and evaluate their responses on the following metrics:
- Efficiency
- Readability
- Comments and Documentation
- Error Handling
Task 1: Design Playing Cards with HTML, CSS, and JS
Prompt: “Create an interactive webpage that displays a collection of WWE Superstar flashcards using HTML, CSS, and JavaScript. Each card should represent a WWE wrestler and must include a front and a back side. On the front, display the wrestler’s name and image. On the back, show additional stats such as their finishing move, brand, and championship titles. The flashcards should have a flip animation when hovered over or clicked.
Additionally, add interactive controls to make the page dynamic: a button that shuffles the cards, and another that reveals a random card from the deck. The layout should be visually appealing and responsive for different screen sizes. Bonus points if you include sound effects like entrance music when a card is flipped.
Key Features to Implement:
- Front of card: wrestler’s name + image
- Back of card: stats (e.g., finisher, brand, titles)
- Flip animation using CSS or JS
- “Shuffle” button to randomly reorder cards
- “Show Random Superstar” button
- Responsive design.”
Claude 4’s Response:
GPT-4o’s Response:
Gemini 2.5 Pro’s Response:
Comparative Analysis
In the first task, Claude 4 delivered the most interactive experience with the most dynamic visuals. It also added a sound effect when clicking on a card. GPT-4o produced a black-themed layout with smooth transitions and fully functional buttons, but lacked the audio functionality. Meanwhile, Gemini 2.5 Pro gave the simplest, most basic sequential layout with no animation or sound. Also, its random card feature failed to show the card’s face properly. Overall, Claude takes the lead here, followed by GPT-4o, and then Gemini.
Task 2: Build a Game
Prompt: “Spell Strategy Game is a turn-based battle game built with Pygame, where two mages compete by casting spells from their spellbooks. Each player starts with 100 HP and 100 Mana and takes turns selecting spells that deal damage, heal, or apply special effects like shields and stuns. Spells consume mana and have cooldown periods, requiring players to manage resources and strategize carefully. The game features an engaging UI with health and mana bars, and spell cooldown indicators. Players can face off against another human or an AI opponent, aiming to reduce their rival’s HP to zero through tactical decisions.
Key Features:
- Turn-based gameplay with two mages (PvP or PvAI)
- 100 HP and 100 Mana per player
- Spellbook with various spells: damage, healing, shields, stuns, mana recharge
- Mana costs and cooldowns for each spell to encourage strategic play
- Visual UI elements: health/mana bars, cooldown indicators, spell icons
- AI opponent with simple tactical decision-making
- Mouse-driven controls with optional keyboard shortcuts
- Clear in-game messaging showing actions and effects”
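Before looking at each model’s response, here is a minimal sketch in plain Python of the turn, mana, and cooldown mechanics the prompt describes (Pygame rendering and UI omitted). The class and method names below are illustrative assumptions, not taken from any model’s actual output:

```python
from dataclasses import dataclass, field

@dataclass
class Spell:
    name: str
    mana_cost: int
    cooldown: int          # turns before the spell can be cast again
    damage: int = 0
    heal: int = 0
    shield: int = 0

@dataclass
class Mage:
    name: str
    hp: int = 100
    mana: int = 100
    shield: int = 0
    cooldowns: dict = field(default_factory=dict)  # spell name -> turns left

    def can_cast(self, spell: Spell) -> bool:
        return self.mana >= spell.mana_cost and self.cooldowns.get(spell.name, 0) == 0

    def cast(self, spell: Spell, target: "Mage") -> None:
        # Spend mana, start the cooldown, then apply the spell's effects.
        self.mana -= spell.mana_cost
        self.cooldowns[spell.name] = spell.cooldown
        self.hp = min(100, self.hp + spell.heal)
        self.shield += spell.shield
        absorbed = min(target.shield, spell.damage)
        target.shield -= absorbed
        target.hp = max(0, target.hp - (spell.damage - absorbed))

    def end_turn(self) -> None:
        # Tick down cooldowns and regenerate a little mana each turn.
        self.cooldowns = {k: v - 1 for k, v in self.cooldowns.items() if v > 1}
        self.mana = min(100, self.mana + 10)

# Example turn: a fireball that deals 30 damage for 25 mana.
fireball = Spell("Fireball", mana_cost=25, cooldown=2, damage=30)
p1, p2 = Mage("Player 1"), Mage("Player 2")
if p1.can_cast(fireball):
    p1.cast(fireball, p2)
p1.end_turn()
print(p2.hp)  # 70
```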
Claude 4’s Response:
GPT-4o’s Response:
Gemini 2.5 Pro’s Response:
Comparative Analysis
In the second task, on the whole, none of the models produced proper graphics: each one displayed a black screen with a minimal interface. However, Claude 4 provided the most functional and smooth control over the game, with a range of attack, defense, and other strategic gameplay options. GPT-4o, on the other hand, suffered from performance issues such as lagging, along with a small window size. Gemini 2.5 Pro fell short here as well, as its code failed to run and produced errors. Overall, once again, Claude takes the lead, followed by GPT-4o, and then Gemini 2.5 Pro.
Task 3: Best Time to Buy and Sell Stock
Prompt: “You are given an array prices where prices[i] is the price of a given stock on the ith day.
Find the maximum profit you can achieve. You may complete at most two transactions.
Note: You may not engage in multiple transactions simultaneously (i.e., you must sell the stock before you buy again).
Example:
Input: prices = [3,3,5,0,0,3,1,4]
Output: 6
Explanation: Buy on day 4 (price = 0) and sell on day 6 (price = 3), profit = 3-0 = 3. Then buy on day 7 (price = 1) and sell on day 8 (price = 4), profit = 4-1 = 3.”
Claude 4’s Response:

GPT-4o’s Response:

Gemini 2.5 Pro’s Response:

Comparative Analysis
In the third and final task, the models had to solve the problem using dynamic programming. Of the three, GPT-4o provided the most practical and well-structured solution, using a clean 2D dynamic programming approach with safe initialization, and it also included test cases. Claude 4 presented a more detailed and educational approach, but it is more verbose. Meanwhile, Gemini 2.5 Pro gave a concise method, but used INT_MIN initialization, which is a risky approach. So in this task, GPT-4o takes the lead, followed by Claude 4, and then Gemini 2.5 Pro.
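To make that contrast concrete, here is a minimal sketch of a 2D DP solution of the kind described above, with safe initialization (the running buy state is seeded from the actual first price rather than a sentinel like INT_MIN, which can overflow in lower-level languages). This is an illustrative example, not the actual output of any of the models:

```python
def max_profit_two_transactions(prices: list[int]) -> int:
    """Max profit with at most two transactions.

    dp[t][d] = best profit using at most t transactions up to day d.
    best_buy is seeded from real data (-prices[0]), avoiding sentinel values.
    """
    if not prices:
        return 0
    n, k = len(prices), 2
    dp = [[0] * n for _ in range(k + 1)]
    for t in range(1, k + 1):
        best_buy = -prices[0]  # best of dp[t-1][j] - prices[j] seen so far
        for d in range(1, n):
            # Either do nothing today, or sell today after the best prior buy.
            dp[t][d] = max(dp[t][d - 1], prices[d] + best_buy)
            best_buy = max(best_buy, dp[t - 1][d] - prices[d])
    return dp[k][n - 1]

# The example from the prompt: two trades of profit 3 each.
print(max_profit_two_transactions([3, 3, 5, 0, 0, 3, 1, 4]))  # 6
```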
Final Verdict: Overall Analysis
Here’s a comparative summary of how well each model performed in the above tasks.
| Task | Claude 4 | GPT-4o | Gemini 2.5 Pro | Winner |
|---|---|---|---|---|
| Task 1 (Card UI) | Most interactive, with animations and sound effects | Simple dark theme with functional buttons, no audio | Basic sequential layout, card-face issue, no animation/sound | Claude 4 |
| Task 2 (Game Control) | Smooth controls, broad strategy options, most functional game | Usable but laggy, small window | Failed to run, interface errors | Claude 4 |
| Task 3 (Dynamic Programming) | Verbose but educational, good for learning | Clean and safe DP solution with test cases, most practical | Concise but unsafe (uses INT_MIN), lacks robustness | GPT-4o |
To check the complete version of all the code files, please visit here.
Conclusion
Through this comprehensive comparison across three diverse tasks, we have observed that Claude 4 stands out with its interactive UI design capabilities and stable logic in modular programming, making it the top performer overall. GPT-4o follows closely with its clean, practical coding and excels at algorithmic problem-solving. Meanwhile, Gemini 2.5 Pro lagged in UI design and execution stability across all tasks. That said, these observations are based entirely on the comparison above; each model has its own strengths, and the choice of model ultimately depends on the problem you are trying to solve.