ChatGPT vs. Google Bard: Which AI Chatbot Is Better at Coding? – MUO – MakeUseOf

The capabilities of AI chatbots are growing rapidly. But can they code yet, and which is better at the task?
When Google launched Bard, its answer to OpenAI's ChatGPT, it was missing a feature that was quite popular with ChatGPT users: the ability to write code. However, following popular demand, Google gave Bard a shot in the arm, enabling it to write code in dozens of programming languages.
Google has since been vocal about how well Bard can write and debug code, but how does it compare to the phenomenal ChatGPT? Let's find out.
Officially, Google's Bard can work with around 20 programming languages, consisting mostly of popular ones like TypeScript, Python, C++, Go, Java, JavaScript, and PHP. It can still handle less popular options like Perl and Scala, but not necessarily as proficiently.
ChatGPT, on the other hand, doesn't have an official list of supported languages. The chatbot can handle most of what Bard can, plus dozens more. ChatGPT can write, debug, and explain code in both newer, popular programming languages and less popular, legacy languages like Fortran, Pascal, BASIC, and COBOL.
However, support doesn't necessarily mean proficiency. We tried out some simple tasks in select languages like PHP, JavaScript, BASIC, and C++. Both Bard and ChatGPT were able to deliver on the popular programming languages, but only ChatGPT was able to convincingly string together programs in older languages. So in terms of language support, we give the win to ChatGPT.
Suppose you ask ChatGPT or Bard to generate a piece of code that does something, and it spits out dozens of lines in seconds. Easy, right? But how often will that code actually work? And even when it works, how good is it?
To compare the accuracy and quality of code generated by the two AI chatbots, we gave both of them some coding tasks to complete. We asked Bard and ChatGPT to generate a simple to-do list app using HTML, CSS, and JavaScript. After copy-pasting and viewing the generated code in a browser, ChatGPT's app looked like this:
Using ChatGPT's version, you can add a new task, delete a task, or mark a task as complete. Google's Bard was also able to generate a functional to-do list app. However, you can only add a task, with no means to delete or mark it as complete. Bard's interface also seemed less appealing; here’s what it looked like:
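To make the feature gap concrete, here's a minimal sketch of the task logic a to-do app like this needs — just the data layer, with no DOM code, and with function names of our own choosing rather than anything taken from either chatbot's output:

```javascript
// Minimal to-do list state and operations (illustrative data layer only).
function createTodoList() {
  return { tasks: [], nextId: 1 };
}

// Add a task and return it so the caller can reference its id.
function addTask(list, text) {
  const task = { id: list.nextId++, text, completed: false };
  list.tasks.push(task);
  return task;
}

// Remove a task by id.
function deleteTask(list, id) {
  list.tasks = list.tasks.filter((t) => t.id !== id);
}

// Flip a task's completed flag.
function toggleComplete(list, id) {
  const task = list.tasks.find((t) => t.id === id);
  if (task) task.completed = !task.completed;
}
```

ChatGPT's version covered all three operations; Bard's covered only the equivalent of addTask.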
We repeated the test, this time asking both chatbots to recreate Twitter timeline cards. Here’s ChatGPT's result:
And here’s what Google's Bard produced:
Both results have their pros and cons. We would have gone with Bard's result if it had included the like, retweet, and comment buttons, but it oddly left those out, so we'll let you decide which is better. Of course, the quality of code is not just about the aesthetics of what it produces.
When analyzing the code generated by both chatbots, Bard's seemed to be more object-oriented while ChatGPT's was more procedural. Our choice of programming language could influence this, but ChatGPT seemed to write cleaner code when necessary. It also tends to generate more complete solutions, typically leading to more lines of code.
In terms of the quality of generated code, we award this round to ChatGPT.
Errors and bugs are like puzzles that programmers love to hate. They'll drive you crazy, but fixing them is quite satisfying. So when you run into bugs in your code, should you call on Bard or ChatGPT for help? To decide, we gave both AI chatbots two debugging problems to solve.
First, we prompted both chatbots to fix a logical error in a PHP script. Logic errors are notoriously hard to spot because code that contains them doesn't look wrong; it just doesn't do what the programmer intended.
The code in this screenshot runs, but it doesn't work. Can you spot the logical error? We asked Google's Bard for help, and unfortunately, the chatbot couldn't pick out the flaw. Interestingly, Bard typically offers three drafts for each response; we checked all three, and they were all wrong.
We then asked ChatGPT for help, and it immediately picked out the logical error.
The PHP script didn't have any syntax errors, but the logic in the isOdd() function was backward. To see if a number is odd, you'd typically check if it has a remainder after dividing by 2. If it does, it's an odd number.
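Since the original PHP appears only as a screenshot, here's the same class of bug sketched in JavaScript — the buggy condition is inverted, so it reports even numbers as odd:

```javascript
// Buggy version: the condition is inverted.
function isOddBuggy(n) {
  return n % 2 === 0; // wrong: a remainder of 0 means the number is EVEN
}

// Fixed version: a number is odd when dividing by 2 leaves a remainder.
function isOdd(n) {
  return n % 2 !== 0;
}
```

Both versions run without errors, which is exactly what makes this kind of bug hard to spot.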
While Bard was nowhere near identifying this bug, ChatGPT picked it up on the first attempt. We tried four more logical errors, and Bard only caught one of them, while ChatGPT consistently delivered. When we switched to code with syntax errors, Google's Bard was able to keep up, identifying them in almost all the samples we presented.
Google's Bard is relatively good at debugging, but we'll give this win to ChatGPT once again.
One of the biggest challenges with using AI chatbots for coding is their relatively limited context awareness. You ask the chatbot to write some code, then some more, and somewhere along the line it completely forgets that the next thing it's building is part of the same project.
For example, say you're building a web app with an AI chatbot. You tell it to write code for your registration and login HTML page, and it does it perfectly. And then, as you keep building, you ask the chatbot to generate a server-side script to handle the login logic. This is a simple task, but because of limited context awareness, it could end up generating a login script with new variables and naming conventions that don't match the rest of the code.
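Here's an illustrative (and entirely hypothetical) JavaScript sketch of that kind of mismatch — the field names are our own invention, not output from either chatbot:

```javascript
// The registration form generated earlier submits these fields:
const loginPayload = { userEmail: "ada@example.com", userPassword: "secret" };

// A later, context-blind generation expects different field names,
// so both lookups silently come back undefined:
function handleLogin(body) {
  return { email: body.email, password: body.password };
}
```

Nothing crashes; the login just quietly fails, which is exactly why these inconsistencies are so easy to miss.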
So, which chatbot is better at retaining context awareness? We gave both tools the same programming task: a chat app that we know ChatGPT can already build.
Unfortunately, Bard simply could not complete the app because it lost track of the project's context after it sat idle for some time. Despite being subject to the same conditions, ChatGPT completed the app. Once again, in terms of context awareness, we give it to ChatGPT.
At this point, Google's Bard is lacking in a lot of ways. But can it finally score a win? Let's test its problem-solving abilities. Sometimes you just have a problem, but you aren't sure how to represent it programmatically, let alone how to solve it.
Situations like this are when AI chatbots like Bard and ChatGPT can come in quite handy. But which chatbot has better problem-solving abilities? We asked them both to "write a JavaScript code that counts how many times a particular word appears in a text."
Bard responded with working code, although it failed when punctuation marks appeared next to the word or the word appeared in a different case.
We threw the same problem at ChatGPT and here's the result:
ChatGPT's code takes a more robust and accurate approach to counting word occurrences in a text. It considers word boundaries and case sensitivity, handles punctuation properly, and gives more reliable results. Once again, in terms of problem-solving, we give it to ChatGPT.
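ChatGPT's actual output isn't reproduced here, but the robust approach it took can be sketched like this — match whole words only, ignore case, and escape the search term in case it contains regex metacharacters (the function name is ours):

```javascript
// Count whole-word, case-insensitive occurrences of `word` in `text`.
function countWord(text, word) {
  // Escape regex metacharacters so the word is matched literally.
  const escaped = word.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  // \b boundaries ignore surrounding punctuation; g counts all matches, i ignores case.
  const matches = text.match(new RegExp(`\\b${escaped}\\b`, "gi"));
  return matches ? matches.length : 0;
}
```

For example, countWord("The cat sat. A cat, another CAT!", "cat") returns 3, while a naive split-on-spaces comparison would miss "cat," and "CAT!".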
Since Google Bard has pretty much lost in every metric we used for comparison, we decided to give it a chance at redemption. We asked the chatbot "Which is better at coding? ChatGPT or Google Bard?"
While it agreed that ChatGPT was more creative, Bard said its competitor was more likely to make mistakes and that ChatGPT produced code that was less efficient, not well-structured, and was generally less reliable. We have to disagree!
Google's Bard has enjoyed a lot of hype, so it may come as a surprise to see just how much it lacks in comparison to ChatGPT. ChatGPT clearly had a head start, but you might have expected Google's massive resources to erode that advantage by now.
Despite these results, it would be unwise to write off Bard as a programming aid. Although it’s not as powerful as ChatGPT, Bard still packs a significant punch and is evolving at a rapid pace. Given Google's resources, the emergence of Bard as a worthy rival is surely a matter of time.

Maxwell is a technology enthusiast who spends most of his time testing productivity software and exploring the new generation of AI chatbots. His coverage of AI has garnered over 600,000 reads and counting. With a quirky sense of humor and a passionate love for all things tech, Maxwell leverages his 8+ years of experience in tech writing to simplify complex concepts for both novices and experts alike. He has a soft spot for cutting-edge AI tools, remains a dedicated Android user, and tinkers with PHP and Python code in his free time.