'OK, So ChatGPT Just Debugged My Code. For Real' – Slashdot
Of course, I don’t have mod points today.
It can’t be that difficult because almost every single other forum supports unicode characters without blowing everything up, somehow.
haha, you’re funny…. never underestimate a lot of users with a lot of time… people are trying to abuse all major platforms
I’m not saying it’s an easy problem to solve but it’s not impossible either
At some point,
That’s the point.
The most common form of abuse was the right-to-left-override where you can insert RTL formatted text in what would normally be LTR text (e.g., if you need to insert some Arabic in a block of English text). This would then set the text direction backwards when rendered on screen.
Moderation abuse is simple to Google because of this – just look for “5
Another one is overdecorated text – some languages are big on decorations, so those can be misapplied to other codepoints, leading to text that is a few million pixels tall and stretches above the line, so you see a black line running down the page. Repeat this a few times and you can render a whole webpage black. Granted, you’re also going to be writing a comment that’s a few megabytes in size…
That’s not a good reason to not enable unicode. That’s a good reason to *whitelist* accepted unicode characters. The fact that Slashdot has trouble rendering simple characters used daily in English speech is the problem.
It’s a flaw in Unicode itself. It’s been bodged together over the years, and offers no standard libraries or definitions to help programmers do basic stuff like determine which family of languages is in use. It also combines formatting with character encoding in a way that creates the problems you describe.
The RTL override character is a great example. It shouldn’t exist. The app should be able to use a standard library to query which way a given character should be rendered.
Sites like Slashdot that are pri
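For what it’s worth, the standard library query the parent comment asks for already exists in some languages. Python’s unicodedata module, for example, reports each codepoint’s bidirectional class, which is enough to detect and strip the RTL-override abuse described above. A minimal sketch (my own illustration, not Slashdot’s actual filter):

import unicodedata

# Bidi classes that force or override text direction; a comment filter
# could strip these instead of banning non-ASCII input wholesale.
DIRECTION_CONTROLS = {"RLO", "LRO", "RLE", "LRE", "PDF", "RLI", "LRI", "FSI", "PDI"}

def strip_direction_overrides(text: str) -> str:
    """Remove explicit bidi control characters such as U+202E RIGHT-TO-LEFT OVERRIDE."""
    return "".join(ch for ch in text
                   if unicodedata.bidirectional(ch) not in DIRECTION_CONTROLS)

print(unicodedata.bidirectional("A"))       # 'L'  (left-to-right)
print(unicodedata.bidirectional("\u0627"))  # 'AL' (Arabic letter, right-to-left)
print(unicodedata.bidirectional("\u202e"))  # 'RLO' (the override complained about above)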
The Point of Unicode (pun intended) is to be able to mix languages in an agnostic way.
Why do you want to segregate?
It’s not segregation, it’s being able to combine and mix languages in a way that actually makes sense and works in the real world.
The classic example is international airlines in East Asia. If they use Unicode, the names printed on tickets will be wrong for half their passengers. At best they can try to guess which font to use, which shows you what a complete disaster those languages are in Unicode.
They mashed all the Chinese/Japanese/Korean characters together, so you now have similar but different characters from each language sharing the same codepoint. There is no way to tell which variation you should display, except by guessing based on knowledge of the other text or where the user is. That’s the kind of thing that should not be forced by the encoding, and where needed should be handled by a standard library.
In fact adding Unicode is simple. What is hard is to prevent abuse.
No, it is not at all hard. It’s called whitelisting. There’s less than a dozen characters which must be allowed to permit the functionality we actually need. The list could be expanded over time if desired, but right now all we need is smart quotes, literally a few accented letters, and a handful of currency symbols. And the lame filter could ostensibly be used to prevent their overuse.
Why is slashdot literally the only site on the internet with this problem? Go ahead and find another one with this behavior.
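As a rough sketch of the allowlist idea from the comment above (the specific character set here is purely illustrative, not an actual Slashdot list):

# Toy allowlist filter: pass ASCII through, keep a short list of useful
# non-ASCII characters, and drop everything else.
ALLOWED_NON_ASCII = set(
    "\u2018\u2019\u201c\u201d"    # smart quotes
    "éèêëáàâäíìîïóòôöúùûüñç"      # a few accented letters
    "€£¥¢"                        # a handful of currency symbols
)

def filter_comment(text: str) -> str:
    return "".join(ch for ch in text
                   if ord(ch) < 128 or ch in ALLOWED_NON_ASCII)

print(filter_comment("It costs €5, “really”?\u202e"))
# -> It costs €5, “really”?   (the RTL override character is silently dropped)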
And after we get Unicode, maybe we can get Markdown support too.
And an Edit Button!
And, FFS, at least some kind of a Rich Text Editor!
Edit button is evil.
It messes up conversations and permits abuses, for no clear added value.
I find that it is a good thing to make posts final.
I don’t think it will, for the same reason Cobol didn’t.
Who’s going to drive it, give it prompts and then deal with the result? The job will still be called programmer.
I don’t think the average person understands how much of our world (government agencies are the biggest users, though you’d be surprised how much corporate code still runs on COBOL) is COBOL. It’s significant.
As a side note, why is Slashdot’s comment editor still so shitty? I had to insert HTML linebreaks. That’s nuts.
I wasn’t referring to Cobol as some dead language, I’m referring to it as the first.
The point was it made programming super easy compared to bashing bits in machine code (or asm if you were lucky), so instead of needing mega turbo nerds, business people could write the business logic.
Well, we know how that worked out. Turns out programming involves a lot of figuring out a coherent spec from the requirements and then implementing it. Cobol greatly eased the latter, as have many languages since. But they are still u
Writing the business logic became a job known as “systems analyst”, because it turned out most business people didn’t really know how to convert their processes into the kind of thing that a computer could actually do.
ChatGPT may eventually be able to do that job, but at the moment the limitation is that you need to tell it what you want. It doesn’t ask you questions and conduct its own investigation of your business, talking to your employees to find out their needs and how they work in practice (not just
call it whatever but this guy:
I dropped ChatGPT’s code into my function, and it worked. Instead of about two-to-four hours of hair-pulling, it took about five minutes to come up with the prompt and get an answer from ChatGPT.
has absolutely no clue what he is doing. he does not know what programming is nor understands what a generative model is, yet decides to share his ignorance with the world by publishing an embarrassingly nonsensical article about exactly those two things, as “senior contributing editor” no less in a news outlet that is supposedly specialized in technology and innovation. if it’s a joke it’s a very cringey one. i understand
I found it newsworthy as a developer because it’s what managers are going to read and will thus set the bar for expectations of our work. It’s not newsworthy for what the guy did — and, in fact, did poorly, even with the AI assist. But “poorly” is still better than “not at all” and that’s going to move the needle for us
shoddy becomes available cheap
Shoddy is already available cheap. Just look at all of the data breaches we keep getting. Expecting some generative AI to fix it all is suicide. So expect some dumbass MBA to mandate it next week.
What we need is to make the failures expensive for those causing them, but again good luck with that.
So push the topics that will make those same MBAs hesitate.
Once you upload code to a public database you lose copyright control of that code. All generative AI code samples are built from code posted online. You can’t upload your secret code to the public.
AI will be hacked and tricked into providing those same code segments to your competitors.
Really? Because the job you’re describing sounds more like “manager”.
No it doesn’t.
Look at it this way: Cobol was the first of many, many innovations making the act of writing code easier. The whole idea of coming up with easy high-level descriptions and having the computer figure out what to do is as old as Cobol and FORTRAN.
But it’s still programmers figuring out what high level descriptions to use because going from wishes to a coherent technical description is ultimately what programmers do.
> The job will still be called programmer.
In the early days of the industrial revolution, the problem wasn’t the loss of jobs. It was that the skill levels of existing workers were no longer needed.
People would spend years building up their trade and skills. They were replaced by children who could churn out work faster using machines.
The same is going to happen. You will still have someone who can be defined as a “programmer”, but it will require nowhere near the skill level you need now.
There will be still roles for experts, b
In general yes, sure.
In this specific case, I don’t think so. There has already been a massive drop in the required skill level. How many of us are on the level of Mel the Real Programmer?
ChatGPT etc. maybe saves you from the burden of syntax, enabling you to write in yet another higher-level language. But we’ve already had thousands of such innovations, starting with COBOL, and it’s had the opposite effect so far. I’ve also never worked at a job where there were ever “enough” programmers: scope was always
> starting with COBOL
Actually COBOL is on the chopping block because of recent LLMs.
https://www.ibm.com/products/w… [ibm.com]
Having Java experience will keep you in the game longer. But COBOL developers soon won’t be able to dictate the crazy salaries they once did.
>> Who’s going to drive it, give it prompts and then deal with the result? The job will still be called programmer.
There will be
– “PROgrammer”
– “noob-grammer”
– “AI-BS-grammer”
in that order.
for anyone who aspires to go above code monkey but isn’t a math genius who’s really not a programmer, they’re a mathematician using a tool,
I’m not quite able to parse this. Did you leave out a comma after genius maybe? Are you saying if you’re not a math genius you won’t be able to be a programmer because AI will take your job, and only mathematicians using a tool will be programming?
BTW, in my opinion being a mathematician, or thinking like a mathematician, is not particularly applicable to the nuts and bolts of programming, even when programming above ‘code monkey’ status, unless you’re programming in Haskell.
A bit short-sighted there perhaps. In a few iterations’ time it’ll be able to write the code from scratch, and frankly when it can do that it could probably emulate whatever system you want directly.
Author Vernor Vinge once said that in the future there will only be two branches of computer science left: code archeology (to dig up the already-written library you need) and applied theology (choose the traits of the AI overlord you want to live under).
I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects.
“ChatGPT, how do I remove the code from Windows which sends telemetry, without breaking the operating system?”
“I’m sorry, Dave. I’m afraid I can’t do that.”
Indeed, or:
“ChatGPT, how can I ask you to debug my code without giving your creator the code to do with as they please?”
“ChatGPT, how can you write code such that I can retain the copyright and any other legal claims in future?”
Until you can have your own, self-hosted ChatGPT, most serious programming won’t be going near it. Hobby projects are going to get interesting though.
I gave up and went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.
“Programmer” is unable to write a routine to copy an array. Uses “AI” to generate code that he doesn’t understand, but which crashes when he runs it. So then he searches the web to see if someone already wrote this code for him somewhere, copies and pastes it, maybe renames some variables, and says it’s “his code” that “he wrote”. Since it compiles and doesn’t seem to crash, we’re good to go.
I think I see the problem.
> Uses “AI” to generate code that he doesn’t understand…
“Programmers” have been doing this for a while now. Instead of AI, they used Google to find some code they could copy/paste without understanding. AI is just making it easier to do what people had already been doing in this regard.
You obviously haven’t seen my co-workers’ code
I’m a AAA graphics programmer, I’m at the top of my game, 15 years experience on some big titles, a graphics engine that ships billions of dollars of games. When you use it properly, GPT absolutely rocks at programming for real world huge scale problems. You can quote me: it’s insane to lowball GPT’s ability. GPT knows intricate details about how to handle complex high performance code.
If you are not prepared and it gives you a hallucination, or you try to let it lead, then yes, it can’t code. One example: it can write itself into a corner where it needs a function that doesn’t exist. Next time you query, just tell it that it is having a problem because it keeps trying to use the same non-existent function. Problem solved.
Yes, its scope is limited – it can’t handle more than about 200 lines of code at once. I just break up my ideas into pseudocode, capture the dependencies, and write them into the prompts to generate the functions.
Yes it makes simple mistakes. I just correct it and move on.
If you work through the problem with GPT methodically, it will solve it, absolutely, with enough retries, 95% of the time.
This is an absolutely insane productivity boost for me, I’d estimate 5x or more.
Yes, its scope is limited – it can’t handle more than about 200 lines of code at once. I just break up my ideas into pseudocode, capture the dependencies, and write them into the prompts to generate the functions.
Reminds me of when I was trying to solve a complex integral using Wolfram Alpha. I didn’t have the paid version, so I could only see some of the steps, so I kept breaking the integral down into separate parts and putting them into Alpha until it had pretty much given me the full solution; I just had to put it together myself.
Of course, instead of all of that messing around, I would have been better off revising integration by substitution and parts, and integration of known functions to be better at Math… but
Sounds like the standard modus operandi of your typical 3rd rate Lego brick method dev of which there are unfortunately far too many in our industry. Knowing their shit and being able to write working code on their own is a foreign concept to them.
This is always a frustrating response to me — a complete unit test needs some knowledge of the internals of the function to know that all code paths got tested. The only tests I can write in advance are the ones that rise all the way to user requirements, which is more integration testing, usually. Yes, write as many tests as you can at the start and then get them passing, but, in my experience, that’s rarely the unit tests.
What, you think this is new? I graduated in the early 2000s. Sometime around 2009/2010 one of my old professors was bemoaning the fact that students didn’t want to write any code any more, they just wanted to copy and paste different blocks together until it worked. Coincidentally, stackoverflow started in 2008.
Overall, I am a huge believer in using ChatGPT as support, today. I have used ChatGPT to dramatically optimize SQL queries, suggest a new index, convert a legacy PHP program from Laravel 4 to Laravel 1
Exactly. Just write the damn array code already. The only reason you should be asking for help is if there is some function that you don’t know or remember or something like that. Does anyone remember to RTFM any more?
To be clear, in order for it to make its recommendation, it needed to understand the internals of how WordPress handles hooks
No, to be clear, at some point ChatGPT was trained on text that dealt with WordPress hooks, and thus it had some relationship of tokens that was involved with what you wanted to know.
ChatGPT has no “understanding” or computational knowledge about anything.
I see a very interesting future, where it will be possible to feed ChatGPT all 153,000 lines of code and ask it to tell you what to fix… I can definitely see a future where programmers can simply ask ChatGPT (or a Microsoft-branded equivalent) to find and fix bugs in entire projects.
Okay, so what exactly are we talking about? Syntax or behavior? If it is syntax then linters already do this, and they are built with the exact rules and best practices for that language. It is no black box, but something designed specifically to do that exact thing and do it very well. They can also reformat and fix code as well when it comes to syntax.
If we’re talking about behavior, then please tell me how you are going to describe to ChatGPT what the behavior of the 153,000 lines of code is supposed to be, so it will know whether or not there is something that needs fixing in the first place? Unless we’re talking about something that could result in a total runtime failure, like dereferencing a null pointer or division by zero, then there’s no realistic way to express to ChatGPT what the code is supposed to do. Especially when we’re talking about that kind of scale that 153k lines are involved. How about breaking the code down into functions, and defining input and expected outputs for that function so that ChatGPT would then know what the function is supposed to do? Good job, you just invented unit tests.
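To make that last point concrete, here is a tiny example of what such a per-function specification looks like in practice; the function name and values are invented for illustration:

# The "spec" you would have to hand an LLM for each function is, in effect,
# a set of input/expected-output pairs, i.e. a unit test.
def apply_discount(price_cents: int, percent: int) -> int:
    """Return the discounted price in cents, rounded down."""
    return price_cents * (100 - percent) // 100

def test_apply_discount():
    assert apply_discount(1000, 10) == 900    # ordinary case
    assert apply_discount(999, 10) == 899     # rounds down
    assert apply_discount(1000, 0) == 1000    # no discount
    assert apply_discount(1000, 100) == 0     # full discount

test_apply_discount()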
Define understanding. If it can parse the question, parse the code explanation, parse the code, and provide some kind of output from that, then I’d call that understanding, albeit maybe incomplete. Yes, it’s been fed a load of text, but then so were you when you learnt. And yes, you can cite the Chinese Room as a counterexample, but what Penrose didn’t consider was that it doesn’t matter how it works inside; it’s how it behaves outside that matters.
People seem determined to think these LLMs are just dumb statisti
This is an ongoing area of research, and there are some interesting findings.
When you initially train an AI on a dataset, it starts as a statistical analyzer. I.e. it memorizes responses and poops them back out, and plotting the underlying vector space you see the results are pulled more or less randomly from it. Then, as you overtrain, the model reaches a tipping point where the vector space for the operation gets very small. Instead of memorizing, they appear to develop a “model” of the underlying opera
If we’re talking about behavior, then please tell me how you are going to describe to ChatGPT what the behavior of the 153,000 lines of code is supposed to be, so it will know whether or not there is something that needs fixing in the first place? Unless we’re talking about something that could result in a total runtime failure, like dereferencing a null pointer or division by zero, then there’s no realistic way to express to ChatGPT what the code is supposed to do.
Maybe you should try it out? Not on a 153k line program, but I’ve had great luck with pasting in the schema for ~a dozen tables and then having chatGPT optimize queries with 6-7+ joins, subqueries, etc.
I think you might also be surprised at what ChatGPT can analyze about functions and code. I hesitate to use the word “understanding”, but this is one of those areas where ChatGPT can surprise you.
Writing code is something I’d expect a LLM to be able to do well given enough learned source. Feeding individual problems to the generator makes sense but I wouldn’t want to feed it 10k lines of code and just accept the result. You would need to read and understand the code you’re using. It would be somewhat similar to using a library from an external project, except you can’t trust the source.
Exactly. AI is great when you can trivially verify the result to a complex problem, but not so great when the result is time-consuming or complex to verify. If you need a subject matter expert to verify the result and it’d take them as long as solving it themselves, there’s no benefit at all and a high likelihood of drawbacks as they discover errors in the result.
What does that comment even mean?
I know what Turing complete means. What would it mean for one of these language models to be Turing complete? What does it mean for it not to be? Are you just saying that because ChatGPT can’t update its state model in the current version, it can’t continuously learn? Or are you somehow saying that the entire language model concept cannot possibly be Turing complete? If the latter, how in the world do you prove that?
What would it mean for one of these language models to be Turing complete?
There are a lot of different ways to define it. One way is to say that it can’t recognize whether a phrase is valid in a Turing complete language. A simple example is that ChatGPT can’t tell you whether a long enough string of parentheses is balanced or not.
Wolfram goes into the topic in some detail in the second half of this post [stephenwolfram.com], you might find it interesting.
Or are you somehow saying that the entire language model concept cannot possibly be Turing complete?
“Language model” is a vaguely defined concept, but the current LLMs will need improvements in their algorithms before they are Turing complete (se
There are a lot of different ways to define it. One way is to say that it can’t recognize whether a phrase is valid in a Turing complete language. A simple example is that ChatGPT can’t tell you whether a long enough string of parentheses is balanced or not.
That is correct (depending on the definition of the word “valid” in this context). But a better example would be, e.g., sorting. A parenthesis balance check is a context-free language check, and context-free languages can be syntax-checked by pushdown automata, which are not Turing complete.
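To make the pushdown-automaton point concrete, a balance check needs nothing more than a stack, and with a single bracket type the stack degenerates to a counter. A minimal sketch (my own illustration):

def balanced(s: str) -> bool:
    """Check '('/')' balance; a counter stands in for the pushdown automaton's stack."""
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # a closing paren with nothing open
                return False
    return depth == 0

print(balanced("(()(()))"))              # True
print(balanced("(()"))                   # False
print(balanced("(" * 500 + ")" * 500))   # True; length is no obstacle for this check

The point of the example is that the check is computationally trivial, yet a model that only predicts the next token from a bounded context has no guarantee of getting it right on arbitrarily long inputs.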
Why does it matter that ChatGPT is Turing complete or not? Turing-completeness is important to run code, not to generate code.
Plus, I would be extremely surprised if ChatGPT wasn’t Turing complete; most complex computer programs are, sometimes even when we don’t want them to be. More specifically to ChatGPT, transformer networks only execute a fixed number of steps to ultimately predict the next word. That is not Turing complete because you can’t run loops, but ChatGPT has a context that can act like the tap
Writing code is something I’d expect a LLM to be able to do well given enough learned source.
No, because ChatGPT isn’t Turing complete.
Neither are people.
From TFS – “Could I have fixed the bug on my own? Of course. I’ve never had a bug I couldn’t fix.”
Sounds like a Chad moment to me
ChatGPT can shorten the time it takes a developer to do work, but it can’t compensate for incompetence.
I used it the other day to help me shift some functionality from server side to client side, and the results have been very, very good. It saved me a lot of debugging time, even after I reviewed all the code by hand. I tested the functions and got the expected results right off the bat.
Probably saved myself at least half a day of work.
But I didn’t ask it to do something large, I broke it down to manageable pieces that
ChatGPT seems worse at producing working PowerShell code than it did shortly after it launched. It seems to make a lot more errors. It’s still a timesaver to have it write code snippets, but those snippets must then be manually reviewed and tested because it often makes mistakes. Even for something simple like asking it to extract the title out of HTML contained in a string, it wrote code that was basically perfect except that it forgot to escape one slash in the regex, so the code it output produced a syntax error. An easy fix, but the error rate is so high that it’s a time saver at best.
It’s very random at times. I’ve had ChatGPT 4 as a subscriber since they released it, and it CAN be useful, but it can also be totally disastrous.
In any case, you NEED to know something about what you are doing, and as a human you need to proofread the AI’s results as well.
It’s very good at basic concepts such as initial code, specific calculation tasks, and things that are in the known universe, but it really falls short when you try to describe what you want from the code. It’s like asking it to be creative the way you can be: it can try, but it just can’t. It’s not human, it’s not even “artificial enough”, it’s just an LLM. It knows what it knows from the numerous documents, books, and data it has been trained on, and it can’t really think, which many people misunderstand and believe it can. Well, it can’t.
But can it correct code? Sort of, yes. But it doesn’t understand the general concept you’re thinking of when you write a piece of code. It can look for correct code that doesn’t fail, but unless you specifically instruct it in what numbers or outcome you expect, it won’t understand that, and you’ll get some random results. Sometimes they can be downright dangerous, so use that output with care: read the example code ChatGPT gave you and see if you can spot some fatal things in there. You need to KNOW code and what you want; it’s not a “magic piece” that will just code whatever you want.
I’ve used it numerous times to create artistic scripts for my Blender projects, and it’s very hard work. No matter how much information you give it, it will constantly get things wrong, simply because you have to be SO specific about every little thing you want to achieve. It also doesn’t have the latest data on debugging or recent compiler fixes etc., and it often uses deprecated code to analyze your code, so chances are your code is better and more up to scratch, so to speak.
So use it
“AI is essentially a black box, you’re not able to see what process the AI undertakes to come to its conclusions. As such, you’re not really able to check its work… If it turns out there is a problem in the AI-generated code, the cost and time it takes to fix may prove to be far greater than if a human coder had done the full task by hand.”
Umm… Don’t you review/desk check your own code? Why wouldn’t you expect to do the same with “AI” generated code?
I’ve played with ChatGPT generating code in a few languages (esp. SQL and C), and sometimes it did a decent job, and in a couple of cases it used a library that I wasn’t aware of, which was helpful. However, confirming that the code did what I asked it to sometimes took as long as writing the code and desk-checking it would have taken. Part of this effort was figuring out the approach the bot had taken, which was different from what I would have taken. I found this especially true with complex SQL queries where, for example, the bot used a different set of features to reach the same result (and, in some cases, the query plans resulting from my approach and the bot’s approach were very similar after the optimizer had munged on the queries).
In some cases ChatGPT missed something obscure because I failed to fully constrain the problem where I would have naturally dealt with the case “properly” because I thought of the “missing” constraint as being obvious and would never have, when writing code, failed to handle/apply it.
I’ve found that fully constraining a problem, unambiguously, in an ambiguous natural language such as English in order to get “AI” to write the desired code often is harder than just writing the code in a language which isn’t ambiguous and which results in having to face and address each case.
Instead of about two-to-four hours of hair-pulling
Reworking those three lines of code to optionally accept two decimals should have been a 10 minute task, max. This may be a helpful tutorial for a beginner that already knows the problem and distills it into a digestible snippet, but it doesn’t necessarily imply much about more open ended applications.
This seems to be consistent with my experience: it can reasonably complete tutorial-level snippets that have been done to death, but if you actually have a significant problem that isn’t all over Stack Overflow already, then it will just sort of fall over.
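For context, a function that “optionally accepts two decimals” really is a small task. The article’s actual code isn’t quoted in this thread, so the following is only a guess at the shape of it (and it uses floats purely for display formatting; see the “never store currency in a float” exchange further down):

# Hypothetical version of "format an amount, optionally with two decimals".
def format_amount(value: float, cents: bool = False) -> str:
    return f"${value:,.2f}" if cents else f"${round(value):,}"

print(format_amount(1234.56))               # $1,235
print(format_amount(1234.56, cents=True))   # $1,234.56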
Thought the same. If it takes him 2-4 hours, then he clearly needs ChatGPT. It’s not a testament to how great a tool it is; rather, it shows how poor a coder he is.
I have many issues with how the abilities of this supposed “maintainer of code” come across based on these citations, but let’s chalk that up to a need for brevity and me being too lazy to RTFA.
A more important issue I have, is he seems to believe ChatGPT understands how WordPress handles hooks. Unless something’s drastically changed in how ChatGPT functions, that’s not at all what it does. It answered what people have previously followed similar strings to the question with. That’s all.
Not that I don’t think LLMs can be helpful tools, but for the foreseeable future they seem firmly seated in the “make a suggestion or two and have a human with knowledge take those into consideration” department, as well as being a slightly fancier snippets engine. I’d not worry about my job anytime soon. Then again, I’m old; my odds of being retired before AI takes over programming are above average.
I’m turning 40 in a few weeks, and your assessment matches my own. I see nothing concerning here for me or my career. The proverbial “boss’ nephew” who “built the site in a weekend” that has dozens of console errors? He may not be around for much longer, but anyone who’s competent in the field has nothing to worry about.
ChatGPT is very weak at calculus (and also arithmetic). I asked it to find the maximum of sin(x)/x, which led to verbose calculations filled with logic and math errors. Despite many hints, such as suggesting l’Hôpital’s rule, it would repeat the same mistakes again and again.
This is because ChatGPT is an LLM – a large language model. It was not designed to perform mathematical operations. You are correct, it sucks at math, but that is not unexpected.
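For reference (my own check, not from the article): the supremum of sin(x)/x is 1, approached as x → 0, which is exactly the limit l’Hôpital’s rule gives. A quick symbolic check, assuming sympy is available:

import sympy as sp

x = sp.symbols("x")
f = sp.sin(x) / x

print(sp.limit(f, x, 0))   # 1; by l'Hopital, lim sin(x)/x = cos(0)/1
print(sp.diff(f, x))       # cos(x)/x - sin(x)/x**2; zero only where tan(x) = x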
It’s bad at logic or even just maintaining state. Try playing tic tac toe with it drawing the board. It declared itself the winner after making an illegal move (which was an impressive feat, given that we’re talking about tic tac toe) when I tried playing against it. I called it out, corrected the board state, had it repeat the correct state back to me, and then had it make more illegal moves. Over and over again. Never managed to finish the game.
Actually ChatGPT is quite good at logic. In fact, being so good at logic is one of the core ways you can mess with it, such as convincing it to ignore its restrictions on use. The issue here seems to be that you haven’t explained the logic to ChatGPT. ChatGPT is making illegal moves not because it can’t follow simple logic, but because it’s faking the rules of the game. But you can explain the rules to it in a session, or you can use a plugin that sets up that session for you.
Then you can have a correct game of ti
I tried it with a simple “convert date to Unix timestamp” function for an embedded project I am working on. I spent hours debugging other code until I got a look at ChatGPT’s function and found it simply does not work.
So, YMMV, and you have to double-check everything. To me, it does not look like something to debug another’s code
How do we teach ChatGPT (and the people who trust it to write code for them) that you NEVER use a float type to store currency, because the precision limitations will cause problems even with values like $0.10 and $0.20 – even though they look fine (to humans) as decimals?
Store the value in an integer as cents (and calculate with ints) and format it when you need to.
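A minimal sketch of that advice (the helper names are mine, not from any particular codebase; positive amounts only, for brevity):

# Keep money as integer cents, do the arithmetic on ints,
# and only format at the edges.
def to_cents(dollars: int, cents: int) -> int:
    return dollars * 100 + cents

def fmt(total_cents: int) -> str:
    return f"${total_cents // 100}.{total_cents % 100:02d}"

a = to_cents(0, 10)   # $0.10
b = to_cents(0, 20)   # $0.20
print(fmt(a + b))     # $0.30, exactly
print(0.10 + 0.20)    # 0.30000000000000004 -- the float problem described above

Python’s decimal module is the other common answer when fractions of a cent matter.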
You hit it on the head with the black-box point, but not only because we don’t know how it’s arriving at its conclusions; also, the more reliant we become on it, the less understanding developers will have of the existing codebase and functionality.
An employer doesn’t always just pay you to code/bugfix, they pay you for your understanding of the codebase. Someone saying, ‘I don’t know how it works, chatgpt did it.’ is an unacceptable answer.
Saying I don’t know how it works, chatgpt
[I] went back to my normal technique of digging through GitHub and StackExchange to see if there were any examples of what I was trying to do, and then writing my own code.
Hence the code to debug is an assemblage of stuff posted to public forums, and ChatGPT was trained on that. It was fed questions with offending code, and their answers. Usual bugs, usual fixes.
Call me a Luddite, but I’m a bit against the current ‘AI’ that basically scraped the internet to build up its engine. Still, this is something that I always hoped for when it came to code generation.
I had these ideas of somehow feeding it all the source material for a given language, compiler docs, the language docs, and rules, etc, and then being able to describe functions and have it generate the base code for it. Why? because coding for me was a path not taken. i did all the schooling, got a degree, and
This chatgpt programer bullshit only works when the system is given a limited set of input. The platform literally prevents you from uploading several megabytes of multiple files, which would be absolutely necessary to give it context in order to solve any problem of significant scope. Instead, people are asking it to rewrite their 50-line functions to work with dollars instead of fuckwits, and then posting their magic results onto social media in hope of clicks, because well they are fuckwits.
I’m bored of
they will not be able to write code that isn’t just pattern matching from what they’ve seen before.
The overwhelming majority of code is something they’ve seen before. There are very few unique pieces of code solving unique problems in the world. Sure, if you’re developing a new encryption method or protocol that no one has heard of before, this won’t help you, but that covers 0.001% of the code being created out there.
If you can fix it by going to Stackexchange, there’s a good chance it’s fixable by a LLM
In case you are serious, this is a simple demonstration of why you are wrong. Turn on your brain before posting.
What does Turing completeness even have to do with anything in the first place? It was a statement you simply asserted while offering no evidence or justification.
As for the rock cartoon I would pay careful attention to “I never feel hungry or thirsty” and “I have infinite time and space”
What does Turing completeness even have to do with anything in the first place?
We’re talking about the capabilities of AI. The second half of this post goes into more detail [stephenwolfram.com].
As for the rock cartoon I would pay careful attention to “I never feel hungry or thirsty” and “I have infinite time and space”
In the comic, the point is dumbed down to make it simple for people like you. If you’d like a more serious treatment of the topic, take a CS class or read a book.
We’re talking about the capabilities of AI.
We’re talking about the following unsubstantiated comment:
“To be more precise, because current LLMs are not Turing complete, they will not be able to write code that isn’t just pattern matching from what they’ve seen before.”
Are you able to articulate what you believe the relevance is between Turing completeness and the code writing assertion? Can you provide evidence or relevant citation to support your conclusion?
Further what is difference between “code that isn’t just pattern matching from what they’ve