Use of AI Is Seeping Into Academic Journals – and It's Proving … – Slashdot
When I was in school that was considered cheating.
This. Using AI to write a paper about novel research is impossible. It can improve wording, correct grammar, even help come up with draft text. That’s all fine, so what’s the problem?
If authors submit a paper that contains crap, like nonexistent references? Then it’s a crap paper, whether generated by AI or by a human. Reject it, done.
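To put “reject it, done” in practical terms: here is a minimal sketch (my illustration, not any journal’s actual pipeline) of screening for nonexistent references, assuming citations have already been parsed into DOI strings. Crossref’s public REST API answers 404 for DOIs it has never seen, so hallucinated references tend to stand out:

    # Minimal sketch: flag references whose DOIs don't resolve in Crossref.
    # Assumes citations are already parsed into DOIs; the example DOIs below
    # are illustrative placeholders, not real citations.
    import urllib.request, urllib.error

    def doi_exists(doi: str) -> bool:
        try:
            with urllib.request.urlopen(
                    "https://api.crossref.org/works/" + doi, timeout=10) as resp:
                return resp.status == 200
        except urllib.error.HTTPError:
            return False  # Crossref returns 404 for unknown DOIs

    for doi in ["10.1000/plausible.looking", "10.1000/made.up"]:
        print(doi, "OK" if doi_exists(doi) else "NOT FOUND -- check by hand")

A pass like this only catches fabricated identifiers, of course; a real reference that doesn’t support the claim still needs a human reviewer.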
Agreed, I don’t actually find generative AI that useful for writing. If I’m writing, I’m trying to say something specific, and if I want to say something specific I might as well write it myself instead of trying to coax it out of an LLM. The AI makes great-sounding filler, but unless you’re a high school student desperately trying to hit a word count, why are you wasting words on filler?
Editing is another thing entirely, especially since a lot of researchers can’t write at the level of a native English speaker.
Reject it, done.
Oh FFS this is not how it works.
It takes time and effort to give a paper a fair shake, especially as many papers are written by non-native speakers. Papers are rarely rejected outright; instead they’re given a number of suggestions for improvement and resubmission.
It is a very time-consuming process.
The peer review system is already at near collapse; it could easily be tipped over the edge by a flood of crappy AI-written papers.
When I was in school that was considered cheating.
I’ve often heard it said that a calculator is just a tool that will do you little good if you don’t understand the underlying concept well enough to input the equation properly in the first place.
AI, on the other hand, is literally asking a machine to do the work for you.
Clearly stated – by someone who doesn’t understand how these LLM AI models work, and hasn’t used them enough to see how silly their statement is.
There is no technical task short enough that AI will not fuck it up. It can answer “What is X?” and “How is X different from Y?” fairly reliably, but it falls far short on complex ideas.
Anything beyond that… well, even including that in many cases… you’d better know what you’re doing.
Case in point: I was looking for a quote that I could only paraphrase. I knew the meaning of the quote, but I wanted to offer attribution. Search engines weren’t helping. ChatGPT was able to provide the correct quote based on my paraphrase. It could not have done that if I didn’t know how to paraphrase the quote.
Writing the paper is not the hard part. It’s the gathering of the data and the research. ChatGPT is quite good, however, at taking a bunch of notes, synopses, data points, etc. and helping you organize your ideas into something coherent that others can understand. It’s still -your work-, in the same way that someone using grammar assistance in Word is still writing a paper.
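As a purely illustrative sketch of that notes-to-draft workflow, assuming the OpenAI Python client is installed and OPENAI_API_KEY is set (the model name, file name, and prompt are placeholders of mine, not a recommendation):

    # Illustrative only: turn rough notes into an outline the author then rewrites.
    # notes.txt is a hypothetical file of bullet points and data summaries.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    notes = open("notes.txt").read()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": "Organize these research notes into a clear outline. "
                        "Do not add facts, numbers, or references."},
            {"role": "user", "content": notes},
        ],
    )
    print(resp.choices[0].message.content)

The “do not add facts or references” instruction is the important part: the organizing is delegated, but the substance stays the author’s.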
I’m reminded of a line from The Carousel of Progress at Disney World:
“But we do have television, when it works.”
Lots of new tech is rough around the edges in the beginning. Eventually these chatbots will reliably give correct answers, just as you don’t see many TV repair shops around these days.
There is no technical task short enough that AI will not fuck it up.
My corollary to this: “There is no technical task short enough that a random human will not fuck it up.” We’ve all met folk like that, after all…
AI, on the other hand, is literally asking a machine to do the work for you.
No, it is exactly like a calculator. If you tell a calculator to add two numbers it does all the work for you, but in a paper nobody cares whether you did the calculation by hand or used a calculator; they only care that the result is correct. It is exactly the same for AI. I don’t care whether an AI, a human assistant, or the authors themselves wrote the words in the paper I am reading; I only care that the paper is an accurate, easy-to-understand description of what was done and the results.
The work that matters in a scientific paper is the experiment, study, or calculation that the paper is reporting on. At least so far, AI is nowhere near being able to formulate and conduct novel and innovative scientific work, but if it can help improve the accurate and clear reporting of work that has been done, then that’s great!
AI is nowhere near being able to formulate and conduct novel and innovative scientific work, but if it can help improve the accurate and clear reporting of work that has been done, then that’s great!
AI is also capable of generating a massive crap flood of vaguely plausible-looking papers, and it will be used for exactly that by desperate people in the awful mill of academia.
How do you think the already overstretched peer review system is going to cope with that?
AI is also capable of generating a massive crap flood of vaguely plausible-looking papers
Not without some effort on the submitter’s part: left to its own devices, ChatGPT scores under 50% on a first-year undergrad physics exam, so no paper it writes will sound plausible without a lot of human effort. If someone is putting that much effort into attempted fraud, then you report them to their university and ban further submissions from them.
It doesn’t do it well enough, alas.
I tried to get an AI to write the introduction for a paper. I got 12 paragraphs. I had to remove 8 of them from the start, since they weren’t useful at all, just vapid rambling. Of the four I retained, one turned out to be bogus (the dates were off), and two turned out to be meaningless once contemplated at any depth. The one paragraph left was too casual to be used, so I refactored it into a single line and added two citations to back it up.
In summation: 12 paragraphs of AI output yielded one usable line.
Don’t forget the books of 4-place log tables, trig functions, etc.
After all, doesn’t reality have a liberal bias? ChatGPT seems uniquely qualified to deliver this kind of content.
AI excels at two specific kinds of speech: Corporate speech that doesn’t mean jackshit, and scientific article language where no figure of speech is allowed.
I had to write both kinds within my career and every time I did, I felt that I had to put my humanity aside to write “correctly”.
Does it surprise anyone that AI will flourish when writing for both?
Fighting this change is pointless. What we should do is double down on shaming bad science. AI-assisted or not, the person whose name is on the paper is the one who should take the blame if it’s wrong, unsupportable, or slipshod.
Author: “But… but… the AI did that! I didn’t mean that!”
Community: “There, there… we understand. But… you’ll need to wear this special cone-shaped hat for the next ten years or so… we’re not angry. Just disappointed.”
I’d like to hope that the attorney who made an idiot of himself with AI got only a slap on the wrist this time because it’s a totally new area; if there’s a repeat by anybody, it should result in a significant period of disbarment.
I push the buttons and chat-bing-gpt-ai pumps out a paper.
I send it off for publication.
If it is worthwhile, great, we gained something.
If it is not worthwhile, someone must have disproven it, because that is by default the requirement for something to be deemed not worthwhile. Also great: another thing in the “never attempt again” category.
If your field cannot handle AI-generated text, your field sucks.
In other words: go try doing something like this in physics.
c.e.: yes, I oversimplify and handwave things.
2) Maybe most papers will be actual science, i.e. reproducible [bbc.com].
If AI merely writes the paper using the data the scientist has and adds genuine references, it is doing nothing more than the scientist would have done. AI potentially adds massive value by referring to overlooked sources (make sure they’re checked!). There is a serious risk that the AI will repeat quotes without crediting them – plagiarism – which needs to be checked for.
Overall, AI-generated articles seem to add value. However, there needs to be greater accountability for them from the authors, who need to take full responsibility for what appears under their names.
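As for quotes repeated without credit, here is a hedged sketch of the simplest possible check: a toy shingle-overlap pass that flags verbatim 8-word runs shared between a draft and a known source. Real plagiarism detectors are far more sophisticated, and the file names here are hypothetical:

    # Toy check: flag 8-word sequences a draft shares verbatim with a source.
    def shingles(text: str, n: int = 8) -> set:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(max(0, len(words) - n + 1))}

    draft = open("draft.txt").read()    # hypothetical file names
    source = open("source.txt").read()
    for run in sorted(shingles(draft) & shingles(source)):
        print("possible uncredited quote:", " ".join(run))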
When enough people cheat their way through school, there won’t be anyone left who can develop AI.