Parents claim there was no rule banning AI, but school cites multiple policies.
The lawsuit says the Hingham High School student handbook did not include a restriction on the use of AI.
"They told us our son cheated on a paper, which is not what happened," Jennifer Harris told WCVB. "They basically punished him for a rule that doesn't exist."
I'm guessing they probably have rules against plagiarism, or passing off other people's work as your own.
So then I guess it would come down to whether using AI (without disclosure?) counts as plagiarism or not.
The LLMs can claim whatever they like; it holds no weight or value. They are basically advanced plagiarism engines, and the law has already made it clear that you cannot copyright the output of an LLM.
This particular case will go nowhere, but there are plenty of legal cases between content creators and AI makers that are slowly moving through the legal system that will go somewhere.
the law has already made it clear you cannot copyright the output of an LLM.
That’s true in this context and often true generally, but it’s not completely true. The Copyright Office has made it clear that the use of AI tools has to be evaluated on a case-by-case basis, to determine if a work is the result of human creativity. Refer to https://www.copyright.gov/ai/ai_policy_guidance.pdf for more details.
For example, they state that the selection and arrangement of AI outputs may be sufficient for a work to be copyrightable. And that’s without doing any post-processing of the AI’s outputs.
They don’t talk about situations like this, but I suspect that, if given a prompt like “Rewrite this paragraph from third person to first person,” where the paragraph in question is copyrighted, the output would maintain the same copyright as the input (particularly if performed faithfully and without hallucinations). Such a revision could be made with non-LLM technology, after all.
It doesn't matter what the LLM license states. Replace the LLM with a person doing exactly what the LLM does and ask yourself if it is plagiarism.
If I do your homework for you and say, "Because you prompted me with the questions, the answers belong to you," that isn't a free 'get out of plagiarism' card for you. What I tell you isn't relevant.
It's not gray at all.
Edit: that's weird. I got a personal message but the reply showed up here.
I sometimes use an LLM to "tidy up" my work and paste a bunch of writing in to see if it comes up with anything better. Some parts it will, others it won't, and I'll use or tweak some of it. I wonder if that counts? It's all my work going in, but it's using other people's work to make adjustments.
People who only proofread generally make recommendations to edit. LLMs often "rewrite" the vast majority of the document.
If I tell a person who's my editor the concept of my paper and only about 20-30% of the actual content that ends up in the final paper... sounds like someone else wrote the paper to me.
It all comes down to how you're using the tool. Lots of kids out there will simply tell ChatGPT to write something for them. Others will simply ask for basic proofreading. It's a bitch to tell the difference on the grading side.
I'm admin on my small instance. I can see the votes. No worries.
In this case the downvote is from xektop@lemmy.world.
Anyway, the most I ever use LLMs for professionally is to help rearrange content for better flow, or maybe to convert more rambly bits into something concise. I tend to be more verbose than I need to be (mostly because my documentation is wildly verbose, since I tend to forget stuff — great for documentation, not always great for talking something through with a client).
I write my own papers, but I will put paragraphs through an LLM and ask how they can be improved (normally Grammarly's 'AI'). Sometimes I take its advice, but half the time I dislike what it's done. Sometimes I give it a bunch of information on what I need to write, it spits something out, and I'll sort of use it as a skeleton for my paper. But to be honest, it's kind of shit, regardless of which one I've tried. And it lies. So much.