Hi thanks again for all the help.
I think the fundamental answer to my question, "can I prove they are cheating?", is no.
Seems more certain now it is dodgy than when he first posted, despite people pointing out the irregularities with it having been entirely AI'd.
I was certain it was not their words when I read it. It looked, and still does, like a mash of someone else's writing with my student adding some of the case studies from the course correctly. I am fairly sure it's AI, but now certain I can't prove this.
I hope my student takes the time to put the work in their own words and passes the course.
I'll take the advice from @mattyfez
If it seems a bit ‘off’ it probably is. Proving it is a different matter.
So don’t die on a hill, without a very good reason
I'm absolutely confident the pasted work at the start is not theirs. I won't be putting up work that I'm mostly certain was their own work on here
The suggestions of a meeting with them, their parents and the head of sixth is an excellent one.
Tall_Martin claims this work isn’t the student’s own language. I’d argue that it’s becoming their language and they’re getting better at it, which means that Tall_Martin’s teaching is having the desired effect.
If any of my students left my care with writing close to this standard I would be absolutely delighted.
Unfortunately, the student has not handed in a single piece of work to either teacher since September. 26 missed deadlines including resubmissions. So it's not the feedback from criminology that's suddenly improved their writing.
In my view the main problem is that you have a system that is open to abuse (your fault)
The way the qualification has been put together is the exam board's choice. I've been delivering it as per the board's instructions.
The qualification is having its funding removed in the next round of post-16 reforms. I'm a bit sad as it's interesting to teach. Currently there won't be a Criminology course from 2026. There might be a policing qualification to replace it. I'm tempted to retrain as an ICT teacher. Or leave teaching again.
There are 4 typos I've spotted on rereading, Ransos, others that I haven't spotted are spelling mistakes. 😉 There's a comma missing too. If this were important to me I'd run a spelling and grammar checker, and get Madame to proof read it. If it were really important I get Junior to read it and dig out an ageing copy of La Bonne Correspondance for inspiration.
Valid points, Spin, I'm aware of what I'm doing, it won't sabotage Tall_Martin's career, or leave him with a grudge against teachers or even against academia in general. It might sow doubts about how he's treating his student and his pedagogical methods though, I hope so. Everything I've written I'd happily put my real name to and show to every student I've ever had and every academic boss I've ever had. Would Tall_Martin happily show this thread to the girl and her parents? I think not.
There are 4 typos I’ve spotted on rereading, Ransos, others that I haven’t spotted are spelling mistakes. 😉 There’s a comma missing too. If this were important to me I’d run a spelling and grammar checker, and get Madame to proof read it.
Not sure what your point is: you said your post wasn't plagiarised, and I said I believed you.
I haven't read the whole thread but there is a thing on the BBC website about this:
"I ran the essay through AI detection software. Within seconds, Copyleaks displayed the result on my screen and it was deeply disappointing: 95.9% of the text was likely AI-generated. I needed to be sure, so I ran it through another tool: Sapling, which identified 96.1% non-human text. A third confirmed the first two, but was slightly lower in its scoring: 89% AI. So then I ran it through yet another software called Winston AI. It left no doubt: 1% human. Four separate AI detection softwares all had one clear message: this is an AI cheater."
Not read it all, but I would imagine schools need a policy on how to deal with this sort of stuff?
I can’t see how you can stop it, but can only imagine it’s a discussion about how the interweb can give you all the answers you ever want, but it won’t help you learn or improve your understanding of the subject, which is kind of handy if you want to pass an exam.
Our 13yr old got a mate to go on ChatGPT for some of his homework. When we read it we knew it wasn’t his writing. Just quizzed him on his understanding of the subject and he couldn’t answer, plus then asked him how he thought he would answer the question in an exam.
From the article WCA linked.
Labyrinthian mazes
Just shows how superficial AI output can be. It caught the judge's attention because it is an obvious tautology, it simply feels wrong.
AI is just going through the motions, throwing chunks of text together that appear, on the surface, to fit. It's not driven by the writer's need to express a concept, but by which words thousands of other writers have put next to each other in the past. Eventually, it will be mostly plagiarising AI-generated content, layering blandness over blandness.
As you try to express an idea in text, that process often subtly modifies the idea itself, refines it. It helps you rehearse the best way to express it verbally or in written form later if needed. If you actively avoid that process by handing it to an algorithm, you're basically robbing yourself.
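The "which words other writers have put next to each other" point above can be illustrated with a toy sketch (my own illustration, not anything from the thread): a bigram model that picks each next word purely from which words followed it in a tiny training text, with no notion of the idea being expressed. Real LLMs are vastly bigger and cleverer, but the statistical-adjacency intuition is the same.

```python
# Toy bigram text generator: picks the next word based only on which
# words have followed the current word before. The corpus is invented.
import random
from collections import defaultdict

corpus = ("the witness saw the crime and the witness told the court "
          "the court heard the witness and the jury heard the expert").split()

# Map each word to the list of words observed after it (duplicates in
# the list act as frequency weighting for random.choice).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word, out = "the", ["the"]
for _ in range(8):
    if word not in follows:  # dead end: no observed successor
        break
    word = random.choice(follows[word])
    out.append(word)

# Output is locally plausible but globally meaningless word salad.
print(" ".join(out))
```

Every adjacent pair in the output has been seen somewhere in the corpus, which is exactly why it "appears, on the surface, to fit" while expressing nothing.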
Thanks- multiple checkers might help
The school and exam board have policies in place.
Still don't think I can prove AI use, but hopefully having the consequences of its use pointed out in detail will deter the students from using it.
It might not be his own words, but rereading it I don't think it's AI generated as it's not good enough. Sentence two could be improved by adding one word, and the 'flow' into sentences 3 and 4 is reliant on the question to make sense and have context.
For example...
Psychological studies including research by lotfus et al, indicate that factors like the timing of the event, discussion with others, the passage of time and questioning methods in court can influence witness’ memory and testimony. This (this what - AI would be more specific) casts doubt on the validity of eye witness evidence, as memories may lack accuracy, especially over extended periods or in heightened focus on specific details such as ‘weapon focus’.(Next sentences are lovely, but what's the context to the two previous - complete jump) In complex technical cases, the outcome frequently depends on the testimony provided by an expert, be it a medical specialist or a forensic scientist.(so what) Essentially, these experts are expected to possess superior knowledge in their respected fields compared to legal professionals or laypeople, like jurors (again, context with the first two)
Not AI I think, but rigorous googling. Plus the spelling and imperfect punctuation
Labyrinthian mazes
Just shows how superficial AI output can be. It caught the judge’s attention because it is an obvious tautology
It's not, it's an oxymoron. A labyrinth and a maze are different things.
I'm interested by the fact that in the test above, the checkers were 99% sure that the AI piece was AI generated, whereas the student piece is, iirc, about 50% certainty. On that basis, I'd be sceptical because of the improvement that TM has seen, but it seems likely there is still quite a bit of student (or at least, human) input. Which raises the question of what sort of help is reasonable.
When I did O levels (yes, that long ago) we all had access to books that dissected Shakespeare's plays, identified key themes, and provided suggested examples and quotes. Indeed, is this not what a teacher does, albeit hopefully not quite as passively, so the students discuss and see where they get to, prompted and guided by the teacher and others.
I can see that "write me an essay on..." might be considered abusing AI, but is "what are the key themes of Hamlet and how does Shakespeare show them...?" in the same category?
Also - how reliable is AI? On a Wedding Present fan site elsewhere on the web, someone asked AI for a brief history of the band and got 75% nonsense. It's a brilliant read mind, I'm not sure Bob Mortimer didn't have a hand in it, not least for the hilarious names of the founder members, David Gedge (true) and Keith Estates (er....no)
I’m interested by the fact that in the test above, the checkers were 99% sure that the AI piece was AI generated. Whereas the student piece is iirc, about 50% certainty
I'm reasonably sure the student correctly added the case studies into chunks of AI-generated language.
It is not all their own work- I'm 100% on that.
I can't 100% prove the above- I'm sure on that.
I thought AI-generated stuff on our criminology course would be garbage. The stuff turned in was excellent.
The thread has reminded me of a story about Graham Greene. Allegedly he entered a writing competition in which the entrants were required to 'write a short story in the style of Graham Greene '
He came second
Let's say the claim made in the BBC article by the AI-monger Turnitin is correct and it has a 1% false-positive rate. With ten written pieces to mark per course year, by the end you could be investigating, accusing or wrongly shaming a tenth of your cohort. Over a three-year course, thirty per cent? There isn't an institution in the land that can handle that administratively, nor cope with the effect on admissions once word gets around of the academic atmosphere and dropout/failure rates.
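The arithmetic above can be sketched quickly. A minimal Python sketch, assuming each piece of work is checked independently and the 1% false-positive figure is right:

```python
# Probability an entirely honest student is wrongly flagged at least
# once, given a per-piece false-positive rate. Independence assumed.
def p_wrongly_flagged(pieces: int, fp_rate: float = 0.01) -> float:
    """P(at least one false positive) = 1 - P(no false positives)."""
    return 1 - (1 - fp_rate) ** pieces

for n in (10, 30):
    print(f"{n} pieces: {p_wrongly_flagged(n):.1%} of honest students flagged")
# → 10 pieces: 9.6% of honest students flagged
# → 30 pieces: 26.0% of honest students flagged
```

The compounding is slightly less than linear, so three years comes out nearer 26% than a flat 30%, but the order of magnitude of the problem stands.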
The thread has reminded me of a story about Graham Greene. Allegedly he entered a writing competition in which the entrants were required to ‘write a short story in the style of Graham Greene ‘
He came second
The version of this I heard was that Charlie Chaplin once came second in a Charlie Chaplin lookalike contest.
The version of this I heard was that Charlie Chaplin once came second in a Charlie Chaplin lookalike contest.
My great uncle went to a fancy dress party and Graham Greene was dressed as Charlie Chaplin
True story
Online education has become the norm for many students and teachers due to the COVID-19 pandemic. However, this also poses a challenge for maintaining academic integrity and preventing cheating on tests. How can you tell if your students are cheating on online exams? And how can you prove it?
There are many ways that students can cheat on online tests, such as using Google, study websites, cheat sheets, unapproved tools, or copying from other students. Some of these methods may be more obvious than others, but they all violate the rules of academic honesty and fairness.
One way to detect cheating is to use machine learning techniques to analyze the students’ test scores and performance. Machine learning is a branch of artificial intelligence that can learn from data and make predictions or decisions. For example, you can use machine learning to compare the students’ test scores with their previous grades, assignments, and quizzes, and identify any abnormal or suspicious patterns. You can also use machine learning to check the similarity of the students’ answers and detect any cases of plagiarism or copying.
However, machine learning is not a perfect solution and it may have some limitations or errors. For instance, some students may have legitimate reasons for improving their test scores, such as studying harder, getting tutoring, or having a good day. Some students may also have different writing styles or use different sources of information that may not be detected by machine learning algorithms. Therefore, you should not rely solely on machine learning to prove cheating, but use it as a tool to support your judgment and evidence.
Another way to prove cheating is to use authentic assessment methods that are more difficult to cheat on. Authentic assessment is a type of assessment that requires students to apply their knowledge and skills to real-world problems or scenarios, rather than just recalling facts or information. For example, you can use case studies, scenario-based projects, or word problems that are relevant to your course content and objectives. These types of assessment can measure the students’ understanding, creativity, and critical thinking, and also engage and empower them to demonstrate their learning.
Authentic assessment can also reduce the incentive and opportunity for cheating, as the students have to produce original and meaningful work that cannot be easily copied or found online. Moreover, authentic assessment can enhance the students’ learning experience and motivation, as they can see the value and relevance of their work and receive constructive feedback.
In conclusion, cheating on online tests is a serious problem that can undermine the quality and credibility of online education. However, there are ways to detect and prove cheating, such as using machine learning techniques and authentic assessment methods. By using these strategies, you can uphold the academic integrity and fairness of your online courses and tests, and also improve the students’ learning outcomes and satisfaction.
The above just reads like it was AI generated, something to do with the typical structure. Was this the point, perhaps? Interesting discussion though; things are going to get weird fairly quickly in all forms of written content.
'Too perfect' appears obviously wrong these days, @thols. Writing style is like a fingerprint. I'd rather read something with a bit of dodgy grammar in it, some imperfect sentence structure, some spellers and typos, at least it feels like the voice of the writer, if that makes sense.
@cougar I always conflate labyrinth and maze. It's the Minotaur thing.
Someone has hacked thols2's account.
It's bizarre, if I check my own posts on an online plagiarism checker they show up as 100% plagiarised from STW but posts by other members show up as 100% original, even thols2's post above.