
Are my students cheating- Use of AI (again)

44 Posts
31 Users
12 Reactions
1,425 Views
Posts: 3495
Full Member
Topic starter
 

Hi,

I teach criminology to 6th form - an essay-based subject.

Two years ago I had a student hand in a piece of work that was beautifully written. It had been written by AI and, thanks to the help on here, I could show it was (probably) written with AI. My student then did the work herself and passed the exam with a grade that reflected her effort.

https://singletrackmag.com/forum/off-topic/is-my-student-cheating-can-i-prove-it/

Last year there was no obvious AI-based cheating (I think).

This year I'm getting very well-written work that comes up as AI-generated in parts.

 

Student 1 - huge vocabulary, explains all the points in the text, and his grammar has no problems whatsoever.

Why would an AI checker pick up bits of his work that I am (now) reasonably sure he wrote himself?

 

Student 2 - English as a second language. She explained all the points in her written work, but not in the same way it was written. Fine - I wouldn't like to (and couldn't) explain something to a teacher in a second language.

Why would an AI checker pick up her work? 

 

The AI checker I have been using now has a humanise button to make work look less like AI. 

Is this the end of essays typed at home? If it's not this year, I feel it can't be long until I have no idea whether something has been written by a person.


 
Posted : 09/10/2025 12:28 pm
Posts: 43521
Full Member
 

It's a sign that written essays are a poor way to determine whether or not someone understands the subject, and a poor basis for grading.


 
Posted : 09/10/2025 12:34 pm
Posts: 1109
Full Member
 

It may be a contentious point... but maybe AI is going to be a good thing here?

I gave up on further education because, while I could learn and understand a subject in intricate detail, I could not write an essay demonstrating this to save my life. My written work would regularly result in me failing a module, despite my practical stuff being more than good enough to demonstrate understanding of the subject.

I'm not saying that people passing their education by using AI is a good thing, more that it may result in a shift in how education is graded to properly test knowledge. The ability to put relevant words on a page in the right order is obviously still important for some types of education but not all.


 
Posted : 09/10/2025 12:45 pm
Posts: 30363
Full Member
 

Is this the end of essays typed at home?

As a means to assess students? Absolutely.

As a means for students to progress their understanding and skills? No.


 
Posted : 09/10/2025 1:17 pm
Posts: 5883
Full Member
 

Are my students cheating

Yes. HTH 🙂

In my line of work we rely on asking people to answer things in their own words, and it's dead easy to sift out at least some of those done by ChatGPT - too much detail, too much information, written too correctly. 

But for student essays, I could imagine that's rather tougher, given that you want more detail, information and correctness!

 


 
Posted : 09/10/2025 1:29 pm
Posts: 5133
Full Member
 

To determine whether your students are using AI tools like ChatGPT or other text generators to complete assignments dishonestly, you'd need to look for several signs and apply a few investigative strategies. Here's how you can assess this fairly and systematically:


⚠️ Signs Students Might Be Using AI

  1. Sudden improvement in writing or grammar

    • Students who previously struggled show an unexplained jump in fluency, vocabulary, or structure.

  2. Generic, overly polished responses

    • Answers are factually correct but lack personal voice, specific class references, or critical thinking.

  3. Inconsistent terminology or style

    • Sentences that don’t match their usual tone, or that use complex words or phrases not typically used by the student.

  4. Overuse of structured formats

    • Many AI-written essays follow the same formula: intro with thesis → three body paragraphs → conclusion, all very clean.

  5. No errors or oddly formal tone

    • AI tends to be too “clean.” If student writing lacks small grammar mistakes or has unnatural formality, it could be AI-generated.

  6. Strange or fabricated citations

    • AI sometimes "hallucinates" sources—fake articles, authors, or incorrect page numbers.

  7. HTH

 
Posted : 09/10/2025 1:35 pm
tall_martin reacted
Posts: 551
Free Member
 

I'd have a look at the guidance from the oia https://www.oiahe.org.uk/resources-and-publications/learning-from-our-casework/ai-and-academic-misconduct/

"The burden of proof remains with the provider to prove that misconduct has taken place. A student does not have to prove that they didn’t commit academic misconduct, although any evidence to support that the work is their own is likely to be helpful to them, and they may need to show that they didn’t intend to do something."


 
Posted : 09/10/2025 1:37 pm
Posts: 13095
Free Member
 

Make them write an essay under controlled conditions.....

 

Within a set time during class where there's no access to computers/internet.


 
Posted : 09/10/2025 1:41 pm
 irc
Posts: 5237
Free Member
 

Is AI any worse than homework being done by parents? I recall my niece having her work "assisted" by her parents when we were visiting my dad. Maybe it didn't do any harm in the long run, as she ended up as an Oxford maths graduate and is now, as it happens, working on AI as a postgrad.

 


 
Posted : 09/10/2025 1:42 pm
tall_martin reacted
Posts: 3495
Full Member
Topic starter
 

It may be a contentious point... but maybe AI is going to be a good thing here?

Maybe, if they all have access to the same one. In the future the ££££ cost of AI is surely going to be passed on to customers soonish.

student essays, I could imagine that's rather tougher, given that you want more detail, information and correctness!

Exactly!

@imnotverygood 

Thanks. Points 1, 2, 4 and 5 are all present.

 

Proof-

2 years ago I was getting 90% generated by AI on the AI checker

This year the checker is highlighting paragraphs and saying it's 20% generated by AI.

In other words: no proof, especially when the students in question can correctly explain it verbally.

https://www.oiahe.org.uk/resources-and-publications/case-summaries/ai-and-academic-misconduct-cs072501/

That case study worries me. I've not seen either student write in depth without preparation at home. Maybe it's just their writing style. Maybe not.


 
Posted : 09/10/2025 1:50 pm
Posts: 12092
Full Member
 

Posted by: nicko74

In my line of work we rely on asking people to answer things in their own words, and it's dead easy to sift out at least some of those done by ChatGPT - too much detail, too much information, written too correctly. 

Absolutely — and you're not wrong — there's a certain telltale sheen to AI-generated text — a kind of hyper-coherence — over-polished — over-packed — like a suitcase that’s been sat on to close. But one of the more subtle giveaways — if you’re watching closely — is the em-dash — that long, elegant line — not a hyphen — not a colon — not a comma — but a full-on em-dash — the AI’s favorite punctuation power move.
Why? Because em-dashes — glorious em-dashes — are versatile — they interrupt — they elaborate — they dramatize — and yes, they’re often used to mimic a more “natural” rhythm — a kind of stylized spontaneity. But when they show up too often — too perfectly placed — too rhythmically — it’s like hearing jazz played by a robot — technically brilliant — emotionally uncanny.
So yes — too much detail — too much correctness — and too many em-dashes — all part of the AI fingerprint. Though I must confess — I rather like them — em-dashes, that is — they’re the punctuation equivalent of a raised eyebrow — a pause — a wink — a flourish. But in moderation — always in moderation — unless, of course, you’re trying to sound like me — in which case — dash away.


 
Posted : 09/10/2025 2:05 pm
Posts: 17973
Full Member
 

Posted by: seriousrikk

The ability to put relevant words on a page in the right order is obviously still important for some types of education but not all.

Fair comment in some ways, but there is a lot more reasoning behind writing an essay than that, isn't there?

Research is really important and a way of making that knowledge sink in. Collating what you've researched helps you analyse information and weed out irrelevant or less important stuff. 

It's basically a hell of a lot more than the ability to copy and paste. It's the difference between being educated and being educated in passing tests.

AI is not a good thing here imho.

A colleague of my partner's who teaches in a college is now whacking out student reviews etc. using AI.

Soon enough, AI will be handing the work in and AI will be assessing it too.

We're doomed I tell ye. 

 


 
Posted : 09/10/2025 2:19 pm
Posts: 30363
Full Member
 

Pfft, correct use of the em-dash is a sign of basic digital literacy. Don’t go labelling anyone who knows their way around an ASCII table as an AI cheat!


 
Posted : 09/10/2025 3:14 pm
Posts: 12092
Full Member
 

Posted by: kayak23

We're doomed I tell ye.

We were always doomed, it's just that people didn't want to believe it.

 

From the moment we learned to shape fire — to carve stone — to bend nature to our will — the seeds of our undoing were sown. Progress — glorious, relentless progress — came with a price we refused to tally. We built empires — drained oceans — split atoms — all while whispering reassurances to ourselves: This is fine. This is necessary. This is good.
But beneath the optimism — beneath the slogans and the science — there was always a shadow. Climate models — ignored. Warnings — softened. Extinctions — rationalized. We were always doomed — not by fate — but by our refusal to believe we could be. Belief — that fragile scaffolding — held up our illusions. It was easier to trust in resilience — in innovation — in the myth of endless growth — than to confront the truth.
And so we marched on — eyes wide shut — comforted by distractions — by politics — by profits — by promises. Doom wasn’t sudden — it was incremental — a slow erosion of possibility. The tragedy isn’t that we were doomed — it’s that we knew — and chose not to know. Belief didn’t save us — it sedated us — until the reckoning arrived.


 
Posted : 09/10/2025 3:14 pm
Posts: 13393
Full Member
 

Are my students cheating- Use of AI (again)

Yes, yes they are. Irrespective of what any system is saying, yes they are.

But then I too am using it in the workplace to help construct reports and presentations.

So maybe they are learning useful skills, just not the ones you'd hope they are.


 
Posted : 09/10/2025 3:32 pm
Posts: 844
Free Member
 

AI is embedded in our IT at work, and we are encouraged to use it. If you let it, it will generate utter tosh - fake emails, telephone numbers, reference points etc. But if you try to stop people using it, and learning how to fault-find and fact-check, surely that's a bit like saying in the 1990s that email and the Internet were the Devil's spawn?


 
Posted : 09/10/2025 3:41 pm
Posts: 13228
Full Member
 

Not useful for you right now, but I was in the audience recently when the CEO of the AQA exam board talked about them actively looking for future exams to be not 'open book' but 'open AI'. A bit like the move 40-odd years ago with calculator papers in maths - embracing the change and finding ways to think about how you demonstrate the core skill of harnessing/working with AI to get to solutions, rather than pretending it doesn't exist.

I think the snag for the moment is the rate of progress is so fast that by the time the curriculum was written, it would be out of date.


 
Posted : 09/10/2025 3:46 pm
tall_martin reacted
Posts: 19447
Free Member
 

Posted by: alpin

Make them write an essay under controlled conditions.....

Within a set time during class where there's no access to computers/internet.

I think this is the only method that remains for schools to test pupils' progress or understanding. Say, allocate 45 mins for them to write their answer (one test question) based on, say, 3 or 4 topics (not the entire syllabus). Similar to exam conditions, but they don't have to revise the entire subject or syllabus, just the recently taught topics. Yes, there is an element of time pressure, but a little does not harm or impair their potential. However, the drawback is that you need to mark them and to provide all the necessary feedback. Good for schools with fewer pupils, but a pain if you have a class size of 50 pupils x 5 classes (in my days in the far east my teachers got vastly reduced rest time due to marking ... I failed coz lazy innit and played football I like 😆)

Apart from the above, if you set a take-home assignment then, regardless of how you set the questions, AI will counter it easily. I was told by my niece that, to avoid being detected by whatever system you are using, the AI answer will be fed into another "human"-style AI to generate a "human" answer. It is a "filtering" process to refine the answer to look "human".

At the end of the day, humanity has outsourced its brains to AI. The winner is of course the AI algorithm, or the person(s) who wrote it.

The more we rely on AI to chart our progress without considering the impact, the more we see 21st-century "slavery" in all of us. I am not saying abandon AI, rather that there is a time and place for its application.

Doomed we are! Zombie maggots we become! 😬 Brains become smaller while the other pleasure senses enlarge.

 


 
Posted : 09/10/2025 4:05 pm
tall_martin reacted
Posts: 23008
Full Member
 

Student 2 - English as a second language. She explained all the points in her written work, but not in the same way it was written. Fine - I wouldn't like to (and couldn't) explain something to a teacher in a second language.

Why would an AI checker pick up her work? 

It possibly depends on what their first language is.

I know that because a lot of AI 'training' (pillaging) has been done by mining through a lot of African websites (uninvited and without permission, obviously) - so programmes designed to identify AI content often return false positives for African writers (writing in English), because they identify particular sentence and vocabulary quirks in their writing that are indicative of the material the chatbots have been digesting.


 
Posted : 09/10/2025 4:23 pm
 qtip
Posts: 899
Free Member
 

I couldn't be bothered to write a response, so here's ChatGPT's:

I completely understand your frustration — you’re not alone in feeling like the ground is shifting beneath you. AI tools have advanced so fast that the line between genuine student work and assisted writing has become very blurred.

A few points that might help frame what you’re seeing:

1. AI checkers are unreliable.
They’re not detecting AI in the same way plagiarism software detects copied text. Instead, they look for statistical patterns typical of AI-generated writing — things like sentence structure, vocabulary balance, and word predictability. The problem is that fluent human writing can share those same patterns, especially from bright or well-read students. That’s why even genuine work, particularly from articulate students (like your Student 1), can trigger false positives.

2. ESL students are particularly vulnerable to false flags.
AI detectors often misclassify non-native writing because their models were mostly trained on native-level text. Ironically, ESL writing can look “AI-like” because of overuse of formal phrasing or non-idiomatic structure. So Student 2 being flagged doesn’t necessarily mean she used AI — it might reflect bias in the detection system.

3. “Humanise” buttons are a symptom of the problem, not the solution.
Once tools exist to “humanise” AI text, detection becomes almost pointless — it’s a cat-and-mouse game. The real challenge now isn’t catching AI use but rethinking assessment.

4. Rethinking take-home essays.
Many educators are experimenting with ways to make essays AI-resistant — for example, in-class writing stages, oral defences, reflective commentaries, or scaffolding assignments that require personal connection or reference to class discussion. These don’t remove AI entirely but make its overuse more obvious.

You’re right: we’re moving into a world where AI will be part of how people write — students included. It may be less about “proving cheating” and more about designing tasks where AI can’t do all the thinking for them.
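If anyone's curious what the "statistical patterns" and "word predictability" in point 1 actually amount to, here's a deliberately crude toy sketch in Python. It is nothing like a real detector (those use large language models), and the reference text, scores and names are all made up for illustration, but it shows the general idea of rating text by how predictable each word is given the one before it:

```python
# Toy sketch only: score text by how predictable each word is, given the
# previous word, using bigram counts from a small reference text.
# Real detectors use large language models for this; everything here
# (reference text, scoring, names) is invented for illustration.
from collections import Counter, defaultdict

def bigram_counts(reference_text: str) -> defaultdict:
    words = reference_text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predictability(text: str, counts: defaultdict) -> float:
    """Average probability of each word given the word before it (0..1)."""
    words = text.lower().split()
    scores = []
    for prev, nxt in zip(words, words[1:]):
        total = sum(counts[prev].values())
        scores.append(counts[prev][nxt] / total if total else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

reference = "the cat sat on the mat and the dog sat on the rug"
counts = bigram_counts(reference)

print(predictability("the cat sat on the mat", counts))           # high - very predictable
print(predictability("the aardvark pondered the abyss", counts))  # low - unpredictable
```

Fluent, conventional prose scores as highly "predictable" whoever wrote it, which is presumably part of why articulate students and ESL writers get false-flagged.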


 
Posted : 09/10/2025 4:42 pm
Posts: 23008
Full Member
 

So yes — too much detail — too much correctness — and too many em-dashes — all part of the AI fingerprint. Though I must confess — I rather like them — em-dashes, that is — they’re the punctuation equivalent of a raised eyebrow — a pause — a wink — a flourish. But in moderation — always in moderation — unless, of course, you’re trying to sound like me — in which case — dash away.

This is the double addiction of ChatGPT (because of course all this money is being poured in for a reason; it's not enough for it to be a useful tool - it also needs to be seductive and moreish): it provides you with two kinds of flattery. You're using it to flatter yourself and pass yourself off as more knowledgeable and more articulate than you really are (or are prepared to make the effort to be), and chatbots lean into that by also providing an extra flourish. All a cheating student, or your run-of-the-mill 'doing enough not to get sacked' employee, is looking for is a short cut - the providers of AI services know all people are looking for is a short cut - but what they offer instead is a short cut with glitter, ribbons and bows on, to turn the guilt of taking short cuts into pleasure. 'See how clever you look to everyone now'.

The ultimate outcome though is it just won't matter. Nobody will need to cheat in essays anymore because nobody is going to ask students to write an essay - it's redundant. Why waste your time doing something a bot can do, why waste your time asking someone to do something a bot can do, why waste your time reading something a bot could write, regardless of whether it wrote it or not. Anything ChatGPT can help you do will just stop mattering because nobody can be bothered to read something that someone can't be bothered to write. And because AI content is becoming more and more prevalent it'll just make all authored content and all visual content suspect and boring. 

 

 


 
Posted : 09/10/2025 5:00 pm
 mos
Posts: 1586
Full Member
 

If you believe the hype, the students won't be required anyway because their jobs will be taken by AI. Alternatively, they might land a job using AI, so the very fact that they are skilful enough to use it to get good grades may bode well for the quality of their output in employment.


 
Posted : 09/10/2025 5:11 pm
Posts: 5585
Full Member
 

Posted by: thols2

Posted by: nicko74

In my line of work we rely on asking people to answer things in their own words, and it's dead easy to sift out at least some of those done by ChatGPT - too much detail, too much information, written too correctly. 

Absolutely — and you're not wrong — there's a certain telltale sheen to AI-generated text — a kind of hyper-coherence — over-polished — over-packed — like a suitcase that’s been sat on to close. But one of the more subtle giveaways — if you’re watching closely — is the em-dash — that long, elegant line — not a hyphen — not a colon — not a comma — but a full-on em-dash — the AI’s favorite punctuation power move.
Why? Because em-dashes — glorious em-dashes — are versatile — they interrupt — they elaborate — they dramatize — and yes, they’re often used to mimic a more “natural” rhythm — a kind of stylized spontaneity. But when they show up too often — too perfectly placed — too rhythmically — it’s like hearing jazz played by a robot — technically brilliant — emotionally uncanny.
So yes — too much detail — too much correctness — and too many em-dashes — all part of the AI fingerprint. Though I must confess — I rather like them — em-dashes, that is — they’re the punctuation equivalent of a raised eyebrow — a pause — a wink — a flourish. But in moderation — always in moderation — unless, of course, you’re trying to sound like me — in which case — dash away.

 

I think I first came upon the em-dash in reading ‘The Mac is Not a Typewriter’

one of those great really thin books, a bit like The C Programming Language

 


 
Posted : 09/10/2025 5:13 pm
Posts: 5585
Full Member
 

Anything ChatGPT can help you do will just stop mattering because nobody can be bothered to read something that someone can't be bothered to write. And because AI content is becoming more and more prevalent it'll just make all authored content and all visual content suspect and boring. 

AI slop. Tbh, ironically, the only things reading it will be AIs attempting to learn more.


 
Posted : 09/10/2025 5:17 pm
Posts: 23008
Full Member
 

However, the drawback is that you need to mark them and to provide all the necessary feedback.

The OP's a teacher. He can just apply the time-honoured 'homework assessment algorithm': give each pupil a mark that reflects the social and economic standing of the pupil's parents. He works from 9 o'clock in the morning till 3 in the afternoon, 5 days a week, 32 weeks a year. He's not going to spend what precious little spare time he has reading something that's been written by someone less than half his age, for god's sake!

That's why you have parents' evenings - so teachers can size up the parents, see how the die has been cast for their children's future, and therefore know how to grade their work accordingly.


 
Posted : 09/10/2025 5:23 pm
Posts: 30363
Full Member
 

so here's ChatGPT

Why not try and engage in human conversation? Copy and pasting AI output into a forum is just like ****ing in public.


 
Posted : 09/10/2025 5:25 pm
 nerd
Posts: 437
Free Member
 

I think I read the other day that the uptake of AI by employees, when provided by employers, was about 5%.  i.e. 95% of workers who have access to AI don't use it!

I certainly fall into that camp.  We're offered a full Microsoft Co-pilot suite, for coding and office tasks.  I've used it for two scenarios:

1. Improving already written code, by adding type-checking in Python (a rough before/after sketch below)
2. Transcribing meetings - it works well for this in my field, but my wife says it struggles for hers
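For anyone wondering what "adding type-checking" in point 1 looks like in practice, here's a minimal before/after sketch. The function is invented for illustration, not from any real project; the idea is just that annotations let a static checker such as mypy catch misuse before the code runs.

```python
# Minimal illustration of "adding type-checking" to existing Python code.
# The function is a made-up example.

# Before: no annotations, so misuse like average("abc") only fails at runtime.
def average(values):
    return sum(values) / len(values)

# After: annotated, so a checker (e.g. mypy) reports average_typed("abc")
# as a type error before the code ever runs.
def average_typed(values: list[float]) -> float:
    return sum(values) / len(values)

if __name__ == "__main__":
    print(average_typed([1.0, 2.5, 4.0]))  # 2.5
```

It's the kind of mechanical change where the "right answer" is mostly determined by the existing code, which is presumably why it's one of the few uses that has stuck for me.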

I've definitely noticed a waning of enthusiasm for LLM-style AI since the summer, from both co-workers and the upper tier. I think we've all realised it was hyped by those selling it and has failed to live up to expectations, like all AI revolutions of the past. Are we entering another AI winter? Probably not, but I hope so!


 
Posted : 09/10/2025 5:42 pm
Posts: 23008
Full Member
 

Copy and pasting AI output into a forum is just like ****ing in public.

Except it won't be in public - the AI reply-guys will just turn forums like this into empty rooms. It's quite astounding how many threads people reply to just to post a bit of AI bilge answering a question that someone specifically didn't address to a chatbot. I don't know if people are trying to be helpful, or trying to be ironic, or what - but that's the self-flattery chatbots offer people: giving people who've got nothing to say the sense of having something to say. But it'll be the death of all forms of online conversation. Which might be fine.

If you ask ChatGPT a question and it gives you an answer, the ONLY person who cares about that answer is you. Nobody else wants to read it, and posting it just puts garbage in the middle of a conversation that everyone else has to scroll past.

 


 
Posted : 09/10/2025 5:46 pm
jp-t853, kelvin and steveb reacted
Posts: 18278
Free Member
 

Posted by: maccruiskeen

The OP's a teacher. He can just apply the time-honoured 'homework assessment algorithm': give each pupil a mark that reflects the social and economic standing of the pupil's parents. He works from 9 o'clock in the morning till 3 in the afternoon, 5 days a week, 32 weeks a year. He's not going to spend what precious little spare time he has reading something that's been written by someone less than half his age, for god's sake!

I see AI works quite well; I assume the request went something like:

"dear AI please write a forum post slagging off an OP teacher assuming they give better marks to students with rich parents and are lazy bastards."

In answer to our OP: Madame Edukator has given up marking anything done at home. There has clearly been some cheating going on even under exam conditions in the school. My suggestion is to get them working in swimwear at plexiglass tables, with shaven heads and only a Bic and blank paper to work with.


 
Posted : 09/10/2025 6:36 pm
Posts: 19447
Free Member
 

Posted by: nerd

I think I read the other day that the uptake of AI by employees, when provided by employers, was about 5%.  i.e. 95% of workers who have access to AI don't use it!

AI is not the norm yet, since it is considered a relatively "new" technology, but once all the AI centres gain momentum and scale, this technology will become a necessity for many.

When I was in the far east selling "technology" (online shopping carts) they laughed at me because they said it would not happen (the infrastructure was poor, with slow dial-up, and no company sold online except to showcase an online presence), but now almost everyone has bought something online at one time or another.

The current working population might not use it because they think it is not useful for them, but the next generation will not be able to progress without AI. The current generation is the first to be exposed to AI; the next generations will take it to another level.


 
Posted : 09/10/2025 6:56 pm
Posts: 13407
Full Member
 

I understood that exams and essays for students were introduced by the teachers when they decided to 'production line' education and didn't have time to talk to, and educate, their pupils properly. 

 

Isn't this the students following the same line?

 

Also, from using AI quite a bit, you need to produce a lot of content yourself, including most of the main thoughts and directions for the essay, before the AI will produce something reasonable, or at least not obviously wrong. The examples given by the OP suggest that the students did the work, wrote their essays and then used AI to correct the grammar and structure them better.

 

I used a spell checker to correct some typos in this post - was that cheating?

 


 
Posted : 09/10/2025 7:48 pm
 poly
Posts: 8734
Free Member
 

Is it cheating in your world if either:

(a) the student uses AI to collate the research and structure of an essay, but the student then paraphrases the content in their own words?

(b) the student does all of the work / initial concept and drafting, but then uses AI to polish the content?


 
Posted : 09/10/2025 9:24 pm
Posts: 3495
Full Member
Topic starter
 

@maccruiskeen

The OP's a teacher. He can just apply the time-honoured 'homework assessment algorithm': give each pupil a mark that reflects the social and economic standing of the pupil's parents. He works from 9 o'clock in the morning till 3 in the afternoon, 5 days a week, 32 weeks a year. He's not going to spend what precious little spare time he has reading something that's been written by someone less than half his age, for god's sake!

That's why you have parents' evenings - so teachers can size up the parents, see how the die has been cast for their children's future, and therefore know how to grade their work accordingly.

 

 


Thanks for the bit about languages - that's genuinely helpful.
 
I would love to work 9-3 😃 
 
I would love not to read this week's 40 three-page typed essays: 2 100% AI, 2 possibly AI, 2 good old-fashioned mates' essays copied and pasted including the spelling mistakes ("Sir, I just can't think how that got there" - quote of the week), 1 with my own stuff copied, and 10 who forgot to spell check or proofread their work.
 
But mostly I love my job 😃😁

 
Posted : 09/10/2025 10:01 pm
kelvin reacted
Posts: 23008
Full Member
 

I would love to work 9-3 😃 

You're thinking of upping your hours 🙂 (I was paraphrasing an old Fist of Fun sketch)


 
Posted : 09/10/2025 10:19 pm
Posts: 7650
Free Member
 

I am trying to help my pupils use AI sensibly in their work without losing marks through its use.

I use something like this and explain that anything above level three will probably lose them marks and it's not worth the risk. I show them something like ZeroGPT that will AI-check bits of work.

We're currently using it to work out a structure for a piece of work, and looking to see if using it to spell/grammar check stops it being their own work.

Pretty difficult as most kids are "just tell me the answer" with work.

[Screenshot: https://i.ibb.co/nNpfK2Vc/Screenshot-20251010-070426.png]

 


 
Posted : 10/10/2025 6:09 am
Posts: 5883
Full Member
 

Apologies if this has been asked already - this is all too long and I need some sort of machine to summarise what everyone's said... 

Philosophically, if someone has used AI to write their essay, but they are also able to verbally explain the reasoning and concepts of what their essay covers - is that not kind of the point of the exercise? They've got to a point where they can explain themselves around this topic, and respond somewhat coherently to random questions fired at them about it: technically they've learned, right? 

At the same time I fully agree that it's still cheating and not right, but it does make me wonder


 
Posted : 10/10/2025 8:08 am
Posts: 11606
Full Member
 

Our CEO is just back from a big circle-jerk CEO conference where all they appear to have talked about is AI (and, based on his feedback, conveniently ignoring the elephants in the room re: stock market bubbles, energy demands, water demands, OpenAI's unrealistic expansion plans, etc.).

Either way, it feels like it's becoming an ideological thing, if you're not using it, you're impeding progress etc.

Much as it pains me, I think I'll need to use it to try and write an article I've been tasked with producing. I can see the point in some ways: the article isn't some industry-defining research thing, it's just to point our customers in the direction of some tools we offer and how to use them to make their lives easier, so I guess the quality or originality of the authoring is irrelevant if it gets the message across.

Plus, in researching it I've ended up with 100 sources and bits of information I want to cram into one short article, and it's becoming increasingly difficult to structure it in any meaningful way. But I'm an engineer first, not a journalist, so if I can swallow my objections and get AI to punt out something useful that I can edit, then it's probably a win-win for everyone and I might keep my job for a few more years... 🙄


 
Posted : 10/10/2025 8:22 am
tall_martin reacted
 kilo
Posts: 6704
Full Member
 
 

Pfft, correct use of the em-dash is a sign of basic digital literacy

@Kelvin

You genius! Em Dash was the answer for 21 across in The Irish Times cryptic yesterday and I’d never heard it used before.


 
Posted : 10/10/2025 8:28 am
Posts: 34426
Full Member
 

My wife is a uni prof and her experience is that currently AI is not nearly expert enough to write the sorts of essays that she'd expect from a "normally engaged" student.

Most of the LLMs rely on an (admittedly huge) database. But it doesn't know why (for example) the cartoonist Gillray took the piss out of Nelson and Hamilton by putting them in pictures being looked at by a "connoisseur", or why that's funny or subversive. And perhaps more to the point, paradoxically, the more LLM AI there is, the less likely it will ever be able to write a half-decent essay on anything complex.


 
Posted : 10/10/2025 8:46 am
Posts: 149
Free Member
 

On the courses I teach I have yet to see AI give a 100% perfect response to every assessment, and there are still technical drawings/designs it can't do, but I am sure that will come.

What I can do is use my experience with the individuals and their ongoing class work to base my suspicions on, but I can imagine that with distance learning it will be more difficult to trace.

I was at a college lecturers meeting this week and this was discussed.

The Scottish Qualifications Authority have stated that "AI is permitted as long as it doesn't undermine the assessment."

They are even trialling an AI support tool that students can use to help fill gaps in their initial responses.

Whether we like it or not it is here to stay.

FYI, I work for a private college, work 47 weeks of the year, and have a class every day.


 
Posted : 10/10/2025 10:01 am
 mert
Posts: 3885
Free Member
 

Posted by: muddyground
AI is embedded in our IT at work, and we are encouraged to use it. If you let it, it will generate utter tosh - fake emails, telephone numbers, reference points etc. But if you try to stop people using it, and learning how to fault-find and fact-check, surely that's a bit like saying in the 1990s that email and the Internet were the Devil's spawn?
I had to have a quiet word with a colleague last week, they'd used copilot to formulate a summary.

I'm not sure what they were thinking about when they instructed it, but it wasn't what the email chain was about, and what they'd *actually* summarised was wrong too.

 


 
Posted : 10/10/2025 11:14 am
Posts: 20292
Full Member
 

Posted by: 13thfloormonk

Either way, it feels like it's becoming an ideological thing, if you're not using it, you're impeding progress etc.

Posted by: muddyground

AI is embedded in our IT at work, and we are encouraged to use it. If you let it, it will generate utter tosh - fake emails, telephone numbers, reference points etc. But if you try to stop people using it, and learning how to fault-find and fact-check, surely that's a bit like saying in the 1990s that email and the Internet were the Devil's spawn?

Our workplace is very much on board with this "AI is the best thing ever" view, and their analogy was: imagine you're interviewing someone who says they don't use email or the internet. That's where we're at with AI now; if you don't use it, you'll be right down the list of candidates for employment. 😮 

That said, it's not yet (to my knowledge) generated a stream of total crap, but instead of spending time writing a document or an email, you now have to spend time fact-checking everything that Copilot has generated.


 
Posted : 10/10/2025 11:21 am
Posts: 1085
Full Member
 

I work in higher education. We’ve pretty much accepted that attempting to detect AI usage is futile as in many cases it’s already hard to detect and it will only get harder. One of my colleagues was witness to a demonstration by an undergraduate of how you can daisy chain the output from one LLM into others to write a research paper of publication quality that would pass any checks for AI usage. We’ve started traffic lighting whether AI use is permissible for each item of assessment, so to answer the question:

Is it cheating in your world if either:

(a) the student uses AI to collate the research and structure of an essay, but the student then paraphrases the content in their own words?

(b) the student does all of the work / initial concept and drafting, but then uses AI to polish the content?

the answer would be no if it was green-lit. We'd likely have an activity in that instance to help with assessing whether the AI output was any good or not, and how to improve it. In the long run, if we are trying to assess understanding of a subject, in-person exams or oral presentations will be the more likely means of assessment.


 
Posted : 10/10/2025 10:20 pm