Simple Known Dangers with AI, and Possible Future Dangers Too.

Started by buster, June 05, 2025, 05:43:58 PM


buster

I'm not speculating on this technology taking over the world, but am starting with some simple day-to-day occurrences of AI use that have led to problems. Here is a simple one to begin.

A prof I meet sometimes while walking told me the story of a student who wanted to stay away from school on a certain day, the day of a test. Instead of phoning or texting him, she sent him a detailed email that he could tell, though I don't know how, was AI-generated. He described her actions as stupid and thick-headed. I never found out what he did about it.

Example two is also from a university, Carleton this time rather than Trent. A granddaughter supplements her income by marking undergraduate assignments while working on her doctorate. She says you can recognize AI writing almost immediately, but to reprimand the culprit you have to be sure you can prove AI was used, though they do have software that scans essays for tell-tale signs. If we never learn to distinguish real writing from AI, people will graduate and the BA will come to have no commercial value. There would be no proof the graduate knows anything, such as how to write well.

Another granddaughter at university explained that the bots can find everything on the Internet, but cannot easily verify the truth of an article, so an essay can contain glaring errors of common sense or even known misinformation.

And my final example is the horrible scam of cloning a child's voice and using it in the background of a call to demand that money be sent, with the pretext of helping the youngster or the claim of a hostage situation. The computer program could say whatever the scammer typed or simply spoke into the software, but it would sound like the child's voice.

I hope some of the readers will recount personal examples of danger or worry, or maybe something they read recently. Later we can get into the societal issues that might rip apart the world our fragile culture has constructed. But personal experiences would be interesting.

I have an example I like, about phoning a medical company to gather some information I wanted and not being able to talk to a human. Maybe tomorrow.
Father Time remains undefeated.

buster

Health insurance for prescriptions and dental work disappears for teachers when they retire, so many of us join a teachers' association and pay monthly premiums. This, though exorbitantly expensive, covers 80% of our costs. I seldom had to phone them, but a year or two ago I needed an explanation of my coverage.

There was no person at the other end, just a voice giving me five numbers to choose from for different types of issues. I got through to the 'department' I needed and was greeted by a lovely male voice that asked,

'Please state as clearly as you can what your inquiry concerns.'

You know when you are talking to a machine, but this seemed easy enough. I would repeat my question for you, but I can't remember exactly what it was. When prompted, I stated it as clearly and concisely as I could. The voice answered,

'Please restate your concern more clearly please.'

After the third failure I did my poke-about trick, pressing the 0 key on my phone.

A human answered, to which I exclaimed, 'A living person!' and at least got a laugh. We chatted a bit about the machine that had answered. She told me I had to learn how the machine processes information as I talk to it, and I thought, though didn't say, 'That is totally ass backwards.'

When I repeated my original question, she had the answer for me immediately. Human brains are astonishing devices, needing only bits of info to fill in the rest. And if this machine 'brain' were improved, this lovely, clever woman would lose her job.
Father Time remains undefeated.