• Welcome to Peterborough Linux User Group (Canada) Forum.
 

Simple Known Dangers with AI, and Possible Future Dangers Too.

Started by buster, June 05, 2025, 05:43:58 PM



buster

I'm not speculating about this technology taking over the world, but I am including, to start with, some simple day-to-day occurrences of AI use that have led to problems. Here is a simple one to begin.

A prof I meet sometimes while walking told me the story of a student who wanted to stay away from school on a certain day, the day of the test. Instead of phoning or texting him, she sent him a detailed email that he could tell, though I don't know how, was AI-generated. He described her actions as stupid and thick-headed. I never found out what he did about it.

Example two is also from a university, Carleton rather than Trent. A granddaughter supplements her income by marking undergraduate assignments while working on her doctorate. She says you can recognize AI writing almost immediately, but to reprimand the culprit you have to be sure you can prove AI was used, though they do have software that scans essays for telltale signs. If we never learn to distinguish real writing from AI, people will graduate and the BA will come to have no commercial value. There would be no proof the graduate knows anything, such as how to write well.

Another granddaughter at university explained that the bots can find everything on the Internet but cannot easily verify the truth of an article, so an essay can contain glaring errors of common sense or even known misinformation.

And my final example is the horrible scam of cloning a child's voice and using it in the background of a call to demand that money be sent, with a pretext of helping the youngster, or declaring a hostage situation. The program could say whatever the scammer typed or simply said into the software, but it would sound like the child's voice.

I hope some of the readers will recount some personal examples of danger or worry, or maybe something you read recently. Later we can get into the societal issues that might rip apart the world our fragile culture has constructed. But personal experiences would be interesting.

I have one I like as an example about phoning a medical company to gather some information I wanted and not talking to a human. Maybe tomorrow.
Father Time remains undefeated.

buster

Health insurance for prescriptions and dental care disappears for teachers when they retire, so many of us join a teachers' association and pay monthly premiums. This, though exorbitantly expensive, covers 80% of our costs. I seldom had to phone them, but a year or two ago I needed an explanation of coverage.

There was no person at the other end, just a voice giving me five numbers I could choose from for different types of issues. I got through to the 'department' I needed and got a lovely male voice that asked,

'Please state as clearly as you can what your inquiry concerns.'

You know when you are talking to a machine, but this seemed easy enough. I would repeat my question for you, but I can't remember what it was. After I asked and got a response, I replied as clearly and concisely as I could. The voice answered,

'Please restate your concern more clearly please.'

After the third failure I did my poke-about trick, hitting the 0 (zero) key on my phone.

A human answered, to which I exclaimed, 'A living person!' and at least got a laugh. We chatted a bit about the machine answering. She told me I had to learn how the machine processes information as you talk to it, and I thought, though didn't say, 'That is totally ass-backwards.'

When I repeated my original question, she had the answer for me immediately. Human brains are astonishing devices, needing only bits of info to fill in the rest. And if this machine 'brain' was improved, this lovely, clever woman would lose her job.

buster

One of my main concerns, because I like art, music, novels, and films, is the encroachment of the commercial empire into what has always been, until the last half century or so, the primacy of the individual artist in the creation of art. In many cases a story, or painting, or piece of music cannot be judged by its number of sales. If it could, McDonald's would be judged the best cuisine in the world. Sometimes the most popular film is rubbish. Only time tells us what is best in literature.

I am afraid publishers will simply get AI to check what sells, call up the software, and imitate those books, ignoring the unique awareness that some great writers have to see the world differently. They may not sell a lot, but they last forever. We will lose the artists.

We will lose uniqueness, and gain commercial sameness.

Jason

Quote from: buster on June 08, 2025, 06:15:51 PM
I am afraid publishers will simply get AI to check what sells, call up the software, and imitate those books, ignoring the unique awareness that some great writers have to see the world differently. They may not sell a lot, but they last forever. We will lose the artists.

Although I haven't heard of any major cases of publishers using AI to write books, there are already people selling ebooks written by AI. I've only used short-story AI generators online and in apps to see what they do, and certain phrases keep coming up repeatedly, almost like clichés, as if written by the same author. That's also how people can tell when AI wrote the text they're looking at. Here's a relatively simple AI (I presume):

https://perchance.org/ai-story-generator

Try giving it a subject, a character and a plot (if you want). It will generate everything else. Things I've noticed:

1. The same names crop up again and again.
2. The same characters come up. They might have different names and even look different, but they act the same way.
3. The same tropes come up.

You won't notice these things until you have written several stories. It doesn't write the entire story. It does so many paragraphs, and you can either add suggestions of what happens next or just click next and let 'er rip. It's an interesting exercise.

I play a Dungeons & Dragons game session with some friends every Sunday. Recently, we had a question about a particular spell, and while a couple of us were checking the official rules in books, both online and in hardback, one guy used ChatGPT. It gave him the wrong answer. Answers are based on training data, but without the ability to know the difference between correct and incorrect information, as you say, it can come up with wrong answers.

With this game, the accurate information probably isn't available for free (i.e. it's behind paywalls). Or there's so little of it that incorrect information, house rules, and the like outweigh the amount of accurate info.

D&D has had a lot of versions (five editions so far). It's been through a lot of incarnations since the 70s, and some players are still playing the original editions and probably post about them on forums, asking questions and receiving different answers, because D&D is an open-ended game and not everything is, or even can be, in the rules. But AI doesn't know the differences because it's not actual intelligence. It wouldn't know what a "version" is unless you grouped everything for it under a version.

My understanding is that AI is a pattern recognizer. That's awesome for certain situations. If you have thousands of X-rays showing what real cancer looks like, AI can detect it at a pretty high rate. But that's based on someone helping it by training it with thousands (maybe more) of X-rays. In some cases, AI can even detect conditions better than doctors. But we wouldn't have AI deciding if we should have surgery or chemotherapy, at least not yet.
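Since this is a Linux forum, the pattern-recognizer idea above can be shown as a toy sketch. Everything here is invented for illustration: the two-number "scans" stand in for X-rays, and a simple nearest-centroid classifier stands in for a real medical model. It demonstrates exactly the point above: the system does well on data like its training set, yet still answers confidently on input unlike anything it was trained on.

```python
import random

random.seed(42)

# Toy stand-in for the X-ray example: each "scan" is just two numbers,
# and the labels come from whoever prepared the training examples.
def make_scan(label):
    if label == "healthy":
        return (random.gauss(2.0, 0.5), random.gauss(2.0, 0.5))
    else:  # "cancer"
        return (random.gauss(5.0, 0.5), random.gauss(5.0, 0.5))

training = [(make_scan(lbl), lbl)
            for lbl in ["healthy", "cancer"] for _ in range(500)]

# "Training" a nearest-centroid classifier: average the examples of each
# class. Real systems are vastly more complex, but the principle --
# learn patterns from labelled examples -- is the same.
def centroid(label):
    pts = [p for p, lbl in training if lbl == label]
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

centroids = {lbl: centroid(lbl) for lbl in ["healthy", "cancer"]}

def classify(scan):
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(centroids, key=lambda lbl: dist2(scan, centroids[lbl]))

# On data like its training set, it does very well...
tests = [(make_scan(lbl), lbl)
         for lbl in ["healthy", "cancer"] for _ in range(100)]
accuracy = sum(classify(s) == lbl for s, lbl in tests) / len(tests)
print(f"accuracy on familiar data: {accuracy:.0%}")

# ...but it still answers confidently on input unlike anything it saw.
print("nonsense input classified as:", classify((100.0, -100.0)))
```

Run it and the classifier scores near-perfectly on familiar data, then happily labels an absurd input, because, like the D&D ChatGPT answer, it has no notion of correct, only of nearest pattern.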
* Zorin OS 17.1 Core and Windows 11 Pro on a Dell Precision 3630 Tower with an
i5-8600 3.1 GHz 6-core processor, 16 GB of RAM, 512 GB Nvme and a Geforce 1060 6 GB card
* Motorola Edge (2022) phone with Android 14

Jason

After saying all that, I took your posts and asked ChatGPT to write a snarky response addressing them all together. Get a gander at this! I'm ashamed to admit that it will probably replace my "job"! :-)

Quote
Ah yes, the good old "AI is ruining everything" anthem. First, the student who used AI to dodge a test—brilliant move, right? Because nothing says "responsible adult" like outsourcing your problems to a robot and expecting sympathy. And of course, professors are suddenly AI detectives now, scanning for the slightest whiff of machine-generated prose. Next thing you know, they'll need a degree in AI forensics just to pass judgment.

Then there's the automated phone nightmare. Surprise! Machines can't actually understand humans yet. Who would've guessed? Pressing zero to get a human is basically the new "cheat code" in customer service. Meanwhile, those poor workers will probably lose their jobs because some AI can't quite handle a question on dental coverage. But hey, at least the AI doesn't ask for a coffee break.

And the artistic apocalypse! Publishers hiring AI to churn out cookie-cutter bestsellers while true artists with "unique awareness" get shoved aside. Because clearly, AI doesn't stand a chance against centuries of human creativity and emotional depth. But hey, why bother preserving "lasting" art when you can just mass-produce whatever sells this week? MacDonald's is the culinary king after all.

Look, AI isn't perfect and it definitely raises tricky questions. But let's not pretend it's some evil mastermind out to erase humanity's soul. Sometimes, the real problem is us—our impatience, shortcuts, and fear of change. Instead of blaming the tools, maybe we should figure out how to use them smartly. Or not. Either way, the future's coming whether we like it or not.

I told it to take that and write it like a 12-year-old girl. Apparently, AI doesn't think much of tween girls.

Quote
OMG, seriously? So first, the student tries to skip a test by sending an email that's totally AI-made? Like, why not just be honest? That's kinda lazy and dumb. And then the prof can tell it's AI? LOL, teachers are basically detectives now.

And the phone thing? Ugh, automated voices are the WORST! Press zero a million times just to talk to a real person? So annoying. Machines just don't get us, okay? I'd way rather talk to a real human who actually understands what I'm saying.

And then the whole art thing? People think AI is gonna ruin creativity and make everything the same? Pfft, AI can't have feelings or make cool, original stuff like real people can. Just cuz something's popular doesn't mean it's good, duh. Like, McDonald's isn't fancy food, and not every movie that's popular is good.

So yeah, AI might mess some things up, but we're the ones who have to use it right. Let's chill and not freak out too much. Humans still rock!

Oh, and finally, as Trump, in separate tweets:

Quote
Students using AI to SKIP tests? LAZY and WEAK! Back in my day, you studied — not hid behind a robot. Professors can SPOT it a mile away. SAD! #NoCheating #MAGA

Quote
Automated phones that NEVER understand you? TERRIBLE service, just like Biden's handling of everything. People want REAL HELP, not a computer reading scripts. Bring back HUMAN voices! #MakePhonesGreatAgain

Quote
Art and books copied by AI? TOTAL FAKE NEWS! Real artists have GENIUS and HEART — things AI will NEVER have. Unlike Sleepy Joe, art can't be faked. Sales don't mean QUALITY! #TrueArt

Quote
AI is a TOOL, not a REPLACEMENT. Use it SMARTLY, keep JOBS, keep CREATIVITY, and keep AMERICA GREAT! Biden's policies won't save you — we will! #USA #KeepAmericaSmart

Jason

As for your point about the lovely lady who helped, I agree that she probably would lose her job if the system they use were smarter. And you're right, software needs to be written for people. We shouldn't have to adapt even if most of the time we are. A lot of jobs might be replaced by AI, possibly even some software developers! I've seen calls for a GAI (guaranteed annual income) because there may be huge numbers of unemployed people coming. I think we've worried about this with every new technology, though. People will have to learn new jobs. I hate using self-checkout, but sometimes it is faster, especially when they give checkout cashiers fewer hours. I'm sure that's their goal. But some stores, I believe Costco, at least in the US (never been to one), got rid of the self-checkout machines because they cost a lot, broke down a lot, and customers complained.

But you remember when cashiers used to bag your groceries (or somebody else did)? Or somebody would fill your tank for you? I'm sure some lovely people lost their jobs because customers did it themselves.

Where it takes us, who knows?

buster

If you search Google using 'danger' and 'AI' you'll get many sites. This is the one I read:

https://www.forbes.com/sites/jonathanwai/2024/10/09/the-12-greatest-dangers-of-ai/

Some things are quite dramatic, but this writer has spoken to elected officials about possible problems. (One of his ideas is for code to be checked for possible problems before it can be released. Good luck accomplishing that!) Some of the things that he mentions have already occurred. It's a fairly easy read and I'm sure you will find many other articles that are even more enlightening.

AI has many uses, but it's in the same category as TNT: where it is used needs care.

The little section about hacking websites and businesses suggests to me that the whole web could be brought down. Imagine what that would mean.

My doctor, whom I visited today, says she never uses AI to diagnose, but lets it listen to her interviews and write a report, which she always reads and edits.

Brave New World

buster

A funny example of a failed AI response.

Cape Breton Island feels different from the rest of Nova Scotia, but not too much. Marilyn and I have visited a few times. But a spoof appearing in The Beaverton convinced Google and Meta AI that it had a different time zone. Simplest reason? AI apparently doesn't have a sense of humour. This is a short, funny CBC article.

https://www.cbc.ca/radio/asithappens/cape-breton-time-zone-ai-1.7559597

buster

And I'm really interested in what AI would have done with this funny BBC April Fools 'news item'

search    'bbc spaghetti growing on trees'

buster

And probably my last thoughts.

AI seems to have many meanings. In some cases there is a simple algorithm in place where directions are being followed. Sometimes written or spoken output does a good impression of human speech patterns, though the information gathered may come from suspect sources. In chess, or the game of Go, the computer has the advantage of being able to process multiple outcomes almost instantly. Those of us who played chess also had to process move sequences, but usually we could see patterns on the board that enabled us to select only a few lines to analyze. We knew certain positions were winning positions and tried to create them on the board. Humans cannot compete with that kind of near-infinite analysis.

Learning computers and semi-sentience will bring who knows what.

And here's the final suggestion:

There was a tiny movie released eight years ago that is fascinating. Interestingly, it was criticized for any number of reasons, one being that countries wouldn't use this technology.

The weapon uses facial recognition, which some people have on their phones; small drones, which are easily made; and small but powerful explosives. No serious obstacles there.

Do an Internet search for Slaughterbots

(There is also a follow-up movie, but I've never watched it.)

buster

And one more last thought.

While waiting for Marilyn at the eye doctor's building, I managed to strike up a conversation with a man who had been fortunate enough to grow up in Hamilton, close to the time I was very young there. We had a grand time talking about the city and what we used to do in our youth. Lots of fun.

Some of his kids had been academics, so we wandered in our topics to AI easily writing essays. Only when I got home did I work out that essays at university will serve no function in the future. Originally the essay provided a way for profs to see whether a student could extract information from what was already known, create some novel idea, and present it in a clear and convincing manner.

Consider this analogy. Imagine you are taking a culinary course on baking pies. So you go to the store and buy a pie, and then present it as your finished assignment. This tells the teacher nothing about your ability to cook.

An essay in the future will tell us nothing about what your mind can see and do. We need to grade the mind, not an end product that reveals nothing about it. So here's what Harry predicts:

The essay will have to be replaced in the university as a grading device. However, how we assess the capabilities and knowledge of a student's mind will require some creative thought and innovations. That will be an interesting challenge.

Jason

Quote from: buster on June 10, 2025, 03:07:11 PM
If you search Google using 'danger' and 'AI' you'll get many sites. This is the one I read:

To be fair, if you search 'danger' and 'bed', you'll find many sites. Here's a BMJ study called "The Dangers of Going to Bed". It's from 1947, though, so it might be out of date. ;)


Jason

Quote from: buster on June 14, 2025, 10:30:13 AM
And I'm really interested in what AI would have done with this funny BBC April Fools 'news item'

search    'bbc spaghetti growing on trees'

That's hilarious. I'd rather have the famed money tree, myself.

Jason

Quote from: buster on June 18, 2025, 11:11:28 AM
Some of his kids had been academics, so we wandered in our topics to AI easily writing essays.

If it can be written by AI, it can be detected as such by AI. There are already some paid services that let teachers do this. You could even paste shorter essays into the free version of ChatGPT and ask it whether the essay is AI-written.

Quote from: buster
The essay will have to be replaced in the university as a grading device. However, how we assess the capabilities and knowledge of a student's mind will require some creative thought and innovations. That will be an interesting challenge.

Essays are only one form of evaluating students, and they can be done at the same time as an exam. My history teacher in high school ran most of his tests like this: he'd give two statements to choose from; you picked one and then had to back it up with facts. The province mandated 30% for final exams, which also had a huge essay component in his class, but other than that, most of the mark came from those essays.

What would we be grading when we grade the mind? Does it include the stuff we can't remember consciously, but it's still there somewhere? Would it be passive? Would that be the ultimate form of teaching to the test? I honestly can't picture this replacing essays, multiple-choice, short answers and practical tests until AI can do all those things and not be visible doing it (to prevent cheating). At least not for a long while. But perhaps when AI has mapped every thought in an MRI scan...

buster

Jason: "My history teacher in high school had most of his tests like this. He'd have two statements to choose from, you picked one and then had to back it up with facts."

I wasn't thinking about a high school evaluation. I was thinking mostly in terms of a senior university arts program, or courses where a student might be expected to come to grips with, say, an unusual event in history, or a novelist or poet who dominated a certain period of literary history. Analyzing why this occurred could take weeks of research, reading, and thinking. Some of these things are still not fully understood.

The best students can come up with unique viewpoints. The weaker ones will show that they have at least a basic understanding of the problem. The essay is not written just to get a mark; in some cases this sort of research is the reason to be at university. It has been a key way to learn for generations. This is how experts came to be: through study, research, and writing.

Even during my short lifespan, I've seen learning start to take second place to something called 'marks'. When I was first teaching in Hamilton, I borrowed a textbook on the Middle Ages from another teacher. He could not believe I would read something like that if it weren't for a university credit. Learning to him was earning credits.

Anyway, profs have used essays for student evaluation. They were pretty good indicators, and the essay distinguished between different levels of achievement. There are other methods in use, such as small groups of students sitting around a table with the prof, discussing some philosophical conundrum where contributions are expected from all. Those who haven't done the work or simply do not understand are easy to pick out.

My point is that the essay, heavily relied upon for evaluation, is going to be useless for that purpose. This does not mean the essay itself is useless; I think it's a great tool for sharpening ideas.
