Peterborough Linux User Group (Canada) Forum

General (non-Linux) => Politics, Society and News. => Topic started by: buster on June 05, 2025, 05:43:58 PM

Title: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 05, 2025, 05:43:58 PM
I'm not speculating on this technology taking over the world, but am including, to start with, some simple day-to-day occurrences of AI use that have led to problems. And here is a simple one to start.

A prof I meet sometimes while walking told me the story of a student who wanted to stay away from school on a certain day, the day of the test. Instead of phoning or texting him, she sent him a detailed email that he could tell, though I don't know how, was AI generated. He described her actions as stupid and thick-headed. I never found out what he did about it.

Example two is also from a university, though Carleton rather than Trent. A granddaughter supplements her income by marking undergraduate assignments while working on her doctorate. She says you can recognize AI writing pretty well immediately, but to reprimand the culprit, you have to be sure you can prove AI was used, though they do have software that scans essays for telltale signs. If we never learn to distinguish real writing from AI, people will graduate and the BA will come to have no commercial value. There would be no proof the graduate knows anything, such as how to write well.

Another granddaughter at university explained that the bots can find everything on the Internet, but cannot easily verify the truth of an article, so an essay can have some glaring errors of common sense or even known misinformation.

And my final example is the horrible scam of cloning a child's voice and using it in the background to demand money be sent, with a pretext of helping the youngster, or declaring a hostage situation. The computer program could say whatever the scammer told the program to say, or simply said into the software, but it would sound like the child's voice.

I hope some of the readers will recount some personal examples of danger or worry, or maybe something you read recently. Later we can get into the societal issues that might rip apart the world our fragile culture has constructed. But personal experiences would be interesting.

I have one I like as an example about phoning a medical company to gather some information I wanted and not talking to a human. Maybe tomorrow.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 07, 2025, 02:09:02 PM
Health insurance for prescriptions and teeth disappears for teachers when they retire. So many of us join an association of teachers and we pay with monthly premiums. This, though exorbitantly expensive, covers 80% of our costs. Seldom did I have to phone them, but a year or two ago I needed an explanation of coverage.

There was no person at the other end but a voice giving me five numbers I could choose for different types of issues. I got through to the 'department' I needed and got a lovely male voice that asked,

'Please state as clearly as you can what your inquiry concerns.'

You know when you are talking to a machine, but this seemed easy enough. I would duplicate my question for you but I can't remember what it was. And after I asked I got a response, I replied as clearly and concisely as I could. The voice answered,

'Please restate your concern more clearly please.'

After the third failure I did my poke-about trick, hitting the 0 key on my phone.

A human answered, to which I exclaimed, 'A living person!' and at least got a laugh. We chatted a bit about the machine answering. She told me I had to learn how the machine processes information as you talk to it, and I thought, though didn't say, 'That is totally ass backwards.'

When I repeated my original question, she had the answer for me immediately. Human brains are astonishing devices, needing only bits of info to fill in the rest. And if this machine 'brain' was improved, this lovely, clever woman would lose her job.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 08, 2025, 06:15:51 PM
One of my main concerns, because I like art, music, novels, and films, is the encroachment of the commercial empire into what has always been, up to the last half century or so, the primacy of the individual artist in the creation of art. In many cases the story, or paintings, or music, cannot be judged by the number of sales. If this were true, McDonald's would be judged the best cuisine in the world. Sometimes the most popular film is rubbish. Only time tells us what is best in literature.

I am afraid publishers will simply get AI to check what sells, call up the software, and imitate those books, ignoring the unique awareness that some great writers have to see the world differently. They may not sell a lot, but they last forever. We will lose the artists.

We will lose uniqueness, and gain commercial sameness.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on June 09, 2025, 07:23:36 AM
Quote from: buster on June 08, 2025, 06:15:51 PMI am afraid publishers will simply get AI to check what sells, call up the software, and imitate those books, ignoring the unique awareness that some great writers have to see the world differently. They may not sell a lot, but they last forever. We will lose the artists.

Although I haven't heard of any major cases of publishers using AI to write books, there are already people selling ebooks written by AI. I've only used short-story AI generators online and in apps to see what they do, and certain phrases keep coming up repeatedly, almost like clichés from the same author. That's also why people can tell that AI wrote the text you're looking at. Here's a relatively simple AI (I presume).

https://perchance.org/ai-story-generator

Try giving it a subject, a character and a plot (if you want). It will generate everything else. Things I've noticed:

1. The same names crop up again and again.
2. The same characters come up. They might have different names and even look different, but they act the same way.
3. The same tropes come up.

You won't notice these things until you have generated several stories. It doesn't write the entire story at once. It does so many paragraphs, and you can either add suggestions of what happens next or just click next and let 'er rip. It's an interesting exercise.

I play a Dungeons & Dragons game session with some friends every Sunday. Recently, we had a question about a particular spell, and while a couple of us were checking the official rules in books, both online and in hardback, one guy used ChatGPT. It gave him the wrong answer.
Answers are based on training data, but without the ability to know the difference between correct and incorrect information, as you say, it can come up with wrong answers.

With this game, it's probable that accurate information isn't available for free (i.e. it's behind paywalls). Or there's so little of it that incorrect information, house rules, and the like outweigh the amount of accurate info.

D&D has a lot of versions (5th edition so far). It's been through a lot of incarnations since the 70s, and some players are still playing the original editions and probably post about it on forums, asking questions and receiving different answers because D&D is an open-ended game, and not everything is or even can be in the rules.  But AI doesn't know the differences because it's not actual intelligence. It wouldn't know what a "version" is unless you grouped everything for it under a version.

My understanding is that AI is a pattern recognizer. That's awesome for certain situations. If you have thousands of X-rays showing what real cancer looks like, AI can achieve a pretty high detection rate. But that's based on someone helping it by training it with thousands (maybe more) of X-rays. In some cases, AI can even detect conditions better than doctors. But we wouldn't have AI deciding if we should have surgery or chemotherapy, at least not yet.
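The learn-from-labelled-examples idea above can be sketched in a few lines. This is a toy nearest-centroid classifier in plain Python, with made-up numbers standing in for image features; real diagnostic models are deep neural networks and vastly more complex, but the principle, averaging what labelled examples look like and matching new data against those patterns, is the same.

```python
def train(examples):
    """examples: list of (features, label). Returns the average
    feature pattern (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, features):
    """Label new data by whichever learned pattern it sits closest to."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, features))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Hypothetical training data: two invented features per "scan".
training = [([0.9, 0.8], "abnormal"), ([0.8, 0.9], "abnormal"),
            ([0.1, 0.2], "normal"), ([0.2, 0.1], "normal")]
model = train(training)
print(predict(model, [0.85, 0.75]))  # prints "abnormal"
```

The catch, as the paragraph says, is that the machine is only as good as the labelled examples a human fed it; it recognizes patterns without understanding them.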
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on June 09, 2025, 07:47:12 AM
After saying all that, I took your posts together and asked ChatGPT to write a snarky response addressing them all together. Take a gander at this! I'm ashamed to admit that it will probably replace my "job"! :-)

QuoteAh yes, the good old "AI is ruining everything" anthem. First, the student who used AI to dodge a test—brilliant move, right? Because nothing says "responsible adult" like outsourcing your problems to a robot and expecting sympathy. And of course, professors are suddenly AI detectives now, scanning for the slightest whiff of machine-generated prose. Next thing you know, they'll need a degree in AI forensics just to pass judgment.

Then there's the automated phone nightmare. Surprise! Machines can't actually understand humans yet. Who would've guessed? Pressing zero to get a human is basically the new "cheat code" in customer service. Meanwhile, those poor workers will probably lose their jobs because some AI can't quite handle a question on dental coverage. But hey, at least the AI doesn't ask for a coffee break.

And the artistic apocalypse! Publishers hiring AI to churn out cookie-cutter bestsellers while true artists with "unique awareness" get shoved aside. Because clearly, AI doesn't stand a chance against centuries of human creativity and emotional depth. But hey, why bother preserving "lasting" art when you can just mass-produce whatever sells this week? MacDonald's is the culinary king after all.

Look, AI isn't perfect and it definitely raises tricky questions. But let's not pretend it's some evil mastermind out to erase humanity's soul. Sometimes, the real problem is us—our impatience, shortcuts, and fear of change. Instead of blaming the tools, maybe we should figure out how to use them smartly. Or not. Either way, the future's coming whether we like it or not.

I told it to take that and write it like a 12-year-old girl. Apparently, AI doesn't think much of tween girls.

QuoteOMG, seriously? So first, the student tries to skip a test by sending an email that's totally AI-made? Like, why not just be honest? That's kinda lazy and dumb. And then the prof can tell it's AI? LOL, teachers are basically detectives now.

And the phone thing? Ugh, automated voices are the WORST! Press zero a million times just to talk to a real person? So annoying. Machines just don't get us, okay? I'd way rather talk to a real human who actually understands what I'm saying.

And then the whole art thing? People think AI is gonna ruin creativity and make everything the same? Pfft, AI can't have feelings or make cool, original stuff like real people can. Just cuz something's popular doesn't mean it's good, duh. Like, McDonald's isn't fancy food, and not every movie that's popular is good.

So yeah, AI might mess some things up, but we're the ones who have to use it right. Let's chill and not freak out too much. Humans still rock!

Oh, and finally, as Trump, in separate tweets:

QuoteStudents using AI to SKIP tests? LAZY and WEAK! Back in my day, you studied — not hid behind a robot. Professors can SPOT it a mile away. SAD! #NoCheating #MAGA

QuoteAutomated phones that NEVER understand you? TERRIBLE service, just like Biden's handling of everything. People want REAL HELP, not a computer reading scripts. Bring back HUMAN voices! #MakePhonesGreatAgain

QuoteArt and books copied by AI? TOTAL FAKE NEWS! Real artists have GENIUS and HEART — things AI will NEVER have. Unlike Sleepy Joe, art can't be faked. Sales don't mean QUALITY! #TrueArt

QuoteAI is a TOOL, not a REPLACEMENT. Use it SMARTLY, keep JOBS, keep CREATIVITY, and keep AMERICA GREAT! Biden's policies won't save you — we will! #USA #KeepAmericaSmart
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on June 09, 2025, 07:57:36 AM
As for your point about the lovely lady who helped, I agree that she probably would lose her job if the system they use were smarter. And you're right, software needs to be written for people. We shouldn't have to adapt even if most of the time we are. A lot of jobs might be replaced by AI, possibly even some software developers! I've seen calls for a GAI (guaranteed annual income) because there may be huge numbers of unemployed people coming. I think we've worried about this with every new technology, though. People will have to learn new jobs. I hate using self-checkout, but sometimes it is faster, especially when they give checkout cashiers fewer hours. I'm sure that's their goal. But some stores, I believe Costco, at least in the US (never been to one), got rid of the self-checkout machines because they cost a lot, broke down a lot, and customers complained.

But you remember when cashiers used to bag your groceries (or somebody else did)? Or somebody would fill your tank for you? I'm sure some lovely people lost their jobs because customers did it themselves.

Where it takes us, who knows?
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 10, 2025, 03:07:11 PM
If you search Google using 'danger' and 'AI' you'll get many sites. This is the one I read:

https://www.forbes.com/sites/jonathanwai/2024/10/09/the-12-greatest-dangers-of-ai/

Some things are quite dramatic, but this writer has spoken to elected officials about possible problems. (One of his ideas is for code to be checked for possible problems before it can be released. Good luck accomplishing that!) Some of the things that he mentions have already occurred. It's a fairly easy read and I'm sure you will find many other articles that are even more enlightening.

AI has many uses, but it's in the same category as TNT: where it is used needs care.

The little section about hacking websites and businesses indicates to me that the whole web can be brought down. Imagine what that would mean.

My doctor, whom I visited today, says she never uses AI to diagnose, but lets it listen to her interviews and write a report, which she always reads and edits.

Brave New World
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 14, 2025, 10:25:28 AM
A funny example of a failed AI response.

Cape Breton Island feels different from the rest of Nova Scotia, but not too much. Marilyn and I have visited a few times. But a spoof appearing in The Beaverton convinced Google and Meta AI that it had a different time zone. Simplest reason? AI apparently doesn't have a sense of humour. This is a short, funny CBC article.

https://www.cbc.ca/radio/asithappens/cape-breton-time-zone-ai-1.7559597
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 14, 2025, 10:30:13 AM
And I'm really interested in what AI would have done with this funny BBC April Fools 'news item'

search    'bbc spaghetti growing on trees'
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 17, 2025, 12:14:24 PM
And probably my last thoughts.

AI seems to have many meanings. In some cases there is a simple algorithm in place where directions are being followed. Sometimes written or spoken speech does a good impression of human speech patterns, though the information gathered may be from suspect sources. In chess or the game of Go, the computer has the advantage of being able to process multiple outcomes almost instantly. Those of us who played chess also had to process move sequences, but usually we could see patterns on the board that enabled us to select only a few lines to analyze. We knew certain positions were winning positions and tried to create these on the board. Humans cannot compete with infinite analysis.
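That exhaustive look-ahead can be shown with a toy minimax search. This sketch uses a much simpler game than chess, the subtraction game where players alternately take 1-3 counters and whoever takes the last one wins, but the brute-force method is the same: check every line of play to the end, with none of the pattern shortcuts a human relies on.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(counters):
    """Exhaustively search every line of play.
    Returns (move, wins) for the player to move with `counters` left."""
    for move in (1, 2, 3):
        if move == counters:
            return move, True   # taking the last counter wins outright
        if move < counters and not best_move(counters - move)[1]:
            return move, True   # this move leaves the opponent in a losing spot
    return 1, False             # every line loses; play on anyway

move, wins = best_move(21)
print(move, wins)  # prints "1 True": take 1, leaving the opponent a losing 20
```

The machine "knows" 21 is a winning position only because it checked every branch; a human who knows the game recognizes the pattern (leave a multiple of 4) instantly. In chess the tree is astronomically larger, which is why pure exhaustion needs machine speed.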

Learning computers and semi-sentience will bring who knows what.

And here's the final suggestion:

There was a tiny movie released eight years ago that is fascinating. Interestingly, it was criticized for any number of reasons, one being that countries wouldn't use this technology.

The weapon uses facial recognition, which some people have on their phone, small drones, which are easily made, and small but powerful explosives. No serious problems here.

Do an Internet search for Slaughterbots

(There is also a follow up movie later but I've never watched it.)
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 18, 2025, 11:11:28 AM
And one more last thought.

While waiting for Marilyn at the eye doctor's building, I managed to strike up a conversation with a man who had been fortunate enough to grow up in Hamilton, close to the time I was very young there. We had a grand time talking about the city and what we used to do in our youth. Lots of fun.

Some of his kids had been academics, so our topics wandered easily to AI writing essays. Only when I got home did I work out that essays at university will serve no function in the future. Originally the essay provided a way for profs to see whether a student could extract information from what was already known, create some novel idea, and present it in a clear and convincing manner.

Consider this analogy. Imagine you are taking a culinary course on baking pies. So you go to the store and buy a pie, and then present it as your finished assignment. This tells the teacher nothing about your ability to cook.

An essay in the future will tell us nothing about what your mind can see and do. We need to grade the mind, not some end product that tells us nothing. So here's what Harry predicts:

The essay will have to be replaced in the university as a grading device. However, how we assess the capabilities and knowledge of a student's mind will require some creative thought and innovations. That will be an interesting challenge.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on June 18, 2025, 08:04:21 PM
Quote from: buster on June 10, 2025, 03:07:11 PMIf you search Google using 'danger' and 'AI' you'll get many sites. This is the one I read:

To be fair, if you search 'danger' and 'bed', you'll find many sites. Here's a BMJ study called "The Dangers of Going to Bed" (https://www.bmj.com/content/2/4536/967). It's from 1947, though, so it might be out of date. ;)

Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on June 18, 2025, 08:04:52 PM
Quote from: buster on June 14, 2025, 10:30:13 AMAnd I'm really interested in what AI would have done with this funny BBC April Fools 'news item'

search    'bbc spaghetti growing on trees'

That's hilarious. I'd rather have the famed money tree, myself.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on June 18, 2025, 08:25:00 PM
Quote from: buster on June 18, 2025, 11:11:28 AMSome of his kids had been academics, so we wandered in our topics to AI easily writing essays.

If it can be written by AI, it can be detected as such by AI. There are already some paid services that let teachers do it. But you could enter shorter essays in the free version of ChatGPT and ask it if the essay is AI-written.

QuoteThe essay will have to be replaced in the university as a grading device. However, how we assess the capabilities and knowledge of a student's mind will require some creative thought and innovations. That will be an interesting challenge.

Essays are only one form of evaluating students and can be done at the same time as an exam. My history teacher in high school structured most of his tests like this: he'd give two statements to choose from; you picked one and then had to back it up with facts. The province mandated 30% for final exams, which also had a huge essay component in his class, but other than that, most of the mark was from those essays.

What would we be grading when we grade the mind? Does it include the stuff we can't remember consciously, but it's still there somewhere? Would it be passive? Would that be the ultimate form of teaching to the test? I honestly can't picture this replacing essays, multiple-choice, short answers and practical tests until AI can do all those things and not be visible doing it (to prevent cheating). At least not for a long while. But perhaps when AI has mapped every thought in an MRI scan...
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on June 18, 2025, 10:47:42 PM
Jason: "My history teacher in high school had most of his tests like this. He'd have two statements to choose from, you picked one and then had to back it up with facts."

I wasn't thinking about a high school evaluation. I was thinking mostly in terms of a senior university arts program, or courses where a student might be expected to come to grips with, say, an unusual event in history, or a novelist or poet who dominated a certain period of literary history. Analyzing why this occurred would take maybe weeks of research, reading, and thinking. Some of these things are still not fully understood.

The best students can come up with unique viewpoints. The weaker will show that they have at least a basic understanding of the problem. The essay is not written just to get a mark. In some cases this sort of research is the reason to be in university. This has been a key way to learn for generations. This is how experts came to be: through study, research and writing.

Even during my short lifespan, I've seen learning start taking second place to something called 'marks'. When I was first teaching in Hamilton I borrowed from another teacher a textbook on the Middle Ages. He could not believe I would read something like that if it weren't for a university credit. Learning to him was earning credits.

Anyway, profs have used essays for student evaluation. They were pretty good indicators. And the essay distinguished between different levels of achievement. There are other ways that are used such as small groups of students sitting around a table with the prof having a discussion about some philosophical conundrum where contributions are expected from all. Those who haven't worked or simply do not understand are easy to pick out.

My point is that the essay, heavily relied upon for evaluation, is going to be useless. This does not mean the essay itself is useless. I think it's a great tool for sharpening ideas.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: fox on July 06, 2025, 11:27:10 AM
I would have said in the past that an essay is a good way to evaluate a person's ability to use information creatively to tell a story or test a hypothesis. But in one of the courses I taught, called Fisheries Assessment and Management, I learned before the days of AI that it was too easy to plagiarize an essay on a relevant topic - from past ones written, from other students, or by purchasing one online. After a while I felt that I no longer had the ability to catch these violations, and which ones I caught was just a matter of luck. (Most of the cheating I caught in this course was from students lifting paragraphs from published papers, not the other kinds of plagiarism mentioned above.)

So I came up with a creative way to issue the assignment that would be much, much harder to plagiarize or buy. I called it "advances in ....". You would take a fisheries topic, identify five relatively recent advances in the science around that topic, indicate why they were advances, and review the five published papers where the advances came from. The paper also had to summarize the state of the art on that topic. It worked very well and probably cut the rate of plagiarism to zero or near zero. But I'm not sure that AI couldn't handle this; I never had to find out.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on July 14, 2025, 06:54:33 PM
Quote from: fox on July 06, 2025, 11:27:10 AMThe paper also had to summarize the state of the art on that topic. It worked very well and probably cut the rate of plagiarism to zero or near zero. But I'm not sure that AI couldn't handle this; I never had to find out.

Unfortunately, that is one of the things AI is excellent at. What it wouldn't be good at is writing the essay in the student's voice. But unless you have your students write several essays, and you don't have too many students or have an amazing memory, that would be hard to pick up on, I'd imagine. But AI does format its answers in a particular fashion, and if you use it enough, you might pick up on its voice. Handing out take-home assignments must be particularly awful for high school teachers. I would bet that more students are using it to cheat than at the university level. Could be wrong, though.

Have you used any AI bots, Fox? I've used ChatGPT occasionally but not the others. I did use Grok recently to ask if it was still racist. If you don't understand why I asked that, check some recent US news having to do with Elon Musk and Grok. The answers seemed to be too "woke," so Elon Musk made some changes to fix that, and the bot started acting racist with positive comments about Hitler. You'd have to read them yourself. I don't want to offend anyone.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: fox on July 29, 2025, 02:03:32 AM
Quote from: Jason on July 14, 2025, 06:54:33 PM....Have you used any AI bots, Fox? ....
I might have used ChatGPT once or twice, but the only AI I think I have been using is the AI you get on MacOS, iOS, and iPadOS.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on August 07, 2025, 03:53:36 PM
Another interesting unfortunate 'invasion' of the nonliving shows up in a CBC website article. It makes sense, but I didn't realize that bots were used as moderators for social media because of the huge numbers of participants, many of whom are also bots disguised as humans.

There are a number of people apparently who have been removed from sites by non-humans because of some perceived transgression, and phoning to complain gets you nowhere because it's impossible to connect to a human.

https://www.cbc.ca/news/canada/toronto/teacher-wrongly-accused-child-exploitation-meta-account-apology-1.7599595
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on August 09, 2025, 05:50:05 PM
I was one of those people who had my account disabled because of unclear reasons from Instagram, which also made me lose my Facebook account. Some of my friends were formerly Facebook friends, and their accounts don't let you send friend requests to them, so there are friends I'll never get back unless they look for me. Facebook and Instagram don't have numbers or emails you use to contact them when it happens. And new subscriber requirements make it extremely difficult to create a new account. They make you do a short video to see if your picture matches a disabled account, as well as other measures. I asked ChatGPT how to bypass them so I could create a new account. Hard to bypass a video unless I got somebody else to pose for me!

I finally got back on with a new account by using an existing account that I created for a family member who ended up not using it and passed. Not the way I wanted to do it, but I was able to change the name.

At least I have friends outside social media. There are a lot of shut-ins who don't, or people with mental illness who go on Facebook to get peer support from others with similar conditions. And they're just cut off, just like that, probably by AI and a faceless corporation that won't help you even if you could find a flesh-and-blood person to ask.

Btw, it's not just mods that are AI-controlled now, tech companies are using AI as the front line of support. You click on the chatbot to ask questions, and many require you to use the bot first before you can even try to reach a real person. It can be frustrating. Most of the time, the questions I have can't be answered by a bot. It's because it only searches the help files to answer your question, and I've already gone through the help files. If it was in there, I wouldn't be trying to find a real person.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on August 11, 2025, 12:03:00 PM
Reading the local edition of our newspaper yielded two more examples of unpleasantness.

a) There is a picture, taken from the air and appearing on social media, of a massive fire in British Columbia near a highway. The authorities warned people to disregard the fire, because it does not exist. The picture was created by AI.

b) AP reported a study done using fake 13- and 14-year-old teens in contact with ChatGPT, getting information on drugs, the best ways to commit suicide, and even how to write notes to leave for the parents. This age group seems to be the most committed to asking the program for advice.

A simple search for 'AP' and 'ChatGPT' will produce the article.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on August 14, 2025, 01:29:18 PM
Interesting article about AI in The Examiner today by Gwynne Dyer about the unforeseen consequences. Found this copy in the Maritimes that might let you in:

https://www.saltwire.com/cape-breton/opinion-cape-breton/gwynne-dyer-investigating-the-looming-artificial-intelligence-crash

Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on August 30, 2025, 08:29:17 PM
I wish people would stop referring to it as artificial intelligence. It gives it more status than we should, and in the worst possible way. It's not smarter than we are. It's a reflection of what we are, a conglomeration of what we know, without ethics, without humanity and prone to "hallucinations". I think it's valuable. I'm using it for distilling information, but I wouldn't ask it for marriage advice. It's not a counsellor, it doesn't know your life and it doesn't care about you. And it's engineered to compliment you and pat you on the back for every stupid thing you might think of.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on September 26, 2025, 02:01:17 PM
From a variety of sources I picked up these as possible outcomes from our introduction of AI into the world.

1. The gigantic companies such as Alphabet, Microsoft, Nvidia, Tesla, and Walmart will become richer and more powerful. This will allow them to invest in and develop AI in ways that second- and third-level companies will not be able to. And this has been true historically, as minor industries are absorbed or eliminated, leaving fewer and fewer players in any industry.

2. One of the best ways for a company to economize is to replace humans, even in the computer business. Coding is not a sure thing now; AI can do much of it at astonishing speeds. Many of you do not talk to a human but buy through Amazon. Coffee shops encourage us to use a screen to buy coffee.

And the result of this will be far fewer jobs.

3. The employment available will often be low-paying, and the rich will continue to get richer by moving money about. So we will devolve into a Middle Ages social structure, with an extremely rich and small segment and a large group with little money.

So Buster's prediction is fewer, larger, and more powerful companies, more unemployment, and a pyramid social structure in society. The storms, droughts and fires are only the aperitif.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: ssfc72 on September 27, 2025, 09:05:44 AM
Sounds like another Netflix series plot. :-)


Quote from: buster on September 26, 2025, 02:01:17 PMSo Buster's prediction is fewer, larger, and more powerful companies, more unemployment, and a pyramid social structure in society. The storms, droughts and fires are only the aperitif.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on September 28, 2025, 08:58:14 AM
On the day I posted that, Bill, I got three emails from the bank. They are trying to get me to do digital banking rather than going to the branch. I do some banking on the Internet, but I like a human's ability to help me if there is a difficulty. I do not use my phone for banking; for large transactions of any sort I have an advisor whom I can visit or phone.

The bank of course wishes to cut staff and close branches but keep our business. So this was part of the third email of the day:

We noticed you haven't been using digital banking.

And then it assured me they would help me with the transition.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on October 07, 2025, 02:56:00 AM
Quote from: buster on September 28, 2025, 08:58:14 AMOn the day I posted that Bill, I got three emails from the bank. They are trying to get me to do digital banking rather than going to the bank. I do some banking on the Internet, but I like a human's ability to help me if there is a difficulty. I do not use my phone to bank. For large transactions of any sort I have an advisor who I can visit or phone. But I do not use my phone for digital banking.

Do you mean online banking? I don't know what digital banking is. While I agree that banks shouldn't be requiring their customers to use online banking, I'm confused about you saying you like to have an advisor you can phone. You won't lose that with online banking. You can still get support. You can still visit a branch. Am I missing something?

I don't imagine branches in cities will close, although they have in rural areas and small towns. But businesses will still need to do their nightly cash deposits, and they will expect staff to be there to negotiate loans. I suppose the latter can technically be moved online, but banks lending money (and mortgages obviously) is their bread and butter. The first bank to do that in a city like ours will immediately start losing business, so none of them are going to do that anytime soon.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on October 07, 2025, 03:19:25 AM
Quote from: buster on September 26, 2025, 02:01:17 PM3. The employment available will often be low pay, and the rich will continue to get richer by moving money about. So we will devolve into a Middle Ages social structure, with an extremely rich and small segment, and a large group with little money.

So Buster's prediction is fewer, larger, and more powerful companies, more unemployment, and a pyramid social structure in society. The storms, droughts and fires are only the aperitif.

Did you become an overnight socialist, Buster? ;) It sounds suspiciously like you think capitalism might be a bad thing. Careful, you might end up on a watchlist! Anyway, I think we're already there and have been for some time regarding the pyramid structure and the gap between rich and poor. The rich are obscenely rich, and they are making it off the backs of the poor.

I agree with most of what you said regarding jobs, etc. But I disagree with the part about coding, at least in the near future. Remember that AI here means an LLM (large language model). Its input comes from freely accessible sources online, and there are plenty of examples of bad code there, too. I'm not sure commercial companies are posting their source code.

And as any programmer knows, the bulk of coding time is spent fixing bugs. And oh boy, is AI going to introduce a lot of bugs. Not necessarily because of mistakes, but because it's hard to explain software in words. And developers, especially those making code accessible online, are notorious for not commenting well, which is critical when debugging code or updating it years later. If AI copies that, it will pantomime that behaviour and produce code that is very hard to debug.

And harder yet is to develop good user interfaces for humans. AI isn't going to understand how humans perceive simplicity and what is easy to us.

Of course, it might be good at finding bugs, too. But unless they can upload the knowledge of programmers themselves, I don't see AI replacing developers anytime soon. Simple programs, sure. But try using an LLM to develop an office suite or an antivirus, and you'll quickly discover its limits.

I'm not saying it won't happen eventually, but I think we're decades away from that.

I think part of the confusion about what AI can do is that people don't understand that AI isn't "thinking". AI is like autotext on phones, on steroids. It understands how words, sentences and paragraphs are put together. And it uses neural networks to learn more. It doesn't understand what any of it means. And it is prone to "hallucinations," which is something humans can account for in many fields. But not in coding.
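The "autotext on steroids" analogy can be made concrete with a toy next-word predictor. The sketch below is deliberately crude (real LLMs use transformer networks over tokens, not word bigrams; the function names here are just illustrative), but it shows the core idea: the model only tracks which word tends to follow which, with no notion of what any of it means.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, n=8, seed=0):
    """Extend `start` by repeatedly picking a plausible next word."""
    random.seed(seed)
    out = [start]
    for _ in range(n):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation; the model is stuck
        out.append(random.choice(candidates))
    return " ".join(out)

model = train_bigrams("the cat sat on the mat")
print(generate(model, "cat", n=2))  # → cat sat on
```

Scale the training text up to most of the Internet and the statistics over many-word contexts instead of pairs, and the continuations become fluent, which is exactly why fluency gets mistaken for understanding.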

If we think that sentience is an emergent property of language, then we may be in big trouble. But I think it's the other way around based on what I've read.

When I use AI for answers, I'm sure to be polite, though. When we have the robot uprising, I want it to remember that I'm one of the good guys. ;D
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on October 07, 2025, 03:20:54 AM
Damn, I didn't mean to write an essay before. But with university, I have no time to edit my verbiage. I have to save that for my assignments!
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: BusterE on October 09, 2025, 10:19:59 PM
The process of coding by non-humans is complex but interesting. If you wish to peek into the near future, or our attempts to predict the future (always iffy), you could read this:

https://sdh.global/blog/ai-ml/will-ai-replace-software-engineers-heres-what-the-data-really-shows/
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on October 12, 2025, 10:43:21 PM
Quote from: BusterE on October 09, 2025, 10:19:59 PMThe process of coding by non-humans is complex but interesting. If you wish to peek into the near future, or our attempts to predict the future (always iffy), you could read this:

https://sdh.global/blog/ai-ml/will-ai-replace-software-engineers-heres-what-the-data-really-shows/

Thanks for sharing. Interesting article. I agree that predictions are usually iffy. And there are some assertions at the beginning of the article that aren't evidentially strong:

QuoteNearly 30% of 550 surveyed software developers already believe their development work will be replaced by artificial intelligence within the foreseeable future.

That's a pretty small sample. And it's basically a poll. Who are these developers? What kind of code do they develop? Is AI in the workplace? What do they know about AI that they didn't glean from mainstream media? So many questions.

QuoteCurrent job market indicators reflect this emerging reality. IT sector unemployment spiked from 3.9% to 5.7% in a single month, surpassing the US national average of 4% as of January 2024.

One month? Really? How does that compare to unemployment in other sectors? The US is experiencing an economic slowdown. The IT sector includes much more than software developers. Is the software development sub-category experiencing similar numbers?

QuoteA comprehensive survey of 9,000 software engineers revealed that 90% consider job hunting significantly more challenging now compared to 2020.

This could mean something, but it could still be the economic slowdown. Many of the largest companies are shedding employees who used to do simple tasks or using AI as the first line of tech support.

Much of the article confirms what I said, particularly that AI-produced code has a large error rate (30%). Developers will have to oversee the code that AI produces. What will likely happen, and is happening, is that developers will use AI to assist in their work. The author hit that on the nose. Maybe in time, AI will replace developers, which will be sad since it's such a well-paying job.

As you suggest, predictions are iffy. Let's lay down our bets, and if we're still around, then the winner can collect.  ;)

Thanks for sharing, Buster. It was an interesting article.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on October 14, 2025, 06:43:18 PM
Today's Examiner had a relevant article (https://www.thepeterboroughexaminer.com/thestar/business/computer-science-grads-used-to-be-a-hot-commodity-now-they-fear-theyre-being-replaced/article_02ec5121-e074-5b0f-9542-0e30a41329af.html?gift=1&gift_token=765085e7-3e7e-49b1-bcf7-1e8c38a0afa1&token=eyJhbGciOiJSUzI1NiIsImtpZCI6InNpZ24tZ2lmdC1saW5rLWtleSJ9.eyJ1cmwiOiJodHRwczovL3d3dy50aGVwZXRlcmJvcm91Z2hleGFtaW5lci5jb20vdGhlc3Rhci9idXNpbmVzcy9jb21wdXRlci1zY2llbmNlLWdyYWRzLXVzZWQtdG8tYmUtYS1ob3QtY29tbW9kaXR5LW5vdy10aGV5LWZlYXItdGhleXJlLWJlaW5nLXJlcGxhY2VkL2FydGljbGVfMDJlYzUxMjEtZTA3NC01YjBmLTk1NDItMGUzMGE0MTMyOWFmLmh0bWw_Z2lmdD0xJmdpZnRfdG9rZW49NzY1MDg1ZTctM2U3ZS00OWIxLWJjZjctMWU4YzM4YTBhZmExIiwiaWF0IjoxNzYwNDgyMDE0LCJleHAiOjE3NjA3NDEyMTR9.uFT-3vofpxrMDkLDiKJM7wUElfyohG3xFJCaBlXVTAjMxy6ZKYLsmhEV0TS_B1WpFiWAub31X7uLODuktvfYkW8ZLh5ypKOkPBDLd6PhVtmU6zxpOxtuvxCtiNvOs-TlcmNjPuZSA-Zs37yQpKncjFH41tPSyfMhVmaPW4gRCQ_eRTpXEg4-GR5IBqQvN-3gchFg_Hpo8dW1IbU6DQz3KKLKxvZYl8h349sMv3dZ156ns83uVvJeENfanQuVW26ae6TWmAMcqiXU00MabY21SrTUyiwDmnt97LYF0koGd-FYZmqWo_K3G7PMxi3snz-GWTAN7kgVe6T7JKmgh_bb_Q).

It makes me think I made a good choice in pursuing my degree in Health Science rather than Computer Science. There seems to be a glut of software developers. And employers using AI to screen resumes and cover letters is disquieting. It must be hard for your resume to be looked at if you don't know the exact keywords to get past the AI (there are some, apparently).

Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on October 20, 2025, 06:22:37 PM
Jason once commented on the error of calling AI intelligent. Its stupidity has shown up quite a bit in my information searches lately, and these are the sorts of things I don't remember being wrong before, at least not so often. And I always try to avoid any click that will include AI in anything I do on a computer. (Human to the bitter end.)

Example one on Google: I asked the simple question 'When will such and such a program release series 3?' The surprising answer was that this program would release series two in August, which helped me not at all because I'd watched series two in the spring already. A notice said the answer was generated by AI.

Example two, also on Google: There are two shows released a month apart in 2023 called The Diplomat. My searches kept going to the US Netflix series. So I typed "reviews for the diplomat BBC British series". That seems pretty clear to me, but it insisted on taking me to reviews of the US production.

I can't remember Google being that stupid. Maybe there is a switch that insists on using AI to formulate answers rather than some of the older algorithms that took you to the places that matched the limited clues given.

But at least with AI we don't have to wait as long for errors as we do with humans. That's one good thing.
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on October 20, 2025, 06:42:50 PM
I was so impressed by the post that I did some searches and found this:

https://www.tomshardware.com/tech-industry/artificial-intelligence/cringe-worth-google-ai-overviews
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: buster on October 20, 2025, 07:43:09 PM
This solution seems to work, but not always. I want to get rid of Google searches starting with an AI Summary, where information can be gleaned by AI from unreliable sources.

So at the end of my search entry I put (space) -noai

Generally this jumps right past the summary to the pages found by the search engine.
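As a sketch of what that trick amounts to: appending ` -noai` uses Google's ordinary exclusion operator, telling it to omit pages containing the token "noai", which as a side effect is widely reported to suppress the AI Overview block. The snippet below (function name and structure are just illustrative, assuming the behaviour described above) builds such a search URL.

```python
from urllib.parse import urlencode

def search_url(query: str, no_ai: bool = True) -> str:
    """Build a Google search URL, optionally appending the '-noai' term.

    '-term' is Google's standard exclusion operator; excluding the token
    'noai' is the trick described above for skipping the AI Summary.
    """
    if no_ai:
        query = f"{query} -noai"
    return "https://www.google.com/search?" + urlencode({"q": query})

print(search_url("reviews for the diplomat BBC British series"))
# → https://www.google.com/search?q=reviews+for+the+diplomat+BBC+British+series+-noai
```

Since this piggybacks on a side effect rather than a documented setting, it is plausible it works only intermittently, which would match the "but not always" observation.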
Title: Re: Simple Known Dangers with AI, and Possible Future Dangers Too.
Post by: Jason on October 31, 2025, 09:20:26 AM
Interesting article. Treating training material from across the web as equally authoritative is a really brain-dead idea, especially for the most dangerous kind of bad information: health. I would say that of the webpages out there, probably 90% of the health-related sites are misinformation and often disinformation.

As far as search engines go, I don't use Google unless I can't find what I'm looking for using a different engine. I figure they already have enough of my personal data. They also have singularly destroyed news media and given nothing back (because of stealing the advertising market). However, I will give them kudos for agreeing to pay an organization of news media in Canada, so maybe it won't all die. Facebook still refuses.