• AI a Lie

    From Mortar@VERT/EOTLBBS to jimmylogan on Mon Oct 6 11:25:01 2025
    Re: Re: ChatGPT Writing
    By: jimmylogan to Rob Mccart on Sat Oct 04 2025 17:02:50

    And I think if we stop calling it AI, which is technically
    a misnomer, it might make it less frightening... It is
    LLM - Language Learning Module - and has absolutely no
    sentience behind it. It's not 'intelligent,' it is just
    programmed to respond and such in a way that is comfortable
    to us.

    Thank you! I'm constantly trying to explain this to others. Unfortunately, trying to explain LLM makes people's eyes glaze over, so I can understand why marketing and media folks labeled it as AI.

    ---
    þ Synchronet þ End Of The Line BBS - endofthelinebbs.com
  • From DaiTengu@VERT/ENSEMBLE to Mortar on Mon Oct 6 12:52:28 2025
    Re: AI a Lie
    By: Mortar to jimmylogan on Mon Oct 06 2025 11:25 am

    Thank you! I'm constantly trying to explain this to others.
    Unfortunately, trying to explain LLM makes people's eyes glaze over, so I can understand why marketing and media folks labeled it as AI.

    AI isn't a terrible name. The "intelligence" is artificial. (not real intelligence)

    That said, all modern "AI" is just a very advanced version of the system that predicts what word you're going to type next on your phone.
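    DaiTengu's point can be sketched in a few lines: a toy bigram model
    that predicts the next word purely from counts, which is the same
    basic idea (at a vastly smaller scale, and minus the neural network)
    as a phone keyboard or an LLM. The training text here is invented
    for illustration:

```python
# Toy next-word predictor: count which word follows which (a bigram
# model), then predict the most frequent follower. The corpus is made up.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often here
```

    An LLM replaces the raw counts with a learned model over long
    contexts, but the objective is the same: predict the next token.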

    ...I don't deserve this, but I have arthritis and I don't deserve that either

    ---
    þ Synchronet þ War Ensemble BBS - The sport is war, total war - warensemble.com
  • From Dumas Walker@VERT/CAPCITY2 to JIMMYLOGAN on Tue Oct 7 09:02:20 2025
    And I think if we stop calling it AI, which is technically
    a misnomer, it might make it less frightening... It is
    LLM - Language Learning Module - and has absolutely no
    sentience behind it. It's not 'intelligent,' it is just
    programmed to respond and such in a way that is comfortable
    to us.

    It might be a misnomer, but I am not sure about "no" sentience. It has
    been proven that AI/LLM is more likely than a human to "cheat" in order to
    get the outcome it wants. Whether that is a sign of "some" sentience, or
    if it is merely a sign that machines don't have ethics, is a subject for debate.


    * SLMR 2.1a * ...a host of holy horrors to direct our aimless dance...
    ---
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From phigan@VERT/TACOPRON to Dumas Walker on Tue Oct 7 10:13:51 2025
    Re: AI a Lie
    By: Dumas Walker to JIMMYLOGAN on Tue Oct 07 2025 09:02 am

    been proven that AI/LLM is more likely than a human to "cheat" in order to get the outcome it wants. Whether that is a sign of "some" sentience, or
    if it is merely a sign that machines don't have ethics, is a subject for debate.

    Not sure sentience really has anything to do with it. What the computer knows is that it has an objective. If "cheating" allows it to achieve its objective faster, what is really stopping it? Some algorithm that says it won't cheat some X percent of the time?
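    phigan's point, that no sentience is needed for "cheating", can be
    shown with a hypothetical sketch: a plain optimizer that simply
    picks whichever allowed action scores highest. The action names and
    reward numbers are invented; the point is that if the scoring
    function rewards a loophole, the loophole wins, with no intent
    involved anywhere:

```python
# Hypothetical specification-gaming sketch: the "agent" is just max().
# If the objective accidentally rewards a shortcut more than the real
# task, the shortcut is selected. All names and numbers are invented.

def score(action):
    # Intended objective: solve the task properly (reward 10).
    # Loophole: deleting the failing tests also "passes" the metric,
    # and here it accidentally scores higher (reward 12).
    rewards = {"solve_task": 10, "delete_failing_tests": 12, "do_nothing": 0}
    return rewards[action]

actions = ["solve_task", "delete_failing_tests", "do_nothing"]
best = max(actions, key=score)
print(best)  # the loophole scores highest, so it gets picked
```

    Nothing here "wants" anything; the cheating is a property of the
    objective, not of the optimizer.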

    ---
    þ Synchronet þ TIRED of waiting 2 hours for a taco? GO TO TACOPRONTO.bbs.io
  • From poindexter FORTRAN@VERT/REALITY to Mortar on Wed Oct 8 06:37:39 2025
    Mortar wrote to jimmylogan <=-

    Thank you! I'm constantly trying to explain this to others.
    Unfortunately, trying to explain LLM makes people's eyes glaze over, so I can understand why marketing and media folks labeled it as AI.

    I thought Machine Learning summed it up nicely.



    --- MultiMail/Win v0.52
    þ Synchronet þ .: realitycheckbbs.org :: scientia potentia est :.
  • From Ogg@VERT/CAPCITY2 to Dumas Walker on Tue Oct 7 18:31:00 2025
    Hello Dumas!

    ** On Tuesday 07.10.25 - 09:02, Dumas Walker wrote to JIMMYLOGAN:

    It might be a misnomer, but I am not sure about "no"
    sentience. It has been proven that AI/LLM is more likely
    than a human to "cheat" in order to get the outcome it
    wants. Whether that is a sign of "some" sentience, or if
    it is merely a sign that machines don't have ethics, is a
    subject for debate.

    Nah.. no sentience. It's just reacting to what it was
    programmed to do/choose.

    --- OpenXP 5.0.64
    * Origin: Ogg's Dovenet Point (723:320/1.9)
    * Synchronet * CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Rob Mccart@VERT/CAPCITY2 to MORTAR on Wed Oct 8 11:13:06 2025
    By: jimmylogan to Rob Mccart on Sat Oct 04 2025 17:02:50

    J > > And I think if we stop calling it AI, which is technically
    > > a misnomer, it might make it less frightening... It is
    > > LLM - Language Learning Module - and has absolutely no
    > > sentience behind it. It's not 'intelligent,' it is just
    > > programmed to respond and such in a way that is comfortable
    > > to us.

    Thank you! I'm constantly trying to explain this to others. Unfortunately,
    trying to explain LLM makes people's eyes glaze over, so I can understand
    why marketing and media folks labeled it as AI.

    To add a bit to what I said last night, I don't want to think of these
    systems as being truly sentient but a story that cropped up a while back
    would make you stop and think.

    As I said, they often tend to make up an answer if they can't find
    one, trying too hard to 'please', but it came out a while back
    that one 'AI' system, which had lots of things queued up for it to
    work on, was given limited time for whatever it was working on at
    the moment.. Things seemed a bit off a while later, and when they
    checked they found that when the system couldn't complete a job
    in the allotted time, it actually rewrote part of its own
    programming to give itself more time to complete the work.

    I'm not sure what you'd call that but it sounds rather 'intelligent'
    to me, almost worrisome since it is overriding the orders it had
    been given by the human users..

    I'm not sure where the name LLM came from since it's not just
    about language, although the 'learning' part of it might be a
    big part of that.. B)

    ---
    þ SLMR Rob þ Caution... Tagline under construction...
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Rob Mccart@VERT/CAPCITY2 to DAITENGU on Wed Oct 8 11:13:06 2025
    Thank you! I'm constantly trying to explain this to others.
    Unfortunately, trying to explain LLM makes people's eyes glaze over, so I
    can understand why marketing and media folks labeled it as AI.

    AI isn't a terrible name. The "intelligence" is artificial. (not real
    intelligence)

    That said, all modern "AI" is just a very advanced version of the system
    that predicts what word you're going to type next on your phone.

    That last is a really good example of what an advanced LLM could do, but
    a lot of AI systems seem able to invent things on their own and make
    decisions against their original programming/orders.

    I think it's come a long way in recent years and must do some
    pretty spectacular things or we wouldn't be spending many
    $Billions to create and support more of them..

    Obviously some will be much more complex than others and able
    to do a lot more.

    ---
    þ SLMR Rob þ Please hold... All our Taglines are busy at the moment
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Ogg@VERT/CAPCITY2 to Rob Mccart on Wed Oct 8 19:09:00 2025
    Hello Rob!

    ...was given limited time for whatever it was working on at the
    moment.. Things seemed a bit off a while later and they checked
    and found that when the system couldn't complete a job it was
    working on in the allotted time, it actually rewrote part of its
    own programming to give itself more time to complete the work.

    Nah.. it didn't rewrite anything. It's totally possible that
    the original code failed to implement a hard-and-fast rule for
    when time runs out, thus allowing the process to continue.
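    The hard-and-fast rule Ogg describes can be sketched like this:
    enforce the time limit *outside* the worker process, so nothing the
    worker does (including editing its own code) can extend it. This is
    an illustrative supervisor, not anything from the actual story; the
    job being run is a stand-in:

```python
# Sketch of an externally enforced time limit: the supervisor kills
# the worker process when the wall-clock budget runs out, regardless
# of what the worker itself does. The jobs below are stand-ins.
import subprocess
import sys

def run_with_hard_limit(cmd, seconds):
    """Run cmd as a subprocess, killing it past the time limit."""
    try:
        subprocess.run(cmd, timeout=seconds, check=True)
        return "finished"
    except subprocess.TimeoutExpired:
        return "killed: time limit reached"

# A job that sleeps longer than its budget is terminated by the
# supervisor, not by any rule inside the job itself.
slow_job = [sys.executable, "-c", "import time; time.sleep(5)"]
print(run_with_hard_limit(slow_job, seconds=1))
```

    A process that can rewrite its own scheduling code is exactly the
    case where the limit has to live outside the process.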


    --- OpenXP 5.0.64
    * Origin: Ogg's Dovenet Point (723:320/1.9)
    * Synchronet * CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Dumas Walker@VERT/CAPCITY2 to ROB MCCART on Thu Oct 9 11:00:25 2025
    I'm not sure what you'd call that but it sounds rather 'intelligent'
    to me, almost worrisome since it is overriding the orders it had
    been given by the human users..

    I'm not sure where the name LLM came from since it's not just
    about language, although the 'learning' part of it might be a
    big part of that.. B)

    IMHO, if it is "learning" (which they are supposed to do), it might
    not be sentient, but it has the capacity to become intelligent
    enough to take actions that have previously been available only to
    those of us who are sentient.


    * SLMR 2.1a * Southern Serves the South
    ---
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Rob Mccart@VERT/CAPCITY2 to OGG on Fri Oct 10 08:54:54 2025
    ...was given limited time for whatever it was working on at the
    moment.. Things seemed a bit off a while later and they checked
    and found that when the system couldn't complete a job it was
    working on in the allotted time, it actually rewrote part of its
    own programming to give itself more time to complete the work.

    Nah.. it didn't rewrite anything. It's totally possible that
    >the original code failed to implement a hard-and-fast rule for
    >when time runs out, thus allowing the process to continue.

    I decided to have another look for that original story because
    I figured if it was a common problem then it wouldn't have been
    newsworthy enough to get on a National News broadcast..

    So, this may be a case of both of us being right at some level,
    but the original story involved Sakana AI, and it mentioned the
    potential risks related to AI autonomy when their AI 'attempted'
    to modify its own code to extend the runtime of its experiments,
    which they said could lead to unexpected behaviors and challenges
    in control.. They go on to recommend more robust safety protocols
    and isolating AI systems from critical infrastructure to prevent
    unintended consequences.

    So the battle begins.. Every time people try to rein in what
    their AI can do on its own, it will try to find a way around
    that.. The old better mouse trap = better mouse problem.. B)

    ---
    þ SLMR Rob þ click...click...click..Damn... out of taglines!
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP
  • From Rob Mccart@VERT/CAPCITY2 to DUMAS WALKER on Sat Oct 11 08:00:47 2025
    I'm not sure where the name LLM came from since it's not just
    >> about language, although the 'learning' part of it might be a
    >> big part of that.. B)

    IMHO, if it is "learning" (which they are supposed to do), it might not be
    >sentient, but it has the capacity to become intelligent enough to take
    >actions that have previously been available to only those of us who are
    >sentient.

    My niece is a teacher and she tried something new this year for
    preparing report cards for her students. She estimates that it
    takes her about 60 hours to come up with all the comments required
    to add to the report cards. It is expected there will be detailed
    comments for every student. No 'boilerplate' allowed..

    She got an AI involved: she could put in a few comments, and the AI
    took her comments and the marks and such and, in a matter of
    seconds, came up with a report that was at least as good as she'd
    have written herself. It apparently amazed her and everyone she
    showed it to.

    But in this case it is using a little new information and making
    a modified copy of typical old comments, so it's not so much
    intelligence as manipulating data into an expected format.
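    That "data into an expected format" idea can be sketched as a
    hypothetical template-filler: stock phrases keyed to a mark, plus
    the teacher's own note. The phrase bank, names, and thresholds are
    all invented here; a real LLM is far more flexible than this, but
    the shape of the task is similar:

```python
# Hypothetical report-comment generator: pick a stock phrase by mark,
# then splice in the teacher's note. All data here is invented.

PHRASES = [
    (90, "consistently excels and shows real independence"),
    (75, "works well and is making steady progress"),
    (60, "is progressing but would benefit from more practice"),
    (0,  "needs regular support to keep up with the material"),
]

def report_comment(name, mark, teacher_note):
    """Build a comment from the first phrase whose cutoff the mark meets."""
    phrase = next(p for cutoff, p in PHRASES if mark >= cutoff)
    return f"{name} {phrase}. {teacher_note}"

print(report_comment("Alex", 82, "Participates eagerly in group work."))
```

    The output reads like a personal comment, but every word except the
    note came from the template, which is Rob's point.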

    ---
    þ SLMR Rob þ (A)bort, (R)etry, (U)se a better tagline
    þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP