[conspire] Discussion: Using LLMs the Right Way: 10/1/2025 7pm Eastern Daylight Time

Deirdre Saoirse Moen deirdre at deirdre.net
Wed Oct 1 23:51:29 PDT 2025


On Oct 1, 2025, at 23:25, Steve Litt <slitt at troubleshooters.com> wrote:
> 
> If used right, I think an LLM is a big benefit to an excellent
> programmer. But the minute the LLM does more than advise, there's a big
> potential for trouble. It's only 2025, and LLMs aren't good enough to
> be in the driver's seat.

Disagree with your first sentence, agree with the second, and feel you are far too charitable about the third.

I like water. I like natural resources. I like them far, far more than the negligible benefits LLMs offer.

> Let's take "in the driver's seat" literally for a second. You know why
> I'm not against self-driving cars? I'm not against them because a
> self-driving car, with all its bugs, is still much less likely to crash
> than 98% of the imbeciles driving regular cars. Sure, self-driving cars
> will cause crashes, but they'll prevent even more crashes related to
> idiot drivers.

And then there’s idiot programming of cars, which should *stop* when it doesn’t understand a situation rather than put a passenger in danger in a situation where the CHP needs to call dispatch and take over.

https://www.mv-voice.com/video/2025/09/09/officer-steers-wayward-waymo-to-safety-after-highway-fire/

…as a recent example (and from a firm that does it more right than anyone).

Deirdre