[conspire] Discussion: Using LLMs the Right Way: 10/1/2025 7pm Eastern Daylight time
Don Marti
dmarti at zgp.org
Fri Oct 3 16:56:26 PDT 2025
On 10/3/25 13:36, Deirdre Saoirse Moen wrote:
> LLMs are a parlor game that use far more resources than they can provide value to society. They don’t “work” in the sense that they are not intelligent, and are not true AI. At *best* you will get a junior dev out of them, but a junior dev who makes mistakes like deleting code that shouldn’t be deleted. Many people do not have enough exposure to LLMs to realize they are basically attempting to recreate that kid who constantly screws up then pretends nothing bad happened, then wonders why you stopped talking to them.
I was talking about a web publishing/advertising thing that's fairly
complicated and someone said they would use ChatGPT to get some
background explanation of it. That seemed like a bad place to start,
for two reasons.
1. It's about a market in which a lot of people have an incentive to
persuade others that things are a certain way, so there's a lot of sales
copy from one point of view.
2. The technology is changing fairly quickly, so most of the content out
there would apply to older code or practices.
> That said, I did find it useful to be able to interactively ask questions that were specific to $FIRM’s processes and have a live answer that was usually at least close enough to correct I could find the answer from where it led me.
Now I'm wondering about the categories of things for which an LLM would
be useful. Syntax for doing something that appears in publicly
available scripts might be a good example -- I have been able to "vibe
code" some correct Python code to extract material from an HTML file
with Beautiful Soup, and get the jq(1) syntax right for getting a subset
of the info from a big JSON file. (For both of those I could test the
output, though.)
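The JSON-subset task is a good illustration of why that kind of output is
testable. A minimal sketch (the data and field names here are made up; the
jq filter in the comment is one plausible equivalent) using Python's
standard json module:

```python
import json

# Hypothetical sample data -- field names invented for illustration.
doc = json.loads("""
{"items": [
  {"name": "alpha", "id": 1, "extra": "x"},
  {"name": "beta",  "id": 2, "extra": "y"}
]}
""")

# Roughly equivalent to the jq filter:  .items[] | {name, id}
subset = [{"name": it["name"], "id": it["id"]} for it in doc["items"]]
print(json.dumps(subset))
# -> [{"name": "alpha", "id": 1}, {"name": "beta", "id": 2}]
```

The point is that whether an LLM wrote the list comprehension or the jq
one-liner, you can run it against known input and check the result.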