[conspire] Discussion: Using LLMs the Right Way: 10/1/2025 7pm Eastern Daylight Time

Don Marti dmarti at zgp.org
Wed Oct 1 15:59:19 PDT 2025


On 10/1/25 2:56 PM, Deirdre Saoirse Moen wrote:

> So I want to talk about the "risk" (lol) of dev jobs being taken over by AI, courtesy of a recent consulting contract that didn't go well. Suffice to say that, if it can't get *this* simple task correct, then it's going to just wreak more havoc than it could possibly solve.

Remember what Linus Torvalds wrote about microkernels?

"Speed matters a lot in a real-world operating system, and so a lot of 
the research dollars at the time were spent on examining optimization 
for microkernels to make it so they could run as fast as a normal 
kernel. The funny thing is if you actually read those papers, you find 
that, while the researchers were applying their optimizational tricks on 
a microkernel, in fact those same tricks could just as easily be applied 
to traditional kernels to accelerate their execution. In fact, this made 
me think that the microkernel approach was essentially a dishonest 
approach aimed at receiving more dollars for research."

https://www.oreilly.com/openbook/opensources/book/linus.html

I have a feeling that we're going to end up somewhere similar with LLMs. 
The tricks that people are coming up with to keep LLMs from introducing 
severe breakage are also going to improve software quality in general, 
and the actual LLM will turn out to be like the pumpkin in the pumpkin 
spice.
