[conspire] Discussion: Using LLMs the Right Way: 10/1/2025 7pm Eastern Daylight time

Rev Anon Rev_Anon at Atheist.com
Fri Oct 3 20:05:59 PDT 2025


I was using Rufus to find something and it came up with stuff totally 
unrelated to the query.  So I then asked whether Rufus was a doofus.  
The response was stunning: it actually acted as if I had insulted it.

On 10/3/25 17:17, Ivan Sergio Borgonovo wrote:
> Sometimes I feel your graciousness is a bit performative.
>
> Do you have any better argument supporting your thesis than 
> recounting the history of VCSes?
> That's not the right way to sound authoritative on LLMs.
>
> Deirdre just wrote that at other companies LLMs can do the work of 
> managing Mercurial.
> So it seems it's not a problem with the technology itself.
>
> How does it come about that one company is able to teach an LLM to do 
> its job managing Mercurial, while another not only screws up the 
> training but instruments it in a way that makes a mess of its 
> developers' work?
>
> You have 3 separate problems, and NONE of them is inherently linked to 
> the technology itself.
> One is related to training, as I suggested.
> Another is related to instrumentation, as I affirmed.
> The third is related to the expectations of LLMs that Deirdre declares 
> she doesn't have but then apparently relies on, or doesn't have but is 
> too tired to handle [*].
>
> I don't need any specially, internally trained LLM to get sound 
> suggestions for git. With NO previous experience of Gemini's API, I 
> even wrote a tool to automate my git workflow in a few hours [**].
> That's because Gemini has a fucking huge corpus of git-related 
> problems.
> It not only knows the syntax but has a lot of context about which 
> sequence of commands was used to solve which problem.
> It will screw up, because LLMs have no concept of the truth or 
> coherence that is essential to programming, but it will generally 
> (statistics plays a big role in the way they work) be more useful than 
> when it has to help you with Mercurial (overtraining is a well-known 
> problem, etc.).
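>
> For the curious, the core of such a tool is tiny. A minimal Python 
> sketch (the google-generativeai package, the model name, and the 
> GEMINI_API_KEY variable are illustrative assumptions, not exactly 
> what my tool does):
>
>     import os
>     import subprocess
>     import google.generativeai as genai
>
>     # Assumed setup: pip install google-generativeai, API key in env.
>     genai.configure(api_key=os.environ["GEMINI_API_KEY"])
>     model = genai.GenerativeModel("gemini-1.5-flash")
>
>     def suggest_git_commands(goal: str) -> str:
>         """Ask the model for a git command sequence for a goal."""
>         status = subprocess.run(["git", "status", "--short"],
>                                 capture_output=True, text=True).stdout
>         prompt = (f"Output of 'git status --short':\n{status}\n"
>                   f"Goal: {goal}\n"
>                   "Reply with only the git commands, one per line.")
>         return model.generate_content(prompt).text
>
> You still have to read what it suggests before running it, which is 
> exactly the human-in-the-loop point I make below.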
>
> When you're able to automate something... some jobs become redundant.
> You free up developers' time, and you need fewer of certain 
> competences, or don't need them at all.
>
> How you're going to invest those savings is a social and economic 
> problem. You invest them in better products/services, or in keeping up 
> the illusion of ever-growing stocks and higher dividends.
>
> Not to mention that, yeah, AI is hyped for exactly these economic and 
> social reasons, one of them being scaring the shit out of unionizing 
> devs.
>
> What you'll end up with is a lot of far less competent devs. Instead 
> of hordes of people in sweatshops deciding whether an image is a cat 
> or a traffic light, you'll probably end up with hordes of people in 
> sweatshops deciding whether the output of a function is correct.
> And that is going to produce sloppy code.
>
> Again, LLMs are not a revolution, or not as big a revolution as 
> advertised.
> Everybody by now knows how they can screw up and how they can be 
> employed in ways that make them obnoxious.
> And again, for social and economic reasons, they are hyped most in 
> exactly the places where they are most obnoxious.
>
> It's no wonder that people, precisely because of this, feel a 
> particular unease and end up being hostile to the technology itself, 
> and not to those who control it.
>
> Coming back to your "I prefer tools that are deterministic"...
> I'm a non-deterministic developer.
> If I were ever able to write deterministic code (given the same 
> requirement, I write exactly the same code), it would probably be time 
> to abstract it into a library... there is a lot of gray area between 
> feeling the need to write a library and writing almost the same code.
> Actually, that area is HUGE because, trust me, sincerely creative 
> moments are rare.
>
> People who advertise themselves as living constantly in creative 
> moments are exactly the kind of people who DON'T KNOW SHIT, because 
> they never studied and have always delegated the real work. They think 
> others are machines, stupid sheep, so they think their own work can't 
> be taken by machines... but other people's work can. Traits they share 
> with many conspiracists who don't believe in experts...
>
> Now, you will still need a human in at least 2 positions in the loop: 
> the prompt, and the evaluation and integration.
> You can produce mediocre code at a fraction of the cost with 
> reverse-centaurs, or you can produce better code by giving power to 
> developers.
>
> Guess what's going to make the difference in what is currently 
> happening: "smart" developers hostile to AI [***] for "technical 
> reasons" (which are wrong most of the time, otherwise they would be 
> great fans of AI), or greed?
>
> Initially, textiles produced with power looms were sub-par, and 
> qualified workers got turned into slaves. But automation is far more 
> pervasive now, and they are not planning to supplant just devs. Or at 
> least that's what they shout to the market and to the unions.
>
> https://www.businessinsider.com/klarna-reassigns-workers-to-customer-support-after-ai-quality-concerns-2025-9 
>
>
> And this is damaging "consumers" too.
>
> [*] The reverse-centaur situation. I would probably prefer to spot a 
> typo, or a variable used in place of another, or a compiler parameter 
> that doesn't even exist, in LLM-generated code from a comfortable 
> apartment in Milan, rather than blindly run LLM-generated code and 
> check whether it "works" in a sweatshop in Bangalore; but the fact 
> that I think the greatest difference is the apartment in Milan says 
> something.
> And that's what actually happened before I learned how to save time 
> and not feel pain, but at least I was free to choose how to use an LLM.
> A programmer who uses an LLM to speed up his coding still has some 
> bargaining power, since he still has some valuable know-how; a 
> "tester" is a commodity.
>
> [**] We could talk about why I can't fully exploit the tool as I 
> wished, and why LLMs still have a choke point; you could read about 
> what a really "open source" LLM should be, and we could talk about 
> copyright and what plagiarism in code is... But you can run an LLM on 
> consumer hardware, and you don't need to destroy an ecosystem for it.
> It is far more fun and *popular* and easy and pleasing, though, to use 
> an LLM to be reassured of the cogency of your conspiracy theory, and 
> that's going to be far more dangerous to your ecosystem than the power 
> and water you're going to consume.
>
> [***] Deirdre, uhm, so your "sloppy use of language" has to be 
> excused, but mine ("Y'all") is incompetence?
>
> BTW, the things that most resemble AI in terms of looking 
> "intelligent" work on the same basis as LLMs; they just have different 
> specializations, and most of the techniques are common to both.
>
> Deep Blue wouldn't look "intelligent" by today's standards, and yet 
> it was considered AI... An LLM can't win a chess match. AlphaZero is 
> neural-network based.
>
> Techniques like decision trees, SVMs, and rule-based systems are used 
> equally in "traditional" programs and in the pipeline around neural 
> networks.
>
> The "core" that makes this stuff look "intelligent" is neural 
> networks... there is still a lot of things to do.
> Filtering, preprocessing and this depends on the type of input any "a 
> priori" knowledge of the kind of data you're processing so you can 
> "push" some knowledge of the model into the neural network, noise 
> filtering, learning optimization bla bla bla...
> You don't do stemming on images, you do it on sentences, you don't 
> apply rotations to phrases, you do it on images etc... plenty of work, 
> of knowledge in tons of fields.
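>
> To make that concrete, a toy Python sketch (NLTK's stemmer and 
> Pillow's rotate are just illustrative choices, and cat.png is a 
> made-up file name):
>
>     from nltk.stem import PorterStemmer  # text preprocessing
>     from PIL import Image                # image preprocessing
>
>     # Text: reduce words to stems before feeding a model.
>     stemmer = PorterStemmer()
>     print([stemmer.stem(t) for t in "running runs easily".split()])
>     # -> ['run', 'run', 'easili']
>
>     # Images: augment training data with rotated copies instead.
>     img = Image.open("cat.png")
>     augmented = [img.rotate(angle) for angle in (90, 180, 270)]
>
> Rotating a sentence or stemming a pixel makes no sense; the "a priori" 
> knowledge is in choosing the right transformation for the data.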
>
> And that's why the current state of AI and LLMs is far from a 
> revolution; it is rather an incremental development that blossomed due 
> to several factors (faster hardware, huge amounts of training data, 
> economic interest in pushing research in a certain direction...).
> Probably the most important later step in AI was the discovery of the 
> transformer model, which made the computations affordable and opened 
> the doors to generative AI. It is revolutionary in terms of impact, 
> but in a way it was inspired by the availability of a certain kind of 
> hardware and by some previous work on computer vision.
>
> BTW, there are "deterministic" AI techniques, and they are generally 
> most of the techniques above minus the neural network.
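>
> A toy illustration (scikit-learn assumed, data made up): train a 
> decision tree twice on the same data and you get the same tree and 
> the same answers, every run.
>
>     from sklearn.tree import DecisionTreeClassifier
>
>     # [hours studied, hours slept] -> pass (1) / fail (0)
>     X = [[0, 4], [1, 5], [3, 7], [5, 8], [8, 6], [9, 8]]
>     y = [0, 0, 0, 1, 1, 1]
>
>     clf = DecisionTreeClassifier(random_state=0)
>     clf.fit(X, y)
>     print(clf.predict([[2, 6], [7, 7]]))  # reproducibly [0 1]
>
> No sampling temperature anywhere: the only "intelligence" is a greedy 
> split criterion.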
>
> I was playing with neural networks when I was 20-ish; I'm 53. I don't 
> fucking consider myself an expert, nor even a practitioner, and I feel 
> ashamed to have to brag about it to make a point.
>
> On 10/3/25 7:02 PM, Rick Moen wrote:
>> Quoting Ivan Sergio Borgonovo (mail at webthatworks.it):
>>
>> [nothing much]
>>
>> Fine, you want to concentrate on a core competency of gratuitous
>> asshattery when I go out of my way to be gracious, go ahead.  I have
>> much better things to spend my time on.
>>
>>
>> _______________________________________________
>> conspire mailing list
>> conspire at linuxmafia.com
>> http://linuxmafia.com/mailman/listinfo/conspire
>


