Imagining that LLM assistants will help with what is essentially an exercise in theory-building seems deeply mistaken.
- Creating stable and maintainable software requires knowing the properties of your environments with intimate clarity (environments meaning e.g. Elixir, the web, AWS as a whole, HTTP in particular, whatever domains you cross), and having a clear theory of the software under development: as a whole, working project; as an implementation of ideas in a particular language or framework; as an artefact being worked on over time (vis-à-vis version control, project management, etc.); and so on through different lenses.
- LLMs can help with none of these, and actively hinder several of them.
- Today’s best approaches only really serve to distance the user from all of these concepts. The work becomes instead “convince the generative model to produce code closest to what I want”, but “closest” is a non-specific property that a generative model will naturally exploit. It will produce code shaped a certain way (per its (quite literally) illegally and unethically sourced training data! woo!), and that will then dictate a bit more of the shape of the code you write (or generate) next. There is an entire class of tiny decisions you are repudiating, and the difference in craft between the two approaches (by craft I mean a proxy for intelligibility, performance, reliability, etc.) will become more and more obvious as time goes on.
- I rarely bother to “predict” anything, so this is interesting. Usual assumptions apply: I don’t particularly feel that any of it will come true, mostly because I’m always extremely prepared to be disappointed even more. E.g. it may well not happen; well-written/reliable software might just cease to exist in the large instead ¯\_(ツ)_/¯
- If typing speed has ever been the bottleneck for your programming, you are Doing It Wrong. LLM-centric approaches, even in an agentic scenario (or whatever! the model fundamentally hallucinates! in these approaches, it always will. stop falling for the next thing every 6 months, it’s boring!), decentre the entire theory-building aspect and the hands-on experience necessary if you ever see yourself having to work on this code by hand again (vs. declaring it write-only and hoping the agent definitely gives you good things to paste into the console when there’s an incident and you can’t understand the data flow yourself!).
- Where do you honestly see this going? Has there ever been any indication that this isn’t another bubble? Do you not already see the horror stories? I haven’t even mentioned the environmental costs: the ones that threaten to displace all other costs with their effects. Or are you going soft on “global warming” too?