Counterpoint: AI makes mainstream languages (for which a lot of data exists in the training data) even more popular, because those are the languages it knows best (i.e., has the lowest error rate in), regardless of whether they're typed (in fact, many are dynamic, like Python, JS, Ruby).
The end result? Non-mainstream languages don't get much easier to get into, because the average Joe isn't already proficient enough in them to catch the AI's bugs.
People often forget the bitter lesson of machine learning, which plagues transformer models as well.
bluetomcat 2 hours ago [-]
It’s good at matching patterns. If you can frame your problem so that it fits an existing pattern, good for you. It can show you good idiomatic code in small snippets. The more unusual and involved your problem is, the less useful it is. It cannot reason about the abstract moving parts in a way the human brain can.
carlmr 2 hours ago [-]
>It cannot reason about the abstract moving parts in a way the human brain can.
Just found 3 race conditions in 100 lines of code. From the UTF-8 emojis in the comments I'm really certain it was AI-generated. The "locking" was just abandoning the work if another thread had started something; the "locking" mechanism also had TOCTOU issues; and the "locking" also didn't actually lock concurrent access to the resource that actually needed it.
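For a concrete picture, here's a minimal Haskell sketch of that check-then-act pattern (hypothetical, not the actual code I found):

    import Data.IORef

    -- Broken "lock": check a flag, then set it. Two threads can both read
    -- False before either writes True (that gap is the TOCTOU window), and
    -- the resource used afterwards is never actually guarded by anything.
    brokenTryLock :: IORef Bool -> IO Bool
    brokenTryLock busy = do
      taken <- readIORef busy     -- check...
      if taken
        then pure False           -- another thread started: abandon the work
        else do
          writeIORef busy True    -- ...then act: the read/write pair is not atomic
          pure True

An atomic compare-and-swap, e.g. atomicModifyIORef' busy (\taken -> (True, not taken)), closes that window; an actual mutex around the shared resource fixes the rest.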
bluetomcat 2 hours ago [-]
Yes, that was my point. Regardless of the programming language, LLMs are glorified pattern matchers. A React/Node/MongoDB address book application exposes many such patterns and they are internalised by the LLM. Even complex code like a B-tree in C++ forms a pattern because it has been done many times. Ask it to generate some hybrid form of a B-tree with specific requirements, and it will quickly get lost.
practice9 2 hours ago [-]
Humans cannot reason about code at scale. Unless you add scaffolding like diagrams and maps and …
Things that most teams don’t do or half-ass
samrus 16 minutes ago [-]
It's not scaffolding if the intelligence itself is adding it. Humans can make their own diagrams and maps to help them; LLM agents need humans to scaffold for them. That's the setup for the bitter lesson.
RedNifre 27 minutes ago [-]
I'm not sure. I have a custom config format that combines a CSV schema with processing instructions, which I use for bank CSVs, and Claude was able to generate a perfect one for a new bank based only on one existing config-plus-CSV pair and the new bank's CSV.
I'm optimistic that most new programming languages will only need a few "real" programmers to write a small amount of example code for the AI training to get started.
greener_grass 35 minutes ago [-]
More people who are not traditionally programmers are now writing code with AI assistance (great!), but this crowd seems unlikely to pick up Clojure, Haskell, OCaml, etc., so I agree this is a development in favor of mainstream languages.
__loam 31 minutes ago [-]
IMO there's been a big disconnect between people who view code as a work product vs. those who view it as a liability/maintenance burden. AI is going to cause an explosion in the production of code; I'm not sure it's going to have the same effect on long-term maintenance, and I don't think rewriting the whole thing with AI again is a solution.
minebreaker 3 hours ago [-]
From what I can tell, LLMs tend to hallucinate more with minor languages than with popular ones. I'm saying this as a Scala dev. I suspect most discussions about LLM usefulness depend on the language being used. Maybe it's useful for JS devs.
noosphr 57 minutes ago [-]
It's more useful for Python devs, since pretty much all ML code is Python wrappers around C++.
rapind 3 hours ago [-]
I'm having a good time with Claude and Elm. The correctness seems to help a lot. I mean, it still goes wonky sometimes, but I assume that's the case for everyone.
golergka 18 minutes ago [-]
Recently I wrote a significant amount of Zig for the first time in my life, thanks to Claude Code. Is Zig a mainstream language yet?
arrowsmith 2 hours ago [-]
Ehhhh, a year ago I'd have agreed with you — LLMs were noticeably worse with Elixir than with bigger langs.
But I'm not noticing that anymore, at least with Elixir. The gap has closed; Claude 4 and Gemini 2.5 both write it excellently.
Otoh, if you wanted to create an entirely new programming language in 2025, you might be shit outta luck.
echelon 3 hours ago [-]
AI seems pretty good at Rust, so I don't know. What sort of obscure languages are we talking about here?
mrheosuper 10 minutes ago [-]
Rust is far from obscure.
Some HDLs should fit the bill: VHDL, Verilog, or SystemC.
behnamoh 3 hours ago [-]
Haskell, Lisps (especially the most Common one!), Gleam or any other Erlang-wrapper like Elixir, Smalltalk, etc.
josevalim 2 hours ago [-]
Phoenix.new is a good example of a coding agent that can fully bootstrap realtime Elixir apps using Phoenix LiveView: https://phoenix.new/
I also use coding agents with Elixir daily without issues.
arrowsmith 2 hours ago [-]
Yes, Claude 4 is very good at Elixir.
smackeyacky 1 hour ago [-]
Old stuff like VB.NET it's really struggling on here. But C# it's mostly fine.
m00dy 2 hours ago [-]
Rust is the absolute winner of the LLM era.
bugglebeetle 2 hours ago [-]
I’m blown away by how good Gemini Pro 2.5 is with Rust. Claude I’ve found somewhat disappointing, although it can do focused edits okay. Haven’t tried any of the o-series models.
jongjong 2 hours ago [-]
Can confirm, you can do some good vibe coding with JavaScript (or TypeScript) and Claude Code.
I once vibe coded a test suite for a complex OAuth token expiry issue while working on someone else's TypeScript code.
Also, I had created a custom Node.js/JavaScript BaaS platform with custom Web Components and wanted to build apps with it. I gave it the documentation as an attachment and, surprisingly, it was able to modify an existing app to add entire new features. This app had multiple pages and Claude just knew where to make the changes. I was building a kind of marketplace app. One time it implemented the review/rating feature in the wrong place and I told it "This rating feature is meant for buyers to review sellers, not for sellers to review buyers" and it fixed it exactly right.
I think my second experience (plain JavaScript) was much more impressive and was essentially frictionless. I can't remember it making a single major mistake. I think only once it forgot to add the listener to handle the click event to highlight when a star icon was clicked but it fixed it perfectly when I mentioned this. With TypeScript, it sometimes got confused; I had to help it a lot more because I was trying to mock some functions; the fact that the TypeScript source code is separate from the build code created some confusion and it was struggling to grep the codebase at times. Though I guess the code was also more complicated and spread out over more files. My JavaScript web components are intended to be low-code so it's much more succinct.
cultofmetatron 3 hours ago [-]
I think AI will push programming languages in the direction of stronger Hindley-Milner-style type checking. Haskell is brutally hard to learn, but with enough of a data set to learn from, it's the perfect target language for a coding agent: it's high-level, can be formally verified using well-known algorithms, and a language server could easily be connected to the AI agent via some MCP interface.
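To illustrate what the agent gets for free, a minimal sketch (my own toy example, no agent tooling assumed):

    -- No annotations needed: GHC infers the most general type,
    --   pairWith :: (a -> b) -> [a] -> [(a, b)]
    pairWith f xs = [(x, f x) | x <- xs]

    ok :: [(Int, String)]
    ok = pairWith show [1, 2, 3]

    -- A generated misuse is rejected before anything runs:
    -- bad = pairWith show [1, 2, 3] ++ "oops"
    --   => couldn't match '(Int, String)' with 'Char'

Every such rejection is a precise, machine-readable error the agent can feed straight back into its next attempt.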
js8 35 minutes ago [-]
I wish, but the opposite seems to be coming: Haskell will have less support from coding AIs than mainstream languages.
I think people who care about FP should think about what is appealing about coding in natural language that is missing from programming in strongly typed FP languages such as Haskell and Lean. (After all, what attracted me to Haskell compared to Python was that the typechecking is relatively cheap thanks to type inference.)
I believe that natural language in coding has allure because it can express the outcome in a fuzzy manner. I can "handwave" certain parts and the machine fills them in. I further believe that, to make this work well with formal languages, we will need some kind of fuzzy logic in which to specify the programs. (I particularly favor certain strong logics based on MTL, but that aside.) Unfortunately, this line of research seems to have been pretty much abandoned in AI in favor of NNs.
tsimionescu 3 hours ago [-]
> can be formally verified using well known algos
Is there any large formally verified project written in Haskell? The most well known ones are C (seL4 microkernel) and Coq+OCaml (CompCert verified C compiler).
aetherspawn 55 minutes ago [-]
Well, Haskell has GADTs, newtype wrappers and type classes, which can be (and often are) used to implement formal verification via metaprogramming, so I get the point he was making.
You pretty much don’t need to plug another language into Haskell to be satisfied about certain conditions if the types are designed correctly.
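For example, a textbook-style GADT sketch (hypothetical, not from any particular project) where "verification" is just type checking:

    {-# LANGUAGE GADTs #-}

    -- Each expression is indexed by the type it evaluates to, so an
    -- ill-formed tree like Add (B True) (I 1) is a compile error rather
    -- than a runtime check: the invariant lives in the types.
    data Expr a where
      I   :: Int  -> Expr Int
      B   :: Bool -> Expr Bool
      Add :: Expr Int -> Expr Int -> Expr Int
      If  :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- Total by construction; there is no "invalid expression" case to handle.
    eval :: Expr a -> a
    eval (I n)      = n
    eval (B b)      = b
    eval (Add x y)  = eval x + eval y
    eval (If c t e) = if eval c then eval t else eval e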
seanmcdirmid 3 hours ago [-]
We might see wider adoption of dependently typed languages like Agda. But the limited corpus might become the limiting factor; I'm not sure how well knowledge transfers as the languages get more different.
ipnon 3 hours ago [-]
It's getting cheaper and cheaper to generate corpora by the day, and Agda has the advantage of being verifiable, like Lean. So you can generate large numbers of programs and feed these back into the model. I think this is a major reason why we're seeing remarkable improvements in formal sciences, like the recent IMO golds, and yet LLMs are still struggling to generate aesthetically pleasing and consistent CSS. Imagine a high schooler who can win an IMO gold medal but can't center a div!
andrewflnr 3 hours ago [-]
It seems like "generating" a corpus in that situation is more like a search process guided by prompts and more critically the type checker, rather than a straight generation process right? You need some base reality or you'll still just have garbage in, garbage out.
iparaskev 2 hours ago [-]
> The real breakthrough came when I stopped thinking of AI as a code generator and started treating it as a pairing partner with complementary skills.
I think this is the most important thing mentioned in the post. In order for the AI to actually help you with languages you don't know, you have to question its solutions. I have noticed that asking questions like "why are we doing it like this?" and "what will happen in the x, y, z scenario?" really helps.
solids 1 hour ago [-]
My experience is that each question I ask or point I make produces an answer that validates my thinking. After two or three iterations in a row in this style I end up distrusting everything.
samrus 13 minutes ago [-]
This is very true. Constant insecurity for me. One thing that helps a little is asking it to search for sources to back up what it's saying. But Claude has hallucinated those as well. Perplexity seems to be good at staying true to sources, but I don't know how good it is at coding itself.
tietjens 35 minutes ago [-]
yes, this. biggest problem and danger in my daily work with llms. my entire working method with them is shaped around this problem. instead of asking it to give me answers or solutions, i give it a line of thought or logical chain, and then ask it to continue down the path and force it to keep explaining the reasoning while i interject, continuing to introduce uncertainty. suspicion is one of the most valuable things i need to make any progress. in the end it's a lot of work and very much reading and reasoning.
Maro 4 hours ago [-]
This is great, and I think this is the right way to use AI: treat it as a pair-programming partner and learn from it. As the human learns and becomes better at both programming and the domain in question (e.g. a Ruby JIT compiler), the role of the AI partner shifts: at the beginning it's explaining basic concepts and generating/validating smaller snippets of code; in later stages the conversations focus on advanced topics, and the AI is used to generate larger portions of code, which the human is now more confident to review to spot bugs.
karmasimida 2 hours ago [-]
AI has basically removed my fear with regards to programming languages.
It almost never misses when explaining how certain syntax works.
sillycube 2 hours ago [-]
Yes, I tried to port 200 lines of JS to Rust, keeping the features the same. Using Claude 4 Sonnet with a prompt, it was done. Worked perfectly.
I still spent a few days studying Rust to grasp the basics.
graynk 1 hour ago [-]
Get back to me once you successfully write a Vulkan app with LLMs
physicsguy 2 hours ago [-]
I've noticed this at work, where I use Python frameworks like Flask/FastAPI/Django alongside Go, which has the standard library handlers, but within that people are much less likely to follow specific patterns, and there are various composable bits as add-ons.
If you ask an LLM to generate a Go handler for a REST endpoint, it often does something a bit out of step with the rest of the code base. If I do it in Python, it's more idiomatic.
SubiculumCode 3 hours ago [-]
Seems like it would make people more averse; the variability of AI expertise by language is pretty large.
Paradigma11 4 minutes ago [-]
LLMs learn and apply patterns. You can always give some source code examples and language docs as context, and it will apply those adapted patterns to the new language.
Context windows are pretty large (Gemini 2.5 Pro is the largest, at 1 million tokens, ~750k words), so it does not really matter.
MattGaiser 3 hours ago [-]
It just needs to be better than the human would be and less effort. It does not need to be great.
karmasimida 2 hours ago [-]
Let me just put it this way:
AI is a much better, and so in some cases worse, language lawyer than humans could ever be.
alentred 2 hours ago [-]
I wonder, are some programming languages more suitable for AI coding agents (or rather, LLMs) than others? For example, are syntax-heavy languages at a disadvantage? Is being verbose a good thing or a bad thing?
P.S. Maybe we will finally see M-expressions for Lisp developed some day? :)
nikolayasdf123 3 hours ago [-]
True. Doing pair programming with AI for the last 10 months, I got my skills from zero to sufficient proficiency (not expert yet) in a totally new language, Swift. The entry barrier is much lower now. Researching advanced topics is much faster. Typing code (unit tests, etc.) is much faster. Code review is automated. It indeed makes the barrier for new languages and tools lower.
iLoveOncall 1 hour ago [-]
I would expect anyone to get proficient in Swift after 10 months of using it, with or without AI...
If AI had really a multiplying factor here, I'd expect you to BE an expert.
Ozzie_osman 3 hours ago [-]
Agree. My team and I were just discussing that the biggest productivity unlock from AI in the dev workflow is that it enables people to more easily break out of their box. If you're an expert backend developer, you may not see huge lift when you write backend code. But when you need to do work on infrastructure or front-end, you can now much more easily unblock yourself. This unlocks a lot of productivity, and frankly, makes the work a lot more enjoyable.
iLoveOncall 1 hour ago [-]
I don't think I've ever seen an experienced software engineer struggling to adapt to a new language.
I have worked in many, many languages in the past and I've always found it incredibly easy to switch, to the point where you're able to contribute right away and be efficient after a few hours.
I recently had to do some updates on a Kotlin project, having never used it (and not used Java in a few years either), and there was absolutely no barrier.
RedNifre 24 minutes ago [-]
Bash might not be difficult, but it is very annoying, so I'm happy that the AI edits my scripts for me.
globular-toast 2 hours ago [-]
We learn natural languages by listening and trying things to see what responses we get. Some people have tried to learn programming the same way too. They'd just randomly try stuff, see if it compiles, then see if it gives what they were expecting when they run it. I've seen it with my own eyes. These are the worst programmers in existence.
I fear that this LLM stuff is turning this up to 11. Now you're not even just doing trial and error with the compiler, it's trial and error with the LLM, and you don't even understand its output. Writing C or assembly without fully reasoning about what's going on is going to be a really bad time... No, the LLM does not have a working model of computer memory; it's a language model, that's it.
andrewstuart 3 hours ago [-]
I've been enjoying doing a bunch of assembly language programming, something I never previously had the experience, capability, or time to learn to competence.
kaptainscarlet 3 hours ago [-]
I was thinking the same the other day. No need for high-level languages anymore. AI, assuming it will get better and replace human coders, has eliminated the labour constraint. Moore's law's death will no longer be a problem, as performance gains are realised in software. The days of bloated Electron apps are finally behind us.