This paper is far too long and poorly written, even considering that the topic of expert systems was once "a thing."
There are three key parallels that I see applying to today's AI companies:
1. Tech vs. business mismatch. The author points out that AI companies were (and are) run by tech folks, not business folks. The emphasis on the glory of the tech doesn't always translate to effective results for their business customers.
2. Underestimating the implementation moat. The old expert systems and LLMs have one thing in common: they're both a tremendous amount of work to integrate into an existing system. Putting a chat box on your app isn't AI. Real utility involves specialized RAG software and domain knowledge (a toy sketch of that retrieval layer follows this list). Your customers have the knowledge, but can they write that software? Without it, your LLM is just a chatbot.
3. Failing to allow for compute costs. The hardware costs to run expert systems were prohibitive, but LLMs pose an entirely different problem: every single interaction with them has a cost, on both input and output tokens. It would be easy for a flat-rate customer to burn through far more LLM time than you're charging for (see the second sketch below). That's not a fixed cost amortized over the user base, like we used to have; it's a marginal cost per user. Many companies' business models won't be able to absorb that variation.
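To make point 2 concrete, here's a toy sketch of the retrieval layer your customers would have to build. The keyword scoring stands in for a real embedding index, and the document strings are invented examples:

    # Toy RAG retrieval layer: the LLM call is the easy part; the work is
    # indexing domain documents and surfacing the right ones per query.
    def retrieve(query, documents, k=2):
        """Rank documents by naive keyword overlap with the query."""
        q_terms = set(query.lower().split())
        return sorted(documents,
                      key=lambda d: len(q_terms & set(d.lower().split())),
                      reverse=True)[:k]

    def build_prompt(query, documents):
        """Assemble retrieved domain knowledge into the LLM prompt."""
        context = "\n".join(retrieve(query, documents))
        return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

    docs = ["Pump P-301 requires a seal flush before restart.",
            "Invoices over $10k need VP approval."]
    print(build_prompt("How do I restart pump P-301?", docs))

Everything interesting (chunking, embeddings, ranking, prompt budget) hides behind those two little functions, and all of it requires the customer's domain knowledge.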
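And for point 3, a back-of-envelope sketch of why per-token pricing breaks flat-rate plans. The prices and usage numbers are illustrative assumptions, not any provider's actual rates:

    # Marginal cost per user under per-token pricing vs. flat-rate revenue.
    PRICE_PER_1K_INPUT = 0.003   # $/1k input tokens (assumed)
    PRICE_PER_1K_OUTPUT = 0.015  # $/1k output tokens (assumed)
    FLAT_RATE = 20.0             # monthly subscription price (assumed)

    def monthly_cost(requests, in_tokens=500, out_tokens=700):
        """Cost of serving one user for a month at a given request volume."""
        per_request = (in_tokens / 1000 * PRICE_PER_1K_INPUT
                       + out_tokens / 1000 * PRICE_PER_1K_OUTPUT)
        return requests * per_request

    for reqs in (100, 1000, 5000):
        print(f"{reqs:>5} requests/mo -> ${monthly_cost(reqs):7.2f} cost "
              f"vs ${FLAT_RATE:.2f} revenue")

At the assumed rates, the 5000-request user costs ~$60 against $20 of revenue; under the old fixed-cost model, that heavy user would have been nearly free to serve.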
rjsw 21 minutes ago [-]
It is a master's thesis; the length seems fine to me. I spotted a few typos, though.
MontyCarloHall 1 hour ago [-]
The sentiment of the title is reflected in this comment [0] from a few hours ago:
We use so much AI in production every day but nobody notices because as soon as a technology becomes useful, we stop calling it AI. Then it’s suddenly “just face recognition” or “just product recommendations” or “just [plane] autopilot” or “just adaptive cruise control” etc

[0] https://en.wikipedia.org/wiki/AI_effect
Predicting the trajectory of a cannonball is applied mathematics. Aircraft autopilot and cruise control are only slightly more elaborate. You can't label every algorithmic control system as "AI".
MontyCarloHall 42 minutes ago [-]
I agree that aircraft autopilot and other basic applications of control theory are not usually considered "AI," nor were they ever — control theory has its roots in mechanical governors.
Some adaptive cruise control systems certainly are considered AI (e.g. ones that use cameras for emergency braking or lane-keep assist).
The line can be fuzzy — for instance, are solvers of optimization problems in operations research "AI"? If you told people in the 1930s that computers would be used in a decade by shipping companies to optimally schedule and route packages or by militaries to organize wartime logistics at a massive scale, many would certainly have considered that some form of intelligence.
hamilyon2 35 minutes ago [-]
Yes. A chess engine is clever tree search at its core, which in turn is just loops, arithmetic, and updates to some data structures (see the sketch below).
And every AI product in existence is the same: map navigation, search engine ranking, even register allocation and query planning.
Thus they are not AI; they're algorithms.
The frontier is constantly moving.
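For what it's worth, here's that "clever tree search" reduced to a dozen lines: negamax over a hypothetical toy game tree, nothing but recursion, loops, and arithmetic.

    # Negamax: the core recursion inside chess engines, minus evaluation
    # and pruning. Scores are from the perspective of the side to move.
    def negamax(node, tree, scores):
        children = tree.get(node, [])
        if not children:
            return scores[node]  # leaf: static evaluation
        return max(-negamax(child, tree, scores) for child in children)

    # Hypothetical 2-ply game tree with static scores at the leaves.
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"]}
    scores = {"a1": 3, "a2": -2, "b1": 5}
    print(negamax("root", tree, scores))  # -> 5

A real engine adds a position evaluator and alpha-beta pruning, but the skeleton is exactly this.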
jagged-chisel 36 minutes ago [-]
It’s “AI” until it gets another name. It doesn’t get that other name until it’s been in use for a bit and users start understanding its use.
So you’re right that some of these things aren’t AI now. But they were called that at the start of development.
layer8 1 hour ago [-]
I don’t think that’s the sentiment of the title.
MontyCarloHall 57 minutes ago [-]
It is exactly the sentiment of the title. From the paper's conclusion:
Although most financiers avoided "artificial intelligence" firms in the early 1990s, several successful firms have utilized core AI technologies into their products. They may call them intelligence applications or knowledge management systems, or they may focus on the solution, such as customer relationship management, like Pegasystems, or email management, as in the case of Kana Communications. The former expert systems companies, described in Table 6.1, are mostly applying their expert system technology to a particular area, such as network management or electronic customer service. All of these firms today show promise in providing solutions to real problems.
In other words, once a product robustly solves a real customer problem, it is no longer thought of as "AI," despite utilizing technologies commonly thought of as "artificial intelligence" in their contemporary eras (e.g. expert systems in the 80s/90s, statistical machine learning in the 2000s, artificial neural nets in the 2010s onwards). Today, nobody thinks of an expert system as AI; it's just a decision tree. A kernel support vector machine is just a supervised binary classifier. And so on.
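To illustrate: the core of a 1980s-style expert system is just a rule base plus forward chaining, a few lines of ordinary code (the medical rules here are invented for the example):

    # Forward chaining: repeatedly fire any rule whose conditions hold,
    # until no new facts can be derived. This loop *was* the "AI".
    RULES = [
        ({"fever", "cough"}, "flu_suspected"),
        ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
    ]

    def forward_chain(facts):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"fever", "cough", "short_of_breath"}))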
layer8 48 minutes ago [-]
The paper is picking up a long-standing joke in its title. From https://www.cia.gov/readingroom/docs/CIA-RDP90-00965R0001002... (1987):
All these [AI] endeavors remain at such an experimental stage that a joke is making the rounds among computer scientists: “If it works, it’s not AI.”
The article is re-evaluating that prior reality, but it isn’t making the point that successful AI stops being considered AI. In the part you quote, it’s merely pointing out that AI technology isn’t always marketed as such, due to the negative connotation “AI” had acquired.
clbrmbr 1 hour ago [-]
Fascinating reading the section about why the 1980s AI industry stumbled. The Moore's-law reasoning is that the early AI machines used custom processors, which commodity general-purpose hardware soon overtook. This time around we really are using general-purpose compute, though. Maybe there's an analogy to open-weight models, but it's a stretch.
Also, the section on hype is informative, but I really do see (ofc writing this from peak hype) a difference this time around. I'm funding $1000 of Claude Code (Opus 4) for my top developers over the course of this month, and I really do expect to get >$1000 worth of additional work output. Probably scales to $1000/dev before we hit diminishing returns.
It would be fun to write a 2029 version of this, on the assumption that we see a crash in ~2027 similar to the one in ~1987. What would some possible stumbling reasons be this time around?
clbrmbr 1 hour ago [-]
Title should end with (1999), as 1977 is the birth year of the author, not the publication date.
api 48 minutes ago [-]
Something like the AI effect exists in the biological sciences too. You know what you call transhumanist enhancement and life extension tech when it actually works? Medicine.
Hype is fun. When you see the limits of a technology it often becomes boring even if it’s still amazing.