The evangelist and the critic
This Hot Take was written by Joel Goodman
I swear I didn’t mean to turn this newsletter into Yet Another AI Missive, but there’s so much blind trust in these products floating around higher ed that I have really felt the need to bring a critical lens to how we assess these things. I’ve been saying it a lot lately — I’m a critic, not a skeptic. From here on out, I’m going to be using the term LLM because if you read the last newsletter, you’ll know that AI is purely a marketing term taken from another technology and slapped onto machine learning by Sam Altman and OpenAI.
Did you know that OpenAI lost $5 billion in 2024, even after accounting for all of its revenue? This is the company that owns ChatGPT and powers many of the other products your institution is probably paying five-figure sums for. According to financial reporting, running its software costs OpenAI roughly $1 billion more than all of the revenue it brings in.
This is a problem. OpenAI loses money on every call to its API, and the same is true for Anthropic (the company behind Claude). That means the price you're paying for LLM-powered software is well below what it actually costs to run. These products have been backed by venture capital on the strength of a lot of smoke and mirrors.
With news out today that Microsoft (which runs the datacenters powering OpenAI's GPTs) is pulling back on its datacenter build-outs, there is a very real chance the bubble we've been ignoring is about to burst.
Two kinds of AI people #
If I can generalize a bit, there are two kinds of AI thought leaders — and only one kind gets listened to.
AI Evangelists #
First, there’s the Evangelist who knows enough about how to use these systems and the general framework for how the models are constructed. The evangelists are all-in either because they’re super optimistic or they see the potential for a short-term cash return and are pretty cynical about society. If they can get rich off of your lack of knowledge, they will.
AI evangelists fall into two camps. There are the ones who started or shifted their businesses to take advantage of this new wave of LLM dominance. They built chat interfaces, CRMs, and search frontends that are wrapped around OpenAI or Anthropic. They feed your data back to these companies and have pinned their business to this bubble. They don’t actually own the tech. They rent GPU cycles and create marketing buzz.
Then there are the so-called experts who played with the early ChatGPT web app and saw an opportunity to own a thought leadership lane. You’ve seen them at conferences trying to help you figure out how to use LLMs in your own workflows. We’ve all learned a ton from them. Maybe they’ve written a book about “AI” already. They are genuinely trying to help us stay ahead of the curve.
AI Critics #
Yeah, this is me. I've been using these tools longer than most of the Evangelists. I'm always an early adopter, and I'm always critical. When I lived in Austin and was plugged into the startup scene, I used to tell people I'd make a terrible VC investor because I think everyone's idea is bad. I am a critic, and I keep a watchful eye.
Critics aren't skeptics, though. We count data scientists, machine learning engineers, marketers, journalists, and plenty of others among our ranks. We don't believe the hype; we test it against reality. And if the bubble is about to burst, we need to sound the alarm.
The problem #
So wait, do you really think that chatting with a computer is a good experience and super useful as a general, everyday activity? Like, I get it for deep research, but the Chat part of ChatGPT is not the product. In fact, LLMs shouldn't be the product at all, even though they're sold like they are. LLMs should be part of the infrastructure that makes products better, and yet the best all of these supposedly brilliant people can come up with is chat.
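To make that point a little more concrete, here is a minimal sketch of what "infrastructure, not product" could look like: a model quietly generating summaries and keywords that feed a site's search index at publish time, instead of being bolted on as a chatbot. Everything here is an assumption, not a recommendation: the locally hosted model served through Ollama, the model name, and the example page are all hypothetical.

```python
# Sketch: use an LLM as invisible infrastructure that enriches a search index,
# rather than as a user-facing chat product.
# Assumes a locally hosted model served by Ollama (http://localhost:11434);
# the model name and the example page are hypothetical.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # hypothetical: any locally hosted model

def summarize_for_search(page_text: str) -> str:
    """Ask the local model for a one-sentence summary plus search keywords."""
    prompt = (
        "Summarize this university web page in one sentence, then list five "
        "search keywords, as JSON with keys 'summary' and 'keywords':\n\n"
        + page_text
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

# Run once per page at publish time and store the result in the search index.
# No chatbot, no per-visitor API calls, no student data leaving campus.
pages = {"/admissions/apply": "How to apply to the university ..."}  # hypothetical
index = {path: summarize_for_search(text) for path, text in pages.items()}
print(json.dumps(index, indent=2))
```

The point is the shape of the thing: the model does its work once, behind the scenes, and the visitor just gets better search.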
Meanwhile, university websites are still slow, institutions are adding more AI chatbot junk to them, and search engines are eating our traffic for breakfast. What are we doing?
👉🏼 AI tools are not a replacement for content strategy.
👉🏼 AI tools are not a replacement for strong UX design.
👉🏼 AI tools are not a replacement for Admissions counselors.
But we’re still out here paying lip service to action and dumping cash into middleware that ruins the environment, guts the arts, and steals from the very people we educate.
What are LLMs good for? #
What I have taken away from the AI Craze™ is this: the billions and billions of dollars these companies have set on fire, and the possibly irreparable damage they have done to our planet, have brought the promise of big data analysis to our personal computers. When you cut past the hype, the marketing, and the evangelists, this is an incremental gain.
Every institution now has the power to analyze all of its data, find trends, do market analysis, and reduce the workload of its already overworked staff. The bloated, expensive operational side of running a college or university can be made more efficient, freeing up the messy side of the house, where real humanity is needed, to provide better care.
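As one hedged illustration of that kind of in-house analysis, the sketch below runs a locally hosted model over a batch of free-text advising notes and tallies the themes, with nothing leaving campus. The Ollama endpoint, the model name, the theme list, and the "advising_notes.csv" export are all assumptions you would swap for your own setup.

```python
# Sketch: in-house analysis of institutional data with a locally hosted model.
# Assumes a model served by Ollama at localhost and a hypothetical CSV export
# ("advising_notes.csv" with a "note" column); swap in your own data source.
import csv
from collections import Counter

import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "llama3"  # hypothetical local model
THEMES = ["financial aid", "course registration", "housing", "mental health", "other"]

def classify(note: str) -> str:
    """Label one free-text advising note with a single theme."""
    prompt = (
        f"Classify this student advising note into exactly one of these themes: "
        f"{', '.join(THEMES)}. Reply with the theme only.\n\nNote: {note}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    label = resp.json()["response"].strip().lower()
    return label if label in THEMES else "other"

counts = Counter()
with open("advising_notes.csv", newline="") as f:
    for row in csv.DictReader(f):
        counts[classify(row["note"])] += 1

# Staff get a trend report in minutes, and no student data leaves campus.
for theme, n in counts.most_common():
    print(f"{theme}: {n}")
```

The same pattern works for survey comments, event feedback, or inquiry-form text: batch it, label it, count it, and hand staff a trend report instead of another vendor contract.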
Why are you spending money on chatbots and targeted-ad-style contact forms? Why are you giving money to a company that will either fail, bringing an entire industry down with it, or raise its prices so high you'll never be able to recover?
There are other ways. Smarter ways. More effective ways.
And right now, can you really afford to be betting the house on someone else’s failing business?