For the past 18 months, I have observed the burgeoning conversation around large language models (LLMs) and generative AI. The breathless hype and hyperbolic conjecture about the future have ballooned, perhaps even bubbled, casting a shadow over the practical applications of today’s AI tools. The hype calls attention to AI’s profound limitations at this moment while distracting from the ways these tools can be put to productive use.
We are still in AI’s toddler phase, where popular AI tools like ChatGPT are fun and somewhat useful, but they cannot be relied upon to do a whole job on their own. Their answers are inextricable from the inaccuracies and biases of the humans who created them and of the sources they were trained on, however dubiously those sources were obtained. The “hallucinations” look a lot more like projections from our own psyche than like legitimate, nascent intelligence.
Furthermore, there are real and tangible problems, such as the exploding energy consumption of AI, which risks accelerating an existential climate crisis. A recent report found that Google’s AI Overviews, for example, must generate entirely new information in response to a search, which costs an estimated 30 times more energy than extracting it directly from a source. A single interaction with ChatGPT requires the same amount of electricity as running a 60W light bulb for three minutes.
Who is hallucinating?
A colleague of mine, without a hint of irony, claimed that because of AI, high school education would be obsolete within five years, and that by 2029 we would live in an egalitarian paradise, free from menial labor. This prediction, inspired by Ray Kurzweil’s forecast of the “AI Singularity,” suggests a future brimming with utopian promises.
I will take that bet. It will take far more than five years, or even 25, to progress from GPT-4o’s “hallucinations” and unexpected behaviors to a world where I no longer need to load my dishwasher.
There are three intractable problems with gen AI. If anyone tells you these problems will be solved one day, you should understand that they either have no idea what they are talking about or are selling something that doesn’t exist. They live in a world of pure hope and faith in the same people who promised us that crypto and Bitcoin would replace all banking, that cars would drive themselves within five years and that the metaverse would replace reality for most humans. They are trying to grab your attention and engagement now so that they can grab your money later, after you are hooked, after they have jacked up the price and before the bottom falls out.
Three unsolvable realities
Hallucinations
There is neither enough computing power nor enough training data on the planet to solve the problem of hallucinations. Gen AI can produce outputs that are factually incorrect or nonsensical, making it unreliable for critical tasks that require high accuracy. According to Google CEO Sundar Pichai, hallucinations are an “inherent feature” of gen AI. This means that model developers can only hope to mitigate the potential harm of hallucinations; they cannot eliminate them.
Non-deterministic outputs
Gen AI is inherently non-deterministic. It is a probabilistic engine built on billions of tokens, with outputs formed and re-formed through real-time probability calculations. This non-deterministic nature means that AI’s responses can vary widely, posing challenges for software development, testing, scientific analysis or any field where consistency is crucial. For example, asking AI for the best way to test a specific feature of a mobile app will likely yield a good response. However, there is no guarantee it will return the same answer if you enter the same prompt again, which creates problematic variability.
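To make that concrete, here is a minimal sketch of why sampling-based decoding varies from run to run. The distribution and tokens are invented for illustration; no real model is involved.

```python
import random

# Toy next-token distribution a model might assign after a prompt like
# "The best way to test this feature is ..." (numbers invented for illustration).
next_token_probs = {
    "unit": 0.40,
    "integration": 0.30,
    "manual": 0.20,
    "exploratory": 0.10,
}

def sample_token(probs: dict) -> str:
    # Pick one token in proportion to its probability, the way
    # temperature-based decoding does at every step of a response.
    return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Identical "prompt", identical weights, yet the output varies across runs:
for run in range(5):
    print(f"run {run}: {sample_token(next_token_probs)}")
```

Commercial APIs expose a temperature parameter to dampen this variability, but at any non-zero temperature (and, in practice, often even at zero, thanks to floating-point and batching effects) repeated runs are not guaranteed to match.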
Token subsidies
Tokens are a poorly understood piece of the AI puzzle. In short: Every time you prompt an LLM, your query is broken up into “tokens,” which are the seeds for the response you get back (also made of tokens), and you are charged a fraction of a cent for each token in both the request and the response.
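As a rough illustration, here is a small sketch using OpenAI’s open-source tiktoken tokenizer. The per-token price below is a placeholder I made up; real rates vary by model and change over time.

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by GPT-4-class models

prompt = "Explain loss leader pricing in one sentence."
token_ids = enc.encode(prompt)  # the prompt as a list of integer token IDs

print(token_ids)
print(len(token_ids), "input tokens")

# Placeholder rate for illustration only, i.e. $5 per million input tokens.
PRICE_PER_INPUT_TOKEN = 5e-06
print(f"estimated input cost: ${len(token_ids) * PRICE_PER_INPUT_TOKEN:.6f}")
```

The response you get back is billed the same way, typically at a higher per-token rate for output than for input.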
A significant portion of the hundreds of billions of dollars invested in the gen AI ecosystem goes directly toward keeping these costs down to drive adoption. For example, ChatGPT generates about $400,000 in revenue every day, but operating the system requires an additional $700,000 per day in investment subsidy to keep it running. In economics this is called “loss leader pricing.” Remember how cheap Uber was in its early years? Have you noticed that, now that it is ubiquitous, a ride costs just as much as a taxi? Apply the same principle to the AI race among Google, OpenAI, Microsoft and Elon Musk, and you and I should worry about what happens when they decide to start turning a profit.
What is working
I recently wrote a script to pull data out of our CI/CD pipeline and upload it to a data lake. With ChatGPT’s help, what would have taken my rusty Python skills eight to ten hours took less than two: an 80% productivity boost. As long as I do not require the answers to be the same every single time, and as long as I double-check its output, ChatGPT is a trusted partner in my daily work.
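For context, the script was along these lines. This is a hedged reconstruction, not the actual code; the API endpoint, bucket and field names are hypothetical placeholders.

```python
import json
import urllib.request

import boto3  # AWS SDK for Python; pip install boto3

# Hypothetical placeholders; the real pipeline API and data lake differ.
CI_API_URL = "https://ci.example.com/api/v1/pipelines/runs?limit=100"
DATA_LAKE_BUCKET = "example-data-lake"

def fetch_pipeline_runs(url: str) -> list:
    # Pull recent CI/CD run records as JSON from the pipeline's API.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)["runs"]

def upload_to_data_lake(runs: list) -> None:
    # Land the raw records in S3, where downstream jobs can pick them up.
    s3 = boto3.client("s3")
    s3.put_object(
        Bucket=DATA_LAKE_BUCKET,
        Key="ci/runs/latest.json",
        Body=json.dumps(runs).encode("utf-8"),
    )

if __name__ == "__main__":
    upload_to_data_lake(fetch_pipeline_runs(CI_API_URL))
```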
Gen AI is extremely good at helping me brainstorm, giving me a tutorial or jumpstart on learning an ultra-specific topic and producing the first draft of a difficult email. It will probably improve marginally in all these things, and act as an extension of my capabilities in the years to come. That is good enough for me and justifies a lot of the work that has gone into producing the model.
Conclusion
While gen AI can help with a limited number of tasks, it does not merit a multi-trillion-dollar re-evaluation of the nature of humanity. The companies that have leveraged AI the best are the ones that naturally deal with gray areas — think Grammarly or JetBrains. These products have been extremely useful because they operate in a world where someone will naturally cross-check the answers, or where there are naturally multiple pathways to the solution.
I believe we have already invested far more in LLMs, in terms of time, money, human effort, energy and breathless anticipation, than we will ever see in return. Blame the rot economy and its growth-at-all-costs mindset for the fact that we cannot simply keep gen AI in its place as a rather brilliant tool that boosts our productivity by 30%. In a just world, that would be more than good enough to build a market around.
Marcus Merrell is a principal technical advisor at Sauce Labs.