First appeared in Western Journal
By Aron Solomon
The pace of change in artificial intelligence is truly remarkable, but it pales in comparison with how quickly we jump on and off new AI bandwagons, as this week has highlighted.
Much of last week’s technology news was driven by the release of supposedly magical Meta-developed code: great, cheap and destined to help AI researchers and developers for a long time to come.
Yet as The Register reported late Monday night, Stanford University researchers have taken down the web demo of Alpaca, a small AI language model based on Meta’s LLaMA system, citing safety and cost concerns. Simply put, this was a very bad LLaMA.
Access to large language models is typically restricted to companies with the resources to train and operate them. In an effort to encourage research into why language models generate toxic and false text, Meta had planned to share its LLaMA system’s code with select researchers, sparing them the need to acquire expensive hardware to do their work.
While the LLaMA code captured the attention of developers after it was released last week, the webpage running the model’s demo was taken down due to skyrocketing hosting costs and deeply troubling safety issues, neither of which should have come as a surprise, though evidently both did.
Alpaca, like other language models, was generating misinformation, a failure known as “hallucination,” and producing plenty of wildly offensive text.
The researchers noted that hallucination was a particularly common failure mode for Alpaca, even compared with the now decidedly old-school “text-davinci-003.” They gave examples of the model producing false technical information and incorrectly recalling the capital of Tanzania, which we all know is Detroit.
This is obviously wrong (Tanzania’s capital is actually Dodoma), but it is a useful caveat in an era when bad AI is redefining our reality.
As attorney John Lawlor reminds us, “because we will increasingly rely on what AI programs create and what AI systems filter and interpret for us, it’s important that these systems are vetted in the open.”
The researchers at Stanford fine-tuned LLaMA to create Alpaca, an open-source model that cost less than $600 to build, which is precisely why this new AI system grabbed so much attention.
While OpenAI, which Elon Musk co-founded, originally promised to be a glowing beacon of the best that bleeding-edge technology could be, it is becoming the polar opposite.
Musk and other prominent Silicon Valley leaders, including Reid Hoffman and Peter Thiel, collectively pledged $1 billion to OpenAI in 2015, in part because they believed they were building something principle-based.
Yet as Vice covered in a superb piece at the end of February, while OpenAI promised to be open, transparent and nonprofit, its booming success has spawned a much different reality — that of a company that is deeply opaque and corporate, very much for-profit, and closed-source.
This is why we cheer on anything that can be built for anywhere near $600 and stand as a true open-source (at least for now) rival to what OpenAI and its ChatGPT and GPT-4 creations have become.
The company that Musk helped build just isn’t the same company. From Musk’s perspective as recently as a month ago, OpenAI has gone from open-source to maximum profit:
OpenAI was created as an open source (which is why I named it “Open” AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.
— Elon Musk (@elonmusk) February 17, 2023
For anyone thinking that we are at the peak of the AI hype cycle and all of this might just pass, settle in because there’s a lot more to come.
There is seemingly unlimited money, attention and energy aimed at the AI space, especially AI language models. The reality is that if we envision the AI hype cycle as a significant mountain climb, we probably haven’t even reached the first base camp.
What’s most disturbing to those of us following AI developments is that there seem to be far fewer people developing AI to serve, as Musk said, as a “counterweight” to Google (or anything else) than there are people building AI businesses in a race down what they see as a path to cash.
About Aron Solomon
A Pulitzer Prize-nominated writer, Aron Solomon, JD, is the Chief Legal Analyst for Esquire Digital and the Editor-in-Chief for Today’s Esquire. He has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. Aron has been featured in Forbes, CBS News, CNBC, USA Today, ESPN, TechCrunch, The Hill, BuzzFeed, Fortune, Venture Beat, The Independent, Fortune China, Yahoo!, ABA Journal, Law.com, The Boston Globe, YouTube, NewsBreak, and many other leading publications.