The skeptical case on generative AI


Even by the breathless standards of previous rounds of tech hype, generative AI enthusiasts hyperventilated hard.

Trillion-dollar companies including Alphabet and Microsoft are declaring AI to be the new electricity or fire and are redesigning their entire businesses around it. Never ones to miss out, venture capital investors have also been pumping money into the industry. Fifty of the most promising generative AI startups, identified by CB Insights, have raised more than $19 billion in funding since 2019. Of those, 11 now count as unicorns with valuations over $1 billion.

Even the sober bosses at McKinsey estimate the technology could add between $2.6 trillion and $4.4 trillion in economic value annually across the 63 use cases they analyzed, ranging from banking to life sciences. In other words, in very rough terms, generative AI could create a new British economy every year (the country's gross domestic product was $3.1 trillion in 2021).

But what if they are wrong? In a series of provocative posts, the technologist Gary Marcus explores the possibility that we could see a massive and shocking correction in valuations as investors realize that generative AI doesn't work very well and lacks killer business applications. "The revenue is not there yet and may never come," he writes.

Marcus, co-founder of the Center for the Advancement of Trustworthy AI, who testified before the US Congress this year, has long been skeptical of the intelligence of the neural network models that preceded the latest chatbots, such as OpenAI's ChatGPT. But he also raises some new truths about generative AI. Take the unreliability of the models themselves. As is now clear to millions of users, one of the biggest drawbacks of the technology is that it hallucinates, or confabulates, facts.

In his previous book, Rebooting AI, Marcus gives a clear example of how this can happen. These AI models work like probabilistic machines, predicting answers from data patterns rather than exhibiting reasoning. A native French speaker would instinctively understand "Je mange un avocat pour le déjeuner" to mean "I eat an avocado for lunch." But, in its earliest iterations, Google Translate rendered it as "I eat a lawyer for lunch." In French, the word avocat means both avocado and lawyer. Google Translate had chosen the statistically most probable translation, rather than the sensible one.
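The mistranslation can be pictured as a purely frequency-driven choice. A minimal sketch (the word counts below are invented for illustration): the system picks whichever sense of an ambiguous word was most common in its training data, with no regard for the surrounding context.

```python
# Toy illustration of a purely statistical word-sense choice.
# The counts are invented: suppose "avocat" appeared far more often
# meaning "lawyer" than "avocado" in the training corpus.
sense_counts = {"lawyer": 9000, "avocado": 1200}

def translate_avocat(counts):
    # Choose the statistically most probable sense, ignoring the
    # context "Je mange un ... pour le déjeuner" (I eat ... for lunch).
    return max(counts, key=counts.get)

print(translate_avocat(sense_counts))  # -> lawyer, not the sensible "avocado"
```

A context-aware system would instead condition the choice on the neighboring words, which is roughly what the later, improved versions of the service do.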

Tech companies say they are reducing errors by improving their systems' contextual understanding (Google Translate now accurately translates that French sentence). But Marcus argues that hallucinations will remain a feature, rather than a bug, of generative AI models, unfixable with their current methodology. "There is a fantasy that if you add more data it will work. But you can't squash the problem with data," he tells me.

For some users, this inherent unreliability is a deal breaker. Craig Martell, chief AI officer of the US Department of Defense, said last week that he would demand "five 9s" [99.999 per cent] of accuracy before deploying an AI system. "I can't have it hallucinate and say, 'Oh yeah, put widget A into widget B' — and it blows up," he said. Many generative AI systems placed too much cognitive load on the user to determine what was right or wrong, he added.

Even more troubling is the idea that content produced by generative AI is polluting the datasets on which future systems will be trained, threatening what some have called "model collapse." By adding more imperfect information and deliberate disinformation to our knowledge base, generative AI systems are producing a further "enshittification" of the internet, to use Cory Doctorow's evocative term. This means that training sets will produce more nonsense, not less.
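The model-collapse worry can be illustrated with a toy simulation (the tokens and counts are invented): if each "generation" of a model is trained only on samples drawn from the previous generation's output, rare information tends to be sampled away and disappear from the distribution.

```python
import random

random.seed(0)  # make the toy run repeatable

def train_on(samples):
    # "Training" here is just re-estimating token frequencies
    # from the previous generation's output.
    freqs = {}
    for s in samples:
        freqs[s] = freqs.get(s, 0) + 1
    return freqs

def generate(freqs, n):
    # Sample n tokens in proportion to their estimated frequencies.
    tokens = list(freqs)
    weights = [freqs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=n)

# Invented starting distribution: one common fact, one rare fact.
model = {"common fact": 90, "rare fact": 10}
for _ in range(20):
    model = train_on(generate(model, 100))

print(model)  # over many generations, rare facts tend to vanish
```

Real training pipelines are vastly more complex, but the mechanism — estimates drifting toward the head of the distribution when a model feeds on its own output — is the one the term describes.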

Undeterred, investors typically make three arguments about how to make money with generative AI. First, even with its imperfections, it can still be a valuable productivity tool, industrializing efficiency. And there are many uses, ranging from copywriting to call-center operations, where an accuracy level of "two 9s" [99 per cent] is fine.

Second, investors are betting that some companies can apply generative AI models to solve real-world problems. "The latest advances in artificial intelligence enable real-time data analysis," says Zuzanna Stamirowska, managing director of the French start-up Pathway, helping to optimize maritime trade or the performance of aircraft engines, for example. "We really focus on business use cases," she says.

Third, generative AI models will enable the creation of new services and business models hitherto unimaginable. During the mass electrification of the economy in the late 19th century, businesses profited from the generation and distribution of electricity. But big fortunes were made later by using electricity to transform ways of making things, like steel, or by inventing entirely new products and services, including household appliances.

For now, it is only cloud computing vendors and chip makers that are really making money in the generative AI boom. No doubt Marcus will also be right that much of the corporate money invested in the technology will go to waste and most startups will fail. But who knows what new things will be invented and which will last? That's why God invented bubbles.
