Adrian Wooldridge: An MIT verdict on artificial intelligence

A worthy article to read + insights I found compelling:

+ In an important new book, Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity, the MIT economists Daron Acemoglu and Simon Johnson survey a millennium of technological history to gauge the likely impact of AI.

+ The answer they come to is not cheerful — though they’ve reached that conclusion by way of an irritating Brahmin populism.

+ Throughout history, powerful elites have seized control of new technologies and used them to enrich themselves and extend control over their subordinates.

+ The authors concede that technological progress is often the work of challengers to the status quo.

+ Technologies and their makers were eventually co-opted by the ruling class.

+ Countervailing forces can come along and redirect technology from elite enrichment to creating shared gains.

+ In Acemoglu and Johnson’s view, the digital revolution has already been hijacked by self-seeking elites.

+ The book proposes an interesting set of policies to produce a better version of the future: Provide government subsidies to develop more socially beneficial technologies; refuse to give patents to technologies that are aimed at worker or citizen surveillance; eliminate tax incentives to replace labor with machines; break up the big tech companies that enjoy market shares not seen since the days of the American industrialists John D. Rockefeller and Andrew Carnegie; repeal Section 230 of the 1996 Communications Decency Act, which protects internet platforms against legal action or regulation over the content they host; and impose a digital advertising tax.

+ The authors rightly worry about the way the Chinese government is using the digital revolution to monitor and repress its people.

+ But what about India? Thanks largely to Nandan Nilekani, a tech billionaire and chairman of Infosys Ltd., India has introduced the world’s biggest biometric ID system that provides 1.3 billion Indians with a digital identity.

+ The internet is now used as much to tempt us to buy stuff we don’t need as it is to democratize information. The emancipatory power of AI will surely be limited and distorted in the same way.

+ There is nothing inevitable about the direction of technology. Powerful people can direct it towards narrow interests rather than the common good. Clear-sighted coalitions of the concerned can lead it in more enlightened ways. Time may be running short given the pace of AI's advance, but there is still time to save ourselves from digital slavery.

Read the full article here.

Kevin Dugan: Congress isn’t ready for the AI revolution

+ The first mention of artificial intelligence in the Congressional Record dates back to 1964, when Senator Hubert Humphrey marveled at machines “that read, that remember, that improve their performance.”

+ (AI) is not the kind of predicament that can be solved by investing in new job training — that old Washington solution to the slow death of the manufacturing sector and the spread of global free trade.

+ We are not just consumers of AI — we are its competitors. And by the time the government realizes this, it may already be too late.

Full article here.

Quanta Magazine: Chatbots don’t know what stuff isn’t

Worth a read + insights I found compelling:

+ While chatbots have become more humanlike in their performance, they still have trouble with negation. They know what it means if a bird can't fly, but they collapse when confronted with more complicated logic involving words like "not" — logic that is trivial for a human.

+ It’s hard to coax a computer into reading and writing like a human. Machines excel at storing lots of data and blasting through complex calculations, so developers build LLMs as neural networks: statistical models that assess how objects (words, in this case) relate to one another.

+ Each linguistic relationship carries some weight, and that weight — fine-tuned during training — codifies the relationship’s strength.

+ For example, “rat” relates more to “rodent” than “pizza,” even if some rats have been known to enjoy a good slice.
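The weighted-relationship idea above can be sketched with a toy similarity measure. The four-dimensional vectors below are invented for illustration (real models learn embeddings with hundreds or thousands of dimensions during training); cosine similarity is one standard way to score how strongly two word vectors relate.

```python
import math

# Hypothetical, hand-made embeddings for illustration only.
embeddings = {
    "rat":    [0.9, 0.8, 0.1, 0.0],
    "rodent": [0.8, 0.9, 0.2, 0.1],
    "pizza":  [0.1, 0.0, 0.9, 0.8],
}

def cosine(a, b):
    """Cosine similarity: closer to 1.0 means a stronger relationship."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "rat" scores much closer to "rodent" than to "pizza".
print(cosine(embeddings["rat"], embeddings["rodent"]))
print(cosine(embeddings["rat"], embeddings["pizza"]))
```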

+ In the same way that your smartphone’s keyboard learns that you follow “good” with “morning,” LLMs sequentially predict the next word in a block of text.
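The keyboard analogy can be made concrete with a minimal bigram model: count which word follows which in a toy corpus, then predict the most frequent continuation. An LLM does something statistically far richer, over a vastly larger corpus and a much longer context, but the core task — predict the next token from what came before — is the same.

```python
from collections import Counter, defaultdict

# Tiny illustrative corpus; real training data is billions of words.
corpus = "good morning . good morning . good night . good morning".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("good"))  # "morning" — seen 3 times vs. "night" once
```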

+ The bigger the data set used to train them, the better the predictions, and as the amount of data used to train the models has increased enormously, dozens of emergent behaviors have bubbled up.

+ Unlike humans, LLMs process language by turning it into math. This helps them excel at generating text — by predicting likely combinations of text — but it comes at a cost.

+ “The problem is that the task of prediction is not equivalent to the task of understanding,” said Allyson Ettinger, a computational linguist at the University of Chicago.

+ Negations like “not,” “never” and “none” are known as stop words, which are functional rather than descriptive.

+ So why can’t LLMs just learn what stop words mean? Ultimately, because “meaning” is something orthogonal to how these models work.

+ Models learn “meaning” from mathematical weights: “Rose” appears often with “flower,” “red” with “smell.” And it’s impossible to learn what “not” is this way.
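A rough illustration of why function words carry so little signal in frequency-based representations: classic text pipelines often discard stop words before modeling, and negations like "not" sit on standard stop-word lists. LLMs do not literally delete stop words, but this sketch (with a tiny, made-up stop-word list) shows how two sentences with opposite meanings can collapse to the same content words.

```python
# Tiny illustrative stop-word list; real NLP toolkits ship much longer ones.
STOP_WORDS = {"the", "a", "is", "was", "not"}

def content_words(sentence):
    """Keep only the descriptive words, dropping functional stop words."""
    return [w for w in sentence.lower().split() if w not in STOP_WORDS]

# Opposite meanings, identical content words once "not" is dropped.
print(content_words("The movie was good"))      # ['movie', 'good']
print(content_words("The movie was not good"))  # ['movie', 'good']
```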

+ When children learn language, they’re not attempting to predict words, they’re just mapping words to concepts. They’re “making judgments like ‘is this true’ or ‘is this not true’ about the world,” Ettinger said.

Full article here.