
Noam Chomsky, the revered and reviled genius once famously described as “the most important intellectual alive”, turns 95 today. He is a monumental figure in modern linguistics, and only a slightly lesser deity in psychology, philosophy and political activism.

His work establishing cognitive science as a discipline is so fundamental to the rise of AI that it’s rarely acknowledged anymore.

Amid the ongoing alarm that language-simulating machines could become a net negative for humanity, have we wandered too far from Chomsky’s vision of a science of the human mind?

The root of Chomsky’s fame

Chomsky burst onto the academic scene in 1957 with Syntactic Structures, a highly technical linguistics monograph that revolutionised the study of language.

His real stardom, however, came in 1959 with his legendary review of B. F. Skinner’s Verbal Behaviour. Skinner, a psychologist and behaviourist, was enjoying the limelight in psychology circles with his theory of “operant conditioning”.

It explains how reinforcement and punishment can be used to create associations in people’s minds, which then encourage certain behaviours. For instance, a gold star awarded by a teacher for good behaviour will encourage more of it from students.
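To see the mechanism in miniature, here is a toy sketch in Python (a purely illustrative simulation of reinforcement, not Skinner's actual model) in which one behaviour is rewarded and gradually comes to dominate:

```python
import random

# Toy operant-conditioning loop (illustrative only): two classroom behaviours,
# one of which earns a "gold star". Each reward strengthens the tendency to
# repeat the rewarded behaviour; the numbers are arbitrary, not a real model.
tendencies = {"quiet_work": 1.0, "calling_out": 1.0}

for trial in range(200):
    # Choose a behaviour with probability proportional to its current tendency.
    behaviour = random.choices(list(tendencies), weights=list(tendencies.values()))[0]
    if behaviour == "quiet_work":
        tendencies[behaviour] += 0.5                                   # reinforced
    else:
        tendencies[behaviour] = max(0.1, tendencies[behaviour] - 0.1)  # not rewarded

print(tendencies)  # "quiet_work" ends up far more likely than "calling_out"
```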

In Verbal Behaviour, Skinner tried to expand this idea into linguistics by breaking language down into components that are supposedly acquired via operant conditioning.

Chomsky completely disagreed. He tore Skinner’s theories apart, showing language couldn’t possibly be understood in this way.

For one thing, he pointed out, children don’t get enough exposure to language to learn every possible sentence. For another, language is creative: we frequently utter sentences that have never been heard before, meaning they can’t have been acquired through a simple process of reward and punishment.






The cognitive revolution and AI

Chomsky’s scathing review did more than shut behaviourism out of linguistics.

He showed how useful it could be, in language-related areas of anthropology, psychology and neuroscience, to examine the mind rather than just behaviour. This helped set the cognitive revolution in motion, eventually giving rise to the field of cognitive science.

A core idea Chomsky pioneered, together with other cognitive scientists, is that human cognition (thinking, memory, learning, language, perception and decision-making) can be understood in terms of computational processes. While there were already various theories to explain different aspects of cognition, none offered the seductive framework of the computer metaphor: our brains are the hardware, cognition is the software, and our thoughts and feelings are the outputs.

Chomsky’s approach is a thread that has connected generations of AI researchers, arguably beginning with his MIT colleague and AI pioneer Marvin Minsky – one of the organisers of the 1956 Dartmouth research workshop that kicked off AI research.

In those early days of AI, Chomsky’s theories about language paved the way to expand Alan Turing’s ideas about machine intelligence into language processing.

 

Chomsky was awarded the 2011 US Peace Prize for his antiwar campaigning. (Photo: Mick Tsikas/AAP)

Generative and deep

Specifically, two key concepts popularised by Chomsky are still embedded in AI today.

The first is “generative grammar”. This is the idea that there is a specific set of rules that determines what makes a sentence grammatically correct (or incorrect) in any given language.

The second idea is that of “deep structure”. Chomsky said linguists were paying too much attention to the traditional grammar, or “surface structure”, of particular languages. This refers to the various components (such as words, syllables and phrases) that comprise a spoken sentence.

Instead, Chomsky wanted to work out the “deep structure” of all language, of which we are largely unconscious. This deep structure is what determines the semantic component of a sentence – that is, its underlying meaning.
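As a rough illustration of both ideas (a toy Python sketch of our own, nothing like Chomsky's actual formalism), a handful of rewrite rules can generate grammatical sentences, and the same underlying relation can surface in more than one form:

```python
import random

# A toy "generative grammar": a few rewrite rules that license grammatical
# sentences. Purely illustrative; real grammars are vastly more complex.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["the", "N", "that", "VP"]],  # recursion: unbounded sentences
    "VP": [["V", "NP"]],
    "N":  [["teacher"], ["student"], ["idea"]],
    "V":  [["praises"], ["questions"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol until only words remain."""
    if symbol not in GRAMMAR:
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))  # e.g. "the teacher that questions the idea praises the student"

# Gesturing at "deep structure": the same underlying relation,
# praise(teacher, student), surfaces as two different sentences.
agent, patient = "teacher", "student"
print(f"the {agent} praises the {patient}")        # active surface form
print(f"the {patient} is praised by the {agent}")  # passive surface form
```

Even this trivial grammar can, through its recursive rule, produce sentences nobody has ever uttered: the creative, rule-governed property Chomsky argued reward and punishment alone could never explain.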

It’s not hard to see how Chomsky’s ideas of generative grammar and deep structure jibe with today’s generative AI and deep learning.

Chomsky set the basic challenge for this entire effort: work out the deep rules that generate language. Without this, experts couldn’t have delved so deeply into neural networks. They wouldn’t have understood language well enough to even begin.

Chomsky’s thoughts on AI

More than six decades later, models such as ChatGPT have caught up with Chomsky.

While some linguists believe the success of large language models (LLMs) invalidates Chomsky’s approach to language, he argues the models simply imitate rather than truly “learn”. According to Chomsky, the knowledge of the deep rules of language they contain is a statistical mess, not a meaningful analysis.

In a New York Times guest essay with Ian Roberts and Jeffrey Watumull titled The False Promise of ChatGPT, Chomsky says it is “comic and tragic” that “so much money and attention should be concentrated on so little a thing”.

His main complaint is that such systems are a dead end in the search for true artificial general intelligence (AGI). Rather, he views them as a souped-up autocomplete – useful for creating computer code or cheating on essays, but not much else.

He worries their popularity will delay the exploration of other AI architectures that don’t rely on the brute-force statistical crunching of data. Above all, he doesn’t believe neural networks (the basis of much of today’s AI) are the correct architecture for replicating human intelligence.

Despite being unimpressed by ChatGPT, Chomsky does see potential for AI to play the monster in a grim future. In the essay, he wrote ChatGPT’s responses can exude “the banality of evil: plagiarism and apathy and obviation”.

Still, he seems to regard AI as a secondary worry compared with climate change.






Commercial AI: the revenge of behaviourists?

There’s an important difference between Chomsky’s ethical and optimistic work in cognitive science and what’s currently going on in the AI industry.

Advances in modelling cognition are no longer happening mainly at universities. Instead, huge firms such as Google, Microsoft and OpenAI are hoarding resources.

Some researchers are now turning to AI models for clues about human thinking. If you agree with Chomsky and others, this is unlikely to yield much insight. But that's not the point of these models, is it?

Their purpose is to make money. Users prompt them with a stimulus and get a response. If it’s useful, they’ll prompt again. Over time, the model will learn which stimulus and response patterns work, and will use this knowledge to become more addictive and influential – reinforcing our use of them and potentially even changing our behaviours.

Stimulus, response, reinforcement and behaviour. Sound familiar?

Chomsky fought hard to keep behaviourism out of linguistics and contributed greatly to our understanding of how language may be linked with processes in the mind. Ironically, it seems these contributions have driven us into the perfect arena for behaviourist experimentation facilitated by AI.
