19 Apr 2023
by Pete Rai

Can Machines Think Like People (Guest blog by Cisco)

Guest blog by Pete Rai, Principal Engineer in the Emerging Technologies and Incubation group at Cisco #AIWeek2023

2023 will bring more innovations in the area of Artificial Intelligence. Many of these will excite and delight us, but others may give rise to concerns and even fears. This year we will witness computers and robots performing actions which may cause us to ask ourselves: “What does it even mean to be human then?”

Recently, I created a video discussing the question “Can Machines Think Like People?” That is, can they become capable of creativity, intuition, emotion and original thought? This question, which for so long was an abstract thought experiment, will soon become one of the most pressing issues of our time. The video covers the history of thought and analysis on this topic, from historical perspectives such as Aristotle and Descartes through to modern thinkers like Turing and Penrose, and discusses the major arguments both for and against.

This blog post is a short summary of that presentation.

The Turing Paper and the Dartmouth Conference

Alan Turing was the first person to seriously explore the question “Can machines think?”, in his seminal 1950 paper “Computing Machinery and Intelligence”. In that paper he proposed a test of machine intelligence called The Imitation Game, which later became known as The Turing Test. Turing was clear that “thinking machines” is not a contradiction. A few years later, at the now legendary Dartmouth Conference, the term “Artificial Intelligence” was born. There, on day one, the attendees put forward a bold agenda: “Every aspect of learning, or any other feature of intelligence, can in principle be so precisely described that a machine can be made to simulate it”.

Limitations of Human Intelligence and Cognition

It’s natural for humans to think that the intelligence we possess is the highest form there can be. We do, after all, sit at the top of the intelligence pyramid of the animal world we are part of. But, as technology advances, we need to consider whether that pyramid may extend above us: that our cognitive ability might be limited by our brain’s construction and capacity, and that machines might therefore operate at an intelligence level above that of their creators. In the presentation, I discuss the thoughts on this of great thinkers like Chomsky, Nagel and Feynman.

The Mind and the Body

René Descartes famously drew a distinction between the mind and the body. Your body contains your brain; that brain contains your mind; that mind contains your self. The exact relationship between these is not understood and, in the video, we examine centuries of study into the workings of the human brain. One view holds that the mind “runs on” the brain in the same way software “runs on” hardware. So, does that mean that your mind is software?

Cognitivism vs. Anti-Formalism

Cognitivism is a philosophical position that asserts that intelligence and thinking can arise naturally from a mass of facts or knowledge primitives. This approach has been influential in philosophy, with proponents such as Hobbes, Leibniz, and Wittgenstein. However, anti-formalist views, such as those of Heidegger, argue that experience is all there is, and that intelligence cannot be reduced to a set of rules or symbols. This age-old argument is now playing out in a very real sense in the latest innovations of generative AI.

The Lucas/Penrose Argument

The Lucas/Penrose argument, which builds on the earlier work of mathematician Kurt Gödel, asserts that machines can never become conscious because they are subject to the incompleteness of formal systems. Gödel proved that no sufficiently powerful formal system can be both complete and consistent: no such system can fully describe itself. Lucas and Penrose use this to argue that consciousness requires something beyond a formal system and that, hence, it cannot emerge as a property of a deterministic, digital system.

Tests of Machine Intelligence

The Turing Test is a classic test of machine intelligence in which a human judge interacts with both a machine and a human, without knowing which is which. If the judge cannot reliably distinguish between the two, the machine is said to have passed the test. But the test shifts the burden of judgement onto the observer, so it can only assess “the appearance of” intelligence. The Chinese Room argument, proposed by philosopher John Searle, challenges the validity of the Turing Test by arguing that a machine could pass the test without truly “understanding” what it is doing.

Conclusion

The question of whether machines can think like people has been debated for centuries, and we are only now beginning to scratch the surface of what is possible with AI. As we continue to develop thinking agents and explore the limits of human models of intelligence and cognition, we must also grapple with the philosophical debates surrounding machine intelligence. The future of AI is both exciting and uncertain, but we can be certain that it will continue to challenge our understanding of what it means to be human.



Authors

Pete Rai

Cisco