Are Machines Capable of Thinking?
In the first half of the twentieth century, science fiction introduced the concept of artificially intelligent robots to the world. It began with the Wizard of Oz’s “heartless” Tin Man and continued with the humanoid robot that posed as Maria in Metropolis. By the 1950s, a generation of scientists, mathematicians, and philosophers had culturally internalised the concept of artificial intelligence (or AI). One such person was Alan Turing, a young British polymath who investigated the mathematical possibilities of artificial intelligence. Turing argued that humans solve problems and make decisions by combining available information with reasoning, so why couldn’t machines do the same? This was the logical basis for his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.
Facilitating the Pursuit
Regrettably, talk is cheap. What prevented Turing from getting to work immediately? First, computers needed to undergo a fundamental transformation. Prior to 1949, computers lacked a critical prerequisite for intelligence: they could not store commands, only execute them. In other words, computers could be told what to do, but they could not remember what they had done. Second, computation was prohibitively expensive. In the early 1950s, leasing a computer could cost up to $200,000 per month. Only prominent universities and large technology companies could afford to venture into these unexplored waters. A proof of concept, along with advocacy from high-profile individuals, was needed to convince funding sources that machine intelligence was worth pursuing.
The Inaugural Conference
Five years later, Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist provided that proof of concept. The Logic Theorist was a programme, funded by the Research and Development (RAND) Corporation, designed to replicate human problem-solving abilities. It is widely regarded as the first artificial intelligence programme and was presented in 1956 at John McCarthy and Marvin Minsky’s Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). McCarthy, envisioning a great collaborative effort, convened this historic conference, bringing together leading experts from diverse fields for an open-ended discussion of artificial intelligence, a term he coined at the event. Regrettably, the conference fell short of McCarthy’s expectations: participants came and went as they pleased, and no agreement was reached on standard methodologies for the field. Despite this, attendees were unanimous in their belief that AI was achievable. The significance of this event cannot be overstated, as it sparked the next two decades of AI research.
Success and Setbacks on a Roller Coaster
Between 1957 and 1974, artificial intelligence flourished. Computers gained the ability to store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and researchers became more adept at choosing the right method for a given problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise for problem solving and spoken language interpretation, respectively.
These accomplishments, together with the advocacy of prominent researchers (namely, the DSRPAI attendees), persuaded government bodies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at multiple institutions.