By Abdulbaqi Sulaemon Olawale
Artificial intelligence is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.
Alan Turing was the first person to conduct substantial research in the field that he called machine intelligence. Artificial intelligence was founded as an academic discipline in 1956 by those now considered the founding fathers of AI: John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The field has gone through multiple cycles of optimism, followed by periods of disappointment and loss of funding. Since their development in the 1940s, digital computers have been programmed to carry out very complex tasks such as discovering proofs for mathematical theorems or playing chess with great proficiency. Despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match full human flexibility over wider domains or in tasks requiring much everyday knowledge.
The growing use of artificial intelligence in the 21st century is influencing a societal and economic shift towards increased automation, data-driven decision making, and the integration of AI systems into various economic sectors and areas of life, impacting job markets, healthcare, government, industry, education, propaganda, and disinformation. This raises questions about the long-term effects, ethical implications, and risks of AI, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
There are many advantages of artificial intelligence, one of which is reasoning and problem-solving. Early researchers developed algorithms that imitated the step-by-step reasoning humans use when they solve puzzles or make logical deductions. By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics. Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow. Even humans rarely use the step-by-step deduction that early AI research could model; they solve most of their problems using fast, intuitive judgments. Accurate and efficient reasoning remains an unsolved problem.
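To make the idea of a combinatorial explosion concrete, here is a minimal Python sketch (an illustration added for this article, not drawn from any particular AI system; the function name count_sequences and the branching factor of 35 are assumptions chosen for the example). It counts how many move sequences a brute-force, step-by-step search would have to examine, which grows exponentially with how far ahead the search looks.

```python
# Illustrative sketch: why exhaustive, step-by-step search blows up.
# The number of move sequences to examine grows exponentially with depth.

def count_sequences(branching_factor: int, depth: int) -> int:
    """Count the move sequences a brute-force search would enumerate."""
    if depth == 0:
        return 1  # one (empty) sequence at a leaf
    # Each of the `branching_factor` moves leads to a subtree of the same shape.
    return branching_factor * count_sequences(branching_factor, depth - 1)

if __name__ == "__main__":
    # With roughly 35 legal moves per chess position (a commonly cited average),
    # looking even a few moves ahead becomes astronomically expensive.
    for depth in (1, 2, 4, 6):
        print(f"depth {depth}: {count_sequences(35, depth):,} sequences")
```

With a branching factor of 35, the count already exceeds 1.8 billion sequences at depth 6, which is why practical systems rely on heuristics and probabilistic methods rather than exhaustive deduction.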
Learning is another advantage of artificial intelligence, and there are a number of different forms of learning as applied to AI. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that, the next time the computer encountered the same position, it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless the program was previously presented with jumped, whereas a program that is able to generalize can learn the "add -ed" rule for regular verbs ending in a consonant and so form the past tense of jump on the basis of experience with similar verbs.
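The contrast between rote learning and generalization can be sketched in a few lines of Python. This is a hypothetical illustration (the class names RoteLearner and GeneralizingLearner are invented for this example): one learner only recalls verb forms it has memorized, while the other applies the "add -ed" rule and so handles verbs it has never seen.

```python
from typing import Optional

class RoteLearner:
    """Memorizes individual (verb, past-tense) pairs; cannot handle unseen verbs."""
    def __init__(self):
        self.memory = {}

    def learn(self, verb: str, past: str) -> None:
        self.memory[verb] = past

    def past_tense(self, verb: str) -> Optional[str]:
        return self.memory.get(verb)  # None if the verb was never presented


class GeneralizingLearner:
    """Applies the learned 'add -ed' rule for regular verbs ending in a consonant."""
    def past_tense(self, verb: str) -> str:
        return verb + "ed"


if __name__ == "__main__":
    rote = RoteLearner()
    rote.learn("walk", "walked")
    print(rote.past_tense("jump"))                    # None: never memorized
    print(GeneralizingLearner().past_tense("jump"))   # 'jumped': rule generalizes
```

The rote learner fails on jump because it has only ever stored walked, whereas the generalizing learner produces jumped from the rule alone, which is exactly the distinction the example above describes.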
Some high-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); interacting via human speech (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT, Apple Intelligence, and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go).
However, many AI applications are not perceived as AI: a great deal of cutting-edge AI has filtered into general applications, often without being called AI, because once something becomes useful enough and common enough, it is no longer labeled AI.