What is Artificial Intelligence?
“Any fool can know. The point is to understand.” — Albert Einstein
John McCarthy, who coined the term Artificial Intelligence in 1955, defined it as “the science and engineering of making intelligent machines.”
Currently, the term “artificial intelligence” refers to a field of computer science that creates systems capable of collecting data, making decisions, and solving problems — tasks that typically require human intelligence.
But how do we approach something as broad as intelligence?
John McCarthy adds: “All aspects of learning, or any other characteristic of intelligence, can in principle be described so precisely that a machine will be able to simulate them.”
This is exactly the approach AI follows. Human beings are the most intelligent beings we know, so machines that employ AI seek to imitate them.
The human being can see. This is the field of Computer Vision.
The human being can recognize the environment and move. This is the field of Robotics.
The human being can write, read, speak, and listen. These are the fields of Natural Language Processing and Speech Recognition.
The human being can recognize patterns. This is the field of Pattern Recognition. Machines are even better than humans at recognizing patterns, because they can use more data and learn across more data dimensions. This is the field of Machine Learning.
The human being has a brain composed of a network of neurons that allows it to learn new things. If we can replicate the structure and functions of the human brain, we can achieve cognitive abilities in machines. This is the field of Neural Networks. When these networks are deeper and more complex, and we use them to learn more complex things, this is the field of Deep Learning.
What are the real applications of AI?
Although AI was introduced in the 1950s, the world is only now beginning to understand the impact it can have on how we handle data analysis and decision-making.
LinkedIn’s 2020 “Emerging Jobs Report” states that “Artificial Intelligence will require the entire workforce to learn new skills, whether to stay up-to-date with its current role or to achieve a new career as a result of automation.”
And the results of the growing application of AI are visible on social networks, in the media, and even in the world with which we interact.
Some examples of artificial intelligence applications
- Artificial Creativity:
- MuseNet: a deep neural network that can generate 4-minute musical compositions with 10 different instruments.
- Wordsmith: a natural language generation platform that transforms data into an insightful narrative.
- Social Networks:
- Identification of friends by facial recognition.
- Identification of hate speech.
- Chatbots:
- Virtual personal assistants (Siri, Cortana, Alexa, etc.): speech recognition and natural language processing to answer requests and questions.
- Autonomous Vehicles:
- Self-parking vehicles.
- Self-driving vehicles.
- Gaming Industry:
- AlphaGo: the first AI program to defeat a professional human player at the board game Go.
- Banking and Finance:
- AI systems can analyze patterns in large amounts of data to produce market predictions, identify fraud, etc.
- Agriculture:
- AI systems can monitor and identify plant and soil status and act on the results.
- Health:
- IBM Watson: analyzes medical data and makes medical diagnoses.
- DeepMind Health: analyzes retinal images and diagnoses eye problems.
- Marketing:
- Targeted advertising (product recommendations related to a purchased product).
- Recommendation of movies based on the movies viewed.
- Recommendation of songs based on the songs heard.
But how does it work in practice?
Where do chatbots, Machine Learning, Data Mining, neural networks, and algorithms fit in?
AI is a field in constant evolution, and its terminology evolves at an equally fast pace, without ever reaching a consensus on how to organize all the concepts into a well-defined taxonomy.
And Artificial Intelligence has become a victim of its own success: the number of people talking about AI far exceeds their education and knowledge in the subject, generating misconceptions and unrealistic expectations.
It is not necessary to be an AI expert, but AI literacy, as part of digital literacy, will be a key requirement in the 21st century.
This literacy begins with naming and understanding things. Since numerous AI glossaries already exist, here instead is a brief taxonomy that dissects Artificial Intelligence along several dimensions.
1. Breadth of intelligence: Narrow vs. General AI
An artificial intelligence can be called weak/narrow AI or strong/general AI.
A narrow AI operates within a strict scope, such as an AI that recommends a film, recognizes faces, identifies tumors, or drives a car.
An Artificial General Intelligence would demonstrate human-level intelligence across the full set of cognitive activities a human being performs, and would be able to move easily from one subject to another and relate them. No strong AI systems exist yet, and forecasts suggest we will only see such systems around 2050-2060.
2. Learning ability: Symbolic AI vs Machine Learning
Symbolic AI boils down to systems that simulate human behavior through human-provided knowledge and rules, which translate into explicit instructions executed by systems such as robots. Symbolic AI was mainly used between the 1970s and the turn of the century, and was largely abandoned because of its complexity, inefficiency, and limitations.
In parallel with symbolic AI, Machine Learning was explored as a sub-symbolic approach, one without an explicit representation of knowledge.
In Machine Learning, a computer system learns without following explicit instructions, performing tasks it was never explicitly programmed for by using algorithms and statistical models to analyze and draw inferences from patterns in data. It is trained to learn, and its performance increases with experience.
In general, Machine Learning techniques dramatically outperform symbolic AI. Deep Learning, a sub-area of Machine Learning, provides some of its most effective and popular approaches, but it has one big weakness: it is a black box. In other words, its results cannot be explained logically, because the learning process and the way it represents data are too complex to interpret. This inability to explain a decision is problematic — something that does not happen in symbolic AI — so a hybrid of both approaches may be the best solution in the future.
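To make the contrast concrete, here is a minimal sketch in Python (scikit-learn assumed available; the spam-filter task, threshold, and data are purely illustrative). In symbolic AI a human writes the rule; in Machine Learning, an equivalent rule is induced from labeled examples:

```python
# Symbolic AI: a human encodes the rule explicitly. Fully explainable.
def symbolic_spam_filter(num_links: int, has_greeting: bool) -> bool:
    return num_links > 3 and not has_greeting  # hand-chosen, human-readable rule

# Machine Learning: the rule is inferred from labeled examples instead.
from sklearn.tree import DecisionTreeClassifier

X = [[5, 0], [0, 1], [4, 0], [1, 1], [6, 0], [0, 0]]  # (num_links, has_greeting)
y = [1, 0, 1, 0, 1, 0]                                # 1 = spam, 0 = not spam
learned_filter = DecisionTreeClassifier().fit(X, y)

print(symbolic_spam_filter(5, False))    # True
print(learned_filter.predict([[5, 0]]))  # [1] -- same verdict, learned from data
```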
3. Machine Learning segmentation by application type: Classification, Regression, Clustering …
The tasks performed by Machine Learning can be grouped into several applications, of which we will present three of the most popular.
At the top of the list is classification. Most image processing and computer vision is based on classification, from the automatic tagging of friends on Facebook to the detection of tumors on an MRI, from quality control on a manufacturing line to obstacle identification by autonomous vehicles.
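As a minimal sketch of classification (assuming scikit-learn is installed), a model can learn to assign flower measurements from the classic iris dataset to one of three species:

```python
# A minimal classification sketch: learn to map inputs to discrete categories.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # flower measurements -> species label (0, 1, 2)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict(X_test[:3]))    # predicted species for three unseen flowers
print(clf.score(X_test, y_test))  # fraction of test flowers classified correctly
```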
Regression is used to predict continuous values. Determining the likely price of a house or the annual sales of a product, predicting the demand for electricity or the number of years an employee will stay in a certain position are all continuous-estimation problems, and they generally benefit from the use of many input variables.
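A minimal regression sketch (scikit-learn again; the house data below is synthetic and purely illustrative):

```python
# A minimal regression sketch: predict a continuous value from input variables.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
area = rng.uniform(50, 200, size=100)  # house area in m^2
rooms = rng.integers(1, 6, size=100)   # number of rooms
price = 2000 * area + 15000 * rooms + rng.normal(0, 10000, size=100)  # toy prices

X = np.column_stack([area, rooms])
model = LinearRegression().fit(X, price)
print(model.predict([[120, 3]]))  # estimated price for a 120 m^2, 3-room house
```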
The third, clustering, boils down to sorting and grouping a population based on common characteristics. It is one of the main tasks of exploratory data analysis and a common technique for statistical data analysis. It has many applications, such as identifying market segments among consumers, students with similar competencies and challenges, or words that belong to similar semantic groups. In a broader sense, it also underpins recommendation systems, which prescribe the next product to offer a customer.
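A minimal clustering sketch (scikit-learn; the “customers” are synthetic points generated for illustration). K-Means groups a population by similarity without being told what the groups mean:

```python
# A minimal clustering sketch: group similar observations with no labels given.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=4, random_state=0)  # 4 hidden "segments"
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])      # segment assigned to the first 10 "customers"
print(kmeans.cluster_centers_)  # the average profile of each discovered segment
```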
4. Machine Learning by learning paradigm and data use: Supervised, Unsupervised, Reinforcement, etc…
There are different paradigms for training algorithms:
In Supervised Learning, pre-labeled data is used to train a model that later learns to categorize new, unlabeled data. Recent applications of supervised learning to images, video, and sound are numerous: from detecting heart disease in eye exams to selecting ripe vegetables, from predicting school attendance to measuring content engagement, from mood analysis to law enforcement, from sensing emotion in the voices of call-center customers to supporting patients in psychological distress.
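A minimal supervised-learning sketch (scikit-learn; the exam data is invented for illustration): pre-labeled examples train a model, which then categorizes new, unlabeled inputs:

```python
# A minimal supervised-learning sketch: labels "supervise" the training.
from sklearn.linear_model import LogisticRegression

# Pre-labeled data: (hours studied, hours slept) -> passed the exam (1) or not (0).
X_labeled = [[8, 7], [1, 4], [6, 8], [2, 5], [9, 6], [0, 9]]
y_labels = [1, 0, 1, 0, 1, 0]

model = LogisticRegression().fit(X_labeled, y_labels)
print(model.predict([[7, 6], [1, 8]]))  # categorize new, unlabeled cases
```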
Unsupervised Learning uses unlabeled data, typically to group observations or to reduce dimensionality. The process is said to be unsupervised because the model works on its own to discover patterns and information. Contrary to supervised learning, it works on unlabeled data. For example, if sales data is fed into an unsupervised system to extract four customer segments, it will do so without any additional help.
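Besides grouping (sketched above), the other typical unsupervised task is dimensionality reduction. A minimal sketch with PCA (scikit-learn), which compresses the data on its own, with no labels and no further help:

```python
# A minimal unsupervised sketch: reduce dimensionality without any labels.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)  # the labels are deliberately ignored
pca = PCA(n_components=2).fit(X)   # discover structure on its own

print(pca.transform(X[:3]))           # each flower described by 2 numbers, not 4
print(pca.explained_variance_ratio_)  # how much information the 2 components keep
```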
In Reinforcement Learning, the system simply receives a goal but is not shown how to achieve it. It is about taking suitable actions to maximize reward in a particular situation and, in the absence of a training dataset, learning from experience. By repeatedly executing the task, the system learns which strategies perform well and which do not; the more repetitions, the more it learns. Reinforcement learning became famous when DeepMind’s AlphaGo defeated Ke Jie, the world’s top-ranked Go player, in 2017. Its successor, AlphaGo Zero, was given only the rules of the game and played repeatedly against itself until it became unbeatable even by the world champion.
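A minimal reinforcement-learning sketch (NumPy only; the corridor world, rewards, and hyperparameters are invented for illustration). The agent is only told the goal — reach the rightmost cell — and discovers the strategy by trial, error, and reward:

```python
# A minimal tabular Q-learning sketch: learn by trial, error, and reward.
import numpy as np

n_states, actions = 5, [-1, +1]         # 5-cell corridor; move left or right
Q = np.zeros((n_states, len(actions)))  # estimated value of each (state, action)
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0                            # always start at the left end
    while state != n_states - 1:         # until the goal is reached
        # Explore occasionally; otherwise exploit the best-known action.
        a = rng.integers(2) if rng.random() < 0.1 else int(Q[state].argmax())
        next_state = min(max(state + actions[a], 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Q-learning update: nudge toward reward + discounted future value.
        Q[state, a] += 0.1 * (reward + 0.9 * Q[next_state].max() - Q[state, a])
        state = next_state

print(Q.argmax(axis=1))  # learned policy: action 1 ("right") in every non-goal state
```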
These different learning paradigms also imply very different uses of data. Essentially, supervised learning requires labeled data to learn and to evaluate its performance, while unsupervised learning explores the features and patterns of unlabeled data. Data is the raw material that makes Machine Learning work, which is why it has been dubbed “the new oil”; consequently, many concerns are expressed about data availability, privacy, ownership, etc. In addition, recent approaches try to accommodate data scarcity. In Transfer Learning, for example, a system previously trained in one context learns to act in a different but compatible situation for which it has not been specifically trained. AlphaGo Zero applied this paradigm to reuse its knowledge for chess, becoming AlphaZero, and beat the world’s best chess software after a four-hour training session!
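A minimal transfer-learning sketch (assuming PyTorch and a recent torchvision are installed; the 3-class task is hypothetical): a network pre-trained on ImageNet is reused for a new problem by freezing its layers and retraining only a fresh output head:

```python
# A minimal transfer-learning sketch: reuse pre-trained knowledge for a new task.
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)  # knowledge from ImageNet
for param in model.parameters():
    param.requires_grad = False  # freeze the pre-trained feature extractor

num_classes = 3  # hypothetical new task with little data
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new, trainable head
# Training now only fits model.fc, reusing everything the network already learned.
```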
5. Artificial neural networks by depth: simple vs deep
An Artificial Neural Network (ANN) is based on a collection of connected units, or nodes, called artificial neurons, which loosely model the neurons of a biological brain. These nodes are organized in layers, each performing a different transformation on its inputs. The first layer is the input layer and the last is the output layer; the layers in between are called hidden layers. When there is only one hidden layer, the ANN is considered a simple neural network; otherwise, it is a deep neural network — the subject of Deep Learning. Increasing a network’s depth usually increases both the performance of the algorithm and the computational capacity needed to train it.
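A minimal sketch of the contrast (scikit-learn’s MLPClassifier on its built-in digits dataset; layer sizes are illustrative): the same algorithm, once with one hidden layer and once with three:

```python
# A minimal sketch contrasting a simple (1 hidden layer) and a deep (3) network.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nets = {
    "simple": MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
    "deep": MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=1000, random_state=0),
}
for name, net in nets.items():
    net.fit(X_train, y_train)               # deeper nets cost more compute to train
    print(name, net.score(X_test, y_test))  # ...but can capture harder patterns
```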
6. Artificial neural networks by algorithm type: Simple Feed-forward, CNN, RNN, GAN, etc…
The basic artificial neural network algorithm is called “feedforward”: data “travels” through the network in a single direction, from the input layer to the output layer, without ever going back or looping. Training the network — determining the weights associated with each node (“neuron”) — uses a “backpropagation” calculation, which flows in the opposite direction.
There are numerous variations of the feedforward algorithm; two of the most popular are CNNs and RNNs. In Convolutional Neural Networks (CNNs), the patterns of connectivity between nodes resemble those of the visual cortex of animals, which makes CNNs particularly suitable for image recognition. Recurrent Neural Networks (RNNs) capture the notion of sequence and are very useful in the context of Natural Language Processing.
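A minimal from-scratch sketch of both directions (NumPy only; layer sizes, learning rate, and the XOR task are illustrative, and convergence may vary with the random seed):

```python
# Feedforward + backpropagation from scratch, trained on the XOR problem.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    # Feedforward: data flows in one direction, input -> hidden -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation: the error flows in the opposite direction,
    # yielding the gradients used to adjust every weight.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0, keepdims=True)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0, keepdims=True)

print(out.round(2))  # approaches [[0], [1], [1], [0]]
```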
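A minimal sketch of both architectures (assuming PyTorch is installed; all shapes and sizes are illustrative):

```python
# Minimal CNN and RNN sketches in PyTorch, with illustrative sizes.
import torch
import torch.nn as nn

# CNN: convolutions mimic the local receptive fields of the visual cortex.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),  # detect local patterns (edges, textures)
    nn.ReLU(),
    nn.MaxPool2d(2),              # downsample, keeping strong responses
    nn.Flatten(),
    nn.Linear(16 * 14 * 14, 10),  # e.g., classify 28x28 images into 10 classes
)
print(cnn(torch.randn(1, 1, 28, 28)).shape)  # torch.Size([1, 10])

# RNN: an LSTM consumes a sequence step by step, carrying state between steps.
class TinyRNN(nn.Module):
    def __init__(self, vocab=1000, embed=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 2)  # e.g., positive/negative sentiment

    def forward(self, tokens):
        x = self.embed(tokens)
        _, (h, _) = self.lstm(x)  # final hidden state summarizes the sequence
        return self.out(h[-1])

print(TinyRNN()(torch.randint(0, 1000, (1, 12))).shape)  # torch.Size([1, 2])
```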
Conclusion
This article has presented a brief introduction to the goals and main concepts of Artificial Intelligence.
As AI technology is becoming more influential, reaching many levels of our professional and social lives, it is important that people understand some AI vocabulary and achieve a foundational level of understanding.
Ready to transform your business with Artificial Intelligence?
Contact us for:
- Enhanced Decision-Making: Leverage AI to make data-driven decisions with precision.
- Process Automation: Increase efficiency by automating repetitive tasks.
- Personalized Experiences: Deliver customized experiences to your customers using AI-driven insights.
- Predictive Analytics: Anticipate trends and outcomes with advanced predictive models.
- Intelligent Integration: Seamlessly integrate AI with your existing systems.
- Innovative Solutions: Develop cutting-edge AI applications tailored to your business needs.
- Productivity Gains: Utilize AI to optimize workflows and improve productivity.
- Expert Guidance: Receive support from our AI specialists to maximize your AI strategy.
- Scalable AI: Implement AI solutions that grow with your business.
- Cost Savings: Reduce operational costs with intelligent automation.
If you are interested in learning more about us and how we can help you, contact us.
You can also check out our blog for more articles and insights on Microsoft 365 technologies.