Introduction to Artificial Intelligence (AI)


Note: This article is part of Extended SAFe Guidance and represents official SAFe content that cannot be accessed directly from the Big Picture.


Artificial intelligence (AI) has quickly evolved from being a topic of science fiction movies to a practical reality, in the business context as well as in individuals’ personal lives. AI has been the focus of scientific research and experimentation since the 1950s and the pioneering work of Alan Turing. While the theories developed by Turing and other researchers were groundbreaking, technology at that time was insufficient and too costly to realize the potential of these early algorithms.

With the rapid emergence of generative AI supported by massive amounts of data and computing power, this technology has become the topic of everyday conversation and use. Virtually every business is exploring ways to use AI to improve operations and add innovative features to products. Individuals are discovering ways that AI can augment their efforts in their personal and professional lives. The challenges inherent in this rapidly evolving technology are also in the daily news.

The sections that follow will provide a basic introduction to different types of AI and their most common use cases.

Details

Artificial intelligence (AI) is a category of software that can perform tasks that typically require human intelligence. Multiple types of AI patterns are in use today.

Understanding the Fundamental Types of AI

The potential applications represented by AI are extensive and affect almost every facet of business and consumer life. Many of today’s AI systems are based on Machine Learning (ML). ML-based solutions are designed to autonomously improve based on experiences and data. However, some AI architectures do not involve Machine Learning and instead are based on a comprehensive set of static rules that encode some complex reasoning. Other AI architectures, including Generative AI, are built on Deep Learning and neural networks. Figure 1 and the text that follows provide a typology and explanation of various AI and machine learning approaches. The figure also illustrates some of the capabilities that these technologies enable. Note that some illustrated capabilities in Figure 1 can be built with more than one AI approach.

Figure 1. Types of Artificial Intelligence

This graphic illustrates distinct types of AI:

  • Supervised learning
  • Unsupervised learning
  • Reinforcement learning
  • Deep learning

Each type is differentiated by how the learning is achieved. All types involve three primary components: the data, the learning algorithm, and the learning model, as Figure 2 illustrates.
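A toy sketch can make these three components concrete. The following Python fragment is entirely illustrative (the data and numbers are invented, not from the article): it separates the data, the learning algorithm (a training loop), and the resulting model (here, a single learned parameter):

```python
# Toy illustration of the three ML components: data, algorithm, model.
# All names and numbers here are invented for illustration.

# 1. Data: input/output pairs (here, y = 3 * x)
data = [(1, 3), (2, 6), (3, 9), (4, 12)]

# 2. Model: a single parameter w; prediction is w * x
w = 0.0

# 3. Learning algorithm: gradient descent on the squared error
learning_rate = 0.01
for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= learning_rate * error * x  # nudge w to reduce the error

print(round(w, 2))  # w converges toward 3.0
```

The same three-way split applies to every approach described below; only the way the algorithm obtains its feedback differs.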

Figure 2. Three critical components of machine learning

Each of the types shown in Figure 1 is described in greater detail in the sections that follow.

Supervised Learning

Supervised learning utilizes training data to teach the model how to produce the desired output (Figure 3). The training data must contain the inputs and the desired outputs as labels. The learning algorithm runs the inputs through the model, computes the model's output, and compares it with the labels.

Figure 3. Supervised learning

The algorithm adjusts model parameters and repeats the process until the error rate is sufficiently low. The approach is called supervised because the desired outputs are supplied alongside the inputs and are used to ‘supervise’ or ‘guide’ the learning process. Unless the data initially includes both the inputs and the labels, a ‘labeling’ process is required before the model can be trained.

Supervised learning can help detect known patterns (fraudulent transactions, spam messages) and categorize data (image recognition, text sentiment analysis). In some instances, the output data may be readily available or easily attainable in an automated manner, such as a customer name alongside a profile photo for face recognition, or a five-star rating score next to a product review for sentiment detection; this situation is often referred to as self-supervised learning. Identifying such facets of data opens excellent opportunities for applying supervised learning to organizational processes.
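To make the supervised loop concrete, here is a minimal, hypothetical sketch in Python: a perceptron whose weights are adjusted using the supplied labels. The features and labels are invented for illustration and stand in for something like a spam classifier.

```python
# Minimal supervised learning sketch: a perceptron trained on labeled data.
# Features and labels are invented for illustration.

training_data = [
    # (features, label): 1 = "spam", 0 = "not spam" (hypothetical)
    ((3.0, 4.0), 1),
    ((4.0, 3.0), 1),
    ((-2.0, -1.0), 0),
    ((-1.0, -3.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

# Supervision: the label guides each weight adjustment.
for _ in range(20):
    for x, label in training_data:
        error = label - predict(x)
        weights[0] += error * x[0]
        weights[1] += error * x[1]
        bias += error

print([predict(x) for x, _ in training_data])  # matches the labels: [1, 1, 0, 0]
```

The `error = label - predict(x)` line is where the labels ‘supervise’ the process: without them, the algorithm would have no signal to adjust the weights against.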

Unsupervised Learning

Unlike the previous approach, Unsupervised Learning does not utilize any feedback mechanism. Instead, it extracts valuable information merely by analyzing the internal structure of the data.

Figure 4. Unsupervised learning

Unsupervised learning has a significant advantage: because the input data doesn’t need to be labeled, the learning algorithms can draw on vast volumes of data. This makes capabilities built on unsupervised learning easier to scale.

This type of AI algorithm is applied to data clustering, anomaly detection, association mining, and latent variable extraction tasks. These processes partition the data by similarity and uncover relationships within it that other solution capabilities or functions can then use. Common use cases include customer or product segmentation, similarity detection, and recommendation systems. Unsupervised learning can also serve as a link in a broader supervised learning process, extending data labels to unlabeled datasets.
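As an illustrative sketch of one such task, the pure-Python fragment below clusters unlabeled points with k-means (k = 2). The data points are invented; note that no labels appear anywhere, only the structure of the data itself.

```python
# Unsupervised learning sketch: k-means clustering with k=2, no labels.
# The data points are invented for illustration.

points = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]  # two obvious groups
centroids = [points[0], points[3]]        # naive initialization

for _ in range(10):
    # Assignment step: each point joins its nearest centroid
    clusters = [[], []]
    for p in points:
        nearest = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[nearest].append(p)
    # Update step: move each centroid to the mean of its cluster
    centroids = [sum(c) / len(c) for c in clusters]

print(sorted(round(c, 1) for c in centroids))  # ≈ [1.0, 8.1]
```

The algorithm discovers the two groups purely from the internal structure of the data, which is exactly what makes it applicable to large unlabeled datasets.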

Reinforcement Learning

Reinforcement learning is similar to supervised learning in that it also involves a feedback mechanism that verifies the model. In this case, however, the feedback does not rely on labeled data. Instead, the system acts in a particular environment and is supplied with a reward function that helps the model learn which actions lead to successful outcomes. The learning algorithm therefore generates exploratory activity and selects the scenarios that lead to the highest reward.

Figure 5. Reinforcement learning

Reinforcement learning finds applications in robotics, gaming, decision support systems, personalized recommendations, bidding and advertising, and other contexts where simulated exploratory behaviors can be evaluated in terms of their value.
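A minimal sketch of this reward-driven loop, assuming a hypothetical two-action environment with invented reward values, might look like this in Python. There are no labels; the agent learns only from the rewards its exploratory actions produce.

```python
import random

# Reinforcement learning sketch: an epsilon-greedy agent learning which
# of two actions yields the higher reward. Reward values are invented.

random.seed(0)
true_rewards = [0.2, 0.8]    # the environment's reward function (hidden from the agent)
estimates = [0.0, 0.0]       # the agent's learned value of each action
counts = [0, 0]

for step in range(500):
    # Explore occasionally; otherwise exploit the best-known action
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: estimates[a])
    # The environment supplies a noisy reward for the chosen action
    reward = true_rewards[action] + random.uniform(-0.1, 0.1)
    counts[action] += 1
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(range(2), key=lambda a: estimates[a]))  # the agent settles on action 1
```

The balance between exploration (trying actions at random) and exploitation (repeating the highest-reward action so far) is the core design choice in this family of algorithms.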

Deep Learning

Deep learning refers to machine learning models based on Artificial Neural Networks (ANNs). Deep learning can be effectively applied to supervised, unsupervised, and reinforcement learning and, in many practical tasks, has produced results comparable to or surpassing human expert performance. An artificial neural network is loosely modeled after the structure of neurons in the brain. An ANN has inputs and outputs and consists of a connected set of neurons. An example of such a model could be a neural network that accepts the pixel colors of an image as input and determines what type of object the image contains as output.

Figure 6. A deep neural network applied to pattern recognition

Every connection has a specific weight that either strengthens or inhibits the signal. When all the connections leading to a particular neuron convey a sufficiently strong cumulative signal, the neuron activates and transmits the signal to neurons further downstream.

A neural network with multiple hidden layers is called a deep neural network and is the foundational architecture for deep learning.
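The weighted-signal mechanism described above can be sketched in a few lines of Python. The network below is a hypothetical toy (2 inputs, one hidden layer of 2 neurons, 1 output) with hand-picked weights; in a real system the weights would be learned during training, and a deep network would have many hidden layers rather than one:

```python
import math

# Toy feedforward neural network: 2 inputs -> 2 hidden neurons -> 1 output.
# Weights are hand-picked for illustration, not learned.

def sigmoid(z):
    # Activation: the neuron "fires" more strongly as its input grows
    return 1.0 / (1.0 + math.exp(-z))

hidden_weights = [[0.5, -0.6], [0.3, 0.8]]   # one row of weights per hidden neuron
hidden_biases = [0.1, -0.2]
output_weights = [1.2, -0.7]
output_bias = 0.05

def forward(inputs):
    # Each hidden neuron sums its weighted inputs, then activates
    hidden = [
        sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
        for ws, b in zip(hidden_weights, hidden_biases)
    ]
    # The output neuron does the same with the hidden activations
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)) + output_bias)

score = forward([1.0, 0.5])
print(0.0 < score < 1.0)  # True: the output is a score between 0 and 1
```

Stacking more hidden layers between input and output is what turns this structure into a deep network.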

Generative AI

Generative AI is a type of deep learning AI that focuses on creating new content and experiences through machine learning algorithms. It is revolutionizing the way businesses operate and create value. Hundreds of new startup companies are being launched every week, providing previously unimaginable capabilities built on this technology. Market-leading software companies are adding generative AI features to their existing products and creating entirely new products powered by AI.

Generative AI differs from other types of AI in its focus on creating original content. Whereas most AI applications are designed to recognize patterns in existing data, generative AI algorithms are trained to produce entirely new output, such as images, videos, or text, that does not appear in the training data. This makes generative AI a powerful tool for businesses looking to automate creative tasks, generate digital assets, and drive innovation.
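As a deliberately simple illustration of the generative idea, the toy Python sketch below ‘trains’ a first-order Markov chain on a short invented text and then samples a new character sequence from it. Real generative AI relies on deep neural networks rather than Markov chains, but the principle of sampling new content from learned patterns is the same:

```python
import random

# Toy "generative model": learn which character tends to follow which,
# then sample a new sequence. The corpus is invented for illustration.

random.seed(0)
corpus = "the cat sat on the mat the cat ran"

# "Training": count character-to-character transitions
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

# "Generation": sample a new sequence from the learned transitions
char = "t"
output = [char]
for _ in range(20):
    char = random.choice(transitions[char])
    output.append(char)

print("".join(output))  # new text in the style of the corpus
```

The generated string need not appear anywhere in the training text, which is the defining property that separates generative models from purely pattern-recognizing ones.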

Figure 7. Generative AI

Generative AI is reshaping business practices by enabling marketing teams to craft customized ad content and realistic product imagery, drastically cutting design and production cycles. In operations, AI-driven automation of data entry, invoicing, and report generation is freeing staff for higher-level tasks, thereby optimizing workforce productivity. In software development, AI-assisted coding tools like GitHub Copilot help programmers write more efficient code, accelerating development times and improving quality. Simultaneously, generative AI aids in rapid prototyping and testing in product development, enhancing innovation while shortening time-to-market. Each of these applications underscores generative AI’s pivotal role in enhancing business functions through practical, tangible improvements.

The Continuing Evolution of AI

While generative AI has enabled new capabilities that were unthinkable until very recently, this latest advancement in what AI can do is not the end of its potential. Future advancements for generative AI that are just now starting to emerge include multimodal generative AI and interactive AI agents.

Multimodal generative AI can generate content across multiple types of data, such as text, images, audio, and video. These systems can understand and interpret data across these different modes, enabling them to perform tasks like creating images from text descriptions or synthesizing complex multimodal content. This technology will enable new applications in content creation, entertainment, and education, by mimicking the multifaceted nature of human communication and creativity.

Interactive AI agents provide more nuanced interactions, seamlessly integrating with daily tasks through natural language processing and understanding, providing personalized advice, support, and learning experiences. These agents will evolve to become indispensable personal assistants in professional and personal settings.


Learn More

[1] Anyoha, R. The History of Artificial Intelligence. Science in the News, Harvard Graduate School of Arts and Sciences, 2017.

Last Update: 8 April 2024