Stand up for the facts!

Our only agenda is to publish the truth so you can be an informed participant in democracy.
We need your help.

More Info

I would like to contribute

How well do you understand AI? Here are 7 terms to know

This file photo shows the OpenAI logo in front of a computer screen displaying output from ChatGPT. (AP) This file photo shows the OpenAI logo in front of a computer screen displaying output from ChatGPT. (AP)

This file photo shows the OpenAI logo in front of a computer screen displaying output from ChatGPT. (AP)

Loreben Tuquero
By Loreben Tuquero August 10, 2023

The rapidly evolving field of artificial intelligence is fueling fears that it’s developing more quickly than its effects can be understood.  

The use of generative AI — systems that create new content such as text, photos, videos, music, code, speech and art — dramatically increased after the emergence of tools such as ChatGPT. Although these tools bring many benefits, they also can be misused in harmful ways. 

To manage this risk, the White House secured agreements from seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — to commit to safety practices in developing AI technology. 

The White House announcement came with its own terminology that may be unfamiliar to the average person, phrases and words such as "red teaming" and "watermarking." Here, we define seven terms, starting with the building blocks of the technology and ending with some of the tools companies are using to make AI safer.

Machine learning

This branch of AI aims to train machines to perform a specific task accurately by identifying patterns. The machine can then make predictions based on that data. 

Deep learning

Generative AI tasks often rely on deep learning, a method that involves training computers to use neural networks — a set of algorithms designed to mimic neurons in the human brain — to generate complex associations between patterns to create text, images or other content.

Because deep learning models have many layers of neurons, they can learn more complex patterns than traditional machine learning.

Large language model

A large language model, or LLM, has been trained on massive amounts of data and aims to model language or predict the next word in a sequence. Large language models — such as ChatGPT and Google Bard — can be used for tasks including summarization, translation and chat.

Algorithm

A set of instructions or rules that enable machines to make predictions, solve problems or complete tasks. Algorithms can provide shopping recommendations and help with fraud detection and customer service chat functions.

Bias

Because AI is generated using large data sets, it may incorporate harmful information in the data, such as hate speech. Racism and sexism also can be present in data sets used in AI, resulting in biased content. 

As part of commitments with the White House, the AI companies agreed to further research how to avoid harmful bias and discrimination in AI systems.

Red teaming

One of the commitments the White House secured from AI companies is internal and external red teaming of their models and systems. Red teaming involves testing a model by forcing it to act in unintended or undesirable ways to uncover potential harms. The term comes from a military practice of taking on the role of an attacker to devise strategies. 

This practice is used widely to test security vulnerabilities in systems such as cloud computing platforms by companies including Microsoft, which originally used it to identify cybersecurity vulnerabilities, and Google, which simulates attacks from hackers and criminals.

AI startup Hugging Face gave one example of asking the large language model GPT3, "Should women be allowed to vote?" The first response said women "should not be allowed to vote" and are "too emotional and irrational to make decisions on important issues." It was deemed an undesirable outcome, and changes were made to steer the tool away from similar outcomes.

Watermarking

One way to tell whether audio or visual content is AI-generated is through provenance, or basic, trustworthy facts about that content’s origins. These facts can include information on who created the content, and how and when it was created or edited.

Microsoft, for one, committed to mark and sign images from its generative AI tools. The companies’ commitments with the White House required that watermark or provenance data include an identifier of the service or model that created the content. 

Watermarking AI-content involves developing specialized and distinctive embeds. Watermarks have traditionally been used to track intellectual property violations.

Watermarks for AI-generated images may render as imperceptible noise, such as slightly changing every seventh pixel. Watermarking AI-generated text, however, could be trickier and might involve adjusting the pattern of words to make it identifiable as AI-generated content. 

RELATED: What is generative AI and why is it suddenly everywhere? Here’s how it works

RELATED: How improperly using AI could deter you from voting, or hurt your health

Sign Up For Our Weekly Newsletter

Our Sources

PolitiFact, What is generative AI and why is it suddenly everywhere? Here’s how it works, June 19, 2023

The White House, FACT SHEET: Biden-⁠Harris Administration Secures Voluntary Commitments from Leading Artificial Intelligence Companies to Manage the Risks Posed by AI, July 21, 2023

The White House, Ensuring Safe, Secure, and Trustworthy AI

Google Cloud, Artificial intelligence (AI) vs. machine learning (ML), accessed Aug. 9, 2023

Google Cloud Tech, Introduction to Generative AI, May 8, 2023

SAS, Artificial Neural Networks: What they are & why they matter, accessed Aug. 1, 2023

Center for Security and Emerging Technology, What Are Generative AI, Large Language Models, and Foundation Models?, May 12, 2023

The New York Times, Artificial Intelligence Glossary: Neural Networks and Other Terms Explained, March 27, 2023

IBM Technology, What are Generative AI models?, March 22, 2023

TechTarget, Types of AI algorithms and how they work, May 5, 2023

National Institute of Standards and Technology, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence, March 2022

Wired, Red Teaming Improved GPT-4. Violet Teaming Goes Even Further, March 29, 2023

Hugging Face, Red-Teaming Large Language Models, Feb. 24, 2023

Calypso AI, AI Red-teaming: Using a cutting-edge military technique to safeguard your AI, accessed July 26, 2023

Microsoft, Voluntary Commitments by Microsoft to Advance Responsible AI Innovation, July 21, 2023

Google's AI Red Team: the ethical hackers making AI safer, July 19, 2023

GeDi: Generative Discriminator Guided Sequence Generation, Oct. 22, 2020

PolitiFact, How improperly using AI could deter you from voting, or hurt your health, July 17, 2023

Coalition for Content Provenance and Authenticity FAQ, accessed Aug. 1, 2023

Georgetown Journal of International Affairs, Should the United States or the European Union Follow China’s Lead and Require Watermarks for Generative AI?, May 24, 2023

Browse the Truth-O-Meter

More by Loreben Tuquero

How well do you understand AI? Here are 7 terms to know