The answer is no. Photos that appear to depict those events aren’t real; they are the product of generative artificial intelligence.
Artificial intelligence, also known as AI, has existed for years. But generative AI, a subset of the field, recently has propelled the technology into public view.
Generative AI has had the fastest acceptance and usage growth of any new technology, even surpassing the iPhone, said Tom Davenport, a Babson College information technology and management professor.
Recent investment and advances in the technology have led to better-quality tools that can create realistic images from text prompts, or write a poem or essay on par with what humans can produce.
Generative AI exploded onto the scene in late 2022 when OpenAI, a San Francisco-based tech company, opened its Dall-E 2 image generator and its ChatGPT chatbot to the public, letting anyone use them to create art or text. Competitors responded in kind, flooding the market with similar products.
The technology is likely to change the way we live and work, and it’s expected to transform a number of industries as companies incorporate it.
Here’s what to know about generative AI and how it’s used.
Artificial intelligence is a field of computer science that focuses on training computer systems to use a process or set of rules, called algorithms, to perform tasks normally performed by humans. We interact with the technology when using personal assistants like Apple’s Siri or Amazon’s Alexa, or when we see predictive text offered as we type a search query on Google.
Generative AI is a broad term that describes when computers create new content — such as text, photos, videos, music, code, audio and art — by identifying patterns in existing data.
"It’s AI that creates new content based on past content. It predicts what is most likely to be next in a sequence of words or images or music or anything sequential using machine learning models," Davenport said.
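Davenport's description — predicting the most likely next item in a sequence from past content — can be illustrated with a deliberately tiny sketch. The snippet below trains a bigram model (word-pair counts) on a made-up corpus and picks the most frequent follower of a given word. Real large language models use neural networks over vastly more data, but the core idea of next-item prediction is the same; the corpus and function names here are hypothetical.

```python
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent next word seen after `word`, or None."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A model like this can only echo patterns it has already seen, which is why scale — more data, bigger models — matters so much for the quality of generated content.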
Several types of generative AI tools are in use today, including text-to-text generators such as ChatGPT, text-to-image generators such as Dall-E, and others used to generate code or audio.
Valerie Wirtschafter, a senior data analyst in the Brookings Institution’s Artificial Intelligence and Emerging Technologies Initiative, said most generative AI tasks rely on deep learning.
That’s a method of artificial intelligence where computers are trained to use neural networks — a set of algorithms designed to mimic neurons in the human brain — to generate complex associations between patterns to create text, images or other content.
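The "neurons" in such a network are simple arithmetic units: each sums its weighted inputs and passes the result through a nonlinearity. The sketch below runs one forward pass through a two-neuron hidden layer in plain Python; the weights are hand-picked for illustration, whereas a real deep learning system learns millions or billions of them from data.

```python
import math

def relu(x):
    """Nonlinearity: pass positive signals, silence negative ones."""
    return max(0.0, x)

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One forward pass: each 'neuron' sums weighted inputs,
    then applies a nonlinearity -- a crude analogue of a firing neuron."""
    hidden = [relu(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Hypothetical hand-picked weights; real networks learn these from data.
hidden_w = [[0.5, -0.2], [0.3, 0.8]]
output_w = [1.0, -1.0]
score = forward([1.0, 2.0], hidden_w, output_w)
print(round(score, 3))  # ≈ 0.142
```

Stack many such layers and train the weights on huge datasets, and the network begins to capture the complex associations between patterns that Wirtschafter describes.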
There are different types of deep learning models used to train generative AI tools, but the most widely used are transformers and generative adversarial networks, known as GANs.
Transformers, first described in a 2017 paper by Google researchers, are networks designed to more naturally process language. ChatGPT was built using a transformer-based large language model, a deep learning model trained on massive amounts of data. GPT stands for Generative Pre-trained Transformer. Transformers are also used in other text creation software, including Google’s Bard.
Davenport said transformers help AI better predict text in context, because they help identify the relationships between all the words in a sentence. For example, transformer-based models made it possible to distinguish between words that have more than one meaning, such as "bank," based on the context in which they were used, he said.
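The mechanism behind that context-sensitivity is attention: each word's representation is re-weighted by how strongly it relates to every other word in the sentence. The toy sketch below implements scaled dot-product attention over made-up two-dimensional word vectors; the vectors and the "bank"/"river" pairing are hypothetical, chosen only to show how a neighbor can pull a word's representation toward one of its meanings.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention: weight each value by how well
    its key matches the query, then return the weighted average."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    mixed = [sum(w * v[i] for w, v in zip(weights, values))
             for i in range(len(values[0]))]
    return mixed, weights

# Hypothetical 2-d vectors: "river" leans geographic, "deposit" financial.
vecs = {"bank": [1.0, 1.0], "river": [1.0, 0.0], "deposit": [0.0, 1.0]}
context = ["bank", "river"]
keys = values = [vecs[w] for w in context]
mixed, weights = attention(vecs["bank"], keys, values)
```

Because "bank" attends to "river" here, its mixed representation shifts toward the geographic sense — swap in "deposit" and it would shift toward the financial one. Transformers run this computation many times in parallel across every word in the input.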
GANs, introduced in 2014, are mostly used for image and multimedia generation. They have two neural networks: a generator that creates an image based on data, and a discriminator that uses machine learning to predict whether the generated image is real or fake, said V.S. Subrahmanian, a Northwestern University computer science professor.
The first one or two generated images may not be good, but the discriminator can easily determine they are fake. Subrahmanian said that with each failure, the generator learns from its mistakes and produces better, more realistic images.
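That adversarial feedback loop can be caricatured without any neural networks at all. In the gradient-free sketch below, "real" data clusters around a target value, a toy discriminator rejects samples that land far from the real data it sees, and the generator nudges its output toward whatever fooled it less badly. Everything here — the target, the tolerance, the update rule — is a made-up stand-in for the learned components of an actual GAN.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real" data the generator tries to imitate

def sample_real():
    return random.gauss(REAL_MEAN, 0.5)

def discriminator(sample, real_example, tolerance=1.0):
    """Judge a sample 'real' if it falls near real data (toy rule)."""
    return abs(sample - real_example) < tolerance

guess = 0.0  # generator's initial, badly wrong idea of real data
for step in range(50):
    real = sample_real()
    fake = random.gauss(guess, 0.5)        # generator's output
    if not discriminator(fake, real):      # caught: fake rejected
        guess += 0.2 * (real - fake)       # learn from the failure
print(round(guess, 1))  # drifts toward the real mean (~5)
```

Early fakes are rejected every time, exactly as Subrahmanian describes; each rejection pushes the generator closer to the real distribution until the discriminator can no longer tell the difference.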
"Generative adversarial networks turned the scales," Subrahmanian said, because they generate new realistic looking images and videos.
Although generative AI had been around for a while before its recent ascendance, the technology was in its infancy and its use was limited, Subrahmanian said.
"The rise of generative AI is due to a trifecta of factors," he said, including advances in deep learning such as generative adversarial networks; far more data available to train models; and more powerful graphics processing unit computers that help accelerate the training.
As a result, models can "generate high-quality images much faster than would have been otherwise possible," he said.
When companies such as OpenAI made products available to the general public, it was transformative, experts said.
In September 2022, OpenAI made Dall-E 2 available to anyone, after initially offering it only to users on a waiting list that had grown to more than 1 million people. Competitors launched similar products, including Stable Diffusion and Midjourney.
Interest spiked again in November 2022, when OpenAI launched ChatGPT, allowing anyone to sign up for free to test it and provide feedback during a research preview. In April, the site had more than 206 million unique visitors, according to data analytics company Similarweb, which noted that growth had flattened since its initial launch.
"It was really only when ChatGPT was announced in November that the rest of the world really woke up to it," said Davenport.
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, said ChatGPT was a more accessible and better-behaved chatbot than most users had experienced, explaining the massive surge in public use.
Along with 2022 improvements in image generation capabilities, the release of OpenAI's latest language model "sparked the current wave of public interest," Toner said.
ChatGPT, Midjourney and Dall-E are among the most popular generative AI platforms in use, Subrahmanian said. ChatGPT and Dall-E were both created by OpenAI, while Midjourney comes from a research lab bearing the same name. Their rapid adoption has spurred an arms race, with several new companies and products seeking to enter the space.
Turkish artist Refik Anadol used artificial intelligence trained on 1.3 million images of national parks and natural wonders to create new generative landscapes. It's seen here May 2, 2023, at the Kunstpalast art museum in Duesseldorf, Germany. (AP)
"Generative AI has the power to help ordinary people overcome their weaknesses, and excel in ways they had not previously imagined," Subrahmanian said. He cited examples such as:
A non-native English speaker who needs to write a report in English could draft it in her own words, then ask ChatGPT to rephrase it;
An injured cartoonist could use an image generator to produce work in his style;
A widow could use the technology to hear her deceased husband’s voice say, "I love you," again.
Toner said large language models can be used for tasks including summarization, translation and chat. Image generators can be used for video game design, graphic design and animation.
"We are still in the early stages of exploring where these systems can and cannot be useful," she said.
Wirtschafter, of the Brookings Institution, a Washington, D.C., think tank, said the scope of use for text generators such as ChatGPT is vast across education and the workplace. Students and teachers are embracing it, and professionals can use it "as a means of generating new ideas and speeding up work," she said. She described text generators as acting like "Google on steroids."
"People use it for speeches, talking points, generating code output, identifying citations, summarizing documents, generating event and article titles, and so on," she said.
Businesses are also very interested in the technology, Davenport said. Kraft Heinz Co. in August 2022 released an AI-generated ketchup ad. Coca-Cola Co. in May released an ad that used generative AI, along with live action and other digital effects, to show a Coca-Cola bottle traveling through an art museum.
Other companies are using text generators to manage their internal knowledge, he said. He cited Morgan Stanley, which has been training GPT using 100,000 company documents to help address questions its financial advisors may have.
OpenAI CEO Sam Altman attends a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on artificial intelligence May 16, 2023, in Washington. (AP)
Academic and industry leaders have expressed concern about AI’s potential downsides, including large-scale job loss, the rise of misinformation, the ensuing threat to democracy and the potential for AI to outsmart humans.
Nearly three-quarters of companies plan to integrate current and future AI systems into their operations, raising valid concerns about AI's impact on job security across sectors.
"Existing models are already capable of replacing or augmenting a decent portion of modern-day intellectual labor," a spokesperson for the Center for AI Safety, a San Francisco-based research nonprofit, said. Future models "will likely be completely capable of doing various white-collar intellectual tasks."
Though AI threatens certain sectors, the World Economic Forum estimates that it will have a net positive impact on job growth, predicting several million new jobs in education, agriculture and digital commerce and trade, and will increase demand for AI specialists.
"Generative AI is a double-edged sword," Subrahmanian said. "If ChatGPT can perform a task currently performed by humans faster, better and cheaper, then those individuals' jobs are at risk. But ChatGPT has already created new jobs such as ones based on prompt engineering. And it can enable people to overcome deficits and qualify for jobs that they did not qualify for before."
Generative AI could be detrimental because of its lack of accuracy, as PolitiFact found when it put ChatGPT to a fact-checking test. OpenAI recognizes that its technology "still is not fully reliable," can be "confidently wrong in its predictions," and may "hallucinate facts and make reasoning errors." Not only can generative AI contribute to the proliferation of misinformation, but internal reports indicate that ChatGPT can also create it convincingly.
Some AI experts are also concerned about the dangers posed by future iterations of the technology — a superintelligent "rogue AI" that supersedes human control. Major tech executives and industry leaders such as Elon Musk, Steve Wozniak, Andrew Yang and Rachel Bronson were among thousands of signatories on a March letter asking AI labs to pause development for six months on AI systems to improve safety and oversight of the technology.
The Association for the Advancement of Artificial Intelligence (AAAI) also wrote an April open letter, highlighting AI technology’s social value, while recognizing several key concerns that need addressing through transparency, safety, and engagement in ethics discussions.
But not all experts share these concerns. One of the "grandfathers" of AI, Yann LeCun, recently told BBC News that although AI will change certain aspects of the world, statements about AI threatening humanity are "preposterous."
At a Senate hearing in May, OpenAI CEO Sam Altman urged legislators to regulate the industry.
"I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that," Altman said. "We want to work with the government to prevent that from happening."
Tom Davenport, information technology and management professor at Babson College, phone interview, June 7, 2023
Tom Davenport, "How Morgan Stanley is training GPT to help financial advisors," June 2023
V.S. Subrahmanian, computer science professor at Northwestern University, email interview, June 7, 2023
Valerie Wirtschafter, senior data analyst in the Artificial Intelligence and Emerging Technologies Initiative at the Brookings Institution, email interview, June 12, 2023
Helen Toner, director of strategy and foundational research grants at Georgetown’s Center for Security and Emerging Technology, email interview, June 12, 2023
Helen Toner, blog post, "What Are Generative AI, Large Language Models, and Foundation Models?" May 12, 2023
Center for AI Safety, emailed statement from spokesperson, June 7, 2023
Center for AI Safety, "Statement on AI Risk," accessed June 8, 2023
Analytics Drift, "Coca-Cola’s Generative AI Advertisement Takes Internet by Storm," May 16, 2023
Google Cloud Tech, "Introduction to Generative AI," May 8, 2023
TechTarget, "GAN vs. transformer models: Comparing architectures and uses," April 12, 2023
NVIDIA, "What Is a Transformer Model?," March 25, 2022
Cornell University, "Attention Is All You Need," June 12, 2017
OpenAI, "Introducing ChatGPT," Nov. 30, 2022
OpenAI, "Introducing ChatGPT Plus," Feb. 1, 2023
OpenAI, "GPT-4," March 14, 2023
OpenAI, Product, accessed June 8, 2023
OpenAI, GPT-4 Technical Report, March 27, 2023
Similarweb, "As ChatGPT Growth Flattened in May, Google Bard Rose 187%," June 5, 2023
GQ, "The Pope Francis Puffer Photo Was Real in Our Hearts," March 28, 2023
The Washington Post, "See why AI like ChatGPT has gotten so good, so fast," May 24, 2023
World Economic Forum, "The jobs most likely to be lost and created because of AI," May 4, 2023
Accenture, "A new era of generative AI for everyone," March 22, 2023
World Economic Forum, "The Future of Jobs Report 2023," April 30, 2023
The New York Times, "AI Poses ‘Risk of Extinction,’ Industry Leaders Warn," May 30, 2023
The New York Times, "An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.," Sept. 2, 2022
Future of Life Institute, "Pause Giant AI Experiments: An Open Letter," March 22, 2023
Association for the Advancement of Artificial Intelligence, "Working together on our future with AI," April 5, 2023
Sam Altman, tweet, December 10, 2022
Yoshua Bengio, "How Rogue AIs may Arise," May 22, 2023
Midjourney, Midjourney, accessed June 8, 2023
The Wall Street Journal, "What Is ChatGPT? What to Know About the AI Chatbot," May 16, 2023
Digital Trends, "GPT-4: how to use, new features, availability, and more," April 6, 2023
TechTarget, "What is a neural network? Explanation and examples," accessed June 12, 2023
The New Yorker, "Is A.I. art stealing from artists?," Feb. 10, 2023
Campaigns of the World, "Heinz A.I. Ketchup," Aug. 17, 2022
Microsoft, "From Hot Wheels to handling content: How brands are using Microsoft AI to be more productive and imaginative," Oct. 12, 2022
The Wall Street Journal, "The Jobs Most Exposed to ChatGPT," March 28, 2023
CBS News, "OpenAI CEO Sam Altman testifies at Senate artificial intelligence hearing | full video," May 16, 2023
Tech Policy Press, "Transcript: Senate Judiciary Subcommittee Hearing on Oversight of AI," May 16, 2023
The Associated Press, "How Europe is leading the world in the push to regulate AI," June 14, 2023
CNN, "Europe is leading the race to regulate AI. Here’s what you need to know," June 14, 2023
European Parliament, "EU AI Act: first regulation on artificial intelligence," updated June 14, 2023
BBC News, "Meta scientist Yann LeCun says AI won't destroy jobs forever," June 15, 2023