Survey finds that Americans are uncomfortable with companies that operate AI chatbots sharing their health data
Washington, DC – Two new Consumer Reports surveys explore US consumers' use of, and attitudes toward, text-based generative artificial intelligence (AI) chatbots. Artificial intelligence is used every day across the country and is increasingly integrated into Americans' lives, from customer service to online medical advice.
CR conducted two separate nationally representative multi-mode surveys (one of 2,062 US adults in August 2023 and one of 2,070 US adults in November 2023) to measure consumer use of, and sentiment toward, generative AI chatbots such as ChatGPT, including health-related uses of such chatbots.
Grace Gedye, policy analyst at CR, said, “With the rising popularity of AI services over the past year, we wanted to study American consumers’ familiarity with AI chatbots. We also wanted to understand consumers’ preferences regarding how their personal data is used and reused. The survey highlights that consumers use AI chatbots for education, writing, editing, and a variety of other tasks. The survey also shows that the vast majority of consumers don’t think it’s acceptable for companies that own chatbots to sell or share their health data.”
“People should be mindful when using AI-powered chatbots and virtual assistants. These services can be tremendously helpful, but they can also provide incorrect information and use consumers’ data in ways they don’t expect. As artificial intelligence becomes integrated into our daily lives, Consumer Reports is advocating for regulatory guardrails and greater transparency to prevent unintended consequences of AI.”
Key findings of the surveys include:
Which types of AI chatbot services are most popular
- According to CR’s August 2023 survey, ChatGPT was by far the most commonly used AI chatbot among Americans at that time: 19% had used it in the previous three months, compared with 6% for Bing AI and 4% for Google’s Bard. However, 69% of Americans said they had not used any AI chatbot in the previous three months.
How Americans use AI chatbots
- Also in CR’s August 2023 survey, the most common ways Americans had used AI chatbots related to gathering and understanding information. Thirty-five percent of Americans who had used an AI chatbot in the past three months said they had used it instead of a search engine to answer a question, and the same percentage had used it to explain something. Around one in four (23%) had used it to write, rewrite, or edit something.
Why Americans turn to AI chatbots
- In the August 2023 survey, CR found that the most common reasons Americans who had used AI chatbots in the past three months gave for doing so were that they thought it would be fun (37%), that it would save time (36%), and/or that it would make a task easier or less stressful (35%). Around one in three (32%) said they like to use new technology.
Health-related uses of chatbots and apps
- CR’s November 2023 survey specifically addressed health-related uses of AI chatbots. One in five Americans (21%) had used a chatbot for a health-related activity, or to discuss a health-related topic, in the past six months.
Acceptable uses of health data
- A little under half (45%) of Americans said that companies that operate chatbots should never store users' health information. A third of Americans said it would be acceptable for such companies to store the information as part of a user profile.
- Under a third (29%) said it was acceptable for companies that own chatbots to use users’ health information to train the program.
- Very few Americans (5%) said they thought it was acceptable for a company to sell health-related information to, or share it with, organizations that would use it in a way that affects the consumer, such as for targeted advertising.
Consumer Reports advocates for legislation at the state and federal level to protect consumers from algorithmic discrimination and other AI-related harms. Clearer standards are needed regarding the responsible use of AI across multiple industries, as well as tools for conducting audits and risk assessments.
Media contact: Cyrus Rassool, firstname.lastname@example.org