Roper Center Artificial Intelligence (AI) Policy

The Board of Directors of the Roper Center for Public Opinion Research adopted this Artificial Intelligence (AI) Policy on November 14, 2025, to serve as a supplement to the existing General Terms of Use.


INCORPORATION AND UPDATES

This Artificial Intelligence (“AI”) Policy is incorporated by reference into the Roper Center for Public Opinion Research (“Roper Center”) General Terms of Use and Subscriber Agreement. Roper Center may modify this AI Policy at any time, and such modifications are effective upon posting. Users are responsible for reviewing the current AI Policy, as continued use of Roper Center Data (as defined below) signifies acceptance of any changes.

DEFINITION

“Artificial Intelligence” or “AI” is defined broadly to include machine learning models (such as Large Language Models (“LLMs”), e.g., ChatGPT), expert systems, or any other form of AI technology.

GENERAL GUIDELINES AND USER OBLIGATIONS

Users must exercise caution when using AI tools with Roper Center Data. In particular, do not input sensitive, private, confidential, or proprietary data (including Roper Center survey data) into any generative AI or similar tool. Doing so could violate privacy expectations or legal/contractual requirements. Users are encouraged to retain full records of any AI-assisted analysis or decisions – including the AI prompts, outputs, and any underlying reasoning or source data – to ensure accountability and transparency in AI-related work. Users shall promptly disclose in writing to Roper Center any current or planned use of AI in research involving Roper Center Data. Depending on the nature of the AI use, prior written approval from Roper Center may be required before proceeding (see AI Tool Categories below). Users must cooperate with Roper Center to ensure that any provision, use, or storage of Roper Center Data via AI tools complies with all applicable laws and Roper Center’s data security protocols.

AI TOOL CATEGORIES AND PERMISSIBLE USES

For the purposes of this policy, AI systems are classified into three categories based on data retention and network isolation, with corresponding limitations on their use with Roper Center data, including, without limitation, any associated (i) question text and topline numbers, (ii) respondent-level datasets, (iii) supplemental documents and files, and (iv) indexing and search metadata (collectively, “Roper Center Data”):

  • Type 1 – AI with Data Retention (Prohibited): This category includes AI/LLM platforms that retain user-provided data for training or other purposes (e.g., public LLMs like standard ChatGPT or other cloud AI services that learn from inputs).
    • Permitted Use: None. Using any Type 1 AI with Roper Center Data is strictly forbidden, as it effectively redistributes the data to the AI provider’s servers. Feeding Roper Center Data into such an AI constitutes unauthorized data redistribution and is prohibited.
  • Type 2 – AI with No Retention (Limited Use with Permission): This category refers to AI tools that do not retain user data, typically enterprise or institution-licensed AI services operating under contracts that forbid retention of input data. However, these systems are still connected to broader networks (e.g., an institutional LLM with cloud access that does not keep inputs).
    • Permitted Use: Allowed only for Roper Center Data that falls into categories (i) question text and topline numbers and (iii) supplemental documents and files, and only with explicit prior written permission from Roper Center. Use of Type 2 AI with Roper Center Data in category (ii) respondent-level datasets is prohibited because such systems are not fully isolated from the internet, and such use may conflict with applicable data security requirements. Even when using a Type 2 AI on category (i) or (iii) data with Roper Center’s permission, the user must ensure the AI is not employed in a manner that could reconstruct identifiable information about any individual respondent.
  • Type 3 – AI in Secure Isolated Environment (Limited Use with Permission): This category includes AI systems that do not retain input data and are operated in an entirely isolated, secure environment with no internet access (for example, an AI model run on a standalone secure server).
    • Permitted Use: Such use may be allowed for any category of Roper Center Data, but only under strict data security protocols approved in writing by Roper Center. The AI environment must comply with all requirements of the applicable data security plan for restricted data (e.g., the data must remain offline and protected). Advance written approval from Roper Center is mandatory before using any Type 3 AI with Roper Center Data, to ensure the planned setup meets Roper Center’s security standards.

Type 1 AI usage will not be approved under any circumstances. Any use of AI that results in Roper Center Data being ingested, retained, or learned by an external system without authorization is a violation of this policy.

PROHIBITED AI-RELATED ACTIVITIES (IN ALL CASES)

  • Data Redistribution to AI: Except as expressly permitted in writing by Roper Center, users may not use Roper Center Data to train or feed any AI or machine learning system. Providing Roper Center Data to a third-party AI service (especially Type 1 systems) is considered an unauthorized redistribution of such data and is strictly prohibited. This includes uploading datasets or large portions thereof into AI tools or prompts, or allowing an AI to crawl or otherwise ingest Roper Center Data.
  • Re-identification or Linking: Users are expressly forbidden from using AI tools to attempt to identify or deanonymize survey respondents in Roper Center Data. Any effort to link Roper Center Data with other data sources (public or private) for the purpose of discovering personal identities or personal information about any respondent is prohibited. Similarly, using AI to find patterns that could increase the risk of identifying individuals or organizations represented in the data is prohibited. All analysis must remain at an aggregate or statistical level consistent with the intended use of the data; any breach of respondent anonymity via AI violates this policy and the promise of confidentiality given to survey participants.
  • Privacy and Security Violations: Users may not use AI in any manner that would violate data privacy laws, confidentiality agreements, or Roper Center’s security requirements. For example, exporting restricted-use data to any online AI service is prohibited, as is any action that circumvents established data security protocols (such as removing data from a secure environment for AI processing). Users must adhere to all data protection regulations when using AI, just as they would with any other data processing tool.
  • Intellectual Property Infringement: Users must not employ AI in a way that infringes intellectual property rights. This means users may not use Roper Center Data via AI to create outputs that violate copyrights, nor use proprietary AI outputs in violation of their terms. Any AI usage that would breach third-party IP rights or licensing restrictions is prohibited.

USER REPRESENTATIONS AND WARRANTIES

By using Roper Center Data with AI tools (in any manner), the user represents, warrants, and agrees to the following:

  • Legal Compliance: The user has complied, and will continue to comply, with all applicable data protection and privacy laws and regulations in connection with their use of AI. The user will handle any personal or sensitive information in accordance with those laws and will not cause Roper Center to be in violation of any such laws through their actions.
  • Adherence to Agreements: The user’s use of AI will not violate any agreement with or policy of Roper Center. The user acknowledges that Roper Center Data is provided under specific terms (e.g., the General Terms of Use and Subscriber Agreements), and any AI usage will honor those same terms (such as restrictions on redistribution or commercial use, as detailed in this AI Policy, the General Terms of Use, and the Subscriber Agreement).
  • No Third-Party Rights Violations: The user’s current or planned AI use does not and will not infringe upon or violate the intellectual property rights, proprietary rights, privacy rights, rights of publicity, or any other rights of any third party. All content generated or analyzed by AI in the course of using Roper Center Data must be original, properly licensed, or otherwise fall within permissible use. The user is solely responsible for any misuse of Roper Center Data through AI that could harm third-party rights.
  • Cooperation and Accountability: The user will cooperate with Roper Center in good faith to demonstrate compliance with this AI Policy. This may include providing documentation of AI use, such as logs or transcripts, if Roper Center has questions or concerns about a particular use case. The user is accountable for the actions of any AI they employ; AI assistance is not an excuse for violations. The user remains responsible for safeguarding the data and reporting any security breach or unauthorized access that occurs via AI.
  • User’s Assumption of Risk: The user acknowledges that any analysis, summaries, or other outputs produced by AI using Roper Center Data are not guaranteed or endorsed by Roper Center. AI-generated content may contain inaccuracies or biases. The user assumes full responsibility for verifying the accuracy and validity of AI outputs before relying on them. Roper Center and the original data providers bear no responsibility for the results of any AI usage on the data or for any interpretations or conclusions drawn from AI-generated analyses.

PUBLICATIONS, ATTRIBUTION, AND DISCLOSURE OF AI USE

If Roper Center Data is used in any publications, reports, or other research outputs where AI tools played a role in analysis or writing, the following guidelines apply:

  • Human Authorship: All publications or manuscripts must be conceptualized and written by human authors, not by AI. Generative AI tools cannot be listed as authors or co-authors on any work products derived from Roper Center Data. Authorship implies responsibility and intellectual contribution, which only human researchers can fulfill; thus, AI may assist in certain tasks (as a tool), but it cannot take credit as an author.
  • Transparency in Use of AI: Users should fully disclose the use of any AI tools in the research process or in preparation of publications. This disclosure should detail which AI tools were used, how they were used, and for what purpose. Best practice is to include this information in the methods section or acknowledgments of a paper. For maximum transparency, the disclosure might be placed prominently (e.g., at the beginning of the manuscript or in a dedicated “AI Use Disclosure” section) and repeated where relevant in the text.
  • Documentation of AI Inputs/Outputs: In line with emerging scholarly guidelines, authors should document the AI prompts and outputs involved in their work. If a large language model was used to generate text or analyze data, consider saving the full chat transcript or output and including it as an appendix or online supplement to your article. This practice provides transparency and allows peer reviewers or readers to see exactly what was generated by AI versus what was written by the human author.
  • No AI-Only Publications: Publications or reports should not consist predominantly of AI-generated content. While AI might help with grammar, coding, or brainstorming, the substance of the analysis, interpretation of results, and conclusions must come from the researcher. Users should critically evaluate and edit any AI contributions. Remember that quality control is crucial – AI can introduce errors or fabricated information. Researchers are expected to rigorously fact-check and validate all content, just as they would with any research assistant’s contributions.
  • AI in Citations/Bibliographies: If AI tools are used to generate summaries or translations of Roper Center Data, those tools should be acknowledged but not cited as primary sources for the data itself. Always cite the original Roper Center Data and any traditional sources. AI tools, if cited, should be referenced in a manner similar to software (including model name, version, date, etc., as recommended by style manuals). For instance, list the AI tool in a methodology footnote or in the references as: OpenAI (2024), ChatGPT [Large language model], URL of tool, along with a description of how it was used. This makes clear that the AI is a tool, not an originator of the data or analysis.

ENFORCEMENT AND CONTACT

Suspected violations of this AI Policy may result in Roper Center taking appropriate action, up to and including termination of data access and reporting of the violation to the user’s institution. Users are responsible for ensuring that anyone working with Roper Center Data under their supervision (e.g., research assistants or students) is aware of and adheres to these rules. If you are unsure whether a particular use of AI with Roper Center Data is allowed, please contact Roper Center for clarification and permission before proceeding. By using Roper Center Data with AI, you acknowledge that you have read, understood, and agreed to this AI Policy. This policy exists to protect valuable data assets and to ensure that innovations like AI are applied in ways that uphold ethical research standards and legal obligations. Thank you for your cooperation in using AI responsibly.