North Dakota Artificial Intelligence Policy

Proposed 2023-08-10 | Enacted 2024-02-08 | Official source

Summary

Establishes policy for North Dakota's AI use to ensure safety, privacy, and ethical standards. Requires AI reliability, transparency, and accountability. Mandates risk management, privacy assessments, and compliance with legal frameworks. Directs NDIT to provide training and evaluate new AI technologies.

  • This machine-generated summary is awaiting review by an AGORA editor. Use with caution.

Key facts

🏛️ This document has been enacted by the State of North Dakota. For authoritative text and metadata, visit the official source.

📜 This document's name is State of North Dakota Artificial Intelligence Policy. AGORA also tracks this document under the name North Dakota Artificial Intelligence Policy.

Themes

AI risks, applications, governance strategies, and other themes addressed in AGORA documents.

Thematic tags are in progress.

Full text

  • This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
[footnotes omitted]

INTRODUCTION

1.0 PURPOSE

The purpose of the Artificial Intelligence (AI) Policy is to embrace the innovative benefits AI can provide to increase productivity and improve the citizen experience, while reducing the risks and concerns of using this emerging technology. This policy protects the safety, privacy, and intellectual property rights of the State of North Dakota by ensuring all forms of AI are handled in a transparent, consistent, responsible, ethical, and secure manner.

2.0 BACKGROUND

Artificial Intelligence concerns data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement. Generative AI is a prevalent example of AI and includes chatbots, virtual assistants, and other systems built on it, whether:

  • Standalone systems (e.g., OpenAI ChatGPT, DALL-E, Microsoft Copilot),
  • Integrated as features within search engines (e.g., Microsoft Bing chat, Google Bard), or
  • Embedded in other software tools.

Generative AI tools can enhance productivity by assisting with tasks such as drafting documents, editing text, generating ideas, creating images, and writing software code. However, these technologies also carry potential risks, including inaccuracies, bias, and unauthorized use of intellectual property in generated content. Content created by AI, and the public availability of information submitted to AI, could pose security and privacy concerns.

The State of North Dakota consulted the following sources:

  • National Institute of Standards and Technology (NIST) Special Publication (SP) 1270, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,
  • NIST AI 100-1, Artificial Intelligence Risk Management Framework, and
  • Other regulatory frameworks for AI requirements.

Cybersecurity is one of the greatest challenges facing organizations in North Dakota State Government.
Protecting information and communications technology (ICT) from threats and vulnerabilities is critical to ensuring that the confidentiality, availability, and integrity of citizen and organizational data remain protected.
3.0 SCOPE

This policy applies to all North Dakota executive branch state agencies, including the University Systems Office but excluding other higher education institutions (i.e., campuses and agricultural and research centers). All other state agencies outside the scope of this policy are encouraged to adopt this policy or use it to frame their own.

4.0 STATEMENT OF MANAGEMENT COMMITMENT

The North Dakota Chief Information Officer (CIO) directs that Information Technology (IT) Policy be created, as defined within the North Dakota Century Code (Chapter 54-59-09). The Governance Review Team is responsible for reviewing and updating this policy. Reviews and updates to the policy and procedures will be a coordinated effort, routinely reviewed and updated annually, or as immediate changes are required. North Dakota's Chief Information Security Officer (CISO) directs that this policy be created to provide appropriate security and privacy safeguards and countermeasures against the threats and vulnerabilities that may impact the confidentiality, integrity, and/or availability of information and information systems.
5.0 GOVERNANCE AND COMPLIANCE

Violations of this policy will be handled in accordance with applicable State of ND policies, procedures, laws, executive orders, directives, regulations, standards, and guidelines. Team members may report non-compliance with this policy to the NDIT Security Governance, Risk, and Compliance team for initial review. The Report for Non-Compliance is completed through NDIT's ServiceNow platform. NDIT will address submissions with Entity leadership.

This policy shall take effect upon publication. Compliance is expected with all State policies, procedures, and standards, which may be amended at any time. If compliance with this policy is not feasible or technically possible, or if deviation from this policy is necessary to support a business function, entities shall request an exception through NDIT's exception process. Exceptions to this policy shall be requested through the Policy Exceptions request.
6.0 DEFINITIONS

Artificial Intelligence (AI) – A field in computer science that focuses on making independent decisions based on supervised and unsupervised learning.

Machine Learning (ML) – A subfield of AI that focuses on the development of algorithms and statistical models to make independent decisions, but still needs humans to guide and correct inaccurate information. ML is the most common type of AI.

Large Language Models (LLMs) – A type of AI that has been trained on large amounts of text and datasets to understand existing content and generate original content.

Deep Learning – A subfield of machine learning that focuses on algorithms that adaptively learn from data without instruction or labeling. Also referred to as "unsupervised learning." Examples: self-driving cars, facial recognition, ChatGPT, et al.

Generative AI – A type of AI that uses machine learning to generate new outputs based on training data. Generative AI algorithms can produce brand-new content in the form of images, text, audio, code, or synthetic data.

Business Owner – An Entity's senior or executive team member who is responsible for the security and privacy interests of organizational systems and supporting mission and business functions.

Data Owner – The individual or individuals responsible and accountable for data assets.

Data Steward – The individual or individuals with assigned responsibility for the direct operational-level management of data.
7.0 POLICY

It is important for Entities and users to identify the characteristics of trustworthy AI systems so organizations can continue innovation and growth through AI while reducing risks.

7.1 VALID AND RELIABLE

AI technologies shall be reliable and consistently valid or accurate in their responses.

  • Entities shall confirm the validity and reliability of output produced by AI technologies.

7.2 TRANSPARENCY

Increased transparency increases trust in AI technology. Transparency about AI technology, combined with a risk management strategy, can minimize the impact of risks and negative outcomes. The use of AI shall be clearly explained and understandable.

  • Entities shall be transparent about AI technologies and their outputs, disclosing where citizens are interacting with AI, the outcome and/or impact, if applicable, and the business purposes for which AI is used.
  • When using medium- to high-risk data as outlined in the NDIT Data Classification Policy, Entities shall ensure that all systems and processes employing artificial intelligence (AI) for decision-making or output generation are clearly marked to enhance transparency and accountability. The decision to inform end users about the use of AI falls under the discretion of the business process owner or related governing body.

7.3 ACCOUNTABILITY

  • Entities shall ensure AI used within systems is securely developed, assessed for risk, and monitored regularly.
  • Entities shall ensure AI is used responsibly, operates correctly, and complies with applicable laws, regulations, policies, procedures, standards, guidelines, and best practices.
  • Team members shall not use state-managed passwords to log into third-party applications that are not managed by the State's digital identity management solution (Single Sign-On/Active Directory). Use of a state-issued email as a user ID is permissible if a different password is used.
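The marking requirement in section 7.2 can be illustrated with a minimal sketch. This is not part of the policy, and every name in it (the banner text, the function name) is a hypothetical stand-in; the policy leaves the exact form of marking to the business process owner or related governing body.

```python
# Illustrative sketch only -- not part of the policy. The disclosure text and
# function names below are hypothetical stand-ins; section 7.2 does not
# prescribe specific wording or implementation.

AI_DISCLOSURE = "NOTICE: This content was generated with the assistance of AI."


def mark_ai_output(content: str, uses_ai: bool) -> str:
    """Prepend a disclosure banner when the content was produced by an AI system."""
    if uses_ai:
        return f"{AI_DISCLOSURE}\n\n{content}"
    return content
```

A system employing AI for output generation would route every response through such a marker, so AI-produced content is never presented without disclosure.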
7.4 SECURITY AND RISK MANAGEMENT

Entities utilizing AI system technologies shall incorporate NDIT's Security Risk Management Program (SRMP) into system development and operations, when applicable.

  • Privacy impact assessments and third-party and security risk assessments shall be conducted regularly to ensure that security, safety, confidentiality, civil liberties, civil rights, and privacy are protected while continuing to promote and empower the use of AI to benefit the State of North Dakota and its citizens.
  • Users shall not input any content into public AI/ML technology services (e.g., ChatGPT) that contains moderate-risk or high-risk data as defined in the NDIT Data Classification Policy. Low-risk data, which is publicly available data, is permitted for use with AI/ML technologies.
  • The data/business owner shall establish appropriate controls and risk mitigation strategies, with advisement from NDIT, to mitigate identified risks and ensure the use of AI does not compromise the safety, soundness, or integrity of the Entity's data and systems.

7.4.1 Training

  • NDIT shall provide AI training through the cybersecurity education and awareness platform and provide AI workshops.
  • The data/business owner shall provide role-based training to team members for specific and unique AI technologies used for their business purposes.

7.4.2 Use of Approved Products

  • All AI technologies shall be properly vetted by NDIT, including free software services.
  • An AI technology inventory list shall be maintained by NDIT.
  • Entities shall use approved technologies and request an evaluation of new technologies through the NDIT Initiative Intake request.

7.4.3 Privacy

  • The data/business owner shall comply with all applicable data protection and privacy laws, regulations, and guidelines.
  • Citizen, Entity, and regulated data will be collected, stored, processed, and shared in a secure and confidential manner, with explicit consent obtained where required.
  • Entities shall design and implement procedures for the specific AI technologies being used.
  • Entities and users shall evaluate the accuracy and compliance of AI technologies on a regular basis.
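The rule in section 7.4 that only low-risk (publicly available) data may be submitted to public AI/ML services can be sketched as a pre-submission guard. This is an illustration only: the risk tiers are hypothetical stand-ins for the NDIT Data Classification Policy, and the function and class names are invented for the example.

```python
# Illustrative sketch only -- tier names and all identifiers are hypothetical
# stand-ins for the NDIT Data Classification Policy. Section 7.4 permits only
# low-risk (publicly available) data in public AI/ML services such as ChatGPT.

from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # publicly available data: permitted
    MODERATE = "moderate"  # prohibited in public AI/ML services
    HIGH = "high"          # prohibited in public AI/ML services


def may_submit_to_public_ai(tier: RiskTier) -> bool:
    """Return True only when the data's classification allows public AI use."""
    return tier is RiskTier.LOW


def submit_prompt(prompt: str, tier: RiskTier) -> str:
    """Block the submission unless the data is classified low-risk."""
    if not may_submit_to_public_ai(tier):
        raise PermissionError(
            f"{tier.value}-risk data may not be sent to public AI/ML services"
        )
    return prompt  # placeholder for the actual call to the external service
```

In practice such a guard would sit in front of any integration with an external AI service, so classification is checked before data ever leaves the Entity's systems.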
7.5 ETHICS, FAIRNESS, AND BIAS

  • AI technologies shall be ethical, fair, and unbiased, in a manner that does not discriminate against or negatively affect a specific group of people.
  • Human rights, civil liberties, and dignity shall be protected when selecting AI technologies and using their output.
  • Users and Entities shall ensure that AI technologies utilize an ethical and fair representation of culture, economics, and society within their data sets, and that benefits are accessible to all citizens.

7.6 LEGAL AND COPYRIGHT

AI uses data models to generate output, and submitted data may incorporate source materials that are owned by individuals or organizations, especially when public AI sources are used. Source material may be reproduced in generated output and, if used without permission, could violate copyright or licensing terms.

  • End-user licensing agreements, terms of use, privacy policies, and other legal documents shall be reviewed in detail to determine the risks and legal parameters for use of AI.
  • The U.S. Copyright Office has determined that AI-generated content is not protected by copyright. Content must be human-authored to be considered protected under copyright laws.
  • Use of AI technology is subject to open records law as defined in NDCC 44-04-18. Open records policy is not bound to any specific system, device, or platform, nor by its owner.