Requires the Federal Communications Commission to create an AI-based tool within one year to help the public identify likely scams. Specifies that the tool must accept submissions in a variety of formats, evaluate the likelihood that each submission is a scam, and assign a rating reflecting that likelihood.
Mandates the Federal Communications Commission (FCC) to develop, within one year of this Act's enactment, an artificial intelligence-based tool to help the public identify likely scams.
Requires the AI tool to accept various submission formats including emails, text messages, website URLs, and scans or photographs of physical materials.
Requires the tool to evaluate the likelihood that each submission is a scam and to assign a rating reflecting that likelihood.
Defines "artificial intelligence" as per the National Artificial Intelligence Initiative Act of 2020.
Defines "scam" as any scheme designed to defraud by inducing a recipient to take actions against their interest, through false or misleading information.
This machine-generated summary is awaiting review by an AGORA editor. Use with caution.
Key facts
🏛️ This document was introduced in the United States Congress but is now defunct.
For authoritative text and metadata, visit the official source.
📜 This document's name is SCAM Platform Act.
Themes
Thematic tags are in progress.
Full text
This is an unofficial copy. The document has been archived and reformatted in plaintext for AGORA. Footnotes, tables, and similar material may be omitted. For the official text, visit the original source.
Be it enacted by the Senate and House of Representatives of the United States of America in Congress assembled,
SECTION 1. Short title.
This Act may be cited as the “Spam Communication Assessment and Mitigation Platform Act” or the “SCAM Platform Act”.
SEC. 2. Scam identification tool.
(a) In general.—Not later than 1 year after the date of the enactment of this Act, the Commission shall provide on the website of the Commission a tool that uses artificial intelligence to assist the public in identifying likely scams.
(b) Requirements.—The tool described in subsection (a) shall—
(1) accept a submission from an individual in a variety of formats, including emails, text messages, website addresses, and scans or photographs of physical materials;
(2) evaluate the likelihood that such submission is a scam; and
(3) provide such submission a rating, on a scale to be determined by the Commission, that reflects the likelihood that such submission is a scam.
(c) Definitions.—In this section:
(1) ARTIFICIAL INTELLIGENCE.—The term “artificial intelligence” has the meaning given such term in section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. 9401).
(2) COMMISSION.—The term “Commission” means the Federal Communications Commission.
(3) SCAM.—The term “scam” means a scheme or artifice to defraud, including a communication that attempts through false or misleading information to induce a recipient to pay money, provide personal information, or otherwise act contrary to the interest of the recipient.