New York has taken a significant step towards regulating artificial intelligence by enacting the Responsible AI Safety and Education Act, known as the RAISE Act. Signed into law by Governor Kathy Hochul, this legislation mandates that major AI developers disclose their safety protocols and report incidents related to AI usage within 72 hours of occurrence. The law applies to companies with annual revenues exceeding $500 million and will take effect on January 1, 2027.

The RAISE Act closely mirrors California’s Transparency in Frontier Artificial Intelligence Act, which was signed into law in September 2025. With the establishment of an oversight office within the New York Department of Financial Services, the RAISE Act aims to enhance transparency in AI operations. The legislation grants the Attorney General the authority to initiate civil actions against non-compliant developers, with penalties reaching as high as $1 million for initial violations and up to $3 million for repeat offenses.

California’s Attorney General has also been active in enforcing consumer protection laws, recently settling with mobile gaming company Jam City, Inc. over alleged violations of the California Consumer Privacy Act (CCPA). The settlement, amounting to $1.4 million, addresses claims that Jam City failed to provide consumers with proper methods to opt out of personal data sales across its platforms. The company is now required to implement in-app opt-out features for users aged 13 to 16 and must cease sharing personal data without affirmative consent.

New Privacy Initiatives and AI Regulation

In addition to New York’s legislation, the California Privacy Protection Agency (CPPA) has introduced the Delete Request and Opt-out Platform (DROP) under the California Delete Act. This initiative enables residents to request the deletion of their personal data from registered data brokers with a single submission. Starting January 1, 2026, California residents can access this platform online, and data brokers are obligated to process such requests every 45 days.

Meanwhile, a coalition of 42 State Attorneys General has urged major tech companies like Google and Meta to adopt stricter safeguards against harmful outputs generated by AI systems. The coalition’s letter highlights serious incidents linked to generative AI, including fatalities and cases of domestic violence, and calls for enhanced training and independent audits to ensure child safety and mitigate risks associated with AI technology.

The Indiana Attorney General has also taken a proactive stance by publishing a Data Consumer Bill of Rights, which outlines the rights of Indiana consumers under the Indiana Consumer Data Protection Act (ICDPA). Effective from January 1, 2026, the ICDPA grants consumers the right to know, correct, and delete their personal data, as well as to opt out of data processing for targeted advertising. This act applies to businesses that handle data from a significant number of Indiana residents.

Challenges and Legislative Developments

In federal matters, President Donald Trump has issued an Executive Order aimed at establishing a national framework for AI regulations to reduce what his administration calls “excessive” state-level regulations. The order instructs the U.S. Attorney General to form a task force to challenge state AI laws that may be unconstitutional and calls for the Department of Commerce to review state legislation within 90 days. Critics, including Senator Edward Markey, have voiced opposition, labeling the order as overly broad and urging Congress to assert its regulatory authority.

The Federal Trade Commission (FTC) is also taking action on consumer protection. It has reopened a case against Rytr LLC, a company providing AI-driven writing services, due to concerns that the previous consent order imposed undue burdens on innovation. In a separate matter, the FTC has settled with Illuminate Education, Inc., following a major data breach that compromised the personal data of over 10 million students, highlighting the ongoing challenges in safeguarding consumer information.

As privacy and AI regulations continue to evolve, both state and federal authorities are emphasizing the importance of transparency and consumer rights in the digital age.