Do You Have to Disclose When Your Users Are Interacting With a Bot?
The customer service agent sympathetically responding to your chat messages about a delayed order, the player fiercely competing against you in an online match, the coach sending you motivational messages and workout advice: are these real people you're interacting with over the internet or bots? Does it matter?
With recent advances in generative AI, bots capable of engaging in complex interactions via text and voice are becoming increasingly common. Even as adoption accelerates, however, legal and ethical questions surrounding the use of bots remain unresolved, including whether it is necessary or advisable to disclose when users are interacting with a bot.
Below we highlight certain U.S. legal considerations that bot deployers should evaluate when deciding whether to disclose their use of bots.
California Bot Disclosure Law
In 2019, before the current wave of advanced generative AI, California enacted a bot disclosure law, codified at Cal. Bus. & Prof. Code §§ 17940-17942, that requires bot deployers to disclose that users are interacting with bots in specific contexts. The disclosure requirement, however, is limited: it applies only when bots are used to "knowingly deceive" a person for the purpose of incentivizing a commercial transaction or influencing a vote in an election. To comply with the law, affected companies must disclose that they are using bots to interact with consumers and other covered persons. Such disclosures must be "clear, conspicuous, and reasonably designed to inform persons with whom the bot communicates or interacts that it is a bot." For a deeper exploration of this topic, refer to our previous article.
The law does not expressly provide a private right of action; however, private claims have been brought over violations of the law. In 2022, customers of the weight loss app Noom accused the company of misleading customers in a number of ways, including by violating the California bot disclosure law. Customers alleged that although Noom represented that members would receive personalized plans from human coaches as part of Noom's subscription service, the coaches that users interacted with were actually automated bots. The parties eventually reached a settlement valued at $56 million.
The California bot disclosure law expressly provides for enforcement by the state attorney general, who has broad enforcement authority to levy fines of up to $2,500 per violation as well as equitable remedies. Although California was the first state to impose such a disclosure requirement, other states have begun to consider and implement their own disclosure requirements.
Utah Artificial Intelligence Policy Act
The Utah Artificial Intelligence Policy Act, which went into effect on May 1, 2024, adds provisions to the Utah Consumer Sales Practices Act that may also require businesses to disclose when users are interacting with an AI in certain contexts.
Under the new law, consumer-facing bots must disclose, upon being asked, that the consumer is interacting with "generative artificial intelligence and not a human." This requirement is intended to provide consumers with transparency, ensuring they are fully aware of the nature of their online interactions. For regulated occupations, such as those requiring a license or state certification, the consumer does not need to request disclosure; proactive disclosure of bot usage is required.
The law additionally holds companies accountable for any consumer protection violations involving bots. Specifically, the law clarifies that companies cannot use the fact that statements were uttered by a bot as a defense against violations. In other words, bot deployers may be held responsible for what their bots say.
The Utah Artificial Intelligence Policy Act does not expressly provide a private right of action, but administrative fines of up to $2,500 per violation can be levied. The attorney general and Utah's Division of Consumer Protection may impose the fines or bring actions in court, with additional civil penalties possible if an administrative or court order issued pursuant to the Act is violated.
Colorado Artificial Intelligence Act
On May 17, 2024, Colorado Governor Jared Polis signed into law Senate Bill 24-205, known as the Colorado Artificial Intelligence Act, establishing new state standards for transparency. The law will come into effect on February 1, 2026.
The new law becomes part of Colorado's Consumer Protection Act and requires consumer-facing developers and deployers to disclose to consumers when they are interacting with an AI-powered bot system. However, disclosure is not required if it would be obvious to a reasonable person that they are interacting with a bot.
The Colorado Artificial Intelligence Act does not provide a private right of action. The law provides exclusive enforcement authority to Colorado's attorney general, who will be able to enforce the law as an unfair or deceptive trade practice. Violations of the law constitute unfair trade practices under Colorado law, which may result in a civil penalty of up to $20,000 for each violation. The attorney general can also seek injunctive relief for violations.
Unfair, Deceptive, or Abusive Acts or Practices
Even when state law does not specifically require disclosure, the use of undisclosed bots may still give rise to claims under state unfair, deceptive, or abusive acts or practices laws. Similarly, the Federal Trade Commission (FTC) may regard some undisclosed usage of bots as an unfair or deceptive business practice in violation of the FTC Act. Such risks are likely to be especially acute when it can be argued that the deployer took steps to deceive users into believing they were interacting with a human (e.g., a bot introducing itself with a human-sounding name), when the value of a service is based on human interactions (e.g., interactions on dating apps), or when the use of bots is arguably at odds with how a product is advertised (e.g., a gaming platform that advertises player-to-player matches).
Depending on the nature of the claims and where a user is located, claims over unfair or deceptive practices may be initiated by both individual consumers and state attorneys general. The FTC may also investigate or bring actions related to unfair or deceptive practices. Available damages and remedies will vary but can include financial restitution, civil penalties, and injunctive relief.
Other Considerations
Disclosing that responses are generated by bots may also help mitigate other risks. Air Canada was recently required to honor a discounted airfare that its customer service bot erroneously told a customer they qualified for. Although the bot's statement contradicted the airline's published policy on discounted airfares, the British Columbia Civil Resolution Tribunal required the airline to honor the discount because the consumer had no reason to think they needed to double-check the information the bot provided. Proactively disclosing when information is provided by a bot, or advising that, given the nature of the technology, bot-provided information should be checked against the company's published policies, could help defeat similar claims in the future. Such claims, however, are relatively new, and it is not yet clear how readily courts will let deployers escape liability for inaccurate statements made by their bots.
Conclusion
As bots become more pervasive in consumer interactions, disclosure and transparency requirements will continue to evolve. Legislative developments in states like California, Utah, and Colorado, alongside federal attention from the FTC, demonstrate the growing scrutiny of this topic. Accordingly, companies should be thoughtful about how they deploy bots and should closely monitor the statutory and regulatory landscape to ensure compliance with disclosure requirements and best practices.
The authors wish to acknowledge Summer Associate Justin Bae's contributions to this Update.
Follow us on social media @PerkinsCoieLLP, and if you have any questions or comments, please contact us here. We invite you to learn more about our Digital Media & Entertainment, Gaming & Sports industry group and check out our podcast: Innovation Unlocked: The Future of Entertainment.