AI appears to disrupt key private law doctrines and threatens to undermine some of the principal rights that private law protects. The social changes prompted by AI may also generate significant new challenges for private law, making new doctrinal developments likely. This Cambridge Handbook is the first dedicated treatment of the interface between AI and private law and of the challenges that AI poses for the field.
Research in intellectual property (IP) is increasingly relevant to the rapidly growing artificial intelligence (AI) and robotics industries, affecting the legal, business, manufacturing, and healthcare sectors. This contributed volume aims to deepen our understanding of the legal and ethical challenges posed by AI and robotics technologies, and of the appropriate IP-based legal and regulatory responses to them.
There are ongoing privacy concerns and uncertainties about how AI systems harvest personal data from users. Some of this personal information, such as a phone number, is voluntarily provided by the user. However, users may not realize that the system is also collecting information such as their IP address and their activity while using the service. This is an important consideration when using AI in an educational context, as some students may not feel comfortable having their personal information tracked and saved.
Additionally, OpenAI may share aggregated personal information with third parties in order to analyze usage of ChatGPT. While this information is shared only in aggregate after being de-identified (i.e., stripped of data that could identify individual users), users should be aware that they no longer control their personal information once it has been provided to a system like ChatGPT.
Generative AI tools like ChatGPT generally raise two kinds of copyright questions. The first is often referred to as the "input question" and concerns training AI tools on in-copyright works, which are often scraped from the internet without permission from creators. Some copyright holders have claimed that using their work in this way infringes their copyrights, and some have sued the makers of AI tools. AI makers argue that copyright's fair use doctrine permits this use, and most legal scholars seem to agree that fair use has an important role to play. The second set of questions is often characterized as addressing the "outputs" of AI, and asks whether the products of generative AI tools (a) infringe the copyrights in the works used to train them, and (b) are eligible for copyright protection as works of authorship in their own right. The US Copyright Office has taken the position that the Constitution requires human authorship for copyright, and it has denied registration to works alleged to be authored entirely by artificial intelligence. For works containing a mix of human- and AI-generated material, the Office will register copyrights only for the human-authored portions. These stances are being tested in court; the August 2023 decision in Thaler v. Perlmutter upheld the Office's approach to purely AI-authored works.
Attribution: This has been adapted from the University of Virginia's Generative AI: Understanding Copyright and AI Research Guide.