As first appeared in Healthcare Business Today
By Aron Solomon
The University of Texas’ decision last week to shut off the AI detection feature in Turnitin raises several legal issues related to the use of generative AI tools in academic and professional settings.
Some of the key legal concerns include:
Copyright And Ownership
Determining the ownership of AI-generated content can be challenging, as current US law requires human authorship for a work to qualify for copyright protection. If AI-generated content, including code, is not subject to copyright, questions arise about what legal protections are available to the developers and users of that content.
Plagiarism And Intellectual Property
Generative AI tools can produce content that infringes on existing copyrights or constitutes plagiarism. As Attorney John Lawlor points out, “This raises concerns about the ethical use of AI-generated content and the potential legal consequences for users who unknowingly submit plagiarized or copyright-infringing material.”
Bias And Fairness
AI detection tools have been shown to exhibit bias, skewing toward labeling text as AI-generated or human-written regardless of its true origin; studies have found, for example, that detectors disproportionately flag writing by non-native English speakers as machine-generated. This raises questions about the fairness and accuracy of these tools, as well as the potential legal exposure created when users are wrongly accused of submitting AI-generated content.
Validation And Liability
Ensuring that AI-generated content is sufficiently vetted for accuracy, copyright exposure, security vulnerabilities, and bias is crucial for maintaining trust in AI software. Companies that develop and use AI tools may face legal challenges if they fail to adequately address these risks.
To address these legal issues, it is essential for educational institutions, businesses, and regulators to establish clear guidelines and policies for the ethical use of generative AI tools.
As a recent Harvard Business Review article suggests, this includes promoting transparency, implementing human-in-the-loop processes, and investing in the development of more accurate and unbiased AI detection technologies.
Yet a constant game of cat and mouse, in which detection improves, AI evolves to the next level, and detection catches up again, seems pointless. If the endgame is that all stakeholders want to ensure generative AI is used responsibly and ethically in academic and professional settings, what has been missing in both worlds is a more open and realistic dialogue.
Students are going to use AI, and the more technology-savvy among them will be on the leading edge of applying these tools in their research and academic work. Similarly, recent graduates and others who have embraced AI tools in the workplace will continue to use them, because the tools make them better and more efficient at their jobs.
So if we can have a broader and more active dialogue about all of these legal AI issues, we will all be in a better spot. From a practical perspective, no one wants to put time and effort into building something that infringes on someone else’s prior work. Just as important, no one wants to build something that will go unprotected because it rests on a stolen foundation, which is often what AI is doing in the code and words it generates.
So as AI continues to grow in both quantity and quality, we need to be far better equipped to know what we are working with and how best to work with it.
About Aron Solomon
A Pulitzer Prize-nominated writer, Aron Solomon, JD, is the Chief Legal Analyst for Esquire Digital and the Editor-in-Chief for Today’s Esquire. He has taught entrepreneurship at McGill University and the University of Pennsylvania, and was elected to Fastcase 50, recognizing the top 50 legal innovators in the world. Aron has been featured in Forbes, CBS News, CNBC, USA Today, ESPN, TechCrunch, The Hill, BuzzFeed, Fortune, Venture Beat, The Independent, Fortune China, Yahoo!, ABA Journal, Law.com, The Boston Globe, YouTube, NewsBreak, and many other leading publications.