Legal and Ethical Perspectives on Generative AI

Education & Career Trends: September 6

Curated by the Knowledge Team of ICS Career GPS


The year 2023 marked the rise of Artificial Intelligence, specifically Generative AI. The technology itself isn't new; numerous tech companies have developed and launched desktop, web, and mobile apps with "generative" components. Perhaps you recall having fun with the SimSimi chatbot, or have used the background-remover feature in Snapseed?

It was the launch of GPT-4 in March 2023 that triggered broad public (non-tech) interest in the technology. The ChatGPT user interface makes the model easily accessible, and GPT-4's high-quality content-generation capability took the world by storm as people began to see the technology's potential.

This democratisation of Generative AI — its rapid adoption and use by the general populace — comes with risks and implications, both legal and ethical. Regulation-wise, this is still uncharted territory, but it is a good idea to be aware of the technology's potential implications.

The Legal and Ethical Perspectives on Generative AI

Generative AI, combined with human creativity in how it is used, can (knowingly or unknowingly) become a source of legal threats. Generative AI regulation is currently (as of July 2023) still a relatively open and debated space, but the risks are present regardless.

1. Copyright and Ownership

Quoting Wikipedia, copyright is a type of intellectual property that gives its owner the exclusive right to copy, distribute, adapt, display, and perform creative work, usually for a limited time. In Generative AI, the “owner” aspect of this copyright becomes unclear as there are multiple parties involved in the creation of the content, including:

  • The person who wrote the prompt
  • The company/organisation that built the AI model
  • The artists whose works were used in training the model
  • The AI model that generated the art/content

Each party plays a significant role in the creation of the content, and without each party's involvement the content would not be generated.

Specifically for the person writing the prompt, the U.S. Copyright Office made it clear that AI-generated content (images, writing, video, etc.) can't be copyrighted, citing the lack of human authorship and the unpredictability of the AI-generated result.

This was made clear in the US copyright application from Kristina Kashtanova, who authored a graphic novel whose illustrations were created using the AI image-generation tool Midjourney. The argument made by the Copyright Office is that "prompts function closer to suggestions than orders, similar to the situation of a client who hires an artist to create an image with general directions as to its content".

Simply put, a prompt written by the author is similar to a project brief given to a commissioned visual artist, in which case the brief's creator would not be the owner of the resulting artwork.

This might change in the future as use of the technology evolves and more arguments enter the picture. Regardless, it is always a good idea to document the generative-AI creation process, showcasing the human-authorship component in case of any copyright issue.

2. Privacy and Data Protection

Back in April 2023, ChatGPT was briefly blocked in Italy due to suspected breaches of the EU's General Data Protection Regulation (GDPR). One of the main concerns highlighted by the Italian authorities related to data protection and privacy. OpenAI has since updated ChatGPT's privacy policy and product to address these concerns, but nevertheless, not every user actually reads and evaluates these documents.

Data privacy areas at risk in Generative AI include (1) consent to data collection, (2) data retention, and (3) usability risks.

  • Consent to data collection: In the initial model creation, data was collected from various sources, which might include personal information whose owners were unaware it was being used for model training. Some platforms provide an opt-out option for users to exclude their content from future model improvement, but it is not enabled by default and must be submitted manually by the user.
  • Data retention: The GDPR includes a "right to be forgotten", which lets users demand that companies correct their personal information or remove it entirely. Though companies can try to facilitate this, there has been debate about its technical feasibility given the complexity of the data used in training large language models.
  • Usability risks: By default, user inputs and conversation histories with LLMs like ChatGPT are collected and used by the company to retrain the model. In some use cases — e.g. using the bot as a therapist or a doctor — users might unknowingly provide sensitive information.

As users of Generative AI, we need to understand each tool's privacy policy — how data is stored and used — and be careful about any data we input into the system, as it is collected and can be manually reviewed by the provider organisation.

3. Misinformation

Unintentional (or even intentional) misinformation can spread through Generative AI. With high-quality generation capabilities, these models can produce content that reads much like genuine, human-written material, making it difficult to judge its authenticity and accuracy.

AI models are prone to "hallucination", a term used to describe factually incorrect or unrelated outputs generated by a model. This might happen due to a lack of training data, limitations in context comprehension, or inherent bias from training. It is common to find plausible-sounding falsehoods within a model's generated content. It is up to us, the users of the tool, to critically analyse and verify the information the model provides before further use or distribution.

Another malicious use of Generative AI for misinformation is the creation of deepfakes. Deepfakes are synthetic media that have been digitally manipulated to replace one person's likeness convincingly with that of another, mainly using deep learning technology. With powerful generative AIs, it's getting a lot easier to create such deceptive media.

Parties aggrieved by inaccurately generated content may have a strong legal case for defamation. An Australian mayor has threatened to sue OpenAI for defamation after ChatGPT falsely claimed he had served time in prison for bribery.

4. Ethical Implications

The extraordinary capability of generative AI models unlocks new use cases for numerous processes across industries. However, ethical implications can arise when the models are used in decision-making processes.

Say, for example, an LLM like ChatGPT is used to analyse a candidate's resume or recommendation letter in a recruitment or university-admissions process. Due to inherent biases from the model's training, it might unintentionally favour candidates from a specific background, defying the equal opportunity that should be given to all candidates.

Another example is image generation: the people depicted in generated images might favour a certain culture or demographic due to limitations of the training data. A model can only generate output based on what it has been trained on; if it is trained only on data from a specific demographic, it will only generate output reflecting that demographic, and may fail to produce representative images for all users.

One way to handle this is to use Generative AI with human moderation. Keeping human evaluation in the AI usage lifecycle is important to ensure explainability of the outputs and to avoid bias in any decision-making process the AI is part of.

Closing Remarks

With the increasing use of Generative AI, legal and ethical concerns arise that users should be aware of so they can protect themselves in this space. Though regulations are not yet explicitly in place, we can start taking steps to address the concerns above and prevent unwanted issues.

  • Protect your personal information. Understand the privacy policy of the tools, and be mindful of every piece of information you input into the system, considering its confidentiality.
  • Specify and document the human-authorship elements of AI-generated content. Although ownership of AI-generated content does not currently rest with the prompt creator, with sufficient documentation one might be able to argue for human ideation and authorship.
  • Be wary of the technology's output. To address hallucination and the potential for defamation in AI-generated content, verify the accuracy of the information before sharing it, and add explicit watermarks or notes indicating that the content is AI-generated.

Generative AI is a growing space, and regulations will surely be set and updated as usage of the technology expands. It will be interesting to see how the legal landscape evolves in response.


(Disclaimer: The opinions expressed in the article mentioned above are those of the author(s). They do not purport to reflect the opinions or views of ICS Career GPS or its staff.)
