An AI-generated image of a robot looking at art in a museum

Diving into the turbulent waters of Artificial Intelligence (AI) image generation

In an earlier post (more than just ChatGPT…) I touched upon the ethical issues that are developing as AI-assisted image generation becomes more popular. In this post I would like to expand a little more on my feelings about the copyright and ethical issues surrounding AI image generation specifically (though it’s an issue in ‘Large Language Model’ services too).

To give a very brief overview for the uninitiated, these image generation services allow users to enter a text prompt (such as ‘create a picture of a cat in a raincoat’) and the service will then generate a number of variations to choose from. What you end up with for the specific prompt mentioned above is this:

An AI-generated image of a tabby cat wearing a blue raincoat and looking at the camera.
Thanks, Adobe Firefly, for this ‘photo’ of a slightly stern-looking cat in an oddly blobby raincoat…
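For the technically curious, that prompt-in, pictures-out flow is a single call in most open-source tooling. Below is a minimal sketch using the Hugging Face diffusers library; the model ID and prompt are just examples of mine, and commercial services like Firefly run their own models behind their own apps rather than anything like this exact code.

```python
# A minimal sketch of the text-to-image flow using the open-source Hugging Face
# `diffusers` library. The model ID and prompt are illustrative only; commercial
# services such as Adobe Firefly run their own models behind their own interfaces.
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe = pipe.to("cuda")  # move to a GPU if you have one; it is painfully slow on CPU

# The text prompt drives everything; the pipeline returns candidate images to pick from.
result = pipe(
    "a photo of a tabby cat wearing a blue raincoat, looking at the camera",
    num_images_per_prompt=2,
)
for i, image in enumerate(result.images):
    image.save(f"cat_in_raincoat_{i}.png")
```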

Anyway, it is how some of these generative AI services work that can potentially cause problems. Many of the models that are used are trained on images that have been scraped from the internet. Millions of images are fed into the machine, where clever computers notice patterns and apply labels to them – ‘this is an eye, this is an ear, this is a weirdly blobby raincoat…’. After a while these services end up with hundreds of thousands of examples of each type of constituent part that they can then pull from and mix & match to create new images, based on users’ prompts.
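To make the ‘feeding images into the machine’ part a little more concrete, here is a rough conceptual sketch of what that training data tends to look like: long lists of image URLs scraped from the public web, each paired with whatever alt text or caption sat next to it, which the model learns to associate with the image. Everything below is illustrative on my part – the URLs, captions and training step are made up and this is nobody’s actual pipeline.

```python
# Conceptual sketch only: how scraped (image, caption) pairs typically become
# training data. URLs, captions and the training step are invented for
# illustration; real pipelines involve billions of pairs and far more machinery.
from dataclasses import dataclass

@dataclass
class ScrapedExample:
    image_url: str  # where the image was found on the public web
    caption: str    # the surrounding alt text / caption, used as the label

# In real datasets these rows number in the billions, gathered without
# asking the people who made the images.
dataset = [
    ScrapedExample("https://example.com/artist-portfolio/cat.jpg",
                   "tabby cat in a raincoat, digital painting"),
    ScrapedExample("https://example.com/blog/eye-study.png",
                   "close-up study of a human eye"),
]

def training_step(example: ScrapedExample) -> None:
    # Placeholder for the real work: download the image, encode it and the
    # caption, and nudge the model so the two line up. Repeated over enough
    # examples, the model learns 'this is an eye, this is a raincoat...'
    print(f"learning association: {example.caption!r} <- {example.image_url}")

for example in dataset:
    training_step(example)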

It is all very clever, so where is the problem?

Well, the main issue (and the one I will spend most time addressing) is that many of the images used in these models are used without the consent (and, most of the time, even the knowledge) of the artist. Personally, I feel that is bad enough. I would hate to think that my work was being used by a corporation to create a product that they will then sell to people for profit. There is a reason why stock licensing is a multi-billion-dollar-a-year industry. Ownership of an image is normally something that can be converted into money (in some form). In this case the ownership of the images that helped create the service has been skipped over.

The issues run deeper than that, however. Not only can these services generate images from text, many of them can generate images in a specific style, i.e. creating unique images based wholly on the style of a popular or prolific artist who has had their work ingested into the creative model. This is something that is hugely uncomfortable for me personally. I am fortunate enough to know a fair number of hugely talented artists who rely on their art as their primary or sole source of income. These are people who have spent years learning their craft, developing and honing a style to help them stand out in the very busy art world they trade in, and now it can be recreated in 30 seconds on a computer. It’s a distressing thing for many people.

And it’s not just a financial issue either – for many artists (big or small) the style they work in is an expression of themselves. It is a visual representation of the time, effort and love they pour into each piece of work they create, and to have that taken and reproduced by an algorithm is soul-destroying. That’s not to mention that the generated artworks can then be sold by the users (or used in other ways to gain kudos or recognition).

So, what can be done to address these concerns?

  1. Transparency: AI developers need to be transparent about the sources of their training data and the steps they take to ensure ethical use. This includes clear policies for handling copyrighted material.
  2. User Control: Creators should have the option to opt out of having their work used in AI models. Giving them control over their creations can mitigate some of the ethical issues (a rough sketch of how a scraper could respect such an opt-out follows this list).
  3. Ethical AI Education: We all need to be more aware of the ethical implications of AI technology. This includes both developers and users. Understanding these concerns can help drive responsible AI usage.
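On that second point, one emerging (and, frankly, not yet widely honoured) convention is the ‘noai’ / ‘noimageai’ directive that some sites now expose via HTTP headers or meta tags. Below is a rough sketch of how a well-behaved scraper could check for it before ingesting a page’s images – the helper names and exact handling are my own assumptions, not a standard implementation.

```python
# Rough sketch: a scraper checking for opt-out signals before ingesting a page's
# images. The "noai"/"noimageai" directives are an emerging convention, not an
# enforced standard; helper names here are hypothetical.
import requests
from bs4 import BeautifulSoup

OPT_OUT_TOKENS = {"noai", "noimageai"}

def page_opts_out(url: str) -> bool:
    response = requests.get(url, timeout=10)

    # 1. Check the X-Robots-Tag HTTP header, e.g. "X-Robots-Tag: noai".
    header = response.headers.get("X-Robots-Tag", "").lower()
    if any(token in header for token in OPT_OUT_TOKENS):
        return True

    # 2. Check <meta name="robots" content="..."> tags in the HTML.
    soup = BeautifulSoup(response.text, "html.parser")
    for meta in soup.find_all("meta", attrs={"name": "robots"}):
        content = meta.get("content", "").lower()
        if any(token in content for token in OPT_OUT_TOKENS):
            return True

    return False

if __name__ == "__main__":
    url = "https://example.com/artist-portfolio/"  # illustrative URL
    if page_opts_out(url):
        print("Creator has opted out - skip these images.")
    else:
        print("No opt-out signal found (which still isn't consent!).")
```

Of course, a voluntary flag like this only helps if scrapers actually look for it and honour it, which is exactly why the transparency in point 1 matters just as much.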

In the grand scheme of things, large language models and image generation services have immense potential to transform the way we create and interact with content. However, it’s crucial that we navigate these uncharted waters with an eye on the ethical implications they raise and respect for creators at the forefront of our minds. Balancing innovation with responsibility is the key to ensuring a brighter digital future for everyone involved.
