A note in my thinking about AI and how it might inform, improve or impact my facilitation work. When I say “a note”, I mean “a note” – stylistically, there is nothing to see here. In this, I am wondering about the ethical issues that are bouncing around in my head. My thinking hardly scratches the surface but that’s okay: it’s not meant to. As always, a disclaimer: I am not an AI expert, simply a curious individual with a technical background (albeit in the distant past!).

Ethical considerations

Bias

Bias has crept into many generative AI tools (and possibly others) – a tool does its best, but it was built by humans, and neutrality is an issue. The datasets that generative AI tools draw on to produce their answers are filled with human biases, and the tools can end up replicating those biases in their output.

For example, this instance of bias has conservatives in the US in a panic:

Objectification

Investigations of AI tools – and not only generative ones – have found that their algorithms objectify women’s bodies. For example:

Privacy

There are also issues of privacy – the tools may be “free to use”, but you are paying with your data. The more sensitive the data you contribute, the more vulnerable you and others become. For folks working in sensitive areas – the law, defence, government, medicine etc. – some of these tools could raise a plethora of problems.

Misinformation

Misinformation is also a huge challenge: not only can the tools get things wrong, but they can also be used to generate misleading content and products like deepfake videos, which can be used to spread misinformation. This poses many problems for scientists warning about climate change, medical professionals encouraging vaccination, and experts advising on action that may be unpopular with politicians or the lay public. For example:

Places of interest

There are many articles online about the risks of AI, and Google (or ChatGPT, perhaps?) is your friend in learning more. Here are a few that I’ve personally found useful: