
CIPS Member Blog: Ethics guidance for AI identification of misinformation

By Donna Lindskog FCIPS, I.S.P. (ret.)

The International Federation for Information Processing (IFIP) met in Toronto this March to discuss, among other things, how IT can be used to address global warming. I attended a workshop to propose initiatives, where we worked in pairs; my partner was Stanley U. Osula from DomiTek. We were both concerned about Artificial Intelligence (AI), so we proposed creating “Ethics guidance for AI identification of misinformation”. The feedback we got was quite thought-provoking.

We had outlined two sets of actions. At the community level, everyone could work to promote professional organizations such as CIPS in Canada, whose ethics experts could give the kind of advice we were proposing. At a more macro level, IFIP could compile and issue guidance as the need arose.

The group had already discussed the problems that misinformation causes and how many global warming efforts are discredited or derailed by falsehoods.

There had also been discussions about Data for Good, Tech for Good, and AI for Good, and how these and other organizations support people who want to do this kind of work. Stanley put our proposal to ChatGPT to see if there was already ethical guidance out there.

ChatGPT gave the following guidelines:

  1. Transparency
  2. Fairness
  3. Privacy
  4. Accountability
  5. Security
  6. Continual Assessment

When we reported our idea and these results, the feedback from the IFIP members was immediate. I think it was a member from the UK who explained that the ChatGPT list was clearly based on an early version of the general ethics guidelines for all AI; he was adamant that there was now agreement that transparency alone is not enough and that explainability is needed as well.

I believe it was a member from Sweden who pointed out that efforts to identify misinformation have been underway for years. They mentioned 2016 discussions about fake news, including a 2016 “poop detector” that would flag suspect content.

The main concern seemed to be that the AI could NOT be allowed to filter the information that people see. A good question was asked: why would AI be more trustworthy than a person at identifying misinformation? AI will always be able to find counter-information for any fact. And if the AI is trained on data sets supplied by people with a bias, then the AI has that bias.
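To make that last point concrete, here is a minimal sketch of how a classifier inherits the bias of whoever labelled its training data. The outlets, claims, and labels below are entirely made up for illustration:

```python
# Hypothetical scenario: the labellers distrusted "outlet_b", so every
# outlet_b item was marked misinformation regardless of its content.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "outlet_a reports sea levels rising",
    "outlet_a reports glaciers shrinking",
    "outlet_b reports sea levels rising",
    "outlet_b reports glaciers shrinking",
]
labels = [0, 0, 1, 1]  # 1 = "misinformation", decided by source alone

vec = CountVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(texts), labels)

# The same factual claim is now flagged purely because of its source:
test = ["outlet_b reports sea levels rising"]
print(clf.predict(vec.transform(test)))  # -> [1]
```

The model has learned nothing about the claims themselves; it has learned the labellers' prejudice, which is exactly the bias the group was worried about.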

IFIP was a very knowledgeable group with which to have this discussion. I conclude that AI can give us feedback about the information we see, but good ethics should ensure it does not filter or delete any information sources. I hope many other groups are having similar discussions and will work to ensure this happens.
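To illustrate that principle, here is a minimal sketch of a design that annotates content for the reader instead of filtering it. The scoring function is a hypothetical stand-in for any misinformation model:

```python
# Sketch of "feedback, not filtering": every item reaches the reader;
# the AI only attaches its assessment as context.
from dataclasses import dataclass

@dataclass
class AnnotatedItem:
    text: str
    suspicion: float  # the model's score, shown alongside the content

def annotate(items, score):
    # Nothing is dropped, hidden, or reordered.
    return [AnnotatedItem(text, score(text)) for text in items]

feed = ["claim one", "claim two"]
for item in annotate(feed, score=lambda t: 0.5):  # dummy scorer
    print(f"{item.text}  [suspicion: {item.suspicion:.2f}]")
```

The design choice is that the model's output is added context, never a gate: the reader always sees the full feed.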