This is a scary moment for AI/ML

This is a scary moment for artificial intelligence and machine learning (AI/ML). We are at a point where developers are implementing AI/ML wherever it has potential value. It is being used for beneficial applications, including:

  • Automating helpdesk bots to answer questions
  • Enhancing text to aid creative writing and grammar
  • Refining marketing copy to increase lead capture
  • Analyzing network traffic and correlating logs to identify cyber threats
  • Aiding decision making to reduce cost and potentially improve quality and fairness
  • Searching text in communications to identify threats to national security

BUT we are at a point where the negative uses of such tools, if not evaluated and controlled, can outweigh the positives.

These tools require either supervised or unsupervised training for their embedded decision logic to reach conclusions. The article at https://arxiv.org/pdf/2101.05783v1.pdf analyzes a forerunner AI tool, the OpenAI GPT-3 natural language processing tool, and its natural language biases when generating text. The article examines GPT-3's bias against Muslims and explains why that bias arises in its learning process. GPT-3 is also a tool that Microsoft has licensed. I am not being critical of Microsoft, but this should make clear how much we can be impacted by AI/ML. I encourage you to read the article. The same kinds of bias could exist against Blacks, Jews, Native Americans, Palestinians, Asians, and Rohingyas.

Increased use of AI/ML is happening at a time when we are also experiencing an explosion of internet-based communications, cybersecurity threats, personal privacy decisions and management, ethnic and racial injustice around the world, and a lack of trust across nations and political groups. We see the words “lie” and “cancel culture” used regularly within countries, across ethnic groups, and across political parties. We see an increase in hate crimes. And at this critical point, we are introducing AI/ML tools that can be used to automate division among people. This can occur unconsciously if AI/ML learning is poorly managed and measured. It could also be done consciously as part of a nation-state attack. We will not slow the deployment of AI/ML. There are only two solutions to ensure that it is used for good purposes.

1. We need to find ways to build in constraints so that we measure and control the negative impact.

2. And we need to find ways to educate people to think critically about the output of these tools.

This is a time when AI/ML could create some incredible new benefits, but it is also a scary moment for AI/ML. Each of us plays a part in managing its future.