Artificial Intelligence (AI) – A Survey of Office Workers' Concerns

On the top block (and elsewhere) of the home page of Ernst & Young LLP, the American affiliate of the London-based international financial and legal services company Ernst & Young Global, the major subject was Artificial Intelligence (AI). The leading text link on the page was titled “How Organizations can stop the skyrocketing use of AI from fueling anxiety?” Good question. The link goes to a very recent article, dated December 5th, 2023, reporting a survey the company took of 1,000 office workers about their own and their company’s use of AI and their concerns about it.

This survey’s results are the topic of this article.

Their mention (above) of “AI use fueling anxiety” gives us a clue as to what their survey found, or at least some of it. Before we get into the main part of the survey, I want to mention the AI tools respondents reported currently using (by percentage of use). Most of us are already familiar with many of them.

  • ChatGPT (55%)
  • Chatbots (45%)
  • Virtual Assistants (44%)
  • Predictive Analytics (39%)
  • Voice Recognition Software (35%)
  • Digital Scanners (33%)
  • Voice Transcription Software (33%)
  • Robotic Process Automation (32%)

As you can see, most of these AI tools have been around for a while. I have never been impressed with Virtual Assistants: most of the time they work like an extremely incompetent person whose main function seems to be passing the call off to a real person who can actually help. I do believe, however, that they will get better.

But most of these tools have been very useful, and at least three of them (ChatGPT, Predictive Analytics, and Robotic Process Automation) are extremely important and, even in their present state of development, amazing.

But in this article we will consider only the Ernst & Young survey of office workers’ concerns about AI, both now and in the future.

The Survey Results (Property of Ernst & Young LLP)

Personal Concerns: Although most of the respondents had a positive view of the potential of AI (76%), they were concerned that AI could eliminate some jobs completely (75%) or lower the salary value of other work (72%).

They had personal job concerns about not knowing how to use AI (67%), or simply about not using AI in their work and therefore being out of the mainstream (66%).

In spite of their concerns, most respondents trusted AI technologies (77%), but greater exposure increased their anxieties rather than lessening them, with about half (48%) having more concerns now than they did a year ago.

When asked to identify what other concerns they had, respondents cited these three most often: the quality of AI outputs, the speed at which AI is being introduced, and, given those concerns, its generalized introduction into the workplace, where they have a lot at stake (60%).

Many wanted more training (80%), wanted company leaders to promote the responsible and ethical use of AI (77%), and wanted AI best practices to be widely shared (81%).

Ethical, Legal, and Moral Concerns: Employees want to know whether they are using AI responsibly (65%). They were concerned about cybersecurity (75%), legal considerations such as plagiarism (77%), and moral and ethical issues such as bias and discrimination (71%). Most want AI development companies to self-regulate more (81%) and also want government regulation (78%).

Transparency: There were three things employees wanted in response to the question, “What would make you more comfortable with your company’s use of AI?” These three were:

  1. If the company informed them of its use of AI (78%)
  2. If the company told them how its data was going to be used by AI (82%)
  3. If the company’s use of AI was reviewed and approved by a trusted expert (76%)

Standing behind the results of this survey were, no doubt, the widely reported negative uses of AI that have been in the news lately: deepfakes, misinformation and disinformation (both accidental and in deliberate campaigns), plagiarism, inadvertent collection of copyrighted material, and so on. These concerns are quite valid and will only worsen as AI becomes more widely used. What other harm can be done is still unknown. This is not to say that AI will not be of great use to society; the question is how to achieve great good while avoiding known and unknown harmful uses. This is going to be a great challenge to humanity, and even to some of the assumptions of our neo-liberal age.

As always, stay safe and pray for Israel and Gaza, Russia, and Ukraine.