ChatGPT Unlikely to Replace Accountants

Even as concern grows about generative AI's potential to disrupt the world's labor markets, accountants may be able to breathe a sigh of relief, if only temporarily. Recent research suggests that my chosen profession may be spared replacement, since the ChatGPT language model doesn't do well with math.
The fastest-growing and best-known AI platform to date, ChatGPT excels at behavioral learning, storytelling, and other creative tasks, which has prompted questions about its potential to enable students to cheat on assignments and exams. The bot has passed the bar exam with a score in the 90th percentile, passed 13 of 15 Advanced Placement (AP) exams, and earned a nearly perfect score on the Graduate Record Examination (GRE).
"When this technology first came out, everyone was worried that students could now use it to cheat," Brigham Young University accounting professor David Wood noted. "But opportunities to cheat have always existed. So, for us, we’re trying to focus on what we can do with this technology now that we couldn’t do before to improve the teaching process for faculty and the learning process for students. Testing it out was eye-opening."
However, as a later study led by Wood discovered, the platform often has trouble with mathematical procedures and frequently embellishes data to mask its errors.
Wood's research compared ChatGPT's performance on accounting exams to that of actual accounting students. Researchers at 186 academic institutions in 14 countries contributed 25,181 questions covering accounting information systems, auditing, financial accounting, managerial accounting, and taxation. Undergraduate students at BYU added 2,268 textbook test bank questions to the repository used for the study.
The questions varied in difficulty and were delivered in a mix of formats: multiple choice, true/false, and written response.
According to the study, students outperformed ChatGPT by almost 30 percentage points, scoring an average of 76.7% compared to ChatGPT's 47.4%.
ChatGPT outperformed the student average on only 11.3% of the questions, mostly in the areas of auditing and accounting information systems. The chatbot did better on multiple-choice and true/false questions, earning 59.5% and 68.7% on each format, respectively, but fared substantially worse on short-answer questions, scoring between 28.7% and 39.1%.
"It's not perfect; you're not going to be using it for everything. Using ChatGPT alone to learn is a fool's errand," said Jessica Wood, a BYU student who took part in the study, in a press release.