
Blog: Friday, 24 March 2023

ChatGPT went viral last December, unleashing reactions of wonder and unease across knowledge-intensive industries. Elon Musk, an early investor in OpenAI, the company behind ChatGPT, predicts that 2023 will be a big year for AI, echoing his 2019 statement that writing AI software will be the last human job, after which “eventually the AI will just write its own software.” This isn’t the case, according to Prof. Ting Li from Rotterdam School of Management, Erasmus University (RSM) and Dr Maha Golestaneh, CEO of GILO Technologies. In this blog they explain that although ChatGPT and the latest AI-powered large language models (LLMs) can generate coherent text instantaneously, they don’t actually understand words, much less language, and cannot produce novel or unusual ideas. The information they generate can be fabricated, and the truth misrepresented, even with in-depth prompting.

ChatGPT is the fastest-growing app in history. And yet, less than three months after its launch, Meta, Google, and OpenAI raced to release even more powerful LLMs that address its shortcomings. Not only do these tools rephrase text, draft emails, and summarise meeting notes, they can also create blog posts optimised to push websites to page one of search results. However, given that LLMs can still generate fictitious text with such a semblance of authority, should you be concerned?
 

The case for (and against) knowledge disruption

Marketing and communication specialists have been early adopters of generative models, and their experience may offer a fair preview of what lies ahead. In the graphics world, designers are experimenting with DALL·E 2, a tool for generating images. Likewise, programmers are using OpenAI’s Codex to troubleshoot code. Overall, companies can leverage OpenAI’s 50 application programming interfaces (APIs) to supercharge their productivity.
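For readers curious what “leveraging an API” looks like in practice, here is a minimal sketch in Python. It assumes OpenAI’s official `openai` client library (the v1.x interface) and a chat-capable model such as `gpt-3.5-turbo`; model names and the exact client interface change over time, so treat this as illustrative rather than definitive.

```python
# Minimal sketch: asking an OpenAI chat model to rephrase marketing copy.
# Assumes the official `openai` Python package (v1.x interface) and an
# OPENAI_API_KEY environment variable; adjust the model name as needed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

draft = "Our new running shoe is light, durable, and surprisingly affordable."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative choice; any chat-capable model works
    messages=[
        {"role": "system", "content": "You are a marketing copy editor."},
        {"role": "user", "content": f"Rephrase this in a livelier tone: {draft}"},
    ],
)

print(response.choices[0].message.content)
```

The same pattern sits behind the rephrasing, email-drafting, and blog-writing uses described above; only the prompt changes.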

AI doesn’t actually understand words, much less language, and cannot produce novel or unusual ideas

But ChatGPT and other LLMs have structural limitations. Even with a detailed prompt, the writing style of these generative models tends to be bland and generic. That’s not all bad: they tend to make simple word choices rather than using technical or scientific jargon. Emotive, exaggerated, inflammatory, and pretentious language is absent, along with intense modifiers. But because the default voice of ChatGPT and of the models from Meta, Google, and OpenAI is passive and written from a third-person point of view, they sound like the automatons they are.

The most serious limitation, however, is that generative models aren’t cognitive. Instead, they calculate a probability distribution to decide what word comes next in any given sentence, basing their calculations on patterns found in training sets equivalent to nearly 10 million books. What you may have experienced as a seemingly sentient being communicating with you is more like the autocomplete on your smartphone.
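To make the autocomplete analogy concrete, here is a toy Python sketch of our own (not part of any real model): it picks the next word by sampling from a hand-made probability distribution. A real LLM does essentially the same thing, except the distribution covers tens of thousands of tokens and is computed by a neural network conditioned on the entire preceding text.

```python
import random

# Toy illustration of next-word prediction: given the prompt so far, a language
# model assigns a probability to every candidate continuation and samples one.
# The candidates and probabilities below are invented for illustration only.
next_word_probs = {
    "coffee": 0.41,
    "tea": 0.27,
    "a meeting": 0.18,
    "existential dread": 0.14,
}

prompt = "Every morning, the analyst starts the day with"
words, probs = zip(*next_word_probs.items())
choice = random.choices(words, weights=probs, k=1)[0]

print(f"{prompt} {choice}")
```

There is no understanding anywhere in this loop, only pattern-weighted guessing, which is why the output can sound confident while being wrong.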

This is why citations are often missing or misleading, and quotes from authority figures are not included. Mentions of named entities like brands, companies, people, places, dates and numbers are also in short supply. Developers of LLMs have given them diplomatic and careful tones with hedging statements like “I’m just a humble AI model trained by OpenAI, I don’t have personal preferences, opinions, or beliefs.” Yet, ChatGPT is politically biased. It also makes sweeping generalisations with an air of confidence.

These shortcomings, however, may be addressed gradually in the years to come. Other writing apps are also on the way, capable of doing more than parroting text: they will help knowledge workers, critical thinkers, and problem solvers refine their ideas.


Are you a beneficiary or a victim?

AI has already changed the roles of many analytical professionals. For example, an accountant spends less time entering data into a spreadsheet and more face time assessing client needs. The most successful market research analysts and sales managers have pivoted, learning how to use AI to boost lead volumes, improve close rates, and enhance overall sales performance.

Regardless, many people will bear the brunt of what economists euphemistically call ‘adjustment costs’. A period of hardship and economic pain may ensue for those whose mental work is automated ‘away’. Think copywriters, journalists, and digital media buyers.

Conversely, more advanced AI tools are destined to boost the productivity of editors, attorneys, lecturers, and general practitioners of medicine. For example, armed with an app that recommends research sources and corrects logic, these professionals will produce better work faster.

How do you make sure you end up being one of the beneficiaries rather than one of the victims? The obvious answer is to choose work that is not subject to automation. Another is to apply AI tools to your own work, freeing you to focus on the contributions of your human mind. As Professor Leslie Willcocks from the London School of Economics puts it, “a combination of hard but also soft skills – empathy, social interaction, delegation, leadership, experience, tacit knowledge, creativity, care, and service – are unique characteristics of what we are and the value we provide.”

 

Humanity’s unique selling proposition

The ‘father of modern management’, Peter Drucker, said in 1999 that “the most important contribution management needs to make in the 21st century is to increase the productivity of knowledge work and the knowledge worker.” Fast-forward two decades, and his words resonate with the arrival of AI-generated text.

Despite the unprecedented unpredictability of the 21st century, the human touch is still indispensable. Our experience with generative models so far suggests that they will evolve as prior technologies did, taking away the most routine aspects of work (for example spellcheck). After all, unlike ChatGPT, great authors capture the hearts and minds of their readers. In other words, the part of your job that requires compassion or common sense is probably safe.

Even so, don’t dismiss the opportunity AI offers to supercharge your productivity. The next time you write an argumentative essay or academic manuscript, experiment with these new AI tools: let them check your logic and point you to research, then add your own voice and data, along with domain-specific terms and illuminating quotes they recommend. Finally, run the result through an AI-plagiarism detector to ensure you have made the content your own. Welcome to the era of co-writing with AI.

Prof. dr. T. (Ting) Li
Professor of Digital Business
Rotterdam School of Management (RSM)
Erasmus University Rotterdam

Dr Maha Golestaneh
Computational linguist and CEO
GILO Technologies
