ChatGPT-4 speeds up work by 40% but may make you feel sleepy on the job: Harvard
Since the launch of OpenAI’s ChatGPT, employers and employees have been exploring its capabilities to enhance their productivity. A recent study has now shown that generative AI helped consultants at the Boston Consulting Group (BCG) complete tasks more frequently, more quickly, and at a higher level of quality than colleagues who did not use AI.
Conducted by data scientists and researchers from Harvard, Wharton, and MIT, the study also found that the lowest-performing individuals in the group saw the most substantial improvements when they incorporated generative AI into their work.
“The fact that we could boost the performance of these highly paid, highly skilled consultants, from top, elite MBA institutions, doing tasks that are very related to their every day tasks, on average 40 percent, I would say that’s really impressive,” Harvard’s Fabrizio Dell’Acqua, the paper’s lead author, told VentureBeat.
Additionally, the study identified two emerging AI usage patterns among some of the firm’s technologically adept consultants, which the researchers termed ‘Cyborg’ and ‘Centaur’ behaviours. The researchers concluded that these behaviours could serve as valuable models for navigating tasks where AI’s capabilities are uncertain.
The study also highlighted that AI appeared to have a levelling effect on skills. Consultants who initially had the lowest baseline performance demonstrated the most significant performance enhancement, with a notable 43% increase when they integrated AI into their workflow. Meanwhile, even the top-performing consultants experienced some performance improvement, although it was comparatively less substantial.
Nevertheless, the study found that individuals who used AI for tasks beyond its capabilities were more likely to make errors, placing excessive trust in the tool even where it was not suited. One of the study’s primary findings emphasised the ongoing challenge organisations face due to the limited transparency of AI’s inner workings.
This lack of transparency makes it challenging to determine precisely when AI can be considered reliable for specific tasks. Consequently, this presents a substantial challenge for organisations moving forward.
According to the report, some employees handled this new frontier more effectively than others by adopting what the study called ‘Centaur’ or ‘Cyborg’ approaches. These individuals found ways to seamlessly integrate AI and human work, leveraging the strengths of both.
Centaurs maintained a clear distinction between human and AI tasks, choosing between them based on their perception of each one’s respective strengths and capabilities. In contrast, Cyborgs fused human and machine elements across most of their tasks.
However, there is still an invisible barrier around which tasks can genuinely benefit from AI. Wharton’s Ethan Mollick, a co-author of the study, explained, “Some tasks that might logically seem to be the same distance away from the center, and therefore equally difficult – say, writing a sonnet and an exactly 50 word poem – are actually on different sides of the wall.”
He continued, “The AI is great at the sonnet, but, because of how it conceptualizes the world in tokens, rather than words, it consistently produces poems of more or less than 50 words.”
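To make the token-versus-word mismatch concrete, the minimal sketch below counts the same short passage both ways. It assumes the open-source tiktoken library and its “cl100k_base” encoding; the example text is purely illustrative and is not drawn from the study itself.

```python
# A minimal sketch of the token/word mismatch Mollick describes, assuming the
# open-source tiktoken library and its "cl100k_base" encoding; the text below
# is illustrative only.
import tiktoken

poem = ("The quiet dawn unfolds in shades of grey, "
        "while sleepless streets rehearse another day.")

encoding = tiktoken.get_encoding("cl100k_base")

words = poem.split()            # what a human counts
tokens = encoding.encode(poem)  # what the model actually "sees"

# The two counts rarely line up, which is one plausible reason exact
# word-count constraints are hard for a token-based model to satisfy.
print(f"Words:  {len(words)}")
print(f"Tokens: {len(tokens)}")
```

Because the model reasons over tokens rather than whole words, a constraint phrased in words (exactly 50 of them) does not map cleanly onto the units it generates.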
Furthermore, the study discovered that some unexpected tasks, such as idea generation, proved to be relatively simple for AI, whereas tasks that may seem straightforward for machines, like basic math, posed challenges for large language models (LLMs).
AI’s potential can lead to human complacency: the core issue lies in humans overestimating the areas where AI excels. The paper substantiated prior research by Harvard’s Dell’Acqua, which found that placing excessive trust in AI’s competence can result in a perilous overreliance on it, a pattern he described as ‘falling asleep at the wheel’, ultimately leading to subpar outcomes.
Moreover, the recent study found that AI has the capacity to foster uniformity. The investigation examined the diversity of ideas proposed by participants concerning new market concepts for a shoe company, one of the study’s assigned tasks. It found that while AI-generated ideas exhibited higher quality, they lacked the diversity seen in ideas generated by consultants who did not use AI.
This implies that while GPT-4 contributes to the creation of better content, it may also encourage more uniform outputs.
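For a concrete sense of what “uniform outputs” could mean in practice, the sketch below is one illustrative way to score a set of ideas by their average pairwise semantic similarity, where a higher score indicates more homogeneous output. This is not the study’s methodology; it assumes the sentence-transformers and scikit-learn packages, and the model name and example ideas are placeholders.

```python
# A hypothetical sketch (not the study's own method) of quantifying idea
# diversity: embed each idea and compute the mean pairwise cosine similarity.
# Assumes sentence-transformers and scikit-learn; inputs are illustrative.
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity
import numpy as np

ideas = [
    "A subscription service for customised running shoes.",
    "Shoes with swappable soles for different terrains.",
    "A recycling programme that trades old shoes for discounts.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(ideas)

# Higher mean similarity across idea pairs = more uniform, less diverse output.
sims = cosine_similarity(embeddings)
upper = sims[np.triu_indices(len(ideas), k=1)]
print(f"Mean pairwise similarity: {upper.mean():.3f}")
```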
To counteract this trend toward uniform outputs, businesses should consider adopting a variety of AI models rather than relying on OpenAI’s GPT-4 alone, whether by drawing on multiple large language models (LLMs) or by increasing human involvement.
The study noted that the specific need for such diversity might vary depending on a company’s product. Some companies may prioritise achieving consistently high-quality outputs, while others may place greater value on fostering exploration and innovation.
In cases where numerous companies within a competitive landscape employ the same AI, resulting in a reduction in the diversity of ideas, the study suggested that companies generating ideas without AI assistance may stand out and differentiate themselves.