My brief chat with ChatGPT 4 on the pitfalls of year-end predictions
(Photo illustration by Iowa Capital Dispatch with images via Canva)
December brings the barrage of an inescapable seasonal ritual.
Of course, I’m talking about the tendency for people — many perhaps too woozy on eggnog — to try to sum up the previous year and/or predict the year ahead.
Lately, I’ve obsessed about how we’re drowning in commentary and opinions on every platform — including TikTok, now the hot news source — even as we’re starved for rigorous original reporting amid the rampant closure of local newsrooms.
So, I turned to ChatGPT for solace.
OpenAI, the company behind ChatGPT, provides a “ChatGPT Plus” subscription service with access to ChatGPT 4, a significantly better AI than the public version. I subscribed to experiment and have been surprised by how my prompt threads have felt more like conversations with an emotionally vulnerable entity than sterile bot chats. (Yes, OpenAI just navigated a radical leadership crisis. Listen to this episode of the New York Times podcast “Hard Fork” for an interview with CEO Sam Altman just before he was fired and then quickly rehired.)
This brief convo about predictions was no different. The AI still loves to churn out listicles, but its defense of critical thinking nearly brought a tear to my eye.
And ChatGPT may have sounded more human than ever to me thanks to this tiny, telling detail: It felt compelled to end our conversation with sentences capped by reassuring exclamation marks. If that’s not the sign of an insecure sentient being, I don’t know what is.
The Q&A that follows was edited only for style. (Mainly I was offended by its capitalization of “Black Swan.”)
My prompt: What do people — including journalists, commentators, and online influencers — tend to get wrong when making predictions at the end of the year about the year ahead?
ChatGPT 4: When making predictions about the upcoming year, people, including journalists, commentators, and online influencers, often make a number of common mistakes:
These common pitfalls highlight the challenges in making accurate predictions about the future and underscore the importance of critical thinking, open-mindedness, and humility in forecasting.
My prompt: What is one of the more egregious examples of somebody falling prey to one of these common pitfalls?
ChatGPT 4: One of the more egregious examples of someone falling prey to these common pitfalls in prediction-making is the widespread belief in the late 1990s and early 2000s that the housing market in the United States would continue to rise indefinitely. This belief was a classic case of recency bias, where recent trends in rising home prices led many experts, commentators, and financial professionals to assume that this trend would continue without a significant downturn.
Key mistakes made in this instance included:
This instance illustrates how a combination of overconfidence, recency bias, and a lack of understanding of market complexities can lead to significantly flawed predictions with far-reaching consequences. The 2008 financial crisis, which was partly a result of the housing market crash, had a profound impact on the global economy and serves as a cautionary tale about the dangers of erroneous forecasting.
My prompt: Taking advantage of the most recent data you have available, what do you think would be an example of predictions now being made about the near-term future?
ChatGPT 4: Based on the most recent data and trends up to April 2023, here are a few examples of predictions being made about the near-term future:
It’s important to note that while these predictions are based on current trends and data, they are subject to change due to unforeseen events or shifts in global dynamics. As with any prediction, there’s a degree of uncertainty, and future developments could alter these forecasts significantly.
My prompt: To be clear, are you saying that the concern being expressed about each of these seven items may be overblown — or at least exaggerated — because the predictions made are too bleak? And is it just a coincidence that the No. 1 item involves artificial intelligence, or do you think your own analysis in some way may be biased because you represent a significant technological advancement?
ChatGPT 4: To clarify, the predictions and concerns listed about these seven areas are not necessarily overblown or exaggerated; rather, they reflect current trends and common themes in public discourse as of my last update in April 2023. The intention was to provide an overview of areas where significant attention and forecasting are being focused, rather than to suggest that these concerns are necessarily too bleak. Each area has its own set of complexities, and the level of concern or optimism can vary widely among experts and commentators.
Regarding the listing of artificial intelligence (AI) as the first item, this placement doesn’t necessarily indicate bias or suggest that it’s the most important or concerning area. The order was more about structuring a diverse list that covers various aspects of global interest, including technology, environment, economy, and social changes. AI is a significant area of development and public interest, especially considering its rapid advancements and potential impact across multiple sectors. Hence, it often appears prominently in discussions about future trends.
As an AI developed by OpenAI, I am programmed to provide balanced, informative, and unbiased responses to the best of my ability. My responses are based on the data and training I have received, which includes a wide range of sources and perspectives. While AI and machine learning models like myself are designed to be as objective as possible, it’s important to acknowledge that no source of information, AI or human, can be entirely free from all forms of bias. This is why critical thinking and considering multiple sources of information are always recommended when evaluating predictions and trends.
My prompt: That last paragraph was elegantly phrased and deployed. Thank you.
ChatGPT 4: You’re very welcome! I’m glad you found the response helpful. If you have any more questions or need further information, feel free to ask. I’m here to help!
Editor’s note: Please consider subscribing to the collaborative and its member writers to support their work.
by Kyle Munson, Iowa Capital Dispatch
December 10, 2023
Iowa Capital Dispatch is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501c(3) public charity. Iowa Capital Dispatch maintains editorial independence. Contact Editor Kathie Obradovich for questions: info@iowacapitaldispatch.com. Follow Iowa Capital Dispatch on Facebook and Twitter.
Our stories may be republished online or in print under Creative Commons license CC BY-NC-ND 4.0. We ask that you edit only for style or to shorten, provide proper attribution and link to our web site. Please see our republishing guidelines for use of photos and graphics.
Kyle Munson is a storyteller, writer, journalist, and marketer. His career includes experience in daily news, corporate content strategy, and nonprofit organizations. He’s a fifth-generation Iowan who lives in Des Moines. He writes a column, Kyle Munson’s Main Street, and is a member of the Iowa Writers’ Collaborative.
© Iowa Capital Dispatch, 2023