ChatGPT: Ask the Right Questions
Unlimited Data | BY JAMES KULICH | 5 MIN READ
The frenzy over generative artificial intelligence (AI) is in full swing, with ChatGPT (Chat Generative Pre-trained Transformer) front and center for most of us. At one level, the emergence of ChatGPT is not terribly surprising. As Ajay Agrawal and his coauthors discussed in their book, Prediction Machines, as the cost of making predictions via machine learning drops, new ways will arise to frame desirable outcomes as prediction problems.
ChatGPT is a great illustration of this. At its core, ChatGPT does nothing more than predict the next word to show on the screen as it develops its responses to your prompts. How different is this from the way we as humans instinctively create language?
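To make the next-word idea concrete, here is a toy sketch. It is emphatically not how ChatGPT works internally (ChatGPT uses a large transformer neural network over subword tokens), but it illustrates the same core task: given what has been written so far, predict the most likely next word.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: a bigram frequency model built from a tiny
# corpus. This is an illustrative simplification, not ChatGPT's method.
corpus = "the cat sat on the mat and the cat slept".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most frequently observed after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Repeating this prediction over and over, each time appending the chosen word to the context, is the basic loop by which such a model "writes."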
As is the case with any new technology, asking the right questions is the first step toward attaining value from what the tool has to offer. This is especially important with software as potentially impactful as generative AI.
So, what can ChatGPT and other generative AI products do, and how should we use them? As I’m sure you know, a lot of experiments are underway. If you are just getting started, try something fun. One colleague took some writing and asked ChatGPT to express it in the voice of a pirate. Amusing, yes, but also illustrative of the range of capabilities this tool has.
Then, experiment with an application that might yield something of value to your organization. Keep the focus clear and tight. Accenture’s Chief Technology Officer, Paul Daugherty, and his colleagues offer some guidelines for business-focused generative AI possibilities in A New Era of Generative AI for Everyone. A key to remember is that the more specific you are, the more accurate the response will be.
Ryan Elmore, Innovation Fellow at West Monroe Partners and our newest faculty member, is a coauthor of a series of posts on using generative AI effectively in organizations. A vital first step is a well-crafted set of prompts, i.e., asking the software the right questions. In their piece Mastering Prompt Engineering, Ryan and his colleagues offer a four-step process for creating a good prompt: developing an AI persona, breaking prompts into distinct parts, learning from mistakes, and providing feedback. Ryan mentioned to me that some of his prompts can be three pages long!
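The first two of those steps can be sketched in code. The template below is a hypothetical illustration of what "establish a persona, then break the prompt into distinct, labeled parts" might look like; the section names and wording are my assumptions, not taken from Ryan's piece or from any official guide.

```python
# Sketch of a structured prompt: a persona plus clearly labeled parts.
# The field names and example text are illustrative assumptions only.
def build_prompt(persona, task, context, output_format):
    """Assemble a prompt from distinct, labeled sections."""
    return "\n\n".join([
        f"Persona: {persona}",
        f"Task: {task}",
        f"Context: {context}",
        f"Output format: {output_format}",
    ])

prompt = build_prompt(
    persona="You are an experienced data-architecture instructor.",
    task="Draft a 10-week outline for an introductory course.",
    context="Students should get hands-on cloud experience early.",
    output_format="A numbered list, one line per week.",
)
print(prompt)
```

The remaining two steps, learning from mistakes and providing feedback, happen in conversation: you critique the response and refine the prompt in the next turn.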
One of my faculty colleagues, Stephen Gnidovec, and I were discussing ways to redesign our initial course on data architectures to give students hands-on experience with modern cloud environments sooner. Stephen shared his screen to show me what he and ChatGPT had developed as an initial outline for the revised course.
The results were fascinating. The topics were right, and the flow made sense. But some elements felt overemphasized while others didn’t have enough coverage. With a few better-tailored prompts, ChatGPT offered a new approach that will be a good starting point for our task. Like much data science work, engaging generative AI needs to be an iterative process.
Much has already been written about the risks and challenges associated with generative AI. ChatGPT can convincingly convey wrong information. While ChatGPT makes a good effort to avoid bias and misinformation, its capabilities can be misused. Generative AI can amplify and compound problematic situations by enabling them to spread farther and faster.
In the short run, it is important to set clear limits (hard stops) for use cases that must be avoided. In my world, we don't want students using ChatGPT to write papers wholesale, for example. An emerging area of discussion is Responsible AI, a holistic look at what it means to use these tools effectively and ethically. The report Building Robust RAI Programs as Third-Party Tools Proliferate, published by MIT Sloan, offers a recent view of the state of this practice.
Beyond the possibilities for improving business processes and customer experiences, I’m excited about the possibilities for tools like ChatGPT to level the playing field, especially for people with disabilities. In a recent post, Venture Creative Collective offered some powerful illustrations of ways tools like ChatGPT are already doing so.
Some are technical, such as people with dyslexia using ChatGPT to express complex texts in alternative, more accessible forms. Others are deeply human, such as individuals with autism using ChatGPT to practice social interactions. These can be game-changing advances.
These technologies will continue to advance rapidly. It is important to focus on using the tools to do what matters in your setting. This is a time to observe, learn, and experiment. Now is the time for each of us to think imaginatively yet realistically about the potential this new technology offers, as well as the risks we must address. We can't afford not to. Neither can our students at Elmhurst. That's why my policy in the courses I teach is now ChatGPT: Required.
Elmhurst University’s Master’s in Data Science and Analytics program helps professionals excel in business. Meanwhile, our flexible online format allows you to earn a master’s degree on your terms. Ready to learn more? Complete the form below.
Jim Kulich is a professor in the Department of Computer Science and Information Systems at Elmhurst University. Jim directs Elmhurst’s master’s program in data science and analytics and teaches courses to graduate students who come to the program from a wide range of professional backgrounds.
Posted July 18, 2023