Read ChatGPT's take on Leopold Aschenbrenner's AI essay – Business Insider

Over the past few months, several employees have left OpenAI, citing concerns about the company’s commitment to safety.
Besides making pithy exit announcements on X, they haven’t said much about why they’re worried about OpenAI’s approach to development — or the future of artificial intelligence.
That changed earlier this week, when Leopold Aschenbrenner, a researcher who was fired from OpenAI in April, published his thoughts on the AI revolution in an epic 165-page treatise.
Aschenbrenner worked on OpenAI’s superalignment team, which was tasked with mitigating AI risks. He said OpenAI fired him for leaking information about the company’s readiness for artificial general intelligence.
For his part, Aschenbrenner said the information he shared was “totally normal” and that OpenAI might have just been looking for a reason to fire him. He was one of several employees who refused to sign a letter calling for CEO Sam Altman’s return after the board briefly ousted Altman last year.
Aschenbrenner’s essay doesn’t appear to include sensitive details about OpenAI. Instead, as Aschenbrenner writes on the dedication page, it’s based on “publicly available information, my own ideas, general field knowledge, or SF gossip.”
Business Insider uploaded the tract to OpenAI’s GPT-4 model. We asked it to summarize the work and relay the most significant takeaways. We also gently requested that the bot keep its summary clear and concise.
ChatGPT distilled the essay into 57 words that outlined Aschenbrenner’s views on the evolution of AI.
“Leopold Aschenbrenner’s essay discusses the transformative potential of artificial general intelligence (AGI) and superintelligence, and forecasts significant advancements in AI technology in the near future. Aschenbrenner, formerly associated with OpenAI, outlines how the progression from current AI models like GPT-4 to AGI could occur much faster than anticipated, based on trends in computational power and algorithmic efficiency.”
Rapid Progress in AI
“Aschenbrenner argues that AI development is accelerating at an unprecedented rate. He predicts that by 2027, AI models could reach the capabilities of human AI researchers and engineers, potentially leading to an intelligence explosion where AI surpasses human intelligence.”
Economic and Security Implications
“The essay highlights the immense economic and security implications of these advancements. Aschenbrenner points out that trillions of dollars are being invested into developing the infrastructure needed to support these AI systems, such as GPUs, data centers, and power generation. He also emphasizes the critical need for securing these technologies to prevent misuse, particularly by state actors like the CCP.”
Technical and Ethical Challenges
“The essay discusses the significant challenges in controlling AI systems smarter than humans, referring to this as the ‘superalignment’ problem. Managing this will be crucial to prevent catastrophic outcomes.”
Predictions and Societal Impact
“Aschenbrenner suggests that few people truly understand the scale of change that AI is about to bring. He discusses the potential for AI to reshape industries, enhance national security, and pose new ethical and governance challenges.”
As Aschenbrenner wrote, there are probably only a few hundred people, most of them in AI labs in San Francisco, who have “situational awareness” about the industry. These people will most likely have the best insight into how AI will reshape the years ahead.
Aschenbrenner, as one of them, seems to believe AI will assume a critical role in national security.
We followed up our original prompt by asking ChatGPT to summarize the predictions Aschenbrenner made in his essay.
