Top Five Reasons Why ChatGPT is Not Ready for the Enterprise
With all the excitement over ChatGPT, why have so many businesses, including Apple, Amazon, Verizon, JP Morgan Chase, Deutsche Bank, Northrop Grumman, Samsung, and Accenture, banned its use? The reluctance stems primarily from concerns that deploying external Large Language Models (LLMs) like ChatGPT could result in sensitive data being transported and stored outside the enterprise’s secure environment.
The effectiveness of generative AI in the enterprise hinges on the ability to successfully train an LLM on the company’s own data, encompassing everything from emails to financial statements. This specialized training makes AI conversations more accurate and relevant. However, the private nature of enterprise data and the need for strict adherence to data privacy, governance, and regulatory compliance requirements pose significant challenges. Mismanagement can lead to costly consequences such as data breaches and brand damage.
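For illustration, here is a minimal sketch of what that specialized training can look like when a small, self-hosted model is fine-tuned on internal documents. It assumes the Hugging Face transformers and datasets libraries; the model name, file path, and hyperparameters are placeholders rather than a recommended configuration.

```python
# A minimal sketch: fine-tuning a small, self-hosted causal language model on
# internal text so the data never leaves enterprise infrastructure.
# "gpt2", "internal_corpus.txt", and the hyperparameters are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # any small open model mirrored inside the firewall
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Internal documents (emails, reports, filings) exported to plain text on secure storage.
dataset = load_dataset("text", data_files={"train": "internal_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="./private-llm",        # checkpoints stay on enterprise storage
    num_train_epochs=1,
    per_device_train_batch_size=2,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
trainer.save_model("./private-llm")    # final weights remain in-house
tokenizer.save_pretrained("./private-llm")
```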
The top five reasons ChatGPT is not ready for enterprise use are:
In light of these challenges, businesses are deploying new infrastructure solutions to meet the data-driven needs of generative AI apps. To manage the risk of exposing enterprise data, stringent data protection measures must be taken to ensure that consumer data privacy and security objectives are met while harnessing the benefits of AI technology.
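As a simple illustration of one such protection measure, the sketch below redacts obvious personally identifiable information from a prompt before it leaves the enterprise boundary. The regular expressions are illustrative only; production systems generally rely on dedicated PII-detection and data loss prevention tooling.

```python
# A minimal sketch of prompt redaction: strip obvious PII before any text is sent
# to an external LLM API. The patterns below are illustrative, not exhaustive.
import re

REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace matches of known PII patterns with typed placeholders."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer jane.doe@example.com (SSN 123-45-6789) disputes a charge."
print(redact(prompt))  # -> "Customer [EMAIL] (SSN [SSN]) disputes a charge."
```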
Companies across various industries might have to consider running their own private LLMs to meet regulatory compliance obligations. Cloud data management platforms that support machine learning and advanced data preparation to train models safely are becoming increasingly important. Tracking workflows, experiments, deployments, and related artifacts in these platforms enables a centralized model registry for machine learning operations (MLOps) and provides the audit trails, reproducibility, and controls required for regulatory oversight.
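A centralized model registry of this kind might look like the following sketch, which assumes an MLflow tracking server run on enterprise infrastructure; the server URL, experiment name, model, and metrics are placeholders, and other MLOps platforms expose similar capabilities.

```python
# A minimal sketch of centralized experiment tracking and model registration with MLflow.
# The tracking URI, experiment name, and model below are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

mlflow.set_tracking_uri("https://mlflow.internal.example.com")  # in-house server
mlflow.set_experiment("llm-data-prep")

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

with mlflow.start_run():
    model = LogisticRegression().fit(X, y)
    mlflow.log_param("model_type", "logistic_regression")
    mlflow.log_metric("train_accuracy", model.score(X, y))
    # Registering the model creates a versioned, auditable entry in the central registry.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="sensitive-data-classifier")
```

Because every run, parameter, and registered model version is recorded against the same tracking server, this is what supplies the audit trails and reproducibility described above.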
AI data fabrics require a full stack of data engineering capabilities, including end-to-end security, data privacy, real-time processing, data governance, metadata management, data preparation, and machine learning. Whether utilizing private LLMs or public models like ChatGPT, centralized MLOps ensures data engineers have control over the entire machine learning lifecycle.
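On the private-LLM side of that choice, serving the model in-house is what keeps prompts containing enterprise data inside the secure environment. The sketch below assumes the Hugging Face transformers library and a locally stored model directory, such as the ./private-llm output from the fine-tuning sketch above.

```python
# A minimal sketch of in-house inference against a locally stored model,
# so prompts containing enterprise data never reach an external API.
# The model directory and prompt are illustrative placeholders.
from transformers import pipeline

generator = pipeline("text-generation", model="./private-llm")  # loads local weights only

prompt = "Summarize the key risks raised in the Q3 internal audit report:"
result = generator(prompt, max_new_tokens=100)
print(result[0]["generated_text"])
```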
While ChatGPT has made a significant impact, its success in the enterprise depends on sound data governance and data engineering processes. As noted by a Deutsche Bank spokesperson, Sen Shanmugasivam, the bank, despite its ban, is actively exploring how to use generative AI tools in a “safe and compliant way.” Interest in generative AI and machine learning in the enterprise is soaring, but operations will need data governance standards and safeguards to ensure a safe and secure future for enterprise AI.
About the author. John Ottman, with over 25 years in the industry, is the executive chairman of Solix Technologies and chairman and co-founder of Minds.com, an open-source social media leader. His career includes key roles at Oracle, IBM, and as president and CEO of Application Security, Inc., and president of Princeton Softech. Starting in sales at Wang Laboratories, Ottman later joined Oracle and then Corio, contributing significantly to their growth, IPO, and IBM acquisition. He’s the author of “Save the Database, Save the World!” and holds a B.A. from Denison University.
Related Items:
Top 10 Challenges to GenAI Success
What’s Holding Up the ROI for GenAI?
Are We Underestimating GenAI’s Impact?