Gartner: 3 Steps to Mitigate Sensitive Data Risks with ChatGPT
27 September 2023
As the use of generative AI tools becomes commonplace, Avivah Litan, Distinguished VP Analyst at Gartner, looks at the key steps organisations can take to help protect their sensitive data.
In the dynamic landscape of artificial intelligence (AI), the surge in interest surrounding generative AI tools like ChatGPT and innovations like Microsoft 365 Copilot is palpable. While these technologies promise transformative potential, they also raise important concerns about safeguarding sensitive data.
CIOs and IT leaders should take the following steps to mitigate the sensitive data risks associated with ChatGPT.
The adoption of generative AI introduces the challenge of preventing sensitive data from inadvertently entering AI systems. To combat this, leverage existing security tools such as a security service edge (SSE) solution, using the SSE's capability to mask, redact, or block sensitive data inputs and thereby preserve data integrity at the point of interaction.
In this context, vigilant use of the block option can be particularly effective, thwarting sensitive data entry through both web interfaces and APIs. This proactive approach is crucial to maintaining the confidentiality of sensitive information and to ensuring a consistent policy is applied throughout the organisation.
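To make this concrete, below is a minimal Python sketch of a mask-or-block check applied to prompts before they leave the organisation. It is an illustration only, not a real SSE product: production tools use far richer detection, and the patterns and helper names here (redact, block_prompt) are hypothetical.

```python
# Illustrative pre-submission filter in the spirit of an SSE
# mask/redact/block control. Patterns are deliberately simple examples.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "uk_ni_number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Mask any matches so a sanitised prompt can still be sent."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED:{label}]", prompt)
    return prompt

def block_prompt(prompt: str) -> bool:
    """Return True if the prompt contains anything sensitive."""
    return any(p.search(prompt) for p in PATTERNS.values())

prompt = "Summarise the complaint from jane.doe@example.com"
if block_prompt(prompt):
    # Policy decides: refuse outright, or forward the masked version.
    print("Sensitive data detected; masked prompt:", redact(prompt))
```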
The advent of commercial off-the-shelf (COTS) generative AI solutions (such as Microsoft 365 Copilot) presents an enticing prospect for content creation. Nevertheless, it’s essential to meticulously analyse the data security protocols underpinning these tools.
Organisations planning their adoption strategy for these tools should consider the type of data involved. In public data scenarios, a COTS generative AI service can be embraced to boost innovation and productivity. When dealing with proprietary or customer data, a meticulous assessment of data security, compliance, and privacy measures is essential to prevent inadvertent compromises.
For highly sensitive data, where stringent privacy is paramount, integrating these tools requires a fortified approach within existing data governance and access control frameworks, ensuring compliance with rigorous privacy standards. Tailoring the adoption strategy to the data's nature enables organisations to harness the power of generative AI while safeguarding data integrity and adhering to regulatory obligations.
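One way to operationalise this tiered strategy is to encode it as explicit policy, so every request is resolved the same way. The Python sketch below maps an assumed three-level data classification to the generative AI usage it permits; the classification names and rules are illustrative, not a Gartner standard.

```python
# Hypothetical policy table: data classification -> permitted AI usage.
from enum import Enum

class Classification(Enum):
    PUBLIC = "public"
    PROPRIETARY = "proprietary"
    HIGHLY_SENSITIVE = "highly_sensitive"

POLICY = {
    Classification.PUBLIC:
        "COTS generative AI permitted (e.g. Microsoft 365 Copilot)",
    Classification.PROPRIETARY:
        "COTS permitted only after security, compliance and privacy review",
    Classification.HIGHLY_SENSITIVE:
        "private deployment inside existing governance and access controls only",
}

def allowed_usage(classification: Classification) -> str:
    """Look up the organisation's rule for this data classification."""
    return POLICY[classification]

print(allowed_usage(Classification.PROPRIETARY))
```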
For organisations seeking maximum control over data protection, creating tailored generative AI applications using foundational models emerges as a strategic choice. Microsoft’s Azure OpenAI Service is a pivotal platform for developing GPT-based applications, especially those dealing with proprietary data.
This approach empowers organisations to engineer applications that align precisely with their unique data security requirements. While responsibility for application security falls on the customer, Azure OpenAI Service offers a versatile canvas for innovation.
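As a hedged sketch, the example below calls a private Azure OpenAI deployment through the official openai Python SDK (v1.x). The endpoint, key, deployment name and API version are placeholders; substitute values from your own Azure resource and check current Azure documentation for supported api_version strings.

```python
# Minimal sketch: querying a private Azure OpenAI deployment.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumption: a currently supported version
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # your deployment name, not the base model
    messages=[
        {"role": "system",
         "content": "Answer using only the provided internal context."},
        {"role": "user",
         "content": "Summarise this quarter's internal sales notes."},
    ],
)
print(response.choices[0].message.content)
```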
Organisations with deep learning proficiency, substantial computational resources, and dedicated budgets can contemplate training domain-specific large language models (LLMs) using proprietary data. This approach, exemplified by BloombergGPT, yields unparalleled control over sensitive data protection. By training LLMs from scratch, organisations can design AI models that adhere closely to their data security parameters, constructing robust defences against data leakage.
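For a sense of the mechanics, the sketch below uses the Hugging Face transformers library to continue training a small public model (GPT-2) on a local corpus, so proprietary text never leaves the organisation's infrastructure. It is a deliberately scaled-down stand-in: training a domain LLM from scratch, as BloombergGPT's creators did, requires vastly more data and compute, and the corpus file name here is a placeholder.

```python
# Scaled-down illustration of domain training on a local proprietary corpus.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "proprietary_corpus.txt" is a placeholder for in-house text data.
dataset = load_dataset("text", data_files={"train": "proprietary_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True,
                                 remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=train_set,
    # mlm=False gives standard causal language modelling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```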
As generative AI ascends to the forefront of technological progress, it brings forth both promise and peril. Through a meticulously crafted roadmap, security professionals are equipped to traverse the landscape of ChatGPT and generative AI while safeguarding the sanctity of sensitive data. In this symbiotic relationship between innovation and protection, organisations can realise AI’s potential without compromising their data’s integrity.
Avivah Litan is a Distinguished VP Analyst in Gartner Research. She is currently a member of the ITL AI team covering AI and blockchain, and a lead Gartner analyst specialising in AI trust, risk and security management.
Gartner analysts will explore how leaders must structure their AI operating models at the Gartner Security & Risk Management Summit, taking place from 26 – 28 September 2023 in London.