AI governance: What is being governed?

When prompted to explain artificial intelligence governance in a short joke, this is what ChatGPT had to say:
“AI Governance: because even the smartest algorithms occasionally need a timeout!”
Admittedly, ChatGPT’s response is clever. But it also hints at why calls for AI governance are proliferating around the world. Given how rapidly AI has been developing, do we have the luxury of timeouts?
With AI — especially generative AI — becoming part of the mainstream, and its use and adoption across organizations quickly expanding, it is unlikely, if not impossible, that the world can simply pause for governance to catch up. The time for governance is now.
But what is “AI”? And what is “governance”? Though numerous helpful definitions of AI have surfaced following global calls for governance, technology experts can vouch that AI is difficult to define. Similarly, those working in law and policy can vouch that governance is difficult to reduce to one crisp definition. In pursuit of understanding both “AI” and “governance,” it is useful to move beyond definitions and engage instead with the mechanics of AI algorithms, to understand how risks emerge, how governance applies and why.
AI technology has matured considerably, and, in recent years, we have come a long way from traditional programming to generative AI. With the former, programmers had to hardcode rules to teach an algorithm what an elephant is. With generative AI, the computer can tell you everything it knows about an elephant when prompted to do so.
For simplicity, consider AI to be an umbrella term for the computational techniques, built from algorithms, that automate aspects of human intelligence. Today’s narrow AI computationally replicates human intelligence rather than actually embodying it. AI is not human intelligence; it only mimics certain aspects, like rational thinking, speech, decision-making, making predictions and generating content — and it is doing so incredibly well.
Three such techniques for AI are machine learning, deep learning and generative AI.
Machine-learning algorithms learn from data by analyzing it, much like humans learn by observing the world around them. Unlike humans, who follow an intuitive process of learning, machine-learning algorithms use statistics and probability theory. Developers begin the process with a “training dataset,” which is the input. Without explicit programming, the algorithm learns and categorizes patterns, structures and relationships within the training data. It does so by analyzing the statistical properties of the data, such as by finding patterns or correlations across data points. Based on what it learns from the training dataset, the algorithm generates a “model,” which consists of a set of decision-making rules. To assess whether the model will perform well in real-world scenarios, it is then tested on data it was not exposed to during training. The aim is for the model to generalize, so its performance remains accurate even on new and unseen data. After deployment, the model makes decisions or predictions on new data and continues to improve based on what it learns in the deployed environment.
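To make this workflow concrete, here is a minimal sketch in Python, assuming the scikit-learn library; the built-in iris dataset and the decision-tree model are stand-ins for whatever data and algorithm a real project would use.

```python
# A minimal sketch of the train/test workflow described above, assuming
# scikit-learn is available; the iris dataset and decision-tree model are
# illustrative stand-ins, not anything referenced in the article.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)               # the "training dataset" (input)

# Hold out data the algorithm never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(max_depth=3)     # the learned "model": a set of decision rules
model.fit(X_train, y_train)                     # learn patterns from the training data

# Test on unseen data to estimate how well the model generalizes.
predictions = model.predict(X_test)
print("accuracy on unseen data:", accuracy_score(y_test, predictions))
```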
Deep learning, a subfield of machine learning, is a technique that uses artificial neural networks inspired by the structure of the human brain. This technique identifies complex structures in data through multiple layers of neurons, with each layer learning a different aspect of the data. Say we want a model to distinguish different types of wild cats through supervised learning. We provide the deep-learning algorithm with a labeled dataset of images of all kinds of wild cats — tigers, lions, leopards, bobcats and so on. The first layer may learn the different colors, the second layer may learn features like tails, noses, ears or eyes, and the third layer may identify more complex patterns, such as a lion’s mane or a leopard’s spots. Taken together, these layered representations help the algorithm distinguish a lion from a leopard. This is an image-classification technique and represents a discriminative model, one that tells us what is or is not a lion. Like machine learning more generally, deep learning makes predictions on new data.
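A rough sketch of what such a layered classifier might look like, assuming TensorFlow/Keras; the input image shape, layer sizes and the four wild-cat classes are illustrative assumptions, not a description of any real system.

```python
# A minimal sketch of a layered image classifier like the one described above,
# assuming TensorFlow/Keras; the layer sizes and the four wild-cat classes are
# illustrative assumptions, not a real trained system.
import tensorflow as tf

num_classes = 4  # e.g., tiger, lion, leopard, bobcat

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),           # RGB images
    tf.keras.layers.Conv2D(16, 3, activation="relu"),     # early layers: colors, edges
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),     # middle layers: ears, noses, tails
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),     # later layers: manes, spot patterns
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),  # one probability per wild cat
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(labeled_images, labels, epochs=10)  # supervised learning on the labeled dataset
model.summary()
```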
Generative AI, itself a subfield of deep learning, enables algorithms to generate new content like text, videos and images. Unlike the discriminative model discussed above, generative models are not classifying new data. Rather, they generate new content based on what they learn from existing data. One method of understanding generative AI is learning the mechanics of some of the most accessible and accessed AI — chatbots like ChatGPT or Bard.
The leading chatbots today are large language models, which learn and match patterns in data and store this information in a set of numerical values. AI chatbots use a neural-network architecture called the transformer, introduced in Google’s 2017 paper “Attention Is All You Need,” which relies on the attention mechanism. This mechanism maintains relations between words, allowing each word to influence how surrounding words are interpreted, and helps the model focus on the most important parts of the input sequence.
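The core of that mechanism, scaled dot-product attention, can be sketched in a few lines of Python; the tiny dimensions below are illustrative, and production models add learned projections, multiple attention heads and much more.

```python
# A minimal sketch of the attention computation at the heart of the transformer
# (scaled dot-product attention), written with NumPy; the tiny dimensions are
# illustrative only.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # how relevant each word is to every other word
    weights = softmax(scores, axis=-1)     # attention weights: focus on the important parts
    return weights @ V                     # each word's new representation is a weighted mix

# Toy example: 4 "words," each represented by a 3-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
print(attention(x, x, x))                  # self-attention: words attend to surrounding words
```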
According to Google, LLMs are “large, general purpose language models which can be pre-trained and fine-tuned for specific purposes.” They are called large, first, because they are trained on enormous datasets. To visualize just how large, consider Common Crawl, a free and open repository of web data containing raw webpage data, metadata extracts and text extracts from across the internet, which has served as a primary training corpus for many large language models. It holds over 240 billion pages spanning 16 years, with 3-5 billion new pages added each month. With pages in the billions, you can only imagine how many words LLMs are exposed to during training. They are also called large because of the enormous number of parameters they contain. For instance, Google’s PaLM has 540 billion parameters.
The pretraining phase develops the model’s general knowledge about our world in a range of disciplines. For an AI chatbot, pretraining may be done through various sources of unlabeled text data, including the common crawl, on a variety of topics for the general purpose of language learning. To generate dialogue, like responses to user prompts, the model is then fine-tuned using a smaller, labeled dataset for this specific purpose.
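As a rough illustration of the pretrain-then-fine-tune pattern, the sketch below loads a small pretrained model and takes a single fine-tuning step on one toy dialogue example, assuming PyTorch and the Hugging Face transformers library; GPT-2 and the example text are placeholders, not the actual model or data behind any particular chatbot.

```python
# A minimal sketch of fine-tuning a pretrained language model on dialogue-style
# data, assuming PyTorch and Hugging Face transformers; GPT-2 and the single toy
# example are illustrative stand-ins for a real chatbot's model and dataset.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # pretrained on broad, unlabeled text
model = AutoModelForCausalLM.from_pretrained("gpt2")   # already "knows" general language

example = "User: What does AI governance cover?\nAssistant: It covers the data, the model and the outputs."
inputs = tokenizer(example, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
outputs = model(**inputs, labels=inputs["input_ids"])  # loss for predicting each next token
outputs.loss.backward()                                # one fine-tuning gradient step
optimizer.step()
```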
But how does it “know” to communicate in a human-like manner? Because it was trained on an enormous set of text data, the model has seen billions of examples of how humans respond to prompts. It generates its answer one word at a time using a probability distribution, which gives the likelihood of each possible next word given the preceding words.
For example, take the incomplete sentence, “IAPP is based in.” The model assigns a probability to each candidate next word, with some ranked far more likely than others. The model might use the highest-ranked word to complete the sentence: “IAPP is based in Portsmouth.” But it does not always use the most probable word. Sometimes, because of the temperature parameter that regulates the randomness of responses, it will pick a less probable word. This is why we can receive different responses to the same prompt, and it keeps the system creative.
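A minimal sketch of that sampling step, with invented candidate words and scores, shows how the temperature setting trades predictability for creativity.

```python
# A minimal sketch of temperature-controlled sampling over candidate next words;
# the candidate words and their raw scores are invented for illustration, not
# actual model outputs.
import numpy as np

candidates = ["Portsmouth", "Brussels", "the", "New"]
scores = np.array([3.0, 1.5, 0.8, 0.5])        # hypothetical raw scores (logits)
rng = np.random.default_rng(0)

def sample_next_word(scores, temperature=1.0):
    probs = np.exp(scores / temperature)
    probs = probs / probs.sum()                # probability distribution over next words
    return rng.choice(candidates, p=probs)

print(sample_next_word(scores, temperature=0.2))   # low temperature: almost always "Portsmouth"
print(sample_next_word(scores, temperature=1.5))   # higher temperature: less probable words appear
```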
The target of governance, then, is the life cycle of the AI system. From inputs to processes to outputs, risks can emerge from each component of the algorithm: the training data, the model and its parameters, and the outputs. AI governance focuses on each of these components and should be applied throughout the life cycle of the system.
Model cards are emerging as a best practice for making algorithms and their mechanics more explainable and transparent. A model card explains various aspects of a model, such as its intended use, performance benchmarks broken out by groups like race or gender, and other relevant details. Model cards can be beneficial for everyone involved in the development, deployment and use of AI. For example, they can help developers learn more about the system and compare it with other models, AI practitioners can learn how the system is supposed to work, and policymakers can assess impacts on society.
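As a rough illustration only, a bare-bones model card might capture fields like those below; the model name, figures and groups are hypothetical.

```python
# A minimal sketch of the kind of information a model card records, expressed as
# a plain Python dictionary; the model name, figures and groups are hypothetical.
model_card = {
    "model": "wildcat-classifier-v1",          # hypothetical model name
    "intended_use": "Sorting wildlife-camera images by species; not for use on people.",
    "training_data": "Labeled images of tigers, lions, leopards and bobcats.",
    "performance": {"overall_accuracy": 0.91},
    # Benchmarks broken out by group help reveal uneven performance.
    "performance_by_group": {"daytime images": 0.95, "night images": 0.82},
    "limitations": "Accuracy drops sharply on low-light images.",
}

for field, value in model_card.items():
    print(f"{field}: {value}")
```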
But if algorithms are a computational replication, and not an embodiment, of the way humans think, how can an algorithm be racist? It is not. At least not consciously. The real-world data the algorithm was trained on reflected existing racism, and that racism is reinforced through the model’s predictions. Data is not always objective, especially when human beings have made prior judgments about that training data.
To ensure there are no biases at the output stage, data governance is required at the input level. Such governance starts at the predesign stage. It involves collecting data in a manner that complies with data protection laws, meaning the data is accurate, representative, collected lawfully and minimized in amount, requirements that can themselves pose challenges to the efficacy of AI and of AI governance.
Aside from data, the system design itself can reinforce human biases. In traditional programming, a human programmer may encode their own bias, for instance by assigning a higher risk score to an Arab man than to a Western man. That kind of bias may be easy to trace. When the model is generated by AI, biases become more difficult to trace because of the model’s complexity. The variables the model is optimized for may also lead to biased outcomes. For example, say an algorithm is deciding whether your child merits admission to a prestigious school. An obvious variable to remove is race. Despite this, the algorithm constantly rejects children belonging to racial minorities. This may be because the algorithm picks up on common proxies for race that were not eliminated from the data, such as postal codes or parents’ income. Here, the algorithm is not consciously or intuitively being racist. It is demonstrating a systematic error, i.e., repeatedly excluding particular groups of people more than others.
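To make the proxy problem concrete, the following sketch trains a simple model on synthetic data in which the protected attribute has been removed but a correlated postal-code proxy remains; all numbers are randomly generated for illustration only.

```python
# A minimal sketch of the proxy problem: the protected attribute is dropped, but
# a correlated proxy (postal code) remains, so decisions still track the group.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)                 # protected attribute (never given to the model)
postal_code = group + rng.normal(0, 0.3, size=n)   # proxy: strongly correlated with group
test_score = rng.normal(70, 10, size=n)            # a legitimate admissions signal

# Historical admission decisions that were themselves biased against group 1.
admitted = ((test_score - 15 * group + rng.normal(0, 5, size=n)) > 60).astype(int)

X = np.column_stack([postal_code, test_score])     # group membership is excluded from the features
model = LogisticRegression().fit(X, admitted)

pred = model.predict(X)
print("admission rate, group 0:", pred[group == 0].mean())
print("admission rate, group 1:", pred[group == 1].mean())   # still much lower: the proxy leaks group
```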
One way to govern such risks is to audit the AI system. Unlike financial audits, which may take place at the end of a fiscal year, AI audits ideally ought to be carried out throughout the life cycle of the AI system, and the specific auditing tool could vary at each stage. For instance, testing the system before it is deployed is a fitting way to assess its outputs and performance. This can help developers identify errors, bias, harms, accuracy problems, misalignment with intended use and cybersecurity risks, among other things. It can also make various performance indicators more interpretable.
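One such pre-deployment check, sketched below with made-up predictions, compares a model’s positive-decision rates across groups and flags large gaps for review.

```python
# A minimal sketch of one pre-deployment audit check: comparing a model's
# positive-decision rates across groups (sometimes called a disparate-impact or
# demographic-parity check). The predictions and group labels are made up.
import numpy as np

predictions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # hypothetical model outputs
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = predictions[group == "A"].mean()
rate_b = predictions[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"positive rate, group A: {rate_a:.2f}")
print(f"positive rate, group B: {rate_b:.2f}")
print(f"ratio: {ratio:.2f}")
# One common (though debated) rule of thumb flags ratios below 0.8 for review.
if ratio < 0.8:
    print("flag for review: outcomes differ substantially across groups")
```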
Moreover, generative AI can also lead to potential infringements under copyright law, as copyrighted works can make up part of the training datasets. Generative AI also increases the risk of spreading misinformation due to “hallucinations,” in which the AI’s response sounds grammatically correct but is factually incorrect. Hallucinations turned out to be publicly embarrassing recently for a U.S. attorney who used ChatGPT to look for precedents and submitted fake legal cases generated by the chatbot to the court.
Again, the chatbot was only mimicking human intelligence. It wasn’t “aware” it had lied to the attorney; chatbots are designed to give an output, so it computed probabilities and produced one. This puts users at risk of automation bias: they may overly rely on, or favor, outputs predicted or generated by AI systems. This can be dangerous in high-risk situations, such as when AI systems are used by judges making recidivism decisions or by financial institutions assigning credit scores.
Given the nature of such risks, they are better governed under law. Under the identification obligations of China’s Interim Measures for the Management of Generative AI Services, providers are required to label AI-generated content so it is distinguishable from other content. For the EU AI Act, the European Parliament proposed similar disclosure requirements for AI-generated content and would require foundation models to ensure safeguards against the generation of illegal content. To protect copyright, the European Parliament also proposed that detailed summaries of copyrighted data used for training be made publicly available.
An ongoing sentiment in recent years is that AI brings many benefits that need to be maximized through risk mitigation. That sentiment finds root in the need for trustworthy systems. Governance — good governance — is an agent for trust.
Governance does not mean only compliance with law and policy, although those are extremely important parts. As it may not be practical to wait for legal regulation while AI develops rapidly and risks loom large, governance can also include internal policies, like audits and impact assessments, or organizational preparedness, like training professionals to responsibly implement governance goals.
In such a multidisciplinary field, a bridge is needed between the technical and the legal or policy perspectives, so that meanings are not lost in translation when switching from one discipline to another, and so that governance becomes intertwined with development and deployment rather than being treated separately.