The Catch-22 Of AI Chatbots – Forbes

The pros and cons of AI chatbots continue to fuel the debate over regulation.
If you have used any of these chatbots, you know they can be valuable sources of information. However, the more I use a chatbot, the more concerned I become about its accuracy.
If you have lived a bit and have any public persona on the internet, you might have tried the experiment I did with three chatbot engines. Since I have been a tech analyst for over 40 years, with a public persona and a legitimate bio on our website (Team – Creative Strategies), I asked three chatbots to write a new biography for me. All three came back with significant factual inaccuracies: one said I had written two books, which I have not; one had me graduating from a college I did not attend; and the last had my role at my company completely wrong.
This kind of inaccuracy is even more rampant now.
One of the more interesting things that has arisen since ChatGPT launched about a year ago is that data scraping (automated data collection from websites) now sweeps up content created by AI, not just the original human-written material it collected initially.
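To make the mechanism concrete, here is a minimal sketch of what scraping looks like in practice, using only Python's standard library. The class name and sample HTML are illustrative assumptions; a real scraper would fetch pages over HTTP, respect robots.txt, and have no way to tell, from the markup alone, whether a paragraph was written by a human or by an AI.

```python
from html.parser import HTMLParser

# A minimal illustration of scraping: pull paragraph text out of HTML.
# Note that human-written and AI-generated paragraphs look identical here.
class ParagraphScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_p = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.in_p and data.strip():
            self.paragraphs.append(data.strip())

# Hypothetical page mixing original and AI-generated text.
page = "<html><body><p>Original article text.</p><p>AI-generated summary.</p></body></html>"
scraper = ParagraphScraper()
scraper.feed(page)
print(scraper.paragraphs)  # → ['Original article text.', 'AI-generated summary.']
```

Both paragraphs land in the same dataset, which is exactly how AI-generated text ends up in the training data of later models.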
Futurism recently wrote, “Don’t believe everything you see on the Internet” has been pretty standard advice for quite some time now. And according to a new report from European law enforcement group Europol, we have all the reason in the world to step up that vigilance.
“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”
“In most cases, synthetic media is generated for gaming, to improve services or to improve the quality of life,” the report continued, “but the increase in synthetic media and improved technology has given rise to disinformation possibilities.”
The impact of AI scraping data that AI created can be both positive and negative.
On the positive side, scraping AI-created data can help improve the quality and accuracy of AI models. By drawing on data from a wide variety of sources, models can be trained on a more comprehensive and representative dataset, which can reduce bias and improve performance.

For example, an AI model trained on scraped data from social media can be used to identify and track emerging trends. This information can be used by businesses to make better marketing decisions or by governments to follow public opinion.

On the negative side, scraping AI-created data also carries many risks. For example, if the collected data is not anonymized correctly (meaning the dataset is stripped of identifiable information such as names, addresses, and phone numbers), an AI model could be used to identify and track individuals, compromising their privacy and security.
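The anonymization step described above can be sketched in a few lines. The regex patterns and placeholder tokens below are illustrative assumptions only; real PII scrubbing requires far more than regular expressions (named-entity recognition for names, for instance, which this sketch deliberately does not attempt).

```python
import re

# Illustrative patterns only -- real PII detection needs more than regex.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def anonymize(text: str) -> str:
    """Strip obvious identifiers (emails, phone numbers) from scraped text.

    Names like "Jane" pass through untouched: detecting them needs
    named-entity recognition, which is beyond this sketch.
    """
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

record = "Contact Jane at jane.doe@example.com or 555-123-4567."
print(anonymize(record))  # → Contact Jane at [EMAIL] or [PHONE].
```

The gap this sketch leaves (the untouched name) is precisely why careless anonymization of scraped data is a real privacy risk.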

In addition, scraping AI-created data could also lead to the creation of counterfeit or misleading content. For example, an AI model trained on scraped data from news websites could be used to create fake news articles, negatively impacting public trust and understanding.

Overall, the impact of scraping AI-created data depends on how that data is utilized. Used responsibly and ethically, it can benefit society; used irresponsibly, it can do real harm.
With this in mind, tech companies involved with AI must work overtime to create programs that identify AI-generated synthetic content and label it in some form.
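One simple form such labeling could take is attaching provenance metadata to generated content at the moment it is created. The schema below is a hypothetical sketch of the idea, not any company's actual system or an industry standard such as C2PA; the function and field names are my own assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(text: str, model: str) -> dict:
    """Wrap generated text with hypothetical provenance metadata."""
    return {
        "content": text,
        "provenance": {
            "synthetic": True,  # flags the text as AI-generated
            "model": model,
            "created": datetime.now(timezone.utc).isoformat(),
            # Hash lets downstream consumers verify the text is unaltered.
            "sha256": hashlib.sha256(text.encode()).hexdigest(),
        },
    }

labeled = label_ai_content("An AI-written paragraph.", model="example-model-1")
print(json.dumps(labeled, indent=2))
```

A scraper that honored such a label could exclude synthetic content from training sets, or at least track how much of a dataset is machine-generated.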

While I do believe that tech companies have a responsibility to police themselves and ensure that any AI-generated content is labeled accordingly, a recent poll sponsored by Generation Lab for Axios, in partnership with Syracuse University's Institute for Democracy, Journalism & Citizenship and Autonomous Systems Policy Institute, suggested that rather than leaving AI regulation to the private sector, experts favor creating a new federal agency to regulate AI.
Possible entities to regulate AI
While some major tech company leaders like Mark Zuckerberg, Eric Schmidt, etc., have lobbied for government oversight of AI, others bristle at the idea of any federal agency being put in charge of this regulation.

Regardless of who ends up regulating AI content, this is the time for tech and governmental leadership to get on the same page to ensure AI has more positive impacts than negative ones.

Although Pandora’s box has been opened on AI, there is still time for critical guidelines, ethics, and a drive to create accurate content using AI technology. If we don’t deal with this now, eroding business and public trust in AI will dilute the technology’s future value and potential.