David Greene

Is ChatGPT the Same Old Story?

The latest popular development in “Artificial Intelligence” (AI), ChatGPT, is an application of large language models that generates tantalizingly humanlike written answers to common, practical questions. It works by combining massive amounts of text taken from the internet with statistical models that predict which word is most likely to come next. Although at root ChatGPT is simply another AI triumph of brute force, its immediate impact on society has been more like a bombshell.
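The real models behind ChatGPT are vastly larger and use neural networks rather than simple word counts, but the core idea of next-word prediction can be sketched with a toy bigram model over a made-up corpus (the corpus and function names here are illustrative, not anything ChatGPT actually uses):

```python
from collections import Counter, defaultdict

# Tiny stand-in for the web-scale text a real model is trained on.
corpus = (
    "the cat sat on the mat the cat ate the fish "
    "the cat chased the dog the dog ate the bone"
).split()

# Count how often each word follows each other word (a bigram model).
follower_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follower_counts[current][nxt] += 1

def predict_next(word):
    """Return the word that most often follows `word`, or None if unseen."""
    counts = follower_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # prints "cat" — the most frequent follower of "the"
```

Chaining such predictions, word after word, is what produces fluent-looking text: statistics about what usually comes next, with no understanding behind it.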

Because ChatGPT can, and certainly will, be used as a plausible substitute for all kinds of human writing, concerned citizens are anxiously assessing its potential for disruption and harm, not only to broad swaths of our economic and educational systems, but also to the very foundations of human social relations. At the heart of these concerns lies an acute, very human vulnerability: our readiness to accept manufactured ChatGPT text as if it were a product of intentional human communication.

Of all the many important issues raised by this development, the one that disturbs me most is not about the technology itself, but the degree to which the societal benefit or harm from ChatGPT rests in the hands of a few huge technology companies. AI is now poised to be the commercial profit engine of the next decade. It is not a simple equation, but the profit motive strongly “incentivizes” businesses to normalize the functionality of AI applications and to minimize the significance of their artificiality. If it is not already, it will soon be difficult to distinguish between a chatbot and a human providing customer service. How about that History term paper?

The humanistic, ethical stance toward any powerful information technology asks questions like: How can this technology augment human capacities? How can it be used as a tool to advance social health and well-being? And how can we control and regulate its use to minimize its potential for harm? The profit motive and ethical concerns are at loggerheads here.

At present, I do not see a scenario in which legislation reins in the proliferation of ChatGPT-like applications embedded in every sphere of online experience. The more they are normalized, the harder it will be to distinguish automated commercial text from human communication. How long before the same is true for what we once called interpersonal communication? Although the stakes are arguably higher this time around, the same fundamental ethical issue is raised by ChatGPT as by all other powerful AI technologies: Whose interests will it serve?

It is within the power of the owners and developers of these applications to set the tone for how they are used and perceived by the recipients of the communications. There are huge profits at stake and strong incentives to normalize the acceptance of automated text as a substitute for its human counterpart. A major tenet of natural humanism is that profitable activities should be regulated as needed to protect the social Commons from exploitation. In the case of AI, we still need to learn how to frame the relevant ethical issues in these terms. We should do everything in our power to study, understand, and characterize the individual and social vulnerabilities at stake when automated text replaces the human equivalent.

AI technologies will continue to have transformative effects on society. Losses will surely exceed benefits unless we can agree on what needs to be regulated and elect legislators who will protect our common interests.

To receive new posts and other news by email, please submit the form below.


To pose a question or suggest a topic for a future blog, please leave a comment below or send me a message on the Contact page.

One Response

  1. Unfortunately, I think it’s unrealistic to hope that legislation will rein in the destructive effects of large language models. Aside from the power of the corporations building these models, which will be exerted to counter any regulatory efforts, there’s the fact that legislators and the public at large do not understand how these models work, or why they are dangerous. Most people are dazzled and delighted by ChatGPT. They don’t see the dark side.

    Short of throwing up our hands, we can only work to expose the risks, using concrete examples and compelling language that people will grasp.

