Tech & Society

A Blog by James Cusick

AI Social Risk? Real or Imagined by AI?

Introduction

The development of AI has been centuries in the making. We notice its seemingly instantaneous emergence in recent years as if it landed in a flash of light. Yet this lightning strike has been glowing within the computing world for over 75 years, and for much longer among philosophers and mathematicians.

It is a legitimate reaction to question, or perhaps even fear, the unknown, including the introduction of new technologies from the steam engine to elevators to the airplane. In most cases these new technologies had growing pains, which included human fatalities. However, science, engineering, and policy have acted to improve each step forward, even as many new advances were challenged as threats to society.

Such hesitancy and concern are especially acute when a trend or a new innovation crashes into sight everywhere all at once, to paraphrase a recent movie title. In this blog entry I would like to turn attention to the history of AI, some of its key building blocks, and how its maturation may impact the kinds of social interests that Henry George discussed at length.

Cultural Views of AI

Before considering this technical history and the Georgist perspective, it is helpful to compare the views of AI and robotics seen in two different cultures’ entertainment. Recently I spoke with a computer scientist in Japan who observed that in the United States a common reference point for intelligent, autonomous robots is the Terminator, as presented in the 1984 sci-fi action film and its sequels. The Terminator is sent from the future as an assassin, providing a window into a terrible destiny for humankind in which people are hunted by machines.

In contrast, the most common reference for Japanese audiences is Doraemon, a 22nd-century robotic cat introduced by the manga duo Fujiko Fujio in 1969. Doraemon also travels back in time, but instead of playing a villain, he comes to the aid of a below-average student in today’s world. This friendly, smart, helpful, and resourceful character is the touchstone for much of Japan’s thinking about future technology.

The countenances of these alternative icons of the future speak for themselves: one with menacing electronic red eyes, the other with a broad smile and a welcoming wave. I know which one I prefer when thinking about systems yet to be. Like any human-produced technology, AI has the potential to provide benefits or to do harm depending on how we use it. Our responsibility is to apply these solutions properly. But to do so, let us first explore how we got here.

AI Background

Most people know that modern computing has its roots in the mid-twentieth century. With the creation of the first digital electronic computers in the 1930s and 1940s the world was irrevocably changed, and we live in the downstream flow from these events. However, the conceptual foundations of these early engineering successes, and of AI itself, stretch back millennia. In ancient Greece and China, philosophers pondered whether human thought might work like machinery and, in some cases, built prototypes [1].

From a conceptual standpoint, the work of Hobbes, Pascal, Laplace, Bayes, Leibniz, and others in the 17th and 18th centuries provided key ideas and mathematics that underlie today’s AI technologies [2]. These include probability, the chain rule, and the method of least squares. Converting related ideas into a mechanical device allowed Babbage and Lovelace to demonstrate the possibility of reprogrammable, automated computing in a way that set the stage for the advances of the twentieth century.
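To make these inheritances concrete, here is a modern rendering (not the historical notation) of three of these ideas, all of which survive essentially unchanged at the heart of today’s machine learning. Note that “chain rule” here is read as the probability chain rule, which is how language models factor text; the calculus chain rule behind backpropagation is another plausible reading.

```latex
% Bayes' rule: update belief in a hypothesis H after observing data D
P(H \mid D) = \frac{P(D \mid H)\, P(H)}{P(D)}

% Chain rule of probability: factor a joint distribution into conditionals
P(w_1, \ldots, w_n) = \prod_{i=1}^{n} P(w_i \mid w_1, \ldots, w_{i-1})

% Least squares: choose the parameters \theta that minimize squared error
\hat{\theta} = \arg\min_{\theta} \sum_{i} \bigl( y_i - f(x_i; \theta) \bigr)^2
```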

The story of automation and computing that emerged from the research of Turing, von Neumann, and a broad cadre of others gave us the digital world we have today. All the while, the pursuit of AI proceeded in fits and starts, with the term itself coined by John McCarthy in a 1955 proposal for a workshop at Dartmouth, around the time the first AI programs were being written.

AI Tech

The earliest programming language for AI applications was LISP, originally designed by McCarthy. Personally, I had the occasion to work in Prolog, perhaps the second most common AI language, which was introduced in 1972. There are many other such languages, and importantly, the approaches to problem representation, semantics, and logic that these languages introduced have influenced many non-AI languages. A sketch of this declarative style appears below.
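As a small illustration of that declarative style, here is a minimal sketch in Python (not Prolog itself) of the facts-and-rules inference Prolog made natural. The family facts and the grandparent rule are invented for illustration; real Prolog adds unification, backtracking, and a query language.

```python
# Facts are data; a "rule" derives new facts from existing ones.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann")}

def grandparent(facts):
    """Derive ("grandparent", X, Z) whenever parent(X, Y) and parent(Y, Z)."""
    derived = set()
    for (p1, x, y1) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y1 == y2:
                derived.add(("grandparent", x, z))
    return derived

print(grandparent(facts))  # {('grandparent', 'tom', 'ann')}
```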

While the mathematics and programming languages essential to AI are perhaps less well known in society, the applications built on them receive more attention. Some of these applications have helped to advance human knowledge and enhance our daily lives. My favorite examples include the Spirit and Opportunity rovers, which autonomously explored the surface of Mars for over 2,000 days. Another example is the development of the COVID-19 vaccines, which used AI capabilities to model millions of RNA sequences on compressed timelines, helping to save millions of lives. Also, the US military’s DART logistics system relies on AI and has repaid taxpayer investment in AI research many times over.

These applications, as well as current breakout technologies like ChatGPT, rely on a web of supporting methods and tools, including Natural Language Processing, Machine Learning, pattern recognition, massively parallel computing, and hyper-scale cloud computing. The current market for AI technologies is about US$184 billion and is projected to grow to between US$826 billion and US$1.8 trillion by 2030 [3]. Thus, the AI gold rush is on.

AI and Social Impacts

Unlike the types of computers most of us have become accustomed to in daily life, which are deterministic (meaning they follow a given path repeatedly), AI computing is based on a different approach. Many AI solutions like ChatGPT use stochastic engines, meaning they rely on sophisticated guesses driven by statistics. As the saying popularized by Mark Twain goes, “there are lies, damned lies, and statistics.” This is why some AI solutions provide reasonable-sounding answers that sometimes break down upon inspection.
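A minimal sketch of that distinction, with an invented three-word vocabulary and made-up model scores (nothing here comes from any real model):

```python
import math
import random

vocab = ["sunny", "rainy", "cloudy"]
scores = [2.0, 1.0, 0.5]  # hypothetical "logits" a model might produce

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(scores)

# Deterministic computing: the same input always yields the same output.
deterministic = vocab[probs.index(max(probs))]  # always "sunny"

# Stochastic computing: the output is drawn from a distribution, so repeated
# runs can differ -- a sophisticated guess driven by statistics.
stochastic = random.choices(vocab, weights=probs, k=1)[0]

print(deterministic, stochastic, [round(p, 3) for p in probs])
```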

The application of these technologies raises a variety of social questions, including questions about the source of the data fed into this generation of AI platforms. For example, Large Language Models essentially read all relevant data on the internet and build probability maps for what makes sense to say in response to a question. This leads to some very real dilemmas. Is it legal for an AI company to ingest and utilize the copyrighted writings of a newspaper or other publisher? If it does so without permission, does this put the company and its users at risk of copyright infringement? Such questions are currently being litigated.
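To see what such a “probability map” looks like in miniature, here is a toy sketch that counts which word follows which in a tiny invented corpus and converts the counts into conditional probabilities. Real LLMs learn these distributions with neural networks over vastly larger corpora, but the underlying idea is the same:

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for "all relevant data on the internet".
corpus = "the cat sat on the mat the cat ate".split()

# Count each (word, next word) pair the model has "read".
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# Estimate P(next word | "the") from the counts.
total = sum(counts["the"].values())
probs = {word: c / total for word, c in counts["the"].items()}
print(probs)  # {'cat': 0.666..., 'mat': 0.333...}
```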

Viewed through the writings of Henry George, the influential 19th-century economist and social philosopher, this raises the question of common property. If the current crop of AI companies utilizes the content produced by all of us, do they have the right to profit from this common resource? In essence, they are profiting from the collective labor of society while privatizing the financial gain. Perhaps AI infrastructure and capabilities built on the knowledge of society should be viewed as a public utility. This is a perspective George might well have endorsed.

There is another aspect to this that bears consideration. At the time George was writing [4], telecommunications were an emerging technology, just as AI is today. George viewed such platforms as natural monopolies that often worked contrary to social equity.

“… [we] lace the air with telegraph wires; we add knowledge to knowledge and utilize invention after invention; … yet it becomes no easier for the masses of our people to make a living. On the contrary, it is becoming harder. The wealthy class is becoming more wealthy; but the poorer class is becoming more dependent.” – George, 1879

Substituting “AI” for “telegraph” highlights the consistency of the underlying social issue. A useful corollary is the regulated monopoly of AT&T in the 20th century: in exchange for a service monopoly, the company agreed to provide universal access to telephony throughout the United States. Perhaps some similar bargain makes sense today. In exchange for the right to deploy AI models and services built from decades of digitized human records, perhaps a return of value to society should be factored in.

Finally, following from these contradictions, we can explore the impacts of AI on labor. Much has been said on this topic already: some claim that all jobs will be lost, while others are more balanced in their assessments. I shared my own thoughts at a recent seminar hosted by the Henry George School [5]. In essence, most new technological innovations affect labor in three ways: eliminating redundant roles, creating new roles, and enhancing existing roles. With the emergence of additional AI tools, history seems likely to repeat itself. Patel [6] points out the difference between the digital world, where computing can dominate, and the analog world, where we all live. She explains that current AI solutions are strongest at textual analysis and data inference but have limitations around logic, intuition, and novel synthesis of ideas. Some call this the “human in the loop” phenomenon, in which full autonomy for some systems is simply unachievable.

If these technologies continue to improve, as seems likely, the need for traditional labor may change. In that case, we may want to revisit another idea George promoted, known today as basic income. During transitions from one state of industrial or post-industrial work to another, such a stipend could be strategic. Companies profiting from the common pool of intellectual assets developed by society could contribute to funding this benefit.

Conclusions

The introduction, adoption, and diffusion of technology are well-understood processes. A standard representation in industry is the Gartner Hype Cycle [7], which currently places many AI technologies in the “Peak of Inflated Expectations” phase. This model suggests that productive, mainstream use of these technologies is still 2-5 years in the future.

In the meantime, working collaboratively, technologists, regulators, and supporting functions can explore beneficial ways of leveraging innovation while avoiding less desirable outcomes. In the end, humans retain one advantage. You can ask a robot to vacuum your carpet automatically, which may mean accepting some technical intrusion into other areas of your life. On the other hand, if you feel uncomfortable with this, you can simply pull the plug and pick up a broom.

References

  1. A History of Robotics: Yan Shi the Artificer, 1/20/2014, https://blog.salvius.org/2014/01/a-history-of-robotics-yan-shi-artificer.html.
  2. Lancelot Hogben, Mathematics for the Million: How to Master the Magic of Numbers, Fourth Edition, W. W. Norton & Company, Inc., New York, 1968.
  3. Statista, Artificial Intelligence – Worldwide, Market Insights: Technology, March 2024.
  4. George, Henry, Progress and Poverty: An Inquiry into the Cause of Industrial Depressions and of Increase of Want with Increase of Wealth. The Remedy., D. Appleton and Company, New York, 1879.
  5. Cusick, James, Transforming to an AI Rich Work Environment, Panel on Artificial Intelligence and Its Impacts On The Workforce, Henry George School of Social Science, New York, January 2021.
  6. Patel, N., “AI: Prosperity or Doom for Human Workforce?”, SXSW, Austin, TX, March 2024.
  7. Gartner Places Generative AI on the Peak of Inflated Expectations on the 2023 Hype Cycle for Emerging Technologies, Press Release, Gartner, August 16, 2023.