The recent warning from the US Surgeon General regarding the potential hazards of social media has clear relevance to prospective developments in artificial intelligence (AI). Dr. Vivek Murthy, in his second stint as America’s top doctor, issued a 19-page report documenting the risks of social media to young people. While Murthy admitted that the effects of social media on adolescents are not fully understood and there may be some benefits, he also asserted, “There are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents.”
Murthy’s observations should be viewed as a cautionary tale about the possible dangers that AI represents for the future well-being of Americans of all ages. Governments around the world have been slow to regulate the growth of social media, and none more slowly than the US. The task of closing the barn door is proving to be daunting. Allowing another significant technological innovation to develop without responsible guardrails would be inexcusable.
It’s not as if artificial intelligence has only recently come on the scene. The concept of AI as a serious phenomenon has been around since the 1950s. Merriam-Webster’s Dictionary and Thesaurus defines artificial intelligence as “the capability of a machine and especially a computer to imitate intelligent human behavior.” This capability is produced through “machine learning,” a process in which a computer identifies patterns of behavior through the statistical analysis of large amounts of data. The larger the data sets and the greater the processing power available, the more sophisticated, presumably, the resulting artificial intelligence.
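To make the definition above concrete, here is a minimal sketch of “learning a pattern from data” in Python. The program is never told the rule behind its examples; it infers the pattern statistically, using an ordinary least-squares fit. The function name and toy data are illustrative only, not drawn from any real AI system.

```python
# Toy illustration of machine learning as pattern-finding: the program is
# never given the rule y = 2x + 1; it recovers it from example data alone.
def fit_line(points):
    """Ordinary least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# "Training data": examples generated by a hidden rule the program must infer.
data = [(x, 2 * x + 1) for x in range(10)]
slope, intercept = fit_line(data)
```

Real machine learning works on vastly larger data sets and far more complex statistical models, but the underlying idea is the same: more data and more computation yield a better-fitted pattern.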
Variations of AI already undergird functions associated with a wide range of communication devices: telephones, computers, social media platforms, smartphones, virtual assistants and tablets. Navigating vehicular traffic, editing text, using search engines, dealing with customer service, relying on Siri for personal assistance, and managing the payment of bills through your bank are just a few examples of how AI is involved in our everyday lives. These connections are accomplished primarily through “bots,” which are software applications programmed to perform certain repetitive tasks automatically.
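A bot in this sense can be very simple. The sketch below imagines a customer-service autoresponder that applies fixed rules to a repetitive task; the function and canned replies are hypothetical, invented for illustration rather than taken from any real product.

```python
# Minimal sketch of a "bot": a program that handles a repetitive task
# automatically by applying fixed rules. All names here are illustrative.
def autoresponder(message):
    """Return a canned reply for common customer-service requests."""
    text = message.lower()
    if "refund" in text:
        return "Your refund request has been received."
    if "hours" in text:
        return "We are open 9am-5pm, Monday through Friday."
    return "An agent will be with you shortly."

replies = [autoresponder(m) for m in
           ["What are your hours?", "I want a refund", "Help"]]
```

Commercial bots are more elaborate, but many everyday AI touchpoints, from bank bill-pay alerts to first-line customer service, rest on automation of exactly this kind.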
Interest in AI received a big boost from the Covid crisis, which caused major staffing and operational problems for businesses and many non-profit institutions, such as hospitals and schools. According to a study by PricewaterhouseCoopers, nearly 86 percent of companies said in 2021 that AI was becoming a “mainstream technology.” Meeting the resulting demand has led to a major push to enhance AI capabilities, but some serious concerns have arisen about potential negative effects.
The most obvious threat is economic. AI has already led to consequential job losses for American workers. That has been the pattern with new technology throughout history, but research has shown that since 1980 technological change has displaced workers faster than it has created jobs. Projections for AI’s impact on employment in the near term are cause for apprehension. Goldman Sachs, the investment bank, has issued a report indicating that AI currently under development has the potential to replace a quarter of work tasks in the US and Europe.
Today’s most serious source of disquiet regarding AI is the chatbot, defined as “a computer program that simulates and processes human conversation (either written or spoken), allowing humans to interact with digital devices as if they were communicating with a real person.” Initially, chatbots were designed to take over repetitive tasks that humans generally perform, or to reduce the number of humans needed to accomplish those tasks. Recent improvements in chatbot capabilities, however, have raised the possibility that chatbots soon might be able to produce original essays or images, including art. Such developments would surely affect the job market, but they would also likely create some non-economic concerns.
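Simulating conversation need not involve any understanding at all, a point worth keeping in mind when weighing chatbot claims. The sketch below, loosely in the spirit of 1960s-era pattern-matching chatbots such as ELIZA, simply mirrors the user’s words with fixed rules; the rules and word list are invented for illustration.

```python
import re

# A tiny pattern-matching chatbot sketch: it "simulates" conversation by
# reflecting the user's own words back, without understanding any of them.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def respond(sentence):
    """Produce a canned conversational reply from simple pattern rules."""
    m = re.match(r"i feel (.*)", sentence.lower())
    if m:
        return "Why do you feel " + m.group(1) + "?"
    words = [REFLECTIONS.get(w, w) for w in sentence.lower().split()]
    return "Tell me more about " + " ".join(words) + "."

reply = respond("I feel overlooked")
```

Modern chatbots like ChatGPT replace these hand-written rules with statistical models trained on enormous text corpora, which is what makes their output so much more fluent, and their errors so much harder to spot.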
Several major technology companies are involved in the race to enhance the potential role of chatbots. OpenAI, which signed a billion-dollar agreement with Microsoft in 2019, appears to have achieved the greatest success. Its ChatGPT responds to complex questions, writes poetry, generates computer code, plans vacations, translates languages, and identifies images.
Other companies attempting to create a more powerful chatbot include Google (Bard), Meta (Galactica), Alibaba and Baidu. The last two are Chinese companies likely to face serious regulation by the Chinese government.
The effort to expand the capabilities of chatbots has been marked by some unanticipated and undesirable results. Chatbots have been used to create fake political ads featuring Hillary Clinton and Joe Biden. The Republican National Committee produced an AI-generated ad in April using fake images that suggested disasters if Biden is re-elected: Taiwan invaded by China and San Francisco overwhelmed by crime. Donald Trump has shared an image of Anderson Cooper praising the former president’s recent appearance on CNN.
Perhaps the most bizarre AI abuse surfaced in late May. Two New York lawyers submitted a court brief using the assistance of ChatGPT. The attorneys are facing sanctions after it was determined the brief cited six nonexistent court cases.
Concern about possible flaws has not discouraged supporters from extolling the virtues of AI. When OpenAI released ChatGPT last November, within five days more than a million users had signed up. And while Sam Altman, OpenAI’s CEO, has admitted his company’s chatbot still makes things up, he recently told a congressional hearing, “the benefits of the tools we have deployed so far vastly outweigh the risks.”
What are the risks?
Besides the threat to existing jobs and the vague promise of replacement employment, there is the fear that “bad actors” can use these tools for societal disruption or criminal activity. And the extraordinary cost of expanding chatbot capabilities raises questions about probable domination of the AI industry by a few big players, a problem already evident with social media.
In addition, there is the potential impact on our humanity. AI supposedly will ease the workload of many professionals, but how good is this for consumers of AI-generated information? Acquiring knowledge involves more than collecting information. In the process of analyzing and evaluating information, the human actor gains wisdom and judgment. Will the chatbot?
Several company leaders, including Altman, and a number of AI experts have called for government to step in and establish meaningful regulations to channel future development safely. The British computer scientist Geoffrey Hinton, a noted AI authority, is concerned enough that he resigned his position with Google and declared that government has the responsibility to ensure AI is developed “with a lot of thought into how to stop it going rogue.”
Congress has not been very effective in recent years regarding oversight of the communication industry or protection of personal privacy. European lawmakers are doing a much better job. Any federal effort to regulate AI should include rules that force companies to reveal how their models work and impose responsibility for how their models are used or abused. Companies in the field should also bear some responsibility for funding programs to offset financial dislocations that may result from expanded use of AI. In addition, rigorous attention must be given to enforcing antitrust laws against Big Tech giants who already dominate the media sector.
It would be a horrific mistake if AI companies were given the same blank check that social media tycoons have enjoyed. Those companies have reaped trillions, while the average American worker has seen real income decline over the past four decades and political divisiveness has been exacerbated. The country and the American people deserve better.
https://www.nytimes.com/2023/05/23/health/surgeon-general-social-media-mental-health.html
https://www.cloudflare.com/learning/bots/what-is-a-bot/
https://hbr.org/2021/09/ai-adoption-skyrocketed-over-the-last-18-months
https://www.bbc.com/news/technology-65102150
https://www.theverge.com/2023/3/5/23599209/companies-keep-up-chatgpt-ai-chatbots
https://www.wsj.com/articles/chatgpt-sam-altman-artificial-intelligence-openai-b0e1c8c9?page=1
https://www.nytimes.com/2023/05/25/technology/reid-hoffman-artificial-intelligence.html