Concerns about AI
The prospect of AI has long stoked fears about what humans might unleash with such technology, with the literature and movies of science fiction giving a chilling voice to some of those fears. (Think of HAL scheming to kill the human astronauts on its ship, the robots going rogue in Westworld, the WOPR simulator in WarGames aiming to start a global thermonuclear war, or SkyNet becoming self-aware.) With modern generative AI tools becoming a reality, some of those existential fears are perhaps being superseded, at least for now, by more pressing concerns about how such technology is being used. We highlight a few of those concerns here.
Hype
While the computational technologies involved in the development and production of AI tools are impressive by any measure, many feel that the current buzz around AI amounts to a lot of hype. Some have argued that the term "AI" is used too vaguely to be meaningful, and serves only as a rebranding intended to capitalize on the current excitement. Others point out that generative AI tools are sufficiently unreliable that they must be closely monitored by the humans using them, making any net productivity gains less impressive. Still others have argued that AI hype obscures the real advances made through the use of machine learning (ML) technologies to develop useful, predictive models of many different sorts of data and processes, and that proponents and users of ML technologies do themselves a disservice by rebranding all that they do as AI. Concerns about the profitability of huge corporate investments in AI technology are also helping to drive volatility in stock markets. More broadly, some have argued that Large Language Models (LLMs) and other Foundation Models — while able to produce plausible and sometimes compelling content — have not actually "learned" anything about language or other cognitive modalities, and do not process such content in a manner resembling human intelligence. Such debates will of course continue, but they do suggest the need to be clear, both in one's own thinking and in communicating with others, about the technical specifics of one's particular use of AI tools and technologies.
Detecting and accepting its use
The availability of tools that can readily produce plausible and/or compelling content has complicated the process of evaluating materials and work products, whether in an educational setting, a work environment, or an online platform. Schools and universities continue to work to develop policies regarding the use and disclosure of AI tools in producing content for courses, such as essays or term papers. Earlier tools aimed at detecting plagiarism needed only to identify pieces of pre-existing text, but generative AI tools can now produce "original" content that is merely derived from or inspired by such text. Teaching students how to be responsible and critical consumers of technology has gone through many iterations over the years, but the availability of generative AI tools introduces new sets of questions and challenges in this endeavor.
Stealing content
Large Language Models (LLMs) and other Foundation Models are trained on large quantities of available content, and what constitutes "available" content is up for debate. Many artists, creators, and news organizations have complained — and in some cases, filed lawsuits — alleging that companies and other organizations have engaged in copyright infringement by using their content to train large models when they were not legally permitted to do so.
Stealing jobs
For centuries, new technologies have displaced workers, replacing human labor with machine operations. Some have argued that this has freed humans to create new types of jobs and to apply their skills and intellects to other pursuits, while others have raged against the machine. This scenario continues to play out in the current era of generative AI, although — curiously — there seems to be less consensus about which types of workers and work are likely to be impacted or replaced by AI tools. While earlier technologies such as manufacturing machinery and robots tended to disproportionately impact those engaged in manual labor or artisanal craft work, generative AI that can successfully produce content threatens livelihoods across a broad spectrum of vocations, potentially impacting workers in fields such as law, business, journalism, and the creative arts.
Energy usage and carbon emissions
Training and deploying large AI models requires vast amounts of computational hardware in huge data centers, along with enormous amounts of electricity to power that hardware. Many people are understandably concerned about the carbon footprint of such endeavors, and about whether the resulting CO2 emissions and environmental damage are worth the perceived benefits of the technology.
Manipulation, misinformation, and disinformation
Many people and organizations with political and financial agendas — groups, corporations, state and non-state actors, social media platforms, etc. — engage in stealth online campaigns to manipulate target audiences, to collect data, and to spread misinformation and disinformation. Such campaigns have long leveraged available technologies to further their goals, but modern generative AI tools have substantially ratcheted up their capabilities, making such content more difficult to detect or counter. Deepfakes, for example, use generative audio and video technologies to produce convincing but misleading content that portrays well-known people saying things they have never said.
Security of data
Most generative AI tools do not just produce content in response to prompts, queries, and materials provided as input — they often also use those materials to refine their training and models. As a result, content submitted to an online generative AI tool can eventually "leak" back out to other users in responses generated by the tool. A worker in a company, for example, who submits a group of proprietary and confidential documents to an online AI tool in order to generate summaries or slides of that material could risk having that information produced in response to subsequent queries by others on related topics. Many organizations have therefore needed to develop strategies and firewalls to prevent such leakage, in some cases building proprietary models internally to serve their own objectives and business purposes.
Noninterpretability
While AI tools are increasingly able to generate convincing and compelling content, and to produce models and pipelines that perform very well on a wide variety of tasks traditionally handled by humans, the inner details of how those applications work are often inscrutable and uninterpretable, creating a sense of unease and even mistrust among those using such tools. As described more in Concepts in Machine Learning, machine learning applications are not explicitly programmed to perform a specific task by following a specific set of instructions; rather, they consist of collections of generic and flexible computational elements that can be organized and parameterized to carry out a wide variety of tasks, depending on the data with which they are trained. Therefore, opening up the "black box" that encapsulates an ML algorithm, in order to understand why it is making particular decisions, can be a rather unilluminating exercise. This is especially true of deep learning models constructed out of extremely large neural networks with many millions, billions, or trillions of trainable parameters. Such models make decisions through the collective actions of many interacting neurons in a network, which are not easily decipherable. The current era of large deep learning models and generative AI has unleashed new sets of concerns surrounding model interpretability, described a bit more below.
Bias
Some concerns revolve around issues of bias — intended or not — that are baked into large models and can be difficult to detect or root out. These biases can have serious societal impacts when the corresponding models are used for societal purposes, such as facial recognition algorithms that misidentify members of particular demographic groups because of the nature of the image data used to train the models in the first place.
Hallucinations
Other concerns, associated especially with generative applications built from LLMs, arise from the unpredictable hallucinations that such applications can produce: nonexistent or nonsensical content that a model was never trained on but concocts in the process of generating new material. In some cases, those hallucinations might simply be annoying and distracting. In many other cases, however, particularly in applications offering advice or guidance through chatbots or other platforms, the impact of a model going off the rails in this way could be far more serious.
Warfare in the Age of AI
AI technologies are under active research, development, and deployment for use in a wide variety of military endeavors, raising the prospect of autonomous weapons able to operate without direct human control. Given the destructive potential of such weapons, the possibility of rapid escalation of hostilities under purely computer-driven decision-making, and the inherent capacity of unstable, adversarial processes such as arms races to spiral out of control, there is widespread and worldwide concern about the aggressive deployment of AI technologies in this arena.
Cognitive Leaps
Echoing the stuff of science fiction, more ominous and far-reaching concerns about the impenetrability of large AI systems involve fears that such systems could someday make important cognitive leaps without our being able to recognize those leaps, or to understand that we are being manipulated and controlled by the systems we have built.