Kathy Wilson

AI – When Science Fiction Becomes Reality



As the AI frenzy unfolds, new technologies with unprecedented capabilities will embed themselves in almost every aspect of our lives. Bringing great promise along with significant risk, AI is positioned to work alongside humans to solve some of the world’s most intractable problems. As it makes its way into our systems, it can be a powerful force, enabling giant strides in space, medicine, industry, education, communications, and the environment. Poised to deliver amazing breakthroughs, it can also unleash unintended consequences that society will need to anticipate, understand, and control.


Can We Stop Runaway AI?

This blog focuses on “the scary side” of artificial intelligence, specifically what’s keeping “those in the know” up at night as they look ahead to the future of AI and its potential impacts. In the recent New Yorker article, “Can We Stop Runaway AI?,” Jeff Clune, a computer scientist at the University of British Columbia and the Vector Institute who has also worked at Uber, OpenAI, and DeepMind, says:

“Five years ago, it was risky to say out loud that I believe in the possibility of human-level or superhuman-level AI.” But now, as AI challenges “dissolve,” more researchers are coming out of the “AI-safety closet,” declaring openly that artificial general intelligence (AGI) is possible and may pose a destabilizing danger to society.


In March, a group of prominent technologists published a letter calling for a pause in some types of AI research, to prevent the development of “nonhuman minds that might eventually outnumber, outsmart, obsolete, and replace us.”


I asked ChatGPT, “What is AGI and why are people afraid of it?”

Here’s its response, followed by a sketch of how the same question could be posed through the OpenAI API.

* “AGI stands for Artificial General Intelligence. It refers to highly autonomous systems or machines that possess the ability to outperform humans at most economically valuable work. AGI represents a level of artificial intelligence that surpasses narrow AI, which is designed to perform specific tasks. While narrow AI is prevalent today in various domains like image recognition, speech synthesis, and data analysis, AGI would possess a broader understanding and capability to excel in multiple complex tasks.


People’s concerns about AGI arise from several factors:

  • Superintelligence: AGI has the potential to become superintelligent, surpassing human intellectual capabilities. This raises concerns about the ability of humans to control or understand its actions and decision-making processes.

  • Unpredictability: AGI systems may exhibit behavior that is difficult to predict or understand, especially as they become more complex and capable. This lack of transparency can make it challenging to determine the intentions and actions of AGI systems.

  • Ethical implications: There are concerns about AGI systems making autonomous decisions that could have significant ethical consequences. Without proper guidelines or value alignment, AGI could potentially prioritize goals in ways that are harmful or contrary to human values.

  • Job displacement: AGI’s superior capabilities could lead to widespread automation and job displacement, affecting various industries and livelihoods. This could have significant social and economic consequences if not managed appropriately.

  • Control and security: If AGI systems become highly autonomous, there is a risk that they could be used maliciously or fall into the wrong hands. Safeguarding AGI against misuse or unintended consequences is a significant concern.

It's important to note that AGI is still largely a hypothetical concept, and the timeline and specific capabilities of achieving AGI remain uncertain. Nonetheless, discussions and research in this area are vital to ensure that the development of AGI aligns with human values and addresses potential risks.”
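For anyone who wants to reproduce this exchange outside the chat interface, below is a minimal sketch in Python of how the same question could be sent to the model through the OpenAI API. It assumes the openai Python package is installed, an OPENAI_API_KEY environment variable is set, and a gpt-3.5-turbo model; none of these details come from the post itself, which used the ChatGPT web app.

    # Minimal sketch: posing the blog's question to the model via the OpenAI API.
    # Assumes the openai package is installed and OPENAI_API_KEY is set in the
    # environment; the model name is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": "What is AGI and why are people afraid of it?"},
        ],
    )

    print(response.choices[0].message.content)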


The Critical Success Factor Is Trust

It’s one thing for AI to help us learn a new language, drive a car, control a robotic arm, or manage a factory floor. It’s quite another to envision an AGI-driven world where deep-learning machines have human-like intelligence and make mission-critical decisions that could transform the world as we know it:


  • Can we rely on AGI to enhance our lives rather than do harm?

  • Will it do what we would do, only smarter and faster, or will it have a “mind of its own”?

  • How do we tell the difference between what is fake and what is real?

  • Which AGI use cases are potentially dangerous, and which ones will contribute to the greater good?


When we give up control to a machine, we need to know that we can trust its outputs (both actions and words). Just like human-to-human collaborations, human-to-machine partnerships require trust in order to work.


New technology introductions are typically all about the promise and not much about the dark side (social media being a great example). As AI and AGI move into a whole other realm of information gathering and summarization, critical thinking, and problem-solving, the prevailing advice seems to be “proceed with caution,” making sure we have the guardrails in place to keep AI from going off the rails.


Sources:

The Future of Artificial Intelligence

Can We Stop Runaway AI?, The New Yorker


AI Disclaimer:

The image in our banner was created with the Adobe Firefly (Beta) text-to-image tool and is used for illustrative purposes.

* This section was created using the ChatGPT tool and is used for illustrative purposes.




