
The Yin and Yang of Artificial Intelligence

Americans Fear a Menacing HAL Takeover While Embracing Beneficial Uses of AI

The debate over which types of artificial intelligence applications can be trusted is ongoing, and with good reason. More than six years ago, an article in Scientific American described the many and varied sources of our mistrust of AI. Subsequent articles in Forbes, Fortune, Time, Wired, and other publications have also explored our AI trust issues. Americans welcome the benefits of certain AI applications while viscerally fearing that an AI-driven computer system could run amok and subjugate humanity.

82% of Americans Want AI to be Regulated

Recent surveys articulate our love-hate relationship with AI. Over three-quarters of Americans (78%) are concerned that artificial intelligence lends itself to being employed for malicious purposes, according to a MITRE-Harris Poll survey on AI trends conducted in November and December 2022. Slightly fewer than half (48%) believe that AI is safe and secure.

The survey finds that most Americans have reservations about the use of AI in applications such as federal government benefits processing, online doctor bots and other healthcare applications, and autonomous unmanned rideshare vehicles. In addition, three-quarters of those surveyed are concerned about deepfakes and other AI-generated content that might be deceptive, malicious, or otherwise not trustworthy.

Among the technology professionals polled as part of this survey, the overwhelming majority (92%) believe that more investment is required by industry in developing AI assurance measures to protect the public (and 70% of polled Americans agree). In emphasizing the need for stringent protections, 91% of technology professionals support government regulation of AI—as do a whopping 82% of Americans in general. These are significant percentages, both among average Americans and technology experts.

The primary message here is that, if the expanding use of artificial intelligence is inevitable (and the genie does appear to be out of that bottle), then we need to leverage the same protections and federal regulatory oversight that have helped bring other innovations into widespread use in a controlled fashion.

AGI and Superintelligence are a Menacing Prospect

In April 2023, Time magazine published an article by Max Tegmark, a professor at the Massachusetts Institute of Technology (MIT) who conducts research into artificial intelligence. In his words, “Many companies are working to build AGI (artificial general intelligence), defined as ‘AI that can learn and perform most intellectual tasks that human beings can, including AI development.’”

The statement seems benign enough. However, he claims that AGI could rapidly lead to Superintelligence, defined as “general intelligence far beyond human level.” This concept seems not so benign, and more than a little scary for anyone who remembers HAL, the menacing, out-of-control AGI computer in 2001: A Space Odyssey.

“I’m often told that AGI and Superintelligence won’t happen because it’s impossible: human-level intelligence is something mysterious that can only exist in brains,” writes the professor.

Referring to this view as “carbon chauvinism,” Tegmark says it ignores a central lesson from the AI revolution. Specifically, that “intelligence is all about information processing, and it doesn’t matter whether the information is processed by carbon atoms in brains or by silicon atoms in computers.”

“AI has been relentlessly overtaking humans on task after task,” he adds.

Time for a Time-Out?

Tegmark is part of a growing community of researchers studying AI safety, working to ensure that Superintelligence remains aligned with the continued flourishing of humankind, or is at least controllable by humans, so that a very real HAL never emerges.

“So far, we’ve failed to develop a trustworthy plan,” writes Tegmark, “and the power of AI is growing faster than regulations, strategies and know-how for aligning it. We need more time.”

This urgency has led the research community and others to push for a pause in AI development, to allow time to fully understand the technology so that it can be deployed and managed more effectively and safely. Elon Musk, one of the brightest innovators of our age, is among those who have recently signed an open letter urging a pause in the development of the most advanced AI systems (e.g., AGI and Superintelligence) due to their “profound risks to society and humanity.”

Clearly, concerns about the risks and potential threats related to the expanding uses of AI are widespread and credible, and looking down the road at AI-on-Steroids only serves to deepen those fears.

No Confidence that Business and Government Will Use AI Wisely

An article in Fortune in February 2022 bore the title “Society Won’t Trust AI Until Business Earns That Trust.” Citing the failure of contact tracing in the U.S. during the pandemic as an example, the authors observe that “although the world’s digital giants developed [those applications] responsibly, and the technology works as it is meant to, the contact-tracing apps didn’t catch on because society wasn’t convinced that the benefits of using them were greater than the costs, even in pandemic times.” The rewards didn’t outweigh the risks.

The authors conclude that “People don’t trust companies and governments to collect, store, and analyze personal data, especially about their health and movements. You don’t need a data-driven algorithm to conclude that AI generates as much fear as it does hope today. Most people, individually and collectively, are still worried about how business will use the technology.”

A survey by the Pew Research Center in December 2022 focused on patients’ opinions about uses of AI in the healthcare industry. The majority of adults (60%) “would not feel comfortable if their healthcare provider relied on AI for their medical care” in diagnosing disease or recommending treatments. And three-quarters of Americans (75%) were concerned that healthcare may be moving too quickly to expand AI use, without fully understanding the risks.

This lack of trust in our business leaders and governing entities is an obstacle that either (1) will have to be overcome in order to expand uses of AI, or (2) will be accepted for what it is—a populist throttle on uncontrolled AI development and the imposition of advanced AI applications.

Easy to Endorse: Using Artificial Intelligence to Improve Cybersecurity

Are there fears concerning advanced AI such as AGI and Superintelligence? Yes, and rightly so. Are we smart to be concerned about moving too quickly to introduce AI into every part of our lives? Absolutely.

However, artificial intelligence is being used thoughtfully and effectively in certain areas, and we would be foolish to summarily reject all potential AI applications. One example on the plus side of the equation has to do with using AI to improve cybersecurity and, in doing so, build digital trust.

Preventing the Loss of Trust. Research shows that organizations lose a degree of customer trust after experiencing security incidents, hacks, data breaches, and ransomware events. They suffer damage to their brands and reputations. They lose customers (and revenue) and often have to redouble their efforts to win new customers.

Artificial intelligence can be used beneficially to help organizations better protect their data, intellectual property, and other digital assets and thereby reduce the potential for security breaches.

Uses of AI in Cybersecurity Today

Despite our best security defenses, data breaches and other security incidents are a fact of life. Being able to process and analyze data faster and more accurately is an advantage for any organization.

Corporate IT and security staff are often overwhelmed by the sheer volume of data collected by their various security tools. Firewalls, intrusion detection devices, and servers, along with end-user software, network scans, and vulnerability tests, provide enormous volumes of data for analysis and potential action.

But all too often, information security professionals lack adequate means of separating noteworthy or actionable network events from distracting background noise. This constant barrage of data, and the pressure to do something with it, makes it easy to overlook event alerts and to improperly prioritize events that require attention. And that can lead to security incidents.

Artificial intelligence has demonstrated the ability to address these challenges and similar obstacles. Using AI, computers can process volumes of data in ways humans cannot—quickly synthesizing information, recognizing patterns, and making judgments after analyzing reams of data that would be impossible for a human being to process quickly, if at all. As a result, AI empowers cybersecurity professionals to see things more clearly and in closer to real time, and to act quickly and effectively.
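To make this concrete, below is a minimal sketch of the kind of anomaly scoring that underpins AI-assisted alert triage, written in Python with scikit-learn’s IsolationForest. The event features and values are hypothetical illustrations and do not represent any particular security product or vendor method.

```python
# A minimal sketch of AI-assisted alert triage (not any specific vendor's
# product): an unsupervised anomaly detector scores network events so the
# most unusual ones rise to the top of an analyst's queue.
# All feature names and values below are hypothetical illustrations.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one network event:
# [bytes_sent, bytes_received, duration_sec, distinct_ports_touched]
baseline_events = np.array([
    [1_200, 8_000, 2.1, 1],
    [900,   7_500, 1.8, 1],
    [1_500, 9_200, 2.5, 2],
    [1_100, 8_300, 2.0, 1],
])

new_events = np.array([
    [1_300,  8_100,  2.2,  1],   # resembles normal traffic
    [95_000, 1_200, 48.0, 40],   # large upload, many ports: suspicious
])

# Train on traffic assumed to be mostly benign, then score new events.
model = IsolationForest(random_state=42)
model.fit(baseline_events)

# score_samples returns higher values for normal points;
# lower (more negative) scores mean more anomalous.
scores = model.score_samples(new_events)
for score, event in sorted(zip(scores, new_events.tolist())):
    print(f"anomaly score {score:.3f}  event {event}")
```

Ranked this way, an analyst reviews the strangest events first instead of wading through every alert in arrival order.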

AI can be employed to fine-tune intrusion detection capabilities, accurately correlate volumes of disparate information, monitor vulnerable workflows, and discover data breaches faster, as just a few examples of practical uses of artificial intelligence in enhancing cybersecurity. Every improvement in cybersecurity translates to reduced opportunities for data breaches, which in turn translates to ongoing customer confidence and trust.
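As one illustration of the “correlate volumes of disparate information” example, the sketch below shows the basic grouping step such a system performs, in plain Python: events from different tools are clustered by source address within a short time window, surfacing related activity that no single tool would flag on its own. All log records, tool names, and field names are invented for illustration.

```python
# A sketch of cross-tool event correlation, using invented log records:
# cluster events by source IP within a time window, then flag clusters
# in which more than one security tool fired for the same source.
from collections import defaultdict
from datetime import datetime, timedelta

events = [
    {"time": datetime(2023, 5, 1, 3, 14, 0),  "src": "10.0.0.7", "tool": "firewall", "msg": "port scan"},
    {"time": datetime(2023, 5, 1, 3, 14, 30), "src": "10.0.0.7", "tool": "ids",      "msg": "exploit signature"},
    {"time": datetime(2023, 5, 1, 3, 15, 10), "src": "10.0.0.7", "tool": "auth",     "msg": "failed admin login"},
    {"time": datetime(2023, 5, 1, 9, 0, 0),   "src": "10.0.0.9", "tool": "auth",     "msg": "failed login"},
]

WINDOW = timedelta(minutes=5)

# Group events by source address, in time order.
by_src = defaultdict(list)
for e in sorted(events, key=lambda e: e["time"]):
    by_src[e["src"]].append(e)

for src, evs in by_src.items():
    # Split each source's events into clusters separated by gaps > WINDOW.
    clusters, cluster = [], [evs[0]]
    for e in evs[1:]:
        if e["time"] - cluster[-1]["time"] <= WINDOW:
            cluster.append(e)
        else:
            clusters.append(cluster)
            cluster = [e]
    clusters.append(cluster)

    for cluster in clusters:
        tools = {e["tool"] for e in cluster}
        if len(tools) >= 2:  # multiple tools fired for the same source
            summary = "; ".join(f"{e['tool']}: {e['msg']}" for e in cluster)
            print(f"correlated incident from {src}: {summary}")
```

A production system would layer machine learning on top of this plumbing to score and rank the resulting clusters; the sketch shows only the deterministic correlation step.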

Watch Recorded Webinar to Learn More

There’s so much more to know about the uses of AI and the many benefits it can bring to cybersecurity, including building trust and confidence among customers, patients, employees, investors, and other stakeholders. To learn more, including actionable insights, watch this excellent recorded webinar on Building Digital Trust With AI.

Talk with a PRO about using AI to Improve Your Cybersecurity

Sanjay Deo

Sanjay Deo is the President and Founder of 24by7Security Inc. Sanjay holds a Master's degree in Computer Science from Texas A&M University, and is a Certified Information Systems Security Professional (CISSP), Healthcare Information Security and Privacy Practitioner (HCISPP), Certified Information Systems Auditor (CISA) and PCI Qualified Security Assessor (QSA). Sanjay is also a co-chair on the CISO council and Technology Sector Chief at FBI InfraGard South Florida Chapter. In 2022 Sanjay was honored with a Lifetime Achievement Award from the President of the United States. Subscribe to the 24by7Security blog to learn more from Sanjay.
