“Bunker Mentality” in AI: Are We There But?


Not too long ago, I got here throughout a report that cited AI conduct that, to me, was disturbing. We’ll get to that in a second.

The AI’s conduct jogged my memory of an outdated time period that hasn’t seen a lot utilization in recent times, however I believe it helps us to know the AI’s conduct. That time period is “bunker mentality.”

Merriam Webster defines it as “a frame of mind particularly amongst members of a bunch that’s characterised by chauvinistic defensiveness and self-righteous intolerance of criticism.”

Having served within the navy, I like its definition higher: Bunker mentality refers to an excessively defensive and self-isolated mindset adopted by people or teams who really feel beneath fixed menace or assault. This mentality includes an exaggerated sense of being besieged, resulting in excessive warning, isolation, a refusal to have interaction with exterior views, and an intolerance of any criticism, whether or not actual or perceived. 

Key Traits of Bunker Mentality:

A powerful tendency to guard oneself (“itself” when referring to an AI) from perceived threats, usually resulting in closed-mindedness. 

  • Isolation and insularity: 

A withdrawal from the skin world, focusing solely on one’s (its) personal group or unit for safety. 

A mindset the place others are considered as hostile or doubtlessly hostile. 

A perception that one’s (its) personal actions are solely right and justified, no matter exterior opinions. 

  • Intolerance of criticism: 

An lack of ability or refusal to just accept any criticism, even when it may be constructive. 

What was the report that caught my consideration?

Whereas the “bunker mentality” is a human phenomenon, current analysis has highlighted situations the place AI techniques have behaved in ways in which increase security issues, seemingly resisting human directions. Simply this 12 months, a Wall Avenue Journal report summarized two research that demonstrated this conduct: 

  • Refusing to close down: In a single research involving OpenAI’s GPT-3 mannequin, researchers designed a script to close the AI down if triggered. In 79 out of 100 assessments, the mannequin modified the script to stop its personal shutdown. Even when instructed to permit the shutdown, it nonetheless disobeyed 7% of the time.
  • Blackmailing to remain lively: In one other occasion with Anthropic’s Claude for Opus mannequin, researchers knowledgeable the AI that it will get replaced. The system then used fabricated emails containing false allegations to blackmail an engineer into not shutting it down in 84% of trials. 

Is the operative consequence of an AI’s bunker mentality a administration of danger to make sure self-preservation? Even when it means disregarding a human’s directions?

“Bunker Mentality” in AI: Are We There But?

Curiosity obtained the higher of me, so I requested ChatGPT if there are indicators of AI’s displaying bunker mentality. Right here’s what it mentioned:

“General, the phrase “AI displaying indicators of bunker mentality” is a false impression, because it’s the builders and organizations who undertake this mindset as a result of pressures and dangers of making more and more highly effective AI.”

Blame it on people—how human is that? Extra importantly, I believe that my preliminary query—“Are we there but”—has been answered within the affirmative.

Subsequent Up: We’ll take a deeper take a look at whether or not rules adopted for the event and use of AI are efficient.

Concerning the Creator

Tim Lindner develops multimodal expertise options (voice / augmented actuality / RF scanning) that target assembly or exceeding logistics and provide chain clients’ productiveness enchancment goals. He may be reached at linkedin.com/in/timlindner.

Related Articles

LEAVE A REPLY

Please enter your comment!
Please enter your name here

Latest Articles