[VWAY Tech Note] AI in Safety Analysis: Why 'Hallucinations' Are Fatal and How RAG Is the Answer

With the dramatic improvement in Large Language Model (LLM) capabilities, attempts to apply AI to safety analysis tasks such as FMEA, FTA, and STPA are increasing. However, engineers soon run into a fatal problem: "plausible lies," or hallucinations.

Today, we will take a deep dive into why hallucinations are dangerous in safety analysis and explain, with academic references, the RAG (Retrieval-Augmented Generation) system VWAY has introduced to solve this issue technically.

Reference Tech: https://developer.nvidia.com/blog/what-is-retrieval-augmented-generation/


1. 'Hallucination' in Safety Analysis: Not Just a Simple Error 

If a general chatbot lies, saying, "Napoleon used a smartphone at the Battle of Waterloo," it is merely a laughable incident. However, in domains dealing with safety standards like ISO 26262 or SOTIF (ISO 21448), such phenomena can be catastrophic.

Why is it fatal?

  • Fabrication of Failure Modes: The AI might identify non-existent failure modes as causes, leading engineers to overlook actual risks.

  • Proposal of Baseless Safety Mechanisms: It may suggest technically unverified or physically impossible safety measures, causing confusion during the design phase.

  • Loss of Traceability: If the source of an answer is unclear, it cannot be used as evidence during future safety audits.

Reference Paper: https://dl.acm.org/doi/10.1145/3571730


2. Is There No Solution? The Limits of Fine-tuning vs. The Rise of RAG

Many ask, "Can't we solve this by fine-tuning the AI with our company data?"

Fine-tuning is effective for teaching the AI a specific "tone" or "format," but it is unreliable for making the model memorize new factual knowledge: the AI can still fabricate information about anything it has not learned. The solution that emerged to address this is RAG (Retrieval-Augmented Generation).


3. What is RAG (Retrieval-Augmented Generation)?

RAG is a technique in which the LLM first retrieves relevant information from a reliable external knowledge base and then generates its answer using that retrieved information.

To put it simply:

  • Traditional LLM: A student taking a test with no materials, relying solely on memory (and making things up when they forget).

  • RAG System: A student taking an 'Open Book Test,' checking textbooks (internal technical docs, standards) right beside them while writing answers.
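The "open book test" flow above can be sketched in a few lines of Python. This is a minimal illustration, not a production retriever: the keyword-overlap scorer stands in for a real embedding-based vector search, and the knowledge-base snippets, the question, and the failure-mode id `FM-12` are all made-up placeholders.

```python
import re

def tokenize(text: str) -> set[str]:
    """Lowercase the text and split it into word tokens, dropping punctuation."""
    return set(re.findall(r"[a-z0-9\-]+", text.lower()))

def retrieve(query: str, knowledge_base: list[str], k: int = 2) -> list[str]:
    """Rank documents by term overlap with the query and return the top-k.
    (A real RAG system would use embeddings and a vector index here.)"""
    q = tokenize(query)
    ranked = sorted(knowledge_base, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble the grounded prompt the LLM sees instead of the bare question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return ("Answer ONLY from the context below.\n"
            f"Context:\n{ctx}\n"
            f"Question: {query}")

# Illustrative knowledge base: the 'textbook' the model is allowed to consult.
knowledge_base = [
    "Failure mode FM-12: loss of wheel speed signal due to sensor open circuit.",
    "ISO 26262 requires that ASIL decomposition be documented and verified.",
    "Tuesday cafeteria menu: bibimbap.",  # irrelevant entry the retriever should rank last
]

question = "Which failure mode covers loss of the wheel speed signal?"
prompt = build_prompt(question, retrieve(question, knowledge_base))
print(prompt)
```

The key design point is that the LLM never answers from memory alone: whatever model sits behind `build_prompt` receives only the retrieved snippets as its source of truth.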


Reference Paper: https://arxiv.org/abs/2005.11401


4. How Does RAG Solve Safety Analysis Problems?

1) Fact Grounding: The AI is constrained to answer only from the supplied context. If the content is not in the document, it is instructed to respond "Information not found," blocking fabrication at the source.

2) Transparent Citation: The AI attaches exact sources to each answer, such as [Ref: Chassis Controller SRS, p.45]. Engineers can cross-check the original text with a single click, making audit responses significantly easier.

3) Up-to-date Information: When standards or designs change, there is no need to retrain the AI. Simply uploading the new document to the knowledge base lets the AI reflect the latest information in its analysis immediately.
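The three guarantees above can be sketched together in one small guardrail: answer only from the supplied context, attach a [Ref: …] citation, otherwise refuse with "Information not found"; keeping knowledge up to date is then just adding an entry. The source names, snippet texts, and thresholds below are illustrative placeholders, not real documents.

```python
def grounded_answer(question_terms: set[str], context: dict[str, str]) -> str:
    """Return the first matching snippet plus its citation tag; if no context
    covers the question, emit the explicit refusal instead of guessing."""
    for source, text in context.items():
        if question_terms & set(text.lower().split()):
            return f"{text} [Ref: {source}]"
    return "Information not found"

# Illustrative knowledge base keyed by citation source.
context = {
    "Chassis Controller SRS p.45": "watchdog resets the ecu after 50 ms timeout",
}

print(grounded_answer({"watchdog"}, context))  # answers with a citation
print(grounded_answer({"airbag"}, context))    # refuses: not in any document

# Up-to-date information: adding a new document is enough, no retraining.
context["Airbag SRS rev.B p.12"] = "airbag deployment threshold is 20 g"
print(grounded_answer({"airbag"}, context))    # now answers, citing the new doc
```

The refusal branch is the whole point: fabrication is blocked not by trusting the model, but by making "Information not found" the only legal output when retrieval comes back empty.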


VWAY goes beyond simple LLM integration. We combine RAG technology with engineering domain knowledge to build a "Hallucination-free, Safe AI Environment." Let VWAY's AI support your safety analysis on verified evidence.



Roh Kyung Hyun
04559, 5F Pyeonggwang Building, 243 Toegye-ro, Jung-gu, Seoul (Chungmuro 5-ga 19-19)
+82-10-8337-9837
631-81-00287
www.vwaycorp.com
vway@vwaycorp.com

© VWAY All rights reserved

