
A multitask and multimodal ChatGPT on reasoning and interactivity

Student

LAWRENCE LEROY TZE YAO CHIENG

Supervisor

Chan Chee Seng

Collaborator

Dr. Fan Lixin


This research project investigates the phenomenon of 'hallucination' in Large Language Models (LLMs), with a particular focus on OpenAI's ChatGPT. Hallucination refers to the generation of outputs that lack factual grounding or deviate from the given prompt. The primary objectives of this study are to gain a comprehensive understanding of ChatGPT, explore how hallucination arises in LLMs, and devise an effective solution to mitigate it. The research examines various facets of LLMs, including the latest GPT-4, LLM Augmenter, Reinforcement Learning from Human Feedback (RLHF), black-box hallucination detection, and a probabilistic model of hallucination; it also scrutinizes the token limits of these models. The study identifies a significant gap in current research: the fundamental principles of hallucination remain insufficiently explored, and output-quality checks are computationally demanding. The proposed methodology involves a detailed analysis of hallucination types and the development of techniques to curb hallucination, with a focus on English-language content. The ultimate goal of the project is to propose a solution that not only effectively curbs hallucination but also enables LLMs to process extensive domain-specific knowledge, all while remaining computationally and time efficient.
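To make the black-box detection idea concrete, the sketch below shows one common consistency-based approach: sample several independent responses to the same prompt and flag an answer that disagrees with them as a possible hallucination. This is an illustrative sketch only, not the project's actual method; the function names, the word-level Jaccard similarity measure, and the 0.5 threshold are all assumptions chosen for simplicity.

```python
# Minimal sketch of black-box hallucination detection via sampling
# consistency. Assumes the caller has already obtained `samples`, a list
# of independently generated model responses to the same prompt.

def jaccard_similarity(a: str, b: str) -> float:
    """Word-level Jaccard similarity between two responses (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

def consistency_score(answer: str, samples: list[str]) -> float:
    """Average similarity of `answer` to the sampled responses.
    A low score means the answer is not something the model produces
    consistently, which is a (weak) signal of hallucination."""
    if not samples:
        return 0.0
    return sum(jaccard_similarity(answer, s) for s in samples) / len(samples)

def is_likely_hallucination(answer: str, samples: list[str],
                            threshold: float = 0.5) -> bool:
    """Flag the answer when its consistency falls below the threshold."""
    return consistency_score(answer, samples) < threshold

# Usage with hand-written stand-ins for sampled model outputs:
samples = [
    "Paris is the capital of France",
    "The capital of France is Paris",
    "France's capital city is Paris",
]
print(is_likely_hallucination("Paris is the capital of France", samples))
print(is_likely_hallucination("Napoleon invented the telephone in 1820", samples))
```

In a real pipeline the similarity measure would typically be replaced by an entailment model or an LLM-based judge, since word overlap cannot distinguish paraphrases from contradictions; the sampling-and-compare structure, however, stays the same.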