Leveraging Large Language Models for Firm-Intelligence: A RAG Framework Approach
(2024) STAN40 20231, Department of Statistics
- Abstract
- In the wake of OpenAI's release of ChatGPT in November 2022, powered by the 175 billion parameter neural network GPT-3, the potential applications of Large Language Models (LLMs) in various sectors have become evident. One such application lies in hedge funds and trading desks where knowledge sharing is paramount. These entities often possess a wealth of firm-specific knowledge that spans different research areas and personnel expertise. Leveraging LLMs on this knowledge is challenging due to its proprietary nature, the immense data and computational demands of training LLMs, and the inherent limitations of LLMs, such as the tendency to fabricate facts. The Retrieval Augmented Generation (RAG) framework, which has recently gained traction (Shi et al., 2023), presents a solution. This thesis explores the potential of creating a firm intelligence unit using the RAG framework, leveraging research reports from Lund University Finance Society's Trading & Quantitative Research (TQR) department as a representative dataset. The envisioned AI Assistant aims to answer questions based on the TQR reports, admit ignorance when necessary, and provide detailed answer sources. This study provides insights into the theory behind LLMs and the implementation of the RAG framework and offers a comprehensive evaluation, discussing results, limitations, and future prospects for firm intelligence units.
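The behaviour the abstract describes — retrieve relevant passages, answer with cited sources, and admit ignorance when nothing relevant is found — can be sketched in a few lines. This is a minimal illustration only: the toy corpus, the bag-of-words overlap retriever, and the templated answer are all assumptions standing in for the thesis's embedding index and LLM generation step.

```python
# Minimal RAG-style sketch: retrieve, then generate a grounded answer.
# REPORTS, retrieve(), and answer() are hypothetical stand-ins, not the
# thesis's actual implementation.
from collections import Counter

REPORTS = {  # hypothetical stand-ins for firm research reports
    "tqr_momentum.txt": "Momentum strategies rank assets by trailing returns.",
    "tqr_pairs.txt": "Pairs trading exploits mean reversion in cointegrated spreads.",
}

def retrieve(query: str, k: int = 1):
    """Score each report by word overlap with the query (a crude proxy
    for embedding similarity) and return the top-k (source, text) pairs."""
    q = Counter(query.lower().split())
    scored = [
        (sum((q & Counter(text.lower().split())).values()), src, text)
        for src, text in REPORTS.items()
    ]
    scored.sort(reverse=True)
    return [(src, text) for score, src, text in scored[:k] if score > 0]

def answer(query: str) -> str:
    """Answer from retrieved passages with a source citation, or admit
    ignorance when no relevant passage is found."""
    hits = retrieve(query)
    if not hits:
        return "I don't know based on the available reports."
    src, text = hits[0]
    return f"{text} (source: {src})"

print(answer("How do momentum strategies work?"))
print(answer("What is the weather today?"))
```

In a full system, the overlap scorer would be replaced by dense-vector retrieval and the answer template by an LLM prompt that conditions on the retrieved passages; the retrieve-then-generate control flow stays the same.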
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9144767
- author
- Wölner-Hanssen, Niclas LU
- supervisor
- Jonas Wallin LU
- organization
- course
- STAN40 20231
- year
- 2024
- type
- H1 - Master's Degree (One Year)
- subject
- keywords
- Artificial Intelligence, Large Language Models, Retrieval Augmented Generation, Retrieval Augmented Generation Assessment, Contrastive Learning
- language
- English
- id
- 9144767
- date added to LUP
- 2024-01-25 08:01:20
- date last changed
- 2024-01-25 08:01:20
@misc{9144767,
  abstract = {{In the wake of OpenAI's release of ChatGPT in November 2022, powered by the 175 billion parameter neural network GPT-3, the potential applications of Large Language Models (LLMs) in various sectors have become evident. One such application lies in hedge funds and trading desks where knowledge sharing is paramount. These entities often possess a wealth of firm-specific knowledge that spans different research areas and personnel expertise. Leveraging LLMs on this knowledge is challenging due to its proprietary nature, the immense data and computational demands of training LLMs, and the inherent limitations of LLMs, such as the tendency to fabricate facts. The Retrieval Augmented Generation (RAG) framework, which has recently gained traction (Shi et al., 2023), presents a solution. This thesis explores the potential of creating a firm intelligence unit using the RAG framework, leveraging research reports from Lund University Finance Society's Trading & Quantitative Research (TQR) department as a representative dataset. The envisioned AI Assistant aims to answer questions based on the TQR reports, admit ignorance when necessary, and provide detailed answer sources. This study provides insights into the theory behind LLMs and the implementation of the RAG framework and offers a comprehensive evaluation, discussing results, limitations, and future prospects for firm intelligence units.}},
  author = {{Wölner-Hanssen, Niclas}},
  language = {{eng}},
  note = {{Student Paper}},
  title = {{Leveraging Large Language Models for Firm-Intelligence: A RAG Framework Approach}},
  year = {{2024}},
}