LLM-Based Chatbot Application Architecture

_tera_ 2023. 6. 21. 15:14

 

Content and code source: Anatomy of LLM-Based Chatbot Applications: Monolithic vs. Microservice Architectural Patterns

https://towardsdatascience.com/anatomy-of-llm-based-chatbot-applications-monolithic-vs-microservice-architectural-patterns-77796216903e

 

 


In a microservices application, each component is split up into its own smaller, independent service. Image by Author

 

Monolithic architecture

  • an approach that involves building the entire application as a single, self-contained unit.
  • simple and easy to develop but can become complex as the application grows.
  • All application components, including the user interface, business logic, and data storage, are tightly coupled in a monolithic architecture.
  • Any change made to one part of the app can have a ripple effect on the entire application.
  • a great starting point for a Data Scientist to build an initial proof of concept quickly and get it in front of business stakeholders.

In a monolithic application, all the code related to the application is tightly coupled in a single, self-contained unit. Image by Author

 

Microservices architecture

  • a better bet than a monolithic one
  • offers more flexibility and scalability
  • lets different specialized developers focus on building the various components.

 

The post's code comes in two forms (minimal sketches of both follow below this list):

1. Monolithic: a shared utils.py, plus monolith.py, which holds the main function that renders the UI and calls the utility functions

   -> run with: streamlit run monolith.py

2. Microservice: backend.py, which uses FastAPI to create the chatbot object and serve health_check() and llm_response(); frontend.py, which takes the payload, appends it to Streamlit's session_state, and renders the UI through its main() function; plus the shared utils.py

   -> run the backend (check the API at /docs): uvicorn backend:app --reload

   -> run the frontend: streamlit run frontend.py
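
For concreteness, here is a minimal sketch of what the monolithic version could look like. This is an assumed structure, not the article's verbatim code: the model choice (MBZUAI/LaMini-Flan-T5-783M), the generate_response() helper, and the exact UI widgets are my guesses, and the utils.py logic is inlined for brevity.

```python
# monolith.py -- minimal monolithic sketch (assumed structure, not the article's verbatim code).
# UI, state, and the model call all live in one Streamlit process.
import streamlit as st
from transformers import pipeline

@st.cache_resource
def load_chatbot():
    # Hypothetical model choice; substitute whatever utils.py actually loads.
    return pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-783M")

def generate_response(user_input: str) -> str:
    # In the article this kind of helper would live in utils.py; inlined here for brevity.
    chatbot = load_chatbot()
    return chatbot(user_input, max_length=256)[0]["generated_text"]

def main():
    st.title("LLM chatbot (monolith)")
    if "history" not in st.session_state:
        st.session_state.history = []

    user_input = st.text_input("Say something")
    if st.button("Send") and user_input:
        st.session_state.history.append(("user", user_input))
        st.session_state.history.append(("assistant", generate_response(user_input)))

    for role, text in st.session_state.history:
        st.markdown(f"**{role}:** {text}")

if __name__ == "__main__":
    main()
```

And a sketch of the microservice split. Only the file names and the health_check()/llm_response() functions come from the post; the endpoint paths, request/response field names, and model are assumptions.

```python
# backend.py -- minimal FastAPI sketch of the backend service (assumed details).
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()

# Create the chatbot object once when the service starts (hypothetical model).
chatbot = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-783M")

class ChatRequest(BaseModel):
    user_input: str

@app.get("/")
def health_check():
    # Liveness probe; the interactive API docs are served at /docs.
    return {"status": "ok"}

@app.post("/chat")
def llm_response(req: ChatRequest):
    reply = chatbot(req.user_input, max_length=256)[0]["generated_text"]
    return {"response": reply}
```

```python
# frontend.py -- minimal Streamlit sketch of the frontend service (assumed details).
# It posts the user's input to the backend, appends the exchange to
# st.session_state, and renders the chat history.
import requests
import streamlit as st

BACKEND_URL = "http://localhost:8000/chat"  # assumed endpoint from the backend sketch

def get_reply(user_input: str) -> str:
    resp = requests.post(BACKEND_URL, json={"user_input": user_input}, timeout=60)
    resp.raise_for_status()
    return resp.json()["response"]

def main():
    st.title("LLM chatbot (microservice)")
    if "history" not in st.session_state:
        st.session_state.history = []

    user_input = st.text_input("Say something")
    if st.button("Send") and user_input:
        st.session_state.history.append(("user", user_input))
        st.session_state.history.append(("assistant", get_reply(user_input)))

    for role, text in st.session_state.history:
        st.markdown(f"**{role}:** {text}")

if __name__ == "__main__":
    main()
```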

 

 

 

Running the code gives the following result:

Thanks to hallucination, the chatbot claims it went for a walk with its dog even though it doesn't have one.

(request and response screenshots)
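
For reference, the request/response pair in the screenshots corresponds roughly to a call like the one below against the backend sketched above (the endpoint, field names, and reply text here are illustrative, not the actual output):

```python
import requests

# Hypothetical request matching the backend sketch.
payload = {"user_input": "Did you go for a walk today?"}
resp = requests.post("http://localhost:8000/chat", json=payload, timeout=60)
print(resp.json())
# e.g. {"response": "Yes, I took my dog out for a walk this morning."}  <- hallucinated: there is no dog
```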

I figured the time will come when I have to build a backend around a LLaMA model itself, so while looking around I followed along with this article,

and seeing how easily an app can be put together on Streamlit and FastAPI, that time really doesn't feel far off...
