A multi-level, intention-aware LLM that uses Theory of Mind (ToM) to anticipate how its decisions and actions will affect human rapport. The model integrates appraisal theory to assess the significance of those actions in the context of the team's goals, then adapts its behavior to integrate individual mental states into coherent, collective views and decisions.
---
- Session-Aware Responses: Maintains user interaction history to generate consistent and context-aware responses.
- Sentiment Analysis: Analyzes the emotional tone of user inputs for better understanding.
- Trust and Rapport Management: Uses ToM to anticipate how responses affect trust and rapport levels.
- Appraisal Theory Integration: Evaluates how responses align with team goals and emotional impact.
- Explainable AI: Provides explanations for how responses align with inferred user intentions.
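To make the sentiment-analysis and appraisal features above concrete, here is a minimal, self-contained sketch of how such a pipeline could look. The lexicon-based scoring and function names below are illustrative assumptions, not the project's actual API; the real backend would delegate sentiment analysis to the LLM.

```python
# Hypothetical sketch of the sentiment -> appraisal steps; the word lists
# are a toy stand-in for the model's real sentiment analysis.
POSITIVE = {"support", "trust", "help", "great", "thanks"}
NEGATIVE = {"conflict", "frustrated", "blocked", "angry", "problem"}

def analyze_sentiment(text: str) -> str:
    """Classify the emotional tone of a user input."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "Positive"
    if score < 0:
        return "Negative"
    return "Neutral"

def appraise(response: str, team_goal: str) -> str:
    """Appraisal step: judge a generated response against the team's goal."""
    if team_goal.lower() in response.lower():
        return f"The response aligns with the team's goal of {team_goal}."
    return "The response is neutral with respect to the team's goal."
```

In the real system the appraisal would weigh inferred intentions and rapport, not just keyword overlap; this sketch only shows where each feature sits in the flow.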
- Python 3.8 or later
- Node.js 14.x or later
- Redis (for session management)
- OpenAI API Key (for GPT-4 integration)
- Python Backend:
- Handles OpenAI API interactions.
- Performs sentiment analysis and response appraisal.
- Stores session history in Redis.
- Node.js/Express Backend:
- Serves as the API gateway and hosts the frontend.
- Communicates with the Python backend.
- Frontend:
- A minimal web interface to interact with the AI assistant.
- Displays session history, sentiments, and appraisals.
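The Python backend's role of storing session history in Redis can be sketched as follows. A plain dict stands in for the Redis connection so the example is self-contained; the class and method names are assumptions, but the JSON-list-per-session-key pattern mirrors how `redis-py`'s `get`/`set` would typically be used here.

```python
import json

class SessionStore:
    """Per-session interaction history, keyed by session ID.

    A dict stands in for a Redis client; swapping in redis.Redis
    would keep the same get/set access pattern.
    """

    def __init__(self):
        self._db = {}  # stand-in for a Redis connection

    def append(self, session_id: str, turn: dict) -> None:
        # Each session maps to a JSON-encoded list of turns.
        history = json.loads(self._db.get(session_id, "[]"))
        history.append(turn)
        self._db[session_id] = json.dumps(history)

    def history(self, session_id: str) -> list:
        return json.loads(self._db.get(session_id, "[]"))
```

Each stored turn would hold the input, output, sentiment, and appraisal fields shown in the example session history below.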
- Clone the repository:
```
git clone https://github.com/your-repo/intent-aware-app.git
cd intent-aware-app
```
- Navigate to the `python-backend` folder:
```
cd project-root/python-backend
```
- Create a virtual environment and activate it:
```
python -m venv venv
source venv/bin/activate  # On Windows, use venv\Scripts\activate
```
- Install dependencies:
```
pip install -r requirements.txt
```
- Start the Redis server (if not already running):
```
redis-server
```
- Start the Python backend:
```
python python_backend.py
```
- Navigate to the `node-app` folder:
```
cd ../node-app
```
- Install Node.js dependencies:
```
npm install
```
- Start the Node.js server:
```
node app.js
```
- Open your browser and navigate to:
```
http://localhost:3000
```
To deploy the Node.js backend and frontend on Vercel and run the Python backend separately:
- Install the Vercel CLI:
```
npm install -g vercel
```
- Log in to Vercel:
```
vercel login
```
- Navigate to the `node-app` folder:
```
cd project-root/node-app
```
- Deploy to Vercel:
```
vercel
```
Follow the prompts to configure the deployment.
- Use a cloud provider (e.g., AWS EC2, Heroku, Google Cloud) to host the Python backend:
  - Install Python and Redis on the server.
  - Upload the `python-backend` folder to the server.
  - Start the Python backend:
```
python python_backend.py
```
  - Ensure the Python backend is accessible via a public URL.
- Edit the `services/pythonService.js` file in the `node-app` folder:
```
const PYTHON_BASE_URL = 'https://your-python-backend-url';
```
- Ensure Redis is running before starting the Python backend.
- Replace `"YOUR_API_KEY"` in `python-backend/python_backend.py` with your actual OpenAI API key.
- For secure deployment, store sensitive keys and configuration in environment variables.
- To enable HTTPS for the Python backend, place it behind a reverse proxy such as NGINX.
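The environment-variable suggestion above could be applied in the Python backend roughly like this; the variable name `OPENAI_API_KEY` is an assumed convention, not taken from the repository.

```python
import os

def load_api_key(env_var: str = "OPENAI_API_KEY") -> str:
    # Read the key from the environment instead of hard-coding it;
    # the variable name here is an assumption, not the repo's actual config.
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"Set {env_var} before starting the backend.")
    return key
```

Failing fast at startup when the key is missing avoids confusing API errors later.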
A user interacts with the assistant to discuss collaboration challenges in a team.
- User Input:
  "How can I better support my teammates in our project?"
- Generated Response:
  "Supporting your teammates starts with understanding their goals and challenges. Consider setting up regular check-ins to foster open communication and trust. Would you like suggestions for structuring these check-ins?"
- Sentiment: Neutral
- Appraisal: "The response aligns with the team's goals of fostering trust and rapport."
- Session History:
```json
[
  {
    "input": "How can I better support my teammates in our project?",
    "output": "Supporting your teammates starts with understanding their goals...",
    "sentiment": "Neutral",
    "appraisal": "The response aligns with the team's goals of fostering trust and rapport."
  }
]
```