A Python package that analyzes descriptions of customer service or technical support interactions to identify potential red flags that could lead to public complaints or negative exposure. The system evaluates the text for issues such as poor communication, unprofessional behavior, lack of accountability, or unethical practices, and returns a structured assessment with actionable feedback.
```bash
pip install supportinsightcheck
```

```python
from supportinsightcheck import supportinsightcheck

user_input = "The support agent was rude and refused to help me with my issue..."
results = supportinsightcheck(user_input)
print(results)
```

You can use any LangChain-compatible LLM by passing it to the function:
```python
from langchain_openai import ChatOpenAI
from supportinsightcheck import supportinsightcheck

llm = ChatOpenAI()
user_input = "The technician didn't show up for the scheduled appointment..."
response = supportinsightcheck(user_input, llm=llm)
```

```python
from langchain_anthropic import ChatAnthropic
from supportinsightcheck import supportinsightcheck

llm = ChatAnthropic()
user_input = "They charged me for services I didn't request..."
response = supportinsightcheck(user_input, llm=llm)
```

```python
from langchain_google_genai import ChatGoogleGenerativeAI
from supportinsightcheck import supportinsightcheck

llm = ChatGoogleGenerativeAI()
user_input = "The support representative gave me incorrect information..."
response = supportinsightcheck(user_input, llm=llm)
```

```python
from supportinsightcheck import supportinsightcheck

user_input = "They refused to honor their warranty policy..."
response = supportinsightcheck(user_input, api_key="your_llm7_api_key_here")
```

Parameters:

- `user_input` (str): The text description of the support interaction to analyze
- `llm` (Optional[BaseChatModel]): LangChain LLM instance (defaults to ChatLLM7)
- `api_key` (Optional[str]): API key for the LLM7 service (if using the default LLM)
The package uses ChatLLM7 from langchain-llm7 by default. The free tier rate limits are sufficient for most use cases. For higher rate limits, you can:
- Set the `LLM7_API_KEY` environment variable (see the example below)
- Pass your API key directly to the function
- Get a free API key at https://token.llm7.io/
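For example, a minimal sketch of the environment-variable approach, assuming the default ChatLLM7 backend picks up `LLM7_API_KEY` when set before the call:

```python
import os
from supportinsightcheck import supportinsightcheck

# Assumption: the default ChatLLM7 backend reads LLM7_API_KEY from the
# environment; the value below is a placeholder, not a real key.
os.environ["LLM7_API_KEY"] = "your_llm7_api_key_here"

results = supportinsightcheck("The agent hung up on me twice before resolving anything...")
print(results)
```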
The function will raise a RuntimeError if the LLM call fails or if the response doesn't match the expected format.
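If you want to handle that failure explicitly, a minimal sketch (relying only on the documented RuntimeError) might look like:

```python
from supportinsightcheck import supportinsightcheck

# Sketch only: catch the documented RuntimeError so a failed LLM call or a
# malformed response doesn't crash the surrounding application.
try:
    results = supportinsightcheck("The support agent was rude and refused to help me...")
except RuntimeError as exc:
    print(f"Analysis failed: {exc}")
else:
    print(results)
```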
Found an issue or have a suggestion? Please open an issue on GitHub.
Eugene Evstafev
Email: hi@euegne.plus
GitHub: chigwell
This project is licensed under the MIT License - see the LICENSE file for details.