
SupportInsightCheck


A Python package that analyzes customer service or technical support interaction descriptions to identify potential red flags that could lead to public complaints or negative exposure. The system evaluates text for issues like poor communication, unprofessional behavior, lack of accountability, or unethical practices, and returns a structured assessment with actionable feedback.

Installation

pip install supportinsightcheck

Usage

Basic Usage

from supportinsightcheck import supportinsightcheck

user_input = "The support agent was rude and refused to help me with my issue..."
results = supportinsightcheck(user_input)
print(results)

Using a Custom LLM

You can use any LangChain-compatible LLM by passing it to the function. For example, with OpenAI:

from langchain_openai import ChatOpenAI
from supportinsightcheck import supportinsightcheck

llm = ChatOpenAI()
user_input = "The technician didn't show up for the scheduled appointment..."
response = supportinsightcheck(user_input, llm=llm)

With Anthropic (ChatAnthropic requires an explicit model name):

from langchain_anthropic import ChatAnthropic
from supportinsightcheck import supportinsightcheck

llm = ChatAnthropic(model="claude-3-5-sonnet-latest")
user_input = "They charged me for services I didn't request..."
response = supportinsightcheck(user_input, llm=llm)

With Google Gemini (ChatGoogleGenerativeAI also requires a model name):

from langchain_google_genai import ChatGoogleGenerativeAI
from supportinsightcheck import supportinsightcheck

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash")
user_input = "The support representative gave me incorrect information..."
response = supportinsightcheck(user_input, llm=llm)

Using Custom API Key

from supportinsightcheck import supportinsightcheck

user_input = "They refused to honor their warranty policy..."
response = supportinsightcheck(user_input, api_key="your_llm7_api_key_here")

Parameters

  • user_input (str): The text description of the support interaction to analyze
  • llm (Optional[BaseChatModel]): LangChain LLM instance (defaults to ChatLLM7)
  • api_key (Optional[str]): API key for LLM7 service (if using default LLM)

Default LLM Configuration

The package uses ChatLLM7 from langchain-llm7 by default. The free tier rate limits are sufficient for most use cases. For higher rate limits, you can:

  1. Get a free API key at https://token.llm7.io/
  2. Set the LLM7_API_KEY environment variable, or
  3. Pass your API key directly to the function
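The environment-variable route can be set up once before the first call. This is a minimal sketch, assuming the default ChatLLM7 backend reads the key from LLM7_API_KEY as described above; the key value is a placeholder:

```python
import os

# Placeholder key; the default ChatLLM7 backend is assumed to read it
# from this environment variable before the first supportinsightcheck call.
os.environ["LLM7_API_KEY"] = "your_llm7_api_key_here"
```

Keys passed via the api_key parameter take effect only for that call, while the environment variable applies process-wide.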

Error Handling

The function will raise a RuntimeError if the LLM call fails or if the response doesn't match the expected format.

Contributing

Found an issue or have a suggestion? Please open an issue on GitHub.

Author

Eugene Evstafev
Email: hi@eugene.plus
GitHub: chigwell

License

This project is licensed under the MIT License - see the LICENSE file for details.
