Profanity Detection
This method helps safeguard your website or app by detecting offensive or inappropriate language in user inputs. By screening for profanity and other harmful content before it’s made public, you can maintain a positive user environment, protect your brand, and prevent abusive behavior on your platform.
Query Parameters
The text you want to filter.
Sample value: This is a sample text without profanity!
Returns only the score of the text and whether it is safe or not.
Expected values: yes or no.
Lists the detected bad words in an array.
Expected values: yes or no.
The format parameter is used to request the response in a specific format.
Expected values: JSON, XML, or CSV.
For more information, please refer to Response Format.
The mode parameter is used in the development stage to simulate the integration process before releasing it to the production environment.
Expected values: live or test.
For more information, please refer to Development Environment.
The callback parameter lets you receive the response in JSONP format.
Expected values: any valid JavaScript function name, e.g. myFunctionName.
For more information, please refer to JSONP Callback.
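The query parameters above can be combined into a single request URL. The sketch below is illustrative only: the endpoint URL and the parameter names (text, format, mode, callback) are assumptions for demonstration, since the exact spellings are defined by the API reference, not this page.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; substitute the real one from the API reference.
BASE_URL = "https://api.example.com/profanity-detection"

# Hypothetical parameter names, matching the descriptions above.
params = {
    "text": "This is a sample text without profanity!",  # text to filter
    "format": "JSON",              # JSON, XML, or CSV
    "mode": "test",                # "test" while developing, "live" in production
    "callback": "myFunctionName",  # optional: wrap the response as JSONP
}

# Build the full request URL with properly encoded query parameters.
request_url = f"{BASE_URL}?{urlencode(params)}"
print(request_url)
```

With mode set to test, the same request can be exercised during development without affecting production data, and dropping the callback parameter yields a plain (non-JSONP) response.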
This method returns a score for the text you pass.
We classify text into four risk levels. Level 1 contains the highest-risk words and phrases, level 2 contains medium-risk words and phrases, and level 3 contains low-risk words; level 0 indicates safe text.
When you use the API, the response will contain the score of the text you passed, as follows:
riskScore = 0 means that the text is completely safe.
riskScore = 1 means that the text is high-risk.
riskScore = 2 means that the text is medium-risk.
riskScore = 3 means that the text is low-risk.
</riskScore>
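The score mapping above can be turned into a small lookup on the client side. This is a minimal sketch that assumes the parsed JSON response exposes the score under a riskScore field; the actual field name should be confirmed against the API reference.

```python
# Map the documented riskScore values to human-readable labels.
RISK_LABELS = {
    0: "safe",
    1: "high risk",
    2: "medium risk",
    3: "low risk",
}

def classify(response: dict) -> str:
    """Return a label for the riskScore field of a parsed API response.

    Assumes the response is a dict with a "riskScore" key (hypothetical
    field name); unknown or missing scores fall back to "unknown".
    """
    return RISK_LABELS.get(response.get("riskScore"), "unknown")

print(classify({"riskScore": 0}))  # prints "safe"
```

A dictionary lookup with a fallback keeps the client robust if the API ever introduces additional score values.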