This method helps safeguard your website or app by detecting offensive or inappropriate language in user inputs. By screening for profanity and other harmful content before it’s made public, you can maintain a positive user environment, protect your brand, and prevent abusive behavior on your platform.
This method uses Machine Learning (ML) to analyze text and determine whether it contains profanity. It returns a score for the text you pass, classifying it as safe or risky.
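The request/response flow above can be sketched in Python. This is a minimal, hypothetical sketch: the endpoint URL and the query-parameter names (`text`, `mode`, `format`) are placeholders for illustration, not the service's documented values; only the `riskScore` response field comes from this document.

```python
import json
import urllib.parse

# Placeholder endpoint for illustration only (assumption, not the real URL).
API_URL = "https://api.example.com/profanity-detection"

def build_request_url(text, mode="live", fmt="JSON"):
    """Build the query string for a profanity-detection request.

    The parameter names here are assumed for the sketch.
    """
    params = {"text": text, "mode": mode, "format": fmt}
    return API_URL + "?" + urllib.parse.urlencode(params)

def parse_risk_score(raw_body):
    """Extract the riskScore field from a JSON response body."""
    return json.loads(raw_body)["riskScore"]
```

For example, `build_request_url("This is a sample text without profanity!")` produces the URL you would fetch, and `parse_risk_score` pulls the score out of the JSON reply.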
Accepted parameter values:

- Text to analyze (example): This is a sample text without profanity!
- yes, or no.
- yes, or no.
- JSON, XML, or CSV. For more information please refer to Response Format.
- live, or test. For more information please refer to Development Environment.
- myFunctionName. For more information please refer to JSONP Callback.

The riskScore field in the response classifies the text:

- riskScore = 0 means that this text is completely safe.
- riskScore = 1 means that this is a high-risk text.
- riskScore = 2 means that this is a medium-risk text.
- riskScore = 3 means that this is a low-risk text.
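The riskScore values above map naturally to labels in client code. A small sketch, using only the 0-3 classification documented here (the function and label names are this sketch's own):

```python
# Labels follow the documented riskScore legend: 0 = safe,
# 1 = high-risk, 2 = medium-risk, 3 = low-risk.
RISK_LABELS = {
    0: "safe",
    1: "high-risk",
    2: "medium-risk",
    3: "low-risk",
}

def classify(risk_score):
    """Return a human-readable label for a riskScore value."""
    try:
        return RISK_LABELS[risk_score]
    except KeyError:
        raise ValueError(f"unexpected riskScore: {risk_score}")
```

Note that the scale is not monotonic: 1 is the riskiest value and 3 the least risky of the flagged texts, so clients should branch on the exact value rather than compare magnitudes.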