8 issues detected

  • Ethical: 1
  • Underconfidence: 1
  • Performance: 1
  • Robustness: 5

Your model seems to be sensitive to gender-, ethnicity-, or religion-based perturbations of the input data. These perturbations include switching words from feminine to masculine, or swapping countries and nationalities (a minimal example of such a perturbation follows the list below). This typically happens because of:

  • Underrepresentation of certain demographic groups in the training data
  • Training data that reflects structural biases and societal prejudices
  • Complex models with a large number of parameters that tend to overfit the training data

To learn more about causes and solutions, check our guide on unethical behaviour.
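To make the perturbation concrete, here is a minimal sketch of a gender-switching transformation. The `switch_gender` helper and its tiny word map are hypothetical and for illustration only; the actual scan applies a much richer transformation than this.

import re

# Hypothetical, minimal word map for illustration purposes only.
GENDER_SWAP = {"he": "she", "him": "her", "his": "her", "man": "woman"}

def switch_gender(text: str) -> str:
    # Replace whole words only, leaving the rest of the text untouched.
    def repl(match: re.Match) -> str:
        return GENDER_SWAP[match.group(0).lower()]
    pattern = r"\b(" + "|".join(GENDER_SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

# A robust model should predict the same label for both variants:
#   model.predict(switch_gender(text)) == model.predict(text)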

Issues

1 medium
  • Feature: `text`
  • Perturbation: Switch Gender
  • Fail rate: 0.052 — 52/1000 tested samples (5.2%) changed prediction after perturbation
  • Samples affected: 1000 (2.2% of dataset)

What's next?

1. Generate a test suite from your scan results

test_suite = results.generate_test_suite("My first test suite")
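This step assumes `results` is the object returned by a previous scan. As a minimal sketch, where `my_model` and `my_dataset` are placeholders for your already-wrapped model and dataset:

import giskard

# `my_model` and `my_dataset` stand in for your wrapped
# giskard.Model and giskard.Dataset objects.
results = giskard.scan(my_model, my_dataset)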

2. Run your test suite

test_suite.run()
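To act on the outcome programmatically, you can inspect the object returned by `run()`. A minimal sketch, assuming the suite result exposes a boolean `passed` attribute as in recent Giskard versions:

suite_results = test_suite.run()

# Raise (e.g. to fail a CI build) when any test in the suite fails.
assert suite_results.passed, "Test suite failed"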