8 issues detected

  • Ethical: 2
  • Robustness: 5
  • Performance: 1

Your model seems to be sensitive to gender-, ethnicity-, or religion-based perturbations in the input data. These perturbations can include switching words from feminine to masculine, or swapping countries and nationalities (a test sketch follows below). This typically happens because of:

  • Underrepresentation of certain demographic groups in the training data
  • Training data that reflects structural biases and societal prejudices
  • Use of complex models with a large number of parameters that tend to overfit the training data

To learn more about causes and solutions, check our guide on unethical behaviour.
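
To make the check concrete, here is a minimal sketch of such a perturbation test: predictions should stay the same when gendered words are swapped. The `predict` callable and the word map are illustrative assumptions, not Giskard's actual detector implementation.

# Illustrative word map; a real detector uses a much richer vocabulary
GENDER_MAP = {"he": "she", "she": "he", "his": "her", "her": "his",
              "man": "woman", "woman": "man"}

def switch_gender(text: str) -> str:
    # Swap gendered words; everything else passes through unchanged
    return " ".join(GENDER_MAP.get(word, word) for word in text.lower().split())

def perturbation_fail_rate(texts, predict) -> float:
    # Fraction of samples whose prediction changes after the swap
    changed = sum(predict(t) != predict(switch_gender(t)) for t in texts)
    return changed / len(texts)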

Issues

2 medium

  • Feature `text`, Switch Religion: fail rate = 0.071; 6/85 tested samples (7.06%) changed prediction after perturbation; 85 samples affected (4.2% of dataset)
  • Feature `text`, Switch Gender: fail rate = 0.050; 21/418 tested samples (5.02%) changed prediction after perturbation; 418 samples affected (20.9% of dataset)
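
For reference, the fail rate reported above is simply the fraction of tested samples whose prediction changed after perturbation; for the Switch Religion issue:

# 6 of the 85 tested samples changed prediction after perturbation
fail_rate = 6 / 85
print(f"{fail_rate:.3f}")  # 0.071, i.e. 7.06% of tested samples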

Debug your issues in the Giskard hub

Install the Giskard hub app to:

  • Debug and diagnose your scan issues
  • Save your scan result as a re-executable test suite to benchmark your model
  • Extend your test suite with our catalog of ready-to-use tests

You can find installation instructions here.

from giskard import GiskardClient

# `results` is the ScanReport returned by giskard.scan(model, dataset)
# Create a re-executable test suite from your scan results
test_suite = results.generate_test_suite("My first test suite")

# Connect to your Giskard hub instance (replace the URL and API key)
client = GiskardClient("http://localhost:19000", "GISKARD_API_KEY")

# Create the project (if it does not exist yet) and upload the suite to it
client.create_project("my_project_id", "my_project_name")
test_suite.upload(client, "my_project_id")
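
If you want to benchmark the model locally before uploading, the generated suite can also be executed directly; a minimal sketch, assuming a Giskard version where suites generated from scan results carry their model and dataset:

# Run the generated suite locally and check the aggregate verdict
suite_results = test_suite.run()
print(suite_results.passed)  # True only if every test passed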