What is this study about?
Concept Bottleneck Models (CBMs) classify images by
first identifying a set of human-interpretable
concepts (e.g., "has wings", "sandy ground") and then using
these concepts—weighted by their importance—to predict a class label.
In this study you will see the same image classified
by two different CBM systems. Your task is to judge
which model's explanation better justifies its prediction.
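
For those curious about the mechanics behind the explanations you will see, here is a minimal sketch of how a CBM turns concept scores and weights into a prediction. The concept names, scores, and weights below are invented for illustration and are not taken from either of the systems in this study.

```python
import numpy as np

# Hypothetical concept activations for one image (0 = absent, 1 = clearly present).
# These names and numbers are illustrative only.
concept_names = ["has wings", "sandy ground", "has fur"]
concept_scores = np.array([0.92, 0.81, 0.05])

# Per-class weights over concepts (rows = classes, columns = concepts).
# A real CBM learns these weights; the values here are made up.
class_names = ["seagull", "dog"]
class_weights = np.array([
    [ 2.1,  1.4, -1.8],   # "seagull": wings and sandy ground count for, fur counts against
    [-1.5, -0.2,  2.3],   # "dog": fur counts for, wings count against
])

# The class score is a weighted sum of concept activations (real models also add a bias).
logits = class_weights @ concept_scores
best = int(np.argmax(logits))

# An explanation like the ones in this study lists each concept with its
# contribution (weight x activation) toward the predicted class.
contributions = class_weights[best] * concept_scores
for name, c in sorted(zip(concept_names, contributions), key=lambda t: -t[1]):
    print(f"{name}: contribution {c:+.2f}")
print(f"Predicted class: {class_names[best]}")
```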
What to look for
- Concept Relevance: Are the highlighted concepts actually visible in the image and meaningful for the predicted class?
- Concept Importance: Do the weights reflect which concepts should matter most for this prediction?
- Coherence: Do the concepts together form a clear, non-redundant rationale?
- Prediction Support: Does the explanation adequately justify why the model chose that class?
Instructions
- View the image and the two explanations (CBM A and CBM B).
- Select which explanation you find more informative and faithful.
- Optionally, add a comment to explain your choice.
- Click Submit to proceed to the next pair.
You will evaluate 20 image pairs. This should take approximately 10 minutes.