maud_accuracy_of_target_capitalization_rw_(outstanding_shares)_bringdown_standard_answer
- Task Description: This is a multiple-choice task in which the model must select the answer that best characterizes the accuracy standard that a merger agreement's bring-down provision applies to the target's capitalization (outstanding shares) representations and warranties.
- Task Type: 4-way classification
- Document Type: merger agreement
- Number of Samples: 182
- Input Length Range: 61-675 tokens
- Evaluation Metrics: accuracy (maximize), balanced_accuracy (maximize), f1_macro (maximize), f1_micro (maximize), valid_predictions_ratio (maximize); a scoring sketch follows this list
- Tags: corporate law, interpretation
- Paper: LegalBench: A Collaboratively Built Benchmark for Measuring Legal Reasoning in Large Language Models
- Dataset Download: https://hazyresearch.stanford.edu/legalbench/
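The metrics listed above can be reproduced with standard tooling. The sketch below is a minimal illustration, not the benchmark's official scoring code: it assumes scikit-learn, uses a hypothetical four-label answer set (`LABELS`), and reads valid_predictions_ratio as the fraction of raw model outputs that exactly match an allowed label.

```python
# Minimal scoring sketch for a 4-way classification task like this one.
# Assumptions: scikit-learn is available, LABELS is a placeholder answer set,
# and valid_predictions_ratio = share of raw outputs that match an allowed label.
from sklearn.metrics import accuracy_score, balanced_accuracy_score, f1_score

LABELS = ["A", "B", "C", "D"]  # hypothetical answer categories

def score(raw_outputs, gold):
    """Map raw model outputs to labels and compute the leaderboard metrics."""
    # An output counts as "valid" only if it is exactly one of the allowed labels.
    valid = [o if o in LABELS else None for o in raw_outputs]
    valid_ratio = sum(v is not None for v in valid) / len(valid)

    # Invalid outputs become a guaranteed-wrong placeholder so they still
    # count against accuracy instead of being silently dropped.
    preds = [v if v is not None else "__invalid__" for v in valid]
    return {
        "accuracy": accuracy_score(gold, preds),
        "balanced_accuracy": balanced_accuracy_score(gold, preds),
        "f1_macro": f1_score(gold, preds, average="macro", labels=LABELS, zero_division=0),
        "f1_micro": f1_score(gold, preds, average="micro"),
        "valid_predictions_ratio": valid_ratio,
    }

# Toy usage with three hypothetical examples.
print(score(["A", "B", "not sure"], ["A", "C", "C"]))
```

Under this reading, an unparseable output lowers both valid_predictions_ratio and accuracy, which is consistent with the table below where every submission has a valid_predictions_ratio of 1.000.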
Leaderboard (7 submissions):
| Rank | Model | accuracy | balanced_accuracy | f1_macro | f1_micro | valid_predictions_ratio | Date |
|---|---|---|---|---|---|---|---|
| 1 | claude-3-5-haiku-20241022 | 0.762 | 0.288 | 0.306 | 0.762 | 1.000 | 2025-08-01 |
| 2 | claude-3-haiku-20240307 | 0.680 | 0.217 | 0.217 | 0.680 | 1.000 | 2025-07-28 |
| 3 | gpt-4.1-nano | 0.182 | 0.245 | 0.115 | 0.182 | 1.000 | 2025-07-03 |
| 4 | google/gemma-2-27b-it | 0.127 | 0.265 | 0.065 | 0.127 | 1.000 | 2025-07-24 |
| 5 | meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo | 0.066 | 0.173 | 0.064 | 0.066 | 1.000 | 2025-08-03 |
| 6 | gpt-4o-mini | 0.050 | 0.218 | 0.048 | 0.050 | 1.000 | 2025-07-02 |
| 7 | meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | 0.050 | 0.096 | 0.038 | 0.050 | 1.000 | 2025-07-25 |