HKU Data Repository

Supporting data for "Toward Self-Improving and Sustainable Large Language Models"

dataset
posted on 2025-10-08, 00:54 authored by Qintong Li
<p dir="ltr">The GSM-Plus Benchmark is distributed in .jsonl format, with one JSON object per line for easy reading and processing. Each record includes the "question" (the adversarial question), the "solution" (the reasoning chain), and the "answer" (the gold answer). It also specifies the "perturbation_type" applied, along with the "seed_question" from which the adversarial question was generated and the corresponding "seed_solution" and "seed_answer".</p>
<p dir="ltr">The large language model evaluation dataset (CoEval) is provided in .json format and covers several tasks for assessing model performance: ELI5 for long-form question answering, ROCStories for open-ended story generation, and Self-Instruct for general instruction following. Together, these tasks probe how well models handle different types of queries and evaluation instructions.</p>
<p dir="ltr">Finally, the ReverseGen training dataset, also in .json format, focuses on improving large language model performance. It comprises a warm-up dataset for model initialization and an interactive preference-learning dataset that uses feedback from success and failure signals to iteratively improve model capabilities, supporting a more effective training process in real-world applications.</p>
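<p dir="ltr">As a minimal sketch of working with the GSM-Plus .jsonl file, the snippet below parses one record per line and accesses the fields named above. The file path and the sample record contents are illustrative assumptions; only the field names come from the dataset description.</p>

```python
import json

def load_gsm_plus(lines):
    """Parse GSM-Plus records from an iterable of .jsonl lines.

    Each non-empty line is one JSON object with the fields
    "question", "solution", "answer", "perturbation_type",
    "seed_question", "seed_solution", and "seed_answer".
    """
    return [json.loads(line) for line in lines if line.strip()]

# Hypothetical sample record for illustration only; the real file
# (e.g. gsm_plus.jsonl in the deposit) would be opened and iterated.
sample_line = json.dumps({
    "question": "Janet has 3 apples and buys 4 more. How many does she have?",
    "solution": "3 + 4 = 7",
    "answer": "7",
    "perturbation_type": "numerical substitution",
    "seed_question": "Janet has 3 apples and buys 2 more. How many does she have?",
    "seed_solution": "3 + 2 = 5",
    "seed_answer": "5",
})

records = load_gsm_plus([sample_line])
print(records[0]["perturbation_type"])  # prints "numerical substitution"
```

<p dir="ltr">To process the actual file, replace the sample list with an open file handle, e.g. <code>with open("gsm_plus.jsonl") as f: records = load_gsm_plus(f)</code> (filename assumed).</p>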
