
Overview

While many prediction problems require tabular data, gathering sufficient training data is often challenging due to cost or privacy concerns. Large language models (LLMs) carry considerable world knowledge from pre-training and can improve sample efficiency on these problems. Still, LLMs are often poorly aligned with tabular prediction tasks, both because of biases acquired during pre-training and because they lack information about the task, which hurts their zero- and few-shot performance. Task instructions are ideal for bridging this gap because they give LLMs context about the task. That's where TABLET comes in: TABLET is a living benchmark of tabular datasets annotated with task instructions for evaluating how well LLMs use instructions to improve performance.

TABLET contains 20 diverse classification tasks, each annotated with multiple instructions. We collected these instructions from naturally occurring sources and also generated them with GPT-3; they vary in their phrasing, granularity, and technicality.

The results in our paper demonstrate that instructions substantially improve LLM performance on tabular prediction. Still, LLMs underperform fully supervised models fit on all the training data, and we analyze several shortcomings of LLMs that contribute to this gap. We hope TABLET helps the community develop models that close it!

What can you do with TABLET?

  1. Evaluate. TABLET provides the tools to collate tabular instances into prompts for LLMs. TABLET also makes it easy to evaluate performance across different instructions.
  2. Compare. TABLET enables users to compare performance across LLMs in zero and few-shot settings, or against fully supervised models like XGBoost that have access to all the training data.
  3. Contribute. TABLET provides a simple API for creating new prediction tasks from tabular datasets. TABLET accepts instructions written by users and can also generate instructions for tasks.
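Concretely, collating a tabular instance into a prompt amounts to pairing a task instruction with a serialized feature row and a question about the label. Here is a minimal sketch of that idea; the function and field names are hypothetical illustrations, not TABLET's actual API:

```python
# Hypothetical sketch: turn one tabular row plus a task instruction into a
# text prompt for an LLM. Names here are illustrative, not TABLET's real API.

def make_prompt(instruction, features, label_options):
    """Serialize a task instruction and one feature row into a prompt string."""
    feature_lines = "\n".join(f"- {name}: {value}" for name, value in features.items())
    options = " or ".join(label_options)
    return (
        f"{instruction}\n\n"
        f"Instance:\n{feature_lines}\n\n"
        f"Answer with {options}.\nAnswer:"
    )

# Toy income-prediction instance
prompt = make_prompt(
    instruction="Predict whether the person earns over $50k a year.",
    features={"age": 37, "education": "Bachelors", "hours-per-week": 40},
    label_options=["yes", "no"],
)
print(prompt)
```

Evaluating across instructions then just means swapping the `instruction` string while holding the instances fixed.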

Getting Started

  • Paper: Read our evaluation on TABLET 📝
  • Demo: Explore LLM predictions on TABLET 🕵️
  • Install: Install TABLET 💾
  • Evaluate: Follow a tutorial on how to evaluate an LLM on TABLET 💯
  • Contribute: Follow a tutorial on how to contribute a new task to TABLET ✍️

Tasks

Here are the current tasks in TABLET, each with a short description, its creator, and the number of instructions of each type. Contribute more tasks to TABLET by following the instructions in contribute, and I will add the task along with your name and website as the contributor (if you like).

Citation

@article{tabletSlack23,
  author  = {Dylan Slack and Sameer Singh},
  title   = {TABLET: Learning From Instructions For Tabular Dataset Tasks},
  year    = {2023},
  journal = {arXiv},
}

Authors

Dylan Slack
Sameer Singh