
SummAct: Uncovering User Intentions Through Interactive Behaviour Summarisation

Guanhua Zhang,   Mohamed Ahmed,   Zhiming Hu,   Andreas Bulling
ACM CHI 2025, Yokohama, Japan
[Project] [Paper]

Directory Structure

SummAct
│   README.md
│   environment.yml
│
├───preprocess
│       convert_dataset.py
│       create_steps.py
│
├───hf_bmt
│       hf_2_bmtrain.py
│       hf_2_bmtrain.sh
│       bmt_hf.py
│
├───train
│       train.py
│       train.sh
│
└───inference
        inference.py
        inference.sh

Setup

We recommend setting up a virtual environment using Anaconda.

  1. Create a conda environment and install dependencies
    conda env create --name summact --file=environment.yml
    conda activate summact
    
  2. Since model_center==1.0.3 is required but not yet available on PyPI, build it from source (a quick import check is shown after this list)
    git clone https://github.com/OpenBMB/ModelCenter.git
    cd ModelCenter
    pip install -r requirements.txt
    python3 setup.py install
    
  3. Clone our repository to download our code and a pretrained model
    git clone this_repo.git
    
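To verify the ModelCenter build from step 2, a minimal check (assuming the package installs under the import name model_center, as the pinned requirement suggests) is to import it inside the activated environment:

    python -c "import model_center"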

Preprocessing

  1. Convert actions from symbolic formats to natural language by running preprocess/convert_dataset.py. Adapt it to your local dataset paths.
  2. Prompt the pretrained LLM with examples to generate sub-intentions by running preprocess/create_steps.py. Adapt it to your local prompt txt path (see the sketch after this list).
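As a sketch (assuming both scripts read their paths from the edited files and take no command-line arguments), the preprocessing stage can be run as:

    conda activate summact
    python preprocess/convert_dataset.py   # symbolic actions -> natural language
    python preprocess/create_steps.py      # prompt the LLM to generate sub-intentions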

Fine-tuning

  1. After downloading the model from Hugging Face, convert it into model_center weights using hf_bmt/hf_2_bmtrain.sh. Adapt the script to the local path of the downloaded model and the desired output directory.
  2. Run train/train.sh, which calls train/train.py to fine-tune the model for interactive behaviour summarisation. Make sure your machine has GPUs available (see the sketch after this list).
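As a sketch (assuming the paths inside both scripts have already been adapted), the two steps can be launched as:

    bash hf_bmt/hf_2_bmtrain.sh   # convert Hugging Face weights to model_center format
    bash train/train.sh           # fine-tune for interactive behaviour summarisation (needs GPUs)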

Inference

Run inference/inference.sh, which calls inference/inference.py to convert the fine-tuned model back to the Hugging Face format and then computes metrics to evaluate the summarisation quality.
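For example (assuming the paths inside the script point to your fine-tuned checkpoint):

    bash inference/inference.sh   # convert back to HF format and compute evaluation metrics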

Citation

If you find our code useful or use it in your own projects, please cite our paper:

@inproceedings{zhang25_chi,
  title = {SummAct: Uncovering User Intentions Through Interactive Behaviour Summarisation},
  author = {Zhang, Guanhua and Ahmed, Mohamed and Hu, Zhiming and Bulling, Andreas},
  year = {2025},
  pages = {1--17},
  booktitle = {Proc. ACM SIGCHI Conference on Human Factors in Computing Systems (CHI)},
  doi = {10.1145/3706598.3713190}
}

Acknowledgements

Our work builds on the codebases of Mind2Web, ScreenAgent, and Tell Me More!. We thank the authors for sharing their code.