
Privacy-Preserving Multimodal Sentiment Analysis

This project includes the code and some results of our paper. The models in models.py implement a new autoencoding architecture.
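The actual architecture is defined in models.py. As a purely illustrative sketch of the autoencoding idea (not the paper's model; the class name and dimensions below are hypothetical), a per-modality encoder/decoder pair in PyTorch might look like:

import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Toy encoder/decoder pair for a single modality's feature vector."""
    def __init__(self, in_dim, hidden_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)        # latent representation used downstream
        x_hat = self.decoder(z)    # reconstruction for the autoencoding loss
        return z, x_hat

# reconstruction objective on a random batch of 300-d features
model = ModalityAutoencoder(in_dim=300)
x = torch.randn(8, 300)
z, x_hat = model(x)
loss = nn.functional.mse_loss(x_hat, x)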

Setup the environment

We work with a conda environment.

conda env create -f environment.yml
conda activate DPCRL

Data Download

  • Install the CMU Multimodal SDK and make sure that from mmsdk import mmdatasdk works (see the sketch after this list).
  • Option 1: Download the pre-computed splits and place the contents inside the datasets folder.
  • Option 2: Re-create the splits by downloading the data through the MMSDK. For this, simply run the training code as described in the next section.
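A minimal sketch of the import check and of Option 2's download step, assuming the SDK's standard dataset recipes (mmdatasdk.cmu_mosi ships with the SDK; the output directory name is arbitrary):

# verify that the CMU Multimodal SDK is importable
from mmsdk import mmdatasdk

# optionally fetch the raw CMU-MOSI feature files yourself; recipe
# dictionaries such as mmdatasdk.cmu_mosi.highlevel ship with the SDK
recipe = mmdatasdk.cmu_mosi.highlevel
dataset = mmdatasdk.mmdataset(recipe, 'cmumosi/')  # downloads the .csd files into cmumosi/
print(list(dataset.computational_sequences.keys()))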

Running the code

  1. Set word_emb_path in config.py to the path of the GloVe embedding file (see the sketch after this list).
  2. Set sdk_dir in config.py to the path of the CMU-MultimodalSDK directory.
  3. Run python train.py --data mosi. Replace mosi with mosei or ur_funny for the other datasets.
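Steps 1 and 2 only require editing two paths in config.py. A minimal sketch is shown below; the GloVe filename and directory locations are placeholders, not taken from the repository:

# config.py (excerpt): point the training script at local resources
word_emb_path = '/path/to/glove.840B.300d.txt'  # pre-trained GloVe word vectors (placeholder path)
sdk_dir = '/path/to/CMU-MultimodalSDK'          # local clone of the CMU Multimodal SDK (placeholder path)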
