We developed CHORA, a zoomorphic robot featuring biomimetic breathing and heartbeat behaviors. Through a mixed-methods study with 30 participants, we gathered physiological data, self-reports, and interview feedback. Our findings demonstrate how haptically experienced animacy can support emotion regulation by enabling four different coping strategies.
@article{vyas2026chora,
  title={Haptically Experienced Animacy Facilitates Emotion Regulation: A Theory-Driven Investigation},
  author={Vyas, Preeti and Guta, Bereket and Zhou, Tim G. and Himam, Noor Naila and Uusberg, Andero and MacLean, Karon E.},
  journal={IEEE Transactions on Affective Computing},
  year={2026},
  url={https://arxiv.org/abs/2602.07395},
}
Changing Modalities by Cross-Band Transfer, Addition, and Peeking
Tim G. Zhou, Anthony Fuller, Geoff Pleiss, and Evan Shelhamer
In 4th ICLR Workshop on Machine Learning for Remote Sensing (Main Track), 2026
Machine learning models for remote sensing typically assume a static set of modalities. However, as we equip newer satellites with novel sensors and retire old ones, practitioners may wish to deploy a model on a substitution, superset, or subset of modalities given data availability or practical constraints. We formulate the setting of changing modalities and identify three main scenarios: Modality Transfer, Addition, and Peeking. We propose Delulu-Net, an architecture with modular components for all three changing modality scenarios. Delulu-Net learns a multi-modal model from a unimodal teacher and unlabeled multimodal data, providing a practical alternative to re-labeling and re-training.
@inproceedings{zhouchanging,
  title={Changing Modalities by Cross-Band Transfer, Addition, and Peeking},
  author={Zhou, Tim G. and Fuller, Anthony and Pleiss, Geoff and Shelhamer, Evan},
  booktitle={4th ICLR Workshop on Machine Learning for Remote Sensing (Main Track)},
  year={2026},
  url={https://openreview.net/forum?id=rkGlI80m12},
}
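The teacher-to-student setup described in the abstract can be illustrated with a generic knowledge-distillation objective. This is a hedged sketch only: the function names are illustrative, and Delulu-Net's actual training procedure is not claimed here; the point is simply that a teacher's predictions can supervise a student without ground-truth labels.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    No ground-truth labels appear: only the teacher's predictions
    supervise the student, so unlabeled inputs suffice. Illustrative
    sketch, not the paper's actual objective.
    """
    p_t = softmax(teacher_logits / T)
    p_s = softmax(student_logits / T)
    kl = np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=-1)
    return float(np.mean(kl))
```

When student and teacher agree exactly, the loss is zero; any disagreement yields a positive penalty that the student can minimize on unlabeled data.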
We propose pairing large deep neural networks with smaller sidekick models to improve uncertainty quantification in a computationally efficient manner. Rather than ensembling multiple training runs, we combine predictions via learned weighted averaging. Our method achieves improved accuracy and uncertainty metrics across five image classification benchmarks with only 10-20% additional computation, and adding the smaller sidekick rarely degrades performance.
@inproceedings{zhou2025asymmetricduos,
  title={Asymmetric Duos: Sidekicks Improve Uncertainty},
  author={Zhou, Tim G. and Shelhamer, Evan and Pleiss, Geoff},
  booktitle={NeurIPS},
  year={2025},
  url={https://arxiv.org/abs/2505.18636},
}
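The learned weighted averaging mentioned in the abstract can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: here the mixing weight is fit by a simple grid search on held-out data rather than learned end-to-end.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combine(p_large, p_small, w):
    """Weighted average of two predictive distributions (w in [0, 1])."""
    return w * p_large + (1.0 - w) * p_small

def fit_weight(p_large, p_small, labels, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the mixing weight minimizing held-out negative log-likelihood.

    A stand-in for the paper's learned weighting; grid search is used
    here only to keep the sketch self-contained.
    """
    def nll(p):
        return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    return min(grid, key=lambda w: nll(combine(p_large, p_small, w)))
```

Because the combination is a convex mixture of two probability vectors, the result remains a valid distribution, and the held-out fit lets the duo fall back to the stronger model when the sidekick adds no information.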