Posts by Collection

publications

Social Trust and its Impact on Survey Response Rates

Published in JSM Proceedings, 2024

Increasing nonresponse in household surveys has been a matter of concern in recent years, especially regarding the quality of information produced from large-scale surveys with gradually decreasing response rates. In this research we explore whether there is an empirical relationship between social trust, both interpersonal and institutional, and survey response rates over time. We use social trust items measured in the General Social Survey (GSS) to answer our research question. Analyzing data from 13 federally administered national household surveys over the two decades spanning 2000 to 2022 using state-space models, we found mixed results: trust in government and in economic institutions showed strong associations with response rates, but we found only limited evidence that institutional trust is associated with response rates for health-related surveys. Overall, our research probes the root causes of declining survey response and motivates the discussion of social trust as a potential driver of an individual's propensity to respond to surveys.

Recommended citation: Das, U., & Forrester, A. C. (2024, October 18). Social Trust and its Impact on Survey Response Rates. Joint Statistical Meetings (JSM), Portland, OR. https://doi.org/10.5281/zenodo.13947972.
Download Paper | Download Slides
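
For readers curious what such a fit looks like in practice, here is a minimal sketch in R of a local-level state-space model relating a response-rate series to a trust series. The data below are simulated placeholders and the `StructTS()` specification is an illustrative stand-in, not the paper's actual model.

```r
# Minimal sketch: local-level state-space fit with base R's StructTS().
# Both series below are simulated placeholders, NOT the GSS trust items
# or the federal survey response rates analyzed in the paper.
set.seed(1)
years <- 2000:2022
response_rate <- ts(85 - 0.8 * (years - 2000) + rnorm(length(years), 0, 2),
                    start = 2000)
trust_index <- 0.45 - 0.01 * (years - 2000) + rnorm(length(years), 0, 0.03)

# Fit a local-level model (random walk plus observation noise) to the
# response-rate series, then extract the smoothed latent level.
fit <- StructTS(response_rate, type = "level")
smoothed_level <- as.numeric(tsSmooth(fit))

# Relate the smoothed latent level to the trust series.
summary(lm(smoothed_level ~ trust_index))
```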

Measuring Impact of Artificial Intelligence on US Federal Government

Published in Proceedings of the Federal Committee on Statistical Methodology (FCSM) Conference, 2024

The rapid growth of artificial intelligence has been in the limelight for the past couple of years. As organizations across sectors move toward adopting AI in their services, it is of prime interest to understand whether the same shift is happening inside government institutions. Especially given concerns about bias and fairness in AI implementation, we seek answers to how and where US Federal Government agencies stand in adopting AI.

Recommended citation: Das, U., & Mitra, S. Measuring Impact of Artificial Intelligence on US Federal Government.
Download Slides

Evaluating the Efficacy of LLM-Augmented Imputation in Longitudinal Surveys

Published in Proceedings of the 80th Annual Conference of the American Association for Public Opinion Research (AAPOR), 2025

Longitudinal surveys are essential for data collection in social sciences, economics, health, and public policy, offering valuable insights into trends over time and motivating evidence-based policy making. However, declining response rates in major U.S. surveys, like the Current Population Survey (CPS) and the American Community Survey (ACS), raise concerns about data accuracy and representativeness. Declines resulting largely from attrition undermine the representativeness and validity of longitudinal datasets. Traditional techniques, such as multiple imputation, hot-deck imputation, and inverse probability weighting, are common methods used to address nonresponse. Yet recent advances in artificial intelligence (AI) and machine learning (ML) offer promising new strategies, especially for handling complex nonresponse patterns. For example, large language models (LLMs) permit analysts to generate synthetic personas to better represent respondent groups and may provide a more accurate reflection of the target population. In this research, we train OpenAI's GPT-4o model on respondent demographic information and previous responses from the four waves of the 2014 Survey of Income and Program Participation (SIPP) and impute missing responses in each wave. This approach enables us to evaluate the effectiveness of LLMs in imputing missing responses, particularly in federally administered program participation data (such as SNAP, WIC, TANF, and Medicaid/Medicare), where attrition in longitudinal surveys significantly impacts data quality. Our preliminary work shows that LLMs are a meaningful candidate for imputing missing data in the context of the SIPP. Using trained personas through LLMs provides a transparent method for developing imputed values, mimicking how traditional in-person interviews function. Further research will involve evaluating the imputations and developing new respondent weights to enable longitudinal analysis that combines reported and imputed data.

Recommended citation: Mitra, S., Das, U., & Forrester, A. C. (2025). Evaluating the Efficacy of LLM-Augmented Imputation in Longitudinal Surveys. *AAPOR*.
Download Slides
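
To make the persona idea described above concrete, here is a minimal sketch in R that builds a persona prompt from a respondent's demographics and prior-wave answers and asks a chat model to answer a missing item. The endpoint and JSON payload follow the public OpenAI chat-completions API; the `impute_response()` helper and its prompt wording are hypothetical illustrations, not the authors' actual pipeline.

```r
library(httr)
library(jsonlite)

# Hypothetical helper: construct a survey-respondent "persona" from
# demographics and prior-wave answers, then ask the model to answer a
# missing item as that respondent. Prompt design is illustrative only.
impute_response <- function(demographics, prior_waves, question) {
  persona <- paste0(
    "You are simulating a survey respondent with these characteristics: ",
    demographics, ". Their answers in earlier waves were: ", prior_waves, "."
  )
  body <- list(
    model = "gpt-4o",
    messages = list(
      list(role = "system", content = persona),
      list(role = "user",
           content = paste("Answer this survey item as that respondent:",
                           question))
    )
  )
  resp <- POST(
    "https://api.openai.com/v1/chat/completions",
    add_headers(Authorization = paste("Bearer", Sys.getenv("OPENAI_API_KEY"))),
    content_type_json(),
    body = toJSON(body, auto_unbox = TRUE)
  )
  content(resp)$choices[[1]]$message$content
}

# Example call with made-up respondent details:
# impute_response("age 34, two children, renter",
#                 "Waves 1-3: received SNAP benefits",
#                 "Did you receive SNAP benefits this wave? (yes/no)")
```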

Do People Trust Enough to Respond to Surveys?

Published in CHANCE, 2025

Increasing nonresponse in household surveys has been a matter of concern in recent years, especially regarding the quality of information produced from large-scale surveys with gradually decreasing response rates. In this research we explore whether there is an empirical relationship between social trust, both interpersonal and institutional, and survey response rates over time. We use social trust items measured in the General Social Survey (GSS) to answer our research question. Analyzing data from 13 federally administered national household surveys over the two decades spanning 2000 to 2022 using state-space models, we found mixed results: trust in government and in economic institutions showed strong associations with response rates, but we found only limited evidence that institutional trust is associated with response rates for health-related surveys. Overall, our research probes the root causes of declining survey response and motivates the discussion of social trust as a potential driver of an individual's propensity to respond to surveys.

Recommended citation: Das, U., & Forrester, A. C. (2025). Do People Trust Enough to Respond to Surveys? *CHANCE, 38(2)*, 23–33. https://doi.org/10.1080/09332480.2025.2510158.
Download Paper

talks

Modular Interactive Tutorials in R

This talk featured a Shiny interactive tutorial I built to help students without a technical background learn to code in R from scratch. The tutorial comprised modules spanning basic to intermediate programming, with exercises at the end of each module. The interactive modules were designed to give feedback and encourage students as they attempted the exercises. The Shiny app was hosted on the College of Behavioral and Social Sciences server for students to access.

Download Slides
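
As a flavor of how one such module might work, below is a minimal Shiny sketch: the student submits R code, the app evaluates it, and immediate feedback is returned. The exercise, expected answer, and feedback wording are hypothetical stand-ins for the actual tutorial content.

```r
library(shiny)

# Minimal sketch of one tutorial module with auto-graded feedback.
# The exercise and expected answer here are hypothetical examples.
ui <- fluidPage(
  titlePanel("Module 1: Vectors"),
  p("Exercise: create a vector of the numbers 1 through 5."),
  textAreaInput("code", "Your R code:", rows = 3),
  actionButton("run", "Check answer"),
  verbatimTextOutput("feedback")
)

server <- function(input, output) {
  feedback <- eventReactive(input$run, {
    # Evaluate the student's code, trapping errors so the app keeps running.
    result <- try(eval(parse(text = input$code)), silent = TRUE)
    if (inherits(result, "try-error")) {
      "That code produced an error; check your syntax and try again."
    } else if (isTRUE(all.equal(result, 1:5))) {
      "Correct! Nice work."
    } else {
      "Not quite: your code ran, but the result is not the vector 1:5."
    }
  })
  output$feedback <- renderText(feedback())
}

shinyApp(ui, server)
```

A production deployment would sandbox the evaluation of student code (as the learnr package does) rather than calling eval() directly as this sketch does.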

teaching

Guest Lecturer

Full-time, Chakdaha College, Statistics, 2021–2022

I taught Statistics to undergraduate students minoring in the subject from 2021 to 2022. I covered topics including probability, regression analysis, design of experiments, time series analysis, and statistical quality control. The teaching involved in-person and online sessions with the students, as well as preparing and grading exams.