Trustworthy & Responsible Artificial Intelligence
Developments and innovations in artificial intelligence (AI) and machine learning (ML) offer our society new levels of economic opportunity and growth, safety and security, and health and wellness. Meanwhile, broad acceptance and adoption of large-scale AI deployments rely critically on their trustworthiness, which depends on the fairness, privacy preservation, transparency, explainability, and accountability of such systems. To ensure that the benefits of AI technologies are broadly available across all segments of society, these technologies need to be accepted across groups that differ by factors such as sociocultural identity, age, gender, health status, geography, income, and education. This raises significant challenges in AI and ML around ensuring non-discrimination and explainability of decision making. Our core research includes designing privacy and fairness into AI systems by developing fairness metrics, preprocessing and postprocessing methods, optimization and learning algorithms, and regularization criteria, and by providing explainability and robustness for the inference processes of modern ML approaches and algorithms.
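As one concrete illustration of the fairness metrics mentioned above, the sketch below computes the demographic parity difference: the gap in positive-prediction rates between two groups defined by a protected attribute. The function name, toy data, and binary group encoding are illustrative assumptions, not a specific system described here.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array-like of 0/1 classifier predictions
    group  : array-like of 0/1 protected-attribute membership
    A value of 0.0 means the classifier flags both groups at the same rate.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy example: 8 individuals, 4 per group (hypothetical data)
preds = [1, 1, 0, 1, 0, 1, 0, 0]
grp   = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, grp))  # 0.75 vs 0.25 -> 0.5
```

Metrics like this can serve both as audit statistics and as regularization terms added to a training objective to trade off accuracy against group parity.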