Working Group in AI Ethics and Policy

Leaders in industry, government, and academia are seeking guidance on the ethical challenges they face as they design, implement, and regulate novel AI-based technologies. The Working Group in AI Ethics and Policy brings together UF’s many AI ethics and policy research programs under one umbrella in order to better serve UF researchers and partners seeking this guidance as well as to promote foundational research in AI ethics and policy.
But what is AI ethics and policy? AI ethics and policy is an emerging multidisciplinary field dedicated to ensuring that AI-based systems are designed, developed, implemented, and regulated in ways that align with our shared values.
AI ethics and policy spans numerous academic disciplines, including computer science, philosophy, the social sciences, and law. Computer scientists attempt to operationalize values like fairness, accountability, transparency, and privacy with the goal of developing AI-based systems that better align with our shared commitments to these values. Philosophers critically investigate these concepts as they apply to AI-based systems, with the goal of clarifying the relationship between ordinary moral concepts and the highly technical work being done by computer scientists. Social scientists investigate the real-world effects of AI-based systems on our lives and consider how existing social and political institutions influence how these systems are designed and implemented. Legal scholars consider how AI-based technologies challenge existing regulatory structures—such as those designed to safeguard privacy, ensure due process, and prevent discrimination—and propose new legal and regulatory frameworks to meet those challenges. The composition of our working group reflects this disciplinary diversity.

David Gray Grant is an assistant professor in the UF Department of Philosophy and a senior research fellow in digital ethics and governance at the Jain Family Institute. He works mainly in ethics of technology (especially ethics of artificial intelligence) and philosophy of science (especially computer and data science). His research focuses on philosophical issues raised by automated decision-making systems and autonomous software agents. Before coming to UF, David was a postdoctoral fellow at Harvard University, where he ran the Embedded EthiCS Teaching Lab as part of his work with the Embedded EthiCS @ Harvard program. The program is a joint effort between the philosophy and computer science departments, and develops ethics modules for courses across the computer science curriculum. David completed his Ph.D. in Philosophy at MIT in 2018.

Jim Hoover is a professor and researcher in UF’s Warrington College of Business. He teaches applications of AI across multiple business disciplines, with a particular focus on marketing, a field that has embraced AI as a key differentiator in business performance. At the same time, challenges such as biased outcomes and privacy concerns are important areas for business and marketing practitioners to address. Jim’s current research focuses on mitigating bias in AI that results from the data inputs to AI models.


Jasmine McNealy is an attorney and associate professor in media production, management and technology in the UF College of Journalism and Communications. She also serves as the associate director of the Marion B. Brechner First Amendment Project. She is currently a visiting fellow with the Shorenstein Center on Media, Politics and Public Policy and a faculty associate at the Berkman Klein Center, both at Harvard University.

McNealy is an internationally recognized scholar whose interdisciplinary research centers on the intersection of media, technology, policy, and law, with particular focus on privacy, surveillance, and data governance and their impacts on marginalized and vulnerable communities. Her work has been published in social science, law, ethics, and computer science journals, and has been funded by public institutions, private foundations, and private organizations. A public scholar who understands the importance of making research explainable and accessible to the wider public, McNealy has collaborated with partners in academia, industry, and government; presented to industry, government, and community audiences as well as scholarly ones; and translated her research for mainstream media.

Juan Claudio Nino is a professor in the UF Materials Science and Engineering Department. He is an expert in electronic materials; within the world of artificial intelligence (AI), his research includes the design and manufacturing of neuromorphic devices and related hardware for machine learning and AI applications. He is also working on the use of AI for the early detection of neurodegenerative diseases such as Alzheimer’s disease. Nino serves on the Emerging Technologies Technical Advisory Committee of the U.S. Department of Commerce, which assesses the state of technologies such as AI and projects their likely future effects on national security, the U.S. defense industrial base, and the overall health and competitiveness of the U.S. economy.
Since AI hardware is where all AI software, applications, and algorithms ultimately run, Nino’s main interests in AI ethics and policy revolve around questions like: “We are about to be able to build a potentially ubiquitous chip that can greatly surpass the human brain; should we build it?” and “Should powerful technology have a ‘kill switch,’ and if so, who should have access to that switch?” He is also interested in exploring the technical limits and practical implications of implementing explainability in AI algorithms.

Duncan Purves is an associate professor in the UF Department of Philosophy and a UF Research Foundation Professor. His research addresses ethical questions that arise when human decision making is assisted or replaced by artificial intelligence-based (AI) systems, particularly in law enforcement and military contexts. The guiding aim of his research is to incorporate ethical considerations into the design, development, and implementation phases of new AI technology. Unlike much AI ethics work, which starts with very general abstract principles designed to apply across all domains of application, his approach is to first examine the norms governing the domain in which AI will be applied and then evaluate the AI application in terms of those norms.
One strand of this research applies the ethical principles of Just War Theory to identify ethical constraints on the use of so-called “lethal autonomous weapon systems” (LAWS): weapons that can identify possible targets and choose which targets to attack without human intervention. LAWS are being developed by major military powers around the globe, and they are already being used in warzones. At the same time, regulators and agencies like the Department of Defense are seeking international consensus about the ethical limits of this technology. Across several articles, he has argued for a collection of important conclusions concerning the ethical limits of LAWS. Some of this published work has been taught in ethics seminars attended by military officers at U.S. military academies.
His current research concerns the ethical implementation of data-driven systems in law enforcement, especially predictive policing: the use of algorithmic systems trained on historical crime data and other data to forecast future criminal activity and allocate resources accordingly. This research is funded by the National Science Foundation project Artificial Intelligence and Predictive Policing: An Ethical Analysis. Police on patrol are increasingly guided to locations by predictive policing systems, which can allow law enforcement to detect crime patterns that would go unnoticed by human analysts and promise to eliminate questionable human “hunches” from the job of crime forecasting. Despite these advantages, predictive policing has come under withering criticism from civil rights groups, academics, and the communities that have been subjected to the practice. Critics charge that predictive policing reinforces racially biased patterns of policing, that it unfairly burdens marginalized communities, that it is inscrutable to the police and citizens it affects, and that it infringes the liberty of targeted communities. The project’s ultimate deliverable is an industry framework for the ethical implementation of data-driven technology in law enforcement, developed in collaboration with firms developing the technology, police reform advocates, and some of the major U.S. police departments that use predictive policing tools. You can learn more about the project deliverables here.
Dr. Purves also maintains research interests in theoretical ethics, including the metaphysics and moral significance of death, harm, and well-being. You can find a full list of his publications on his CV.

Amy L. Stein is associate dean for curriculum and Cone Wagner Professor of Law at the University of Florida Levin College of Law. Her scholarship focuses on how our governance mechanisms shape and are shaped by emerging technologies. Professor Stein is an internationally recognized law and technology scholar who shares her work on the legal and ethical implications of various technologies, including artificial intelligence (AI). Treating ethics as inseparable from justice, consequence, proper use of authority, accountability, and sustainability, her recent AI-related publications explore the possibilities of using AI to address aspects of climate change (Artificial Intelligence and Climate Change, 37 Yale Journal on Regulation 890 (2020)), work that has been featured in Popular Science and cited in Forbes, as well as the implications of artificial intelligence for tort defenses in civil liability (Assuming the Risks of Artificial Intelligence, 102 Boston University Law Review 979 (2022)). Her prior scholarship focuses on delegations of authority related to emergency powers, pathways to integrate emerging energy technologies, and the federalism implications of energy and climate change, all of which can be accessed at
Professor Stein began her academic career at George Washington University Law School and Tulane Law School.  Prior to her academic appointments, she practiced as an environmental and litigation associate for Latham & Watkins LLP in the firm's Washington, D.C., and Silicon Valley offices.  She is a member of the District of Columbia, Illinois, and California state bars.  She is a graduate of the University of Chicago (AB) and the University of Chicago Law School (JD).

Sonja M. Schmer-Galunder is a social anthropologist, professor of practice, and the Glenn and Deborah Renwick Leadership Professor in AI and Ethics in the UF Computer and Information Science and Engineering Department (CISE) at the Herbert Wertheim College of Engineering. She leads the newly formed AI and Ethics program through the Engineering Leadership Institute (ELI), with a focus on multidisciplinary perspectives on ethical considerations in the use of AI.

Her research focuses on the social, cultural, and ethical impact of AI. In particular, she has researched algorithmic bias, moral value pluralism, prosocial discourse in online environments, hate speech and misinformation, computational cultural understanding, anthropological methods for machine learning, and human performance optimization in extreme environments. Before joining UF, she worked as a principal research scientist at Smart Information Flow Technologies (SIFT), where she was the principal investigator on several multi-million-dollar DARPA-funded research projects, including DARPA Understanding Group Bias, DARPA Civil Sanctuary, and DARPA HABITUS (co-PI), as well as several Phase 1, Phase 2, and Phase 3 SBIRs. For the Collective Allostatic Load SBIR, she led four one-month lunar surface operation simulation studies at the Hawaii Space Exploration Analog and Simulation (HI-SEAS) habitat. Prior to working at SIFT, she was a junior researcher at Columbia University and New York University.


Joel Davis is a clinical professor in UF’s Warrington College of Business, where he serves as director of the David F. Miller Retail Center and Clinical Professor of Information Systems and Operations Management. Davis has 25 years of commercial experience in analytics, AI, and business operations. Most recently he was chief strategy officer at Revenue Management Solutions, a company that helps the restaurant and retail industries identify profitability opportunities through data-driven analytics, where he was responsible for leading the company’s emerging analytics and data strategy discipline. He teaches introductory programming, artificial intelligence methods, and IT strategy courses. His current research centers on the integration of analytics and artificial intelligence solutions into business decision-making, and on effective, responsible AI adoption strategies within corporations.