In 15 TED-style talks, MIT faculty recently discussed their groundbreaking research incorporating social, ethical, and technical considerations and expertise, each supported by a seed grant from the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing. The call for proposals last summer drew nearly 70 applications. A committee with representatives from each MIT school and the college convened to select the winning projects, which received up to $100,000 in funding.
“SERC is committed to advancing progress at the intersection of computing, ethics, and society. The seed grants are designed to spark bold, creative thinking around the field’s complex challenges and possibilities,” said Nikos Trichakis, co-associate dean of SERC and J.C. Penney Professor in Management. “With the MIT Ethics of Computing Research Symposium, we felt it important not only to showcase the breadth and depth of research shaping the future of ethical computing, but to invite the community to be part of the conversation as well.”
“What you see here is a collective effort to think through the social and ethical responsibilities of computing at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 was organized around four key themes: responsible health-care technology, artificial intelligence governance and ethics, technology for social and civic participation, and digital inclusion and social justice. Speakers delivered thought-provoking talks on a wide range of topics, including algorithmic bias, data privacy, the social implications of artificial intelligence, and the evolving relationship between humans and machines. The event also featured a poster session where student researchers presented the projects they had worked on throughout the year as SERC scholars.
Highlights from the MIT Ethics of Computing Research Symposium in each theme area, many of which are available to watch on YouTube, include:
Making the kidney transplant system fairer
The policies governing the U.S. organ transplant system are formulated by a national committee, and they often take more than six months to draft and years to implement, a timeline that many people on the waiting list simply cannot survive.
Dimitris Bertsimas, vice provost for open learning, associate dean for business analytics, and Boeing Professor of Operations Research, shared his latest work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm evaluates criteria such as geographic location, mortality, and age in a fraction of the usual six hours.
Bertsimas and his team work closely with the United Network for Organ Sharing (UNOS), a nonprofit that manages most of the national donation and transplant system through a contract with the federal government. In his talk, Bertsimas shared a video from James Alcorn, senior policy strategist at UNOS, who offered a succinct summary of the new algorithm’s impact:
“This optimization fundamentally changes the turnaround time for evaluating simulations of these policy scenarios. It used to take months to look at a handful of different policy scenarios, and now it takes minutes to look at thousands. We can iterate on proposed changes much faster, which means we can improve the system much more quickly.”
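The core idea of evaluating allocation policies by scoring organ-candidate pairs against criteria and optimizing the overall match can be illustrated with a toy sketch. All fields, weights, and function names below are invented for illustration; this is not the actual UNOS policy model or Bertsimas’s algorithm:

```python
from itertools import permutations

def pair_score(kidney, candidate):
    """Toy scoring: reward urgency, penalize distance and age mismatch.
    The criteria and weights here are purely illustrative."""
    distance_penalty = abs(kidney["region"] - candidate["region"])
    urgency_bonus = candidate["urgency"]  # higher = more urgent
    age_penalty = abs(kidney["donor_age"] - candidate["age"]) / 10
    return urgency_bonus - distance_penalty - age_penalty

def best_allocation(kidneys, candidates):
    """Brute-force the assignment maximizing total score.
    Real systems solve this at national scale with optimization solvers."""
    best, best_total = None, float("-inf")
    for perm in permutations(range(len(candidates)), len(kidneys)):
        total = sum(pair_score(k, candidates[j])
                    for k, j in zip(kidneys, perm))
        if total > best_total:
            best, best_total = perm, total
    return best, best_total

kidneys = [{"region": 1, "donor_age": 40}, {"region": 3, "donor_age": 25}]
candidates = [
    {"region": 1, "urgency": 5, "age": 45},
    {"region": 3, "urgency": 9, "age": 30},
    {"region": 2, "urgency": 2, "age": 60},
]
assignment, total = best_allocation(kidneys, candidates)
print(assignment)  # candidate index matched to each kidney -> (0, 1)
```

Because each candidate policy amounts to a different scoring function, being able to re-solve and re-simulate matches quickly is what lets policymakers compare thousands of scenarios in minutes rather than months.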
The ethics of AI-generated social media content
As AI-generated content becomes more common on social media platforms, what are the implications of disclosing (or not disclosing) that any part of a post was created by AI? MIT political science professor Adam Berinsky and Gabrielle Péloquin-Skulski, a PhD student in the Department of Political Science, explored the issue in a talk examining recent research on the impact of various labels on AI-generated content.
In a series of survey experiments, the researchers looked at how the specific wording and descriptions of labels attached to AI-generated posts influence users’ perceptions of deception, their intent to engage with the post, and ultimately whether they believe the post is true or false.
“The biggest takeaway from our initial findings is that one size does not fit all,” said Péloquin-Skulski. “We found that labeling AI-generated images with a process-oriented label reduces belief in both fake and real posts. This is problematic, because labels are intended to reduce people’s belief in false information, not necessarily true information. This suggests that labels combining process and veracity might be better at countering misleading AI-generated content.”
Using AI to increase civil discourse online
“Our research is motivated by the ways people increasingly want to have a say in the organizations and communities they belong to,” Lily Tsai explained in a talk on experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, and a larger team.
Online deliberation platforms have recently surged in popularity in the United States, in both public- and private-sector settings. With technology, Tsai explained, everyone can now have a say, but doing so can be overwhelming and can even feel unsafe. First, too much information is available; second, online discourse has grown increasingly “uncivil.”
The group is focused on “how to build on existing technologies and improve them with rigorous, interdisciplinary research, and how to innovate by integrating generative AI to enhance the benefits of online spaces for deliberation.” They have developed their own AI-integrated platform for deliberative democracy and launched four initial modules. All of the studies so far have been conducted in the lab, but the team is also planning a series of upcoming field studies, the first in partnership with the government of the District of Columbia.
Tsai told the audience: “If you take nothing else away from this presentation, I hope you’ll take away this: we should all be demanding that technologies being developed are evaluated for whether they have positive downstream outcomes, rather than just focusing on maximizing the number of users.”
A public think tank that considers all aspects of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens of the MIT Data + Feminism Lab initially submitted their funding proposal, they intended to develop not a think tank but a framework: an articulation of how artificial intelligence and machine learning could integrate community-based methods and bring participants together.
In the end, they created Liberatory AI, which they describe as a “public think tank about all aspects of AI.” D’Ignazio and Stevens gathered 25 researchers from a wide range of institutions and disciplines, who have written more than 20 position papers examining the latest academic literature on AI systems and participation. They deliberately grouped the papers into three distinct themes: the corporate AI landscape, dead ends, and paths forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve come together to contest the status quo, think bigger-picture, and reorganize resources in hopes of greater social transformation,” D’Ignazio said.