We’re excited to announce that next month, we will launch an open call for ideas aimed at shaping the influence artificial intelligence (AI) has on the field of news and information.
The challenge asks an overarching question: How might we ensure that artificial intelligence is used in the news ecosystem ethically and in the public interest?
The open call, presented by the Ethics and Governance of AI Initiative, will invest $750,000 in the best ideas we receive.
Specifically, we are looking for proposals that address the following topics:
Governance: enabling greater transparency and accountability in the decisions web platforms make when automating how information flows through the web;
Platform design: reimagining and redesigning AI-driven news platforms so that they more effectively manage the spread of misinformation and disinformation;
Bad actors: detecting the presence of sophisticated bots spreading misinformation, and uncovering machine learning-driven fakes in the field;
Journalism: supporting reporters in understanding and better communicating the impact of AI on the information ecosystem and society at large.
The Ethics and Governance of Artificial Intelligence Initiative is supported by Knight Foundation, Omidyar Network, LinkedIn co-founder Reid Hoffman, and the William and Flora Hewlett Foundation. The anchor institutions leading the Initiative are The MIT Media Lab and the Berkman Klein Center for Internet & Society at Harvard University.
Artificial intelligence is the long-standing effort to build machines that can mimic or exceed human cognition. More recently, machine learning – the art of designing algorithms that improve as they collect data – has developed rapidly to the point that it is much easier to use and is seeing far more widespread adoption.
However, we and others in the field are increasingly concerned that these technologies are being implemented without proper forethought, leaving all of us vulnerable to a host of risks.
Knight Foundation and the AI Initiative have been deeply involved in these issues, particularly as machine learning and automation increasingly shape the quality of information online, and how it flows through the web and ends up on our screens.
It’s hard to go online without an algorithm suggesting which movie to watch or which news article to read, decisions shaped by the programmers who created the code. In this respect, artificial intelligence is already implicated in the spread of misinformation and disinformation worldwide. And increasingly we see examples of AI used to create fake video and audio – which can be damaging to anyone, and even more so in the high-stakes political arena.
Unfortunately, our approaches to dealing with this automation of the public sphere, and to ensuring it serves the public interest, remain limited, because most technology development in this area is driven by a handful of private companies.
This open call aims to expand the range of possible ideas, tools and communities that can address these concerns. We’re still early in planning the challenge and expect it to evolve, but we’re making this announcement today to encourage people to start thinking of projects to submit.
We plan to open the submission form to this challenge on Sept. 12, 2018, with a deadline of Oct. 12, 2018.
We’re excited to see your ideas, and plan to share more information soon. We also plan to hold virtual office hours to answer your questions. For now, follow us @timhwang and @pcheung630 or sign up to get email updates on the AI challenge here.
Tim Hwang is Director of the Ethics and Governance of AI Initiative at the Berkman Klein Center and the MIT Media Lab.
Paul Cheung is Director/Journalism + Technology Innovation at Knight Foundation.