The AI100 project was launched to track AI trends over the course of a century, up to 2116, and its initial 2016 report downplayed existing concerns about sci-fi-style autonomous AI. This year, a panel of 17 researchers and experts from Stanford University and beyond offers a second version of that report, which indicates the field has reached a turning point: attention now needs to be paid to the everyday applications of AI and the ways in which the technology is used.
A new edition of the AI100 report, the project that tracks AI trends over the next century
AI100 was launched by Eric Horvitz, chief scientist at Microsoft, and is hosted by the Stanford University Institute for Human-Centered Artificial Intelligence. The project is funded by a gift from Horvitz, a Stanford alumnus, and his wife, Mary. In 2016, the project published its first report: the paper acknowledged that AI and automation could cause social disruption, while noting that real AI systems bore little resemblance to the machines of science fiction.
Today, a new report titled “Gathering Strength, Gathering Storms” has been released to show, five years on, how AI research and use have developed. This year’s update was prepared by a standing committee in collaboration with a panel of 17 researchers and experts. Schwartz Reisman Institute (SRI) Director Gillian Hadfield, a law and economics expert specializing in AI regulation, spoke about the release of the new report, comparing it to the previous one:
“The first report in 2016 was very focused on technical progress and only briefly addressed risks and challenges. A key change this year is a much greater concern about the potential risks and challenges of AI, which fits well with our goal at SRI: how do we make sure AI is good for the world?”
The 2021 edition of the AI100 report looks at societies around the world
While the first AI100 report focused on the impact of AI in North American cities, the authors of this new study aimed to explore the impact of AI on people and societies around the world. The new report presents the findings of two workshops commissioned by the AI100 standing committee:
- One on “Prediction in Practice,” which examined the use of AI-based predictions of human behavior;
- Another on “Coding Caring,” which addressed the challenges and opportunities of integrating AI technologies into human caregiving, and the role that gender and labour play in meeting the urgent need for innovation in healthcare.
As Michael Littman, a computer scientist at Brown University and chair of the study panel, states:
“In the last five years, AI has gone from something that happens primarily in research labs or other highly controlled environments to something that affects the lives of people in society. That’s really exciting, because this technology is doing amazing things that we could only dream about five or ten years ago. But at the same time, the field is grappling with the societal impact of this technology, and I think the next frontier is thinking about how to leverage AI while minimizing the risks.”
Risks are one aspect the report considers: faked images and videos used to spread disinformation, online bots used to manipulate public opinion, discriminatory algorithmic bias, and invasions of privacy through visual recognition systems.
Four key points to remember from the 2021 edition of the AI100 report
Four points stand out from the work produced by the AI100 team:
- Greater public awareness from AI scientists would be beneficial as society grapples with the impacts of these technologies.
- Appropriately addressing the risks of AI applications will inevitably involve adapting regulatory and policy systems to be more responsive to the rapid pace of technological development.
- Studying and assessing the societal impacts of AI, such as concerns that AI and machine learning algorithms may fuel polarization by shaping content consumption and user interactions, is easier when academic-industry collaborations facilitate access to data and platforms.
- One of the most pressing dangers of AI is techno-solutionism: the idea that AI can be treated as a panacea when it is only a tool.
Oren Etzioni, CEO of the Allen Institute for Artificial Intelligence, says of the updated AI100 report:
“The report represents a substantial amount of work and information by the best experts both inside and outside the field. It avoids sensationalism in favour of measured and scholarly observations. I think the report is correct about the perspective of human-AI collaboration, the need for AI knowledge, and the essential role of a strong non-commercial perspective from universities and non-profits. […] I think the report may underestimate the economic impact of AI, as AI is often a component technology of products made by Apple, Amazon, Google and other large companies.”
See you in five years, in 2026, when a new report will no doubt be published, one in which the economic impact of AI will be discussed more explicitly.
Translated from La nouvelle édition du rapport AI100 évoque les risques et l’impact sociétal de l’IA dans les prochaines années