Pointers
Here we include some pointers covering a range of topics around Predictable AI. The list below is not meant to be comprehensive and is strongly biased towards the interests of the participants of the March 2023 event.
- Armstrong, Sotala & Ó hÉigeartaigh "The errors, insights and lessons of famous AI predictions – and what they mean for the future", Journal of Experimental & Theoretical Artificial Intelligence, 2014, https://www.tandfonline.com/doi/abs/10.1080/0952813X.2014.895105
- Beck, J., Burri, T., Christen, M., Fleuret, F., Kandul, S., Kneer, M., & Micheli, V. "Human Control Redressed: Comparing AI-To-Human Vs. Human-To-Human Predictability in a Real-Effort Task", SSRN, 2023, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4325339
- Burnell et al. “Not a Number: Identifying Instance Features for Capability-Oriented Evaluation”, IJCAI 2022, https://www.ijcai.org/proceedings/2022/0392.pdf
- Caballero et al. "Broken Neural Scaling Laws." arXiv 2022, https://arxiv.org/abs/2210.14891.
- Cave & Ó hÉigeartaigh "Bridging near- and long-term concerns about AI", Nature Machine Intelligence, 2019, https://www.nature.com/articles/s42256-018-0003-2
- Dafoe, Hughes, Bachrach, Collins, McKee, Leibo, Larson and Graepel "Open Problems in Cooperative AI", arXiv 2020, https://arxiv.org/abs/2012.08630
- Gabriel “Artificial Intelligence, Values, and Alignment”, Minds and Machines 2020. https://link.springer.com/article/10.1007/s11023-020-09539-2
- Ganguli et al. "Predictability and Surprise in Large Generative Models", Anthropic, arXiv 2022, https://arxiv.org/pdf/2202.07785.pdf
- Grace et al. “When will AI exceed human performance? Evidence from AI experts” Journal of Artificial Intelligence Research 2018 https://www.jair.org/index.php/jair/article/download/11222/26431/
- Hernández-Orallo, Schellaert and Martínez-Plumed, "Training on the Test Set: Mapping the System-Problem Space in AI", AAAI 2022, https://ojs.aaai.org/index.php/AAAI/article/view/21487/21236
- Idrissi et al. "ImageNet-X: Understanding Model Mistakes with Factor of Variation Annotations", arXiv 2022, https://arxiv.org/abs/2211.01866
- Kadavath et al. "Language Models (Mostly) Know What They Know", Anthropic, arXiv 2022, https://arxiv.org/abs/2207.05221
- Kello et al. “Scaling laws in cognitive sciences”, Trends in Cognitive Sciences 2010 https://doi.org/10.1016/j.tics.2010.02.005
- La Malfa and Kwiatkowska “The king is naked: on the notion of robustness for natural language processing”, AAAI 2022. https://ojs.aaai.org/index.php/AAAI/article/download/21353/21102
- Lapuschkin et al. “Unmasking Clever Hans Predictors and Assessing What Machines Really Learn” Nature Communications 2019. https://www.nature.com/articles/s41467-019-08987-4
- Llorca, D. F., Charisi, V., Hamon, R., Sánchez, I., & Gómez, E. "Liability regimes in the age of AI: a use-case driven analysis of the burden of proof", arXiv 2022, https://arxiv.org/pdf/2211.01817.pdf
- McKee, Leibo, Beattie & Everett "Quantifying the effects of environment and population diversity in multi-agent reinforcement learning", Autonomous Agents and Multi-Agent Systems, 2022, https://link.springer.com/article/10.1007/s10458-022-09548-8
- Momennejad “A Rubric for Human-like Agents and NeuroAI”, Philosophical Transactions of the Royal Society B, 2022, https://www.momen-nejad.org/_files/ugd/a6d7e4_a8fda15ac61742698a6593c520516745.pdf
- Nushi et al. “Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure” HCOMP, 2018. https://arxiv.org/abs/1809.07424
- Rahwan et al. “Machine behaviour”, Nature 2019: https://www.nature.com/articles/s41586-019-1138-y
- Roser "The brief history of artificial intelligence: The world has changed fast – what might be next?", https://ourworldindata.org/brief-history-of-ai#studying-the-long-run-trends-to-predict-the-future-of-ai, 2022
- Taddeo, M., Ziosi, M., Tsamados, A., Gilli, L. and Kurapati, S. "Artificial Intelligence for National Security: The Predictability Problem", Centre for Emerging Technology and Security, Oxford Internet Institute, 2022, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4229440
- Tamkin et al. "Understanding the capabilities, limitations, and societal impact of large language models", arXiv 2021, https://arxiv.org/pdf/2102.02503.pdf
- Wei et al. "Emergent Abilities of Large Language Models", TMLR 2022, https://openreview.net/pdf?id=yzkSU5zdwD
- Xiao et al. “Noise or Signal: The Role of Image Backgrounds in Object Recognition”. ICLR 2021, https://arxiv.org/abs/2006.09994
- Yampolskiy, R. "Unpredictability of AI", arXiv 2019, https://arxiv.org/abs/1905.13053
Must-read:
- Peña-Asensio "If you run, it chases you", arXiv 2022, https://arxiv.org/abs/2203.16630