High-stakes prediction tasks (e.g., patient diagnosis) are often handled by
trained human experts. A common source of concern about automation in these
settings is that experts may exercise intuition that is difficult to model
and/or have access to information (e.g., conversations with a patient) that is
simply unavailable to a would-be algorithm. This raises the question of how
human experts and algorithmic advice can best be combined in such settings. The
study examined users' interactions with three simulated algorithmic models and
found that lower-performing users benefited from AI advice but still fell short
of the AI's accuracy; higher-performing users could generally discern when to
follow the AI's advice, maintaining or improving their performance; and
mid-range users were the least stable, with AI advice either helping or hurting
their performance. In addition, users' perceptions of the AI's performance
strongly affected decision accuracy. The study offers insight into the complex
factors shaping human-AI collaboration and suggests how human-centered AI
algorithms might be developed to assist users in decision-making tasks.