The context: Studies show that when people and AI systems work together, they can outperform either one acting alone. Medical diagnostic systems are often reviewed by human doctors, and content moderation systems filter what they can before requiring human assistance. But algorithms are rarely designed to optimize for this AI-to-human handover. If they were, the AI system would defer to its human counterpart only when the person could actually make a better decision.
The research: Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have now developed an AI system to do this kind of optimization based on the strengths and weaknesses of the human collaborator. It uses two separate machine-learning models: one makes the actual decision, whether that’s diagnosing a patient or removing a social media post, and one predicts whether the AI or the human is the better decision maker.
The latter model, which the researchers call “the rejector,” iteratively improves its predictions based on each decision maker’s track record over time. It can also take into account factors beyond performance, including a person’s time constraints or a doctor’s access to sensitive patient information not available to the AI system.
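To make the two-model idea concrete, here is a minimal sketch in Python. It is not the researchers’ system: the names task_model and rejector, the toy data, and the simulated pattern of human accuracy are all illustrative assumptions. The sketch simply shows how a second model can learn, from each decision maker’s track record, when to hand a case to the human.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: features, noisy labels, and a record of whether a simulated
# human expert got each past case right (better when feature 2 is positive).
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.8 * rng.normal(size=500) > 0).astype(int)
human_correct = (rng.random(500) < np.where(X[:, 2] > 0, 0.95, 0.60)).astype(int)

# Model 1: makes the actual decision (e.g. diagnose, or remove a post).
task_model = LogisticRegression().fit(X, y)
ai_correct = (task_model.predict(X) == y).astype(int)

# Model 2: the "rejector" learns from both track records when the human
# is likely to outperform the AI on a given input.
defer_label = (human_correct > ai_correct).astype(int)  # 1 = defer to human
rejector = LogisticRegression().fit(X, defer_label)

# At decision time, each new case is routed to whoever is expected to do better.
x_new = rng.normal(size=(1, 5))
if rejector.predict(x_new)[0] == 1:
    print("Defer to the human expert")
else:
    print("AI decision:", task_model.predict(x_new)[0])
```

In this toy version the routing rule is learned purely from past accuracy; the system described above goes further, folding in factors such as a person’s time constraints or access to information the AI doesn’t have.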