Human hand reaching for robotic hand

Imagine you’re trapped in a fire, and when help arrives, it comes in the form of an AI-operated robot. What would you feel? Relief? Trepidation? One of the biggest hurdles for AI systems is building human trust in machines.

Parisa Kordjamshidi, associate professor in the Department of Computer Science and Engineering at Michigan State, uses neurosymbolic AI to build systems with greater transparency and explainability. This approach combines the strengths of neural networks, which learn patterns from data, with symbolic reasoning systems, which apply explicit rules and logic. The result is systems better equipped to interact with humans in complex and unfamiliar situations – and systems worthy of our trust.
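To make the idea concrete, here is a minimal illustrative sketch – not code from Kordjamshidi’s research – of how a neural component and a symbolic layer can be combined. A stand-in "neural" scorer proposes soft action preferences, while explicit symbolic rules can override it and return a human-readable explanation, which is where the transparency comes from:

```python
# Minimal neurosymbolic sketch (illustrative only): a "neural" scorer
# proposes soft action scores, and a symbolic layer applies explicit
# safety rules on top, so every final decision comes with a reason.

def neural_scores(sensor_features):
    # Stand-in for a trained network: returns soft scores per action,
    # faked here with a simple weighted sum of sensor features.
    heat, smoke, exit_visible = sensor_features
    advance = 0.8 * exit_visible - 0.3 * smoke - 0.2 * heat
    retreat = 0.5 * smoke + 0.4 * heat
    return {"advance": advance, "retreat": retreat}

SYMBOLIC_RULES = [
    # (condition over features, forced action, explanation)
    (lambda f: f[0] > 0.9, "retreat", "rule: temperature exceeds safe limit"),
    (lambda f: f[1] > 0.9, "retreat", "rule: dense smoke blocks the path"),
]

def decide(sensor_features):
    scores = neural_scores(sensor_features)
    for condition, action, why in SYMBOLIC_RULES:
        if condition(sensor_features):
            return action, why  # symbolic override, with an explanation
    best = max(scores, key=scores.get)
    return best, f"neural preference (score {scores[best]:.2f})"

# Features: (heat, smoke, exit_visible), each in [0, 1].
action, explanation = decide((0.95, 0.2, 1.0))
print(action, "-", explanation)  # retreat - rule: temperature exceeds safe limit
```

Because the rules are explicit, a human operator can audit why the system chose an action – unlike a pure neural network, whose decision would be buried in learned weights.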

“The goal is not to make AI think like humans, just like we didn’t make planes to fly like birds,” Kordjamshidi said, drawing on a parallel made by Daniel C. Dennett, author of Brainchildren: Essays on Designing Minds. “Instead, we’re aligning AI with human values and making its reasoning more transparent for collaborative human-machine decision making.”

Parisa Kordjamshidi

Neurosymbolic AI models will help agents deployed in safety-critical situations – for example, a firefighting robot navigating risky environments while human firefighters direct the machine from a safe distance.

Another application for neurosymbolic AI is complex reasoning, such as legal decision-making that must follow rules of law, and scientific reasoning. The approach yields more robust and reliable solutions, with logical consistency in how the law is applied. It also improves efficiency by reducing the time, money and energy required to train huge models.

Discover more about Kordjamshidi’s research: