Would the Google car recognize a pregnant woman crossing the road, wait, and compromise on a few seconds?
Artificial Intelligence with Empathy
I recently bumped into wit.ai, a Facebook company building query-recognition software that understands the "intent" of a query (not just its keywords) and splits it into parameterized values for intent, entity, action, and the overall confidence level of its recognition. Say you type "how hot is Bangalore?"; it gives you back "intent = weather, entity = Bangalore, confidence = 85%", so you can go on to construct a highly human response for the user.
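To make that concrete, here is a minimal sketch (in Python) of what consuming such a parsed result might look like. The response shape and the `respond` helper are hypothetical illustrations, not wit.ai's actual API.

```python
# Hypothetical sketch: turning a wit.ai-style parsed query into a human response.
# The dictionary shape below is illustrative, not the actual wit.ai payload.

CONFIDENCE_THRESHOLD = 0.7  # below this, fall back to a clarifying question


def respond(parsed: dict) -> str:
    """Build a human-sounding reply from a parsed query."""
    intent = parsed.get("intent")
    entity = parsed.get("entity")
    confidence = parsed.get("confidence", 0.0)

    if confidence < CONFIDENCE_THRESHOLD:
        return "Sorry, I didn't quite get that. Could you rephrase?"

    if intent == "weather":
        # A real system would call a weather service for `entity` here.
        return f"Looks like you want to know the weather in {entity}. Let me check..."

    return "I understood your question, but I don't know how to help with that yet."


# Example: the parse described above for "how hot is Bangalore?"
print(respond({"intent": "weather", "entity": "Bangalore", "confidence": 0.85}))
```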
More than the software, what amazes me is its potential.
Often the AI projects I hear about talk of robotic intelligence: parsing inputs, matching keywords, and giving informational output, when the whole purpose of AI is to make something that's human. It takes more than intelligence to be human, and that's where empathy comes in, making it AI + E (artificial intelligence + empathy).
Behind every query is an intention, and the responsibility of AI + E is to respond with solutions that fulfill that intent. Recognizing the intent and training software to do so is not easy (the simplest of things require the most complex algorithms), and it has to keep evolving along with humans. With funded initiatives like wit.ai, part of the problem is being tackled holistically, leaving greater possibilities for AI engineers to create.
This reminds me of a conversation with my wife a couple of days ago: "Why should search results only give back links matching the query? What if it also suggested people who may have an answer or an insight?" My brother works on Google search, and we have always debated the Google car. Would the car recognize a pregnant woman or a senior citizen crossing the road, wait, and compromise on a few seconds? Or would it just be scientifically engineered to be error-proof?
Every question we ask seems to boil down to one fundamental thing: could AI be as human as we want it to be? Or would it need to be scientifically programmed for empathy too?