How does that machine make an intelligent decision?

Reading to your kids at night is a pleasure. I didn’t realize how much my kids loved it and how much I loved it as well. For some reason, I haven’t been doing as much reading as I would like, both to the kids at night and in general. So, it was time for a change and I took on a challenge.

So, I dusted off this book and told the kids that I would be reading it to them every night.

Both kids got excited and huddled straight into bed. And off we started on this journey. The plan is to read a few pages every day so that by the end of the year we will have finished this book. Yes, that's slow, but that's intentional. There is so much packed into the history of various civilizations. I will report on what we learned towards the end of the school year.

Back to the main story, "How does that machine make an intelligent decision?" As I started reading the book, I realized that I needed more light and called out to the Google device, "Hey Google, change the brightness to 30-40%." You see what's interesting here? For me, it's perfectly reasonable to ask for a brightness of 30-40%, and humans deal with that kind of imprecision without any issue. How about a machine, though? How does a machine deal with 30-40%? Should it ask a clarifying question? If it did, would that bother and frustrate me? Should it just take the average and set it to 35%? Should it go to 30%, given it's nighttime, and then prompt me if I need to turn it up even more?
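Just to make that branching concrete, here is a minimal sketch in Python of the kind of logic hiding behind that one sentence. The function name, the "range too wide" threshold, and the nighttime bias are all my own assumptions for illustration; I am not claiming any real assistant works this way.

```python
# Hypothetical sketch: resolving an imprecise brightness request like "30-40%".
# Names and thresholds are invented for illustration only.

def resolve_brightness_range(low: int, high: int, is_nighttime: bool, ask_user) -> int:
    """Pick a single brightness value from an imprecise range."""
    if high - low > 25:
        # The range is too wide to guess safely; interrupt and ask.
        return ask_user(f"Did you want closer to {low}% or {high}%?")
    if is_nighttime:
        # Bias toward the dimmer end at night; the user can always nudge it up.
        return low
    # Otherwise, split the difference.
    return (low + high) // 2


# "Hey Google, change the brightness to 30-40%" at bedtime:
print(resolve_brightness_range(30, 40, is_nighttime=True, ask_user=lambda q: 35))  # -> 30
```

Even in this toy version, someone had to decide when to interrupt, which end of the range to favor, and what the fallback is.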

See, these decisions that come to us so naturally are not so natural for machines. Instructions that sound simple can carry many downstream decisions, and those are not easy to anticipate.

We keep talking about how machines make human life better through better decision making. But behind every bit of information a machine gives back to help us decide, there are a lot of decisions that went into creating those branches. Not an easy job for a product manager.

So, it got me thinking and got me googling for various articles. One of them happened to be this one (it is mostly an informational piece about expectations around temporal expressions):

Managing Uncertainty in Time Expressions for Virtual Assistants

At the end of the article, it lists what humans would want from a virtual assistant (in the context of managing uncertainty in time expressions), but I think those expectations go even beyond the scope of the paper. Here are some of them:

  • Implied flexibility
  • Implied constraints
  • Complex expressions
  • Respect uncertainty
  • Recognize uncertainty
  • Embrace flexibility
  • Notify intelligently
  • Leverage implicit knowledge

You can read more about it in the paper, but it's a great thought process to keep in mind when designing such systems. I have been researching and haven't come across many articles (yet) that describe how that uncertainty actually gets coded into the system. Does it have to be rule-based? Does it have to be derived from the order of the words? What additional context can be used? (I sketch a toy version of these heuristics right after the list below.)

  • Time of day? – If it's nighttime, choose the lower end of the range, and the reverse for daytime?
  • Previous brightness level? – If the lights are already at 30% and there is a request to change something, don't keep the brightness at 30% unless 30% is mentioned specifically.
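Here is the toy sketch I mentioned above, combining those two context signals in a rule-based way. The function name, the 8 p.m./6 a.m. cutoffs, and the tie-breaking rule are all assumptions of mine, purely to show what "rule-based" could look like; a real system would learn or tune these.

```python
# Hypothetical rule-based sketch: resolve "30-40%" using time of day and the
# previous brightness level. All cutoffs and rules are invented for illustration.

from datetime import datetime
from typing import Optional


def pick_brightness(low: int, high: int, previous: int, now: Optional[datetime] = None) -> int:
    """Resolve an imprecise range like 30-40% using two context signals."""
    now = now or datetime.now()
    is_night = now.hour >= 20 or now.hour < 6  # assumed "nighttime" window

    # Rule 1: time of day -- lean toward the dim end at night, the bright end by day.
    target = low if is_night else high

    # Rule 2: previous level -- a request to "change" the brightness should
    # actually change it; if the pick equals the current level, move to the
    # other end of the stated range instead.
    if target == previous:
        target = high if target == low else low

    return target


# Lights already at 30%, request for "30-40%" at 9 p.m.:
print(pick_brightness(30, 40, previous=30, now=datetime(2024, 1, 1, 21, 0)))  # -> 40
```

Notice that the two rules already interact (what if it's night and the lights are already at the low end?), which is exactly the combinatorial problem the product manager has to think through.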

You can see that this kind of contextual information comes naturally to humans, and it is very hard to programmatically embed it into a machine. But I wouldn't be surprised when it is done.

I am reading the book "Superintelligence" as we seek to find some answers.

Please comment and let me know if you have some interesting papers for me to read.

Papa, but I don’t want the computers to be smarter than us
