The scene of this article is one of our weekend classes for children at Pacific Rim Robotics.
When kids join us for the first time, we ask them what robots mean to them. Friends? Helpers? The Terminator? And why they would like to study robotics and AI.
We get all sorts of interesting responses (with girls’ responses markedly different from those of boys 😀, but that’s the subject of a different post!). As part of this “initiation” exercise, we like to show them real-world videos, talk about ways in which you can improve lives with all this cool tech, and so on. One of the most popular videos is the BB-8 car showcased by NVIDIA at CES 2017. All things Boston Dynamics are a close second. And on the AI side, TJBot is the all-time favorite.
Cool cars always have a way of capturing kids’ imagination. After seeing an autonomous car in action, many kids walk away regarding it as the “holy grail” of robotics. Many (grown-up) practitioners would agree…
So as you can imagine, the recent Uber accident was a real dampener. Kids generally don’t like bad stuff happening to anyone – so it’s confusing. As educators and practitioners, it is our responsibility to help them understand events like this, and to better prepare them to build a world that’s safer and more compassionate – not just more efficient!
So here’s how this played out. Kids are “visual learners”, as we all know – nothing makes a point better than showing them something real and tangible. So we used some of the fantastic simulation tools that the great guys &amp; gals at Udacity have created.
Specifically, there’s a highway-driving simulator that involves “path planning”. You program an autonomous vehicle to drive safely alongside other simulated traffic on a highway. Naturally the goal is to be efficient – but above all, safe. So you have to watch out for cars all around you, and effectively perform a “constrained optimization” as you plan your movement. The simulator does the equivalent of real-world “sensor fusion” – telling you everything the car can sense about the world around it, through RADAR, LIDAR, cameras and the like.
NVIDIA did something similar recently, by introducing a system to “stress test” autonomous driving algorithms for safety.
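The “constrained optimization” idea above can be pictured as a simple cost function: each candidate maneuver (keep lane, change lanes, slow down) is scored on efficiency, while safety margins act as hard constraints. Here is a minimal sketch of that idea – all function names, numbers, and weights are invented for illustration, and this is not the actual simulator’s code or API:

```python
# Illustrative sketch of cost-based path planning -- invented names and
# weights, not the real Udacity simulator API.

SPEED_LIMIT = 50.0   # mph
SAFE_GAP = 30.0      # minimum safe gap to another car, in metres

def maneuver_cost(target_speed, gap_ahead, gap_behind, lane_change):
    """Score a candidate maneuver: lower is better, inf means unsafe."""
    # Hard safety constraint: never accept a gap below the safe margin.
    if gap_ahead < SAFE_GAP or (lane_change and gap_behind < SAFE_GAP):
        return float("inf")
    # Efficiency term: penalise driving below the speed limit.
    cost = (SPEED_LIMIT - min(target_speed, SPEED_LIMIT)) / SPEED_LIMIT
    # Comfort term: a lane change carries a small extra cost.
    if lane_change:
        cost += 0.1
    return cost

def plan(candidates):
    """Pick the lowest-cost maneuver; fall back to slowing down if none is safe."""
    best = min(candidates, key=lambda c: maneuver_cost(*c[1:]))
    return best[0] if maneuver_cost(*best[1:]) != float("inf") else "slow_down"

# Example: stuck behind a slow car, with a safe passing gap on the left.
options = [
    # (name, target_speed, gap_ahead, gap_behind, lane_change)
    ("keep_lane",   40.0, 35.0, 100.0, False),
    ("change_left", 50.0, 60.0,  45.0, True),
]
print(plan(options))  # -> change_left (passing is both safe and faster)
```

The key point for the kids: the “personality” of the car lives entirely in numbers like `SAFE_GAP` and the cost weights, which humans choose.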
So we ran a “safe” simulation first. Our algorithm took on the “persona” of the ideal driver – one who stays within the speed limit, maintains proper distance from cars ahead, passes only when it is absolutely safe relative to cars in front and behind (the equivalent of the good habit of checking your rear-view mirrors!) – and, while doing all this good stuff, maintains the best speed to reach the destination.
So, efficient … but also safe and polite 👍
The key here was to explain to the kids the notion of “control” they had over the algorithm. You fear what you don’t understand. So we talked about exactly which parameters of the algorithm gave the car this persona – that of a safe, efficient driver.
Now of course we had to see what a “rogue” driver looks like!
Ask kids what a “jerk on the road” looks like, and you’ll get a unanimous characterization. Speeding, tailgating, braking and accelerating hard, cutting in and out of lanes with narrow gaps… yep, they’ve seen them. In Bangalore, that’s literally every road, every time of the day.
So we discussed how the algorithm can just as easily mimic this behavior. And together, we tweaked some of these parameters. Our algorithm now used a higher maximum speed, a smaller distance margin, harder braking and acceleration limits, and a smaller permissible gap for lane changes. Of course, it still had to do all this without collisions.
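The two personas really are just two parameter sets fed to the very same algorithm. A sketch, with purely illustrative numbers (not the actual values we used in class):

```python
# Two "personas" for the same planning algorithm.
# All numbers are illustrative, not the real simulator parameters.

SAFE_DRIVER = {
    "max_speed_mph": 49.5,      # stays under a 50 mph limit
    "gap_ahead_m": 30.0,        # generous following distance
    "max_accel_ms2": 2.0,       # gentle braking and acceleration
    "lane_change_gap_m": 30.0,  # passes only with a wide gap
}

AGGRESSIVE_DRIVER = {
    "max_speed_mph": 60.0,      # well over the limit
    "gap_ahead_m": 10.0,        # tailgating distance
    "max_accel_ms2": 6.0,       # hard braking and acceleration
    "lane_change_gap_m": 8.0,   # squeezes into narrow gaps
}

def describe(persona):
    """Roughly classify how a parameter set will behave on the road."""
    tailgates = persona["gap_ahead_m"] < 20.0
    speeds = persona["max_speed_mph"] > 50.0
    return "aggressive" if (tailgates or speeds) else "safe"

print(describe(SAFE_DRIVER))        # -> safe
print(describe(AGGRESSIVE_DRIVER))  # -> aggressive
```

Nothing about the code itself is “good” or “evil” – the same planner becomes a jerk on the road purely because a human turned four knobs.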
So we ran this “aggressive” simulation next – where essentially this new guy was a pain in the butt for everyone on the highway. 👎😡
One of the most interesting observations the kids made was that ultimately the “rogue” algorithm didn’t benefit a whole lot! It didn’t reach the destination that much sooner – it just ended up taking huge risks all along!
So after we went through all this – what do you think was the central question from the kids in the room?
So ma’am, how can we make sure everyone uses the “good algorithm”?
And there you have it!
This was obvious to kids in our small classroom. And that is essentially the point of this post.
Ultimately we will make AI “in our own image”.
Like us, the AI can be “biased”, “unfair”, “aggressive”. By trying to suggest that this new “unexplainable” tech is the enemy, we are avoiding where the responsibility really lies – squarely with the humans.
You are free to build a biased AI that denies credit to meet a profit objective, or a healthcare AI that has not gone through clinical hardening, or an “aggressive mode” in your self-driving car, or an AI that promotes fake news because clickbait drives ad revenue, and so on… Similarly, the “hacking” of self-driving cars in “The Fate of the Furious” (Fast &amp; Furious 8) was done very willingly, by very bad people…
In short, there isn’t some evil entity – with consciousness and independent agency – that’s likely to do all this. It will be practitioners – who, by the way, understand the science completely.
Our children will indeed inherit a world where all this is a real possibility. However, our responsibility is to make sure they are well-prepared – not to be helpless victims of powerful technology, but masters of their fate, using it to solve the hardest challenges of our time.