Beneath the AI Hood : Balancing Progress and Ethics

India experienced an alarming 1.5 lakh fatalities due to road accidents in 2022, according to the Ministry of Road Transport. This number does not even account for unreported accidents, which would amplify the toll of human error in driving even further.


Imagine an imminent future where AI-powered, driverless cars are commonplace on Indian roads. AI, while not infallible, presents a different kind of challenge: the accidents that do occur will inevitably be traced back to errors in the AI itself. The central question, then, becomes: are we, as a society, ready to accept casualties caused by AI, even a number far lower than the current 1.5 lakh, if it signifies progress and increased safety in the long run? This moral conundrum demands careful reflection. The discomfort of attributing a large number of deaths to a single AI program is understandable; yet the fact that 1.5 lakh deaths are currently attributed to myriad human causes somehow offers a strange sense of comfort.

Pushing this further: Would we be willing to endure a transitional period where AI error rates match those of humans, knowing that with each update, the AI would improve and eventually surpass human performance?

One might logically argue that an AI should indeed exhibit superior accuracy compared to humans. But this introduces another quandary. If AI can surpass us in tasks such as driving, why then do we harbor anxieties about AI replacing us in other job sectors? Is it not a logical progression for technology to assume roles where it can outperform us in terms of efficiency and safety?

As a society, we are faced with this paradox. While the efficiency of AI appears desirable, the potential displacement of human roles is disconcerting.

For fans of Asimov, it's worth noting that his Three Laws of Robotics, introduced in his 1942 short story "Runaround", before the advent of modern computers, are surprisingly relevant today:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws were Asimov's early exploration of potential human-machine conflicts. Philosophy is timeless; its logic and thought processes remain the same, only the context changes, and we can apply lessons learned from past contexts to current situations. As we stand on the cusp of another technological revolution, the AI revolution, the ethical and philosophical questions surrounding AI will continue to shape our future. Many debates will emerge, but at the heart of them all are questions such as "Should human intelligence and artificial intelligence be held to the same standards?" and "Are we, as a race, willing to improve artificial intelligence by trial and error?"

We had two excellent speakers (Mr. Neetan Chopra and Mr. Nitin Gupta) on our campus just yesterday, and both of them, quite independently, spoke about how generative AI will play a greater role in business delivery in the years to come. Both had some impressive ideas to share, and I felt this could kick-start conversations about AI within our classrooms. I am also trying to incorporate some of the more recent research on AI into our consumer behavior course (in addition to the current course outline). Let's see how that turns out!

Have a great weekend!