India’s Karnataka state is
planning to send Google a legal notice after a search result showed Kannada, the state’s official language, as the “ugliest language in India.”
The search result quickly sparked outrage, with Karnataka minister Aravind Limbavali demanding an apology from the tech giant:
- “The Kannada language has a history of its own, having come into existence as many as 2,500 years ago. It has been the pride of Kannadigas all through these two-and-a-half millennia.”
Amidst the hoopla (and suggestions on Twitter to jail the person responsible for the search algorithm), Google issued a statement apologizing for the “misunderstanding.”
- “The way content is described on the internet can yield surprising results to specific queries. Naturally, these are not reflective of the opinions of Google.”
THE TAKEAWAY
This is nothing more than a tech company getting caught in an awkward position because an algorithm spat out a haywire result.
The more interesting question is how far liability reaches, if at all, when an algorithm causes physical, tangible harm.
Take healthtech, for example. In 2018, a major healthcare AI vendor’s internal documents were leaked, revealing that its algorithms had produced erroneous and unsafe cancer treatment recommendations in multiple cases.
The company determined that the flaws could be traced to “synthetic” data: engineers had trained the AI on hypothetical data rather than real-world cases.
Situations like this are murky - liability here is shaped by court-developed precedent, and we haven’t seen many cases testing AI liability yet. Does liability fall to the last person who touched an algorithm, the first person who developed it, or someone else entirely? We don’t know.
But, as more companies turn to AI (wisely or unwisely), these uncertainties will demand more clarification.