What's wrong with the AI alphabet soup?
By Rich Heimann
AI is frequently explained using the categories artificial narrow intelligence (ANI), artificial general intelligence (AGI), and artificial superintelligence (ASI). Although this conceptual framework provides nothing of real value, it finds its way into many discussions. If you are unfamiliar with these categories, consider yourself lucky and move on to another, more consequential article. If you are not so lucky, I invite you to keep reading.
First and foremost, bemoaning categorizations, as I am about to do, has limited value, because similarity and difference depend on how we choose to classify things. The Ugly Duckling Theorem, for example, shows that a swan and a duckling can be made to look identical if we are free to choose the properties used to compare them. Without prior knowledge about which differences matter, all differences are equally meaningless. Alas, this article will unpack these suspicious categories from a business perspective.
Read the full article on TechTalks.