In the media, in the work canteen, in PowerPoint presentations, in investment pitches, on LinkedIn, in Facebook posts: you’ve heard the term bandied around in every possible location, “artificial intelligence” or “AI”. A few years ago, in academic circles, “AI” was considered a slightly dirty word, synonymous with the over-optimistic attempts of the 1960s and 70s to build intelligent programs on top of complex systems for symbol manipulation. Famously, these attempts failed spectacularly and brought on the so-called “AI winter”: for decades there was neither money nor serious research devoted to building “intelligent” systems. Only in 2012–13, when a very different approach based on so-called “deep learning” achieved results in computer vision approaching human performance, did it become permissible to talk about AI without sounding old-fashioned, crazy, or both. Since then the business community has adopted the term lock, stock and barrel; startups spring up daily with names such as “XXX-AI”, “Deep-XXX”, “XXX-Mind”, “Smart-XXX”, or some combination thereof. Yet as a person involved in technical work, I still regularly hear deep-learning practitioners, developers and computer scientists grumble about the over-use of the term “AI”, and serious technical experts tend to regard the term with some skepticism. …

Read the full article on Medium.com.