AI Camp: Self-Driving Cars, Movie Recommendations and the Possible End of Humanity

Loren Davie · Published in Anti Patter · 3 min read · Jul 13, 2016

Yesterday I had the pleasure of attending AI Camp at the United Nations. Part of the OpenCamps series, AI Camp brought together a fascinating collection of brilliant people and covered topics technical, commercial, and philosophical. This world, which goes by many names (Artificial Intelligence, Machine Learning, Data Science, Recommendation Systems, etc.), represents a confluence of science, technology, ethics, governance, and commerce unlike any I’ve seen before. I know this sounds breathless, but I really believe that AI is one of the most important things humanity is working on right now, and that it will profoundly change society going forward.

The day opened with Chris Welty, a research scientist at Google and part of the original IBM Watson team (you may remember the computer that won at Jeopardy). Interestingly, Watson was trained on comprehensive Jeopardy question-and-answer datasets posted online by fans of the show. Without the Internet and obsessive Jeopardy fans, Watson’s victory wouldn’t have been possible. Welty drove home the point that these systems aren’t programmed so much as they are trained with large datasets. The omnipresence of data, and the Internet’s capacity to transmit it, is the engine that pushes AI forward.
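To make the “trained, not programmed” distinction concrete, here is a minimal sketch of my own (not anything from Watson’s actual pipeline, which was vastly more sophisticated): a toy question classifier that learns its categories from labeled examples rather than from hand-written rules. The clues and category names are invented for illustration.

```python
# Illustrative sketch only: the clue/category data below is invented, and this
# is nothing like Watson's real architecture. The point is that no rule for
# any category is written by hand; the model induces patterns from examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clues = [
    "This Italian city is home to the Colosseum",
    "He wrote 'Hamlet' and 'Macbeth'",
    "This planet is known as the Red Planet",
    "She painted 'Self-Portrait with Thorn Necklace'",
]
categories = ["geography", "literature", "science", "art"]

# Train: turn text into features, then fit a classifier on the examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(clues, categories)

# The model generalizes to a clue it has never seen.
print(model.predict(["This playwright penned 'Othello'"]))  # likely 'literature'
```

Scale the toy data up to the fan-compiled archives Welty described and you get the basic shape of the approach: more and better examples, not more hand-written logic.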

Following Welty, I gave a talk about CAVE Language. My slides are here. It was a somewhat expanded version of the talk I gave at World Information Architecture Day in February, introducing CAVE Language and describing its applicability to AI-powered contextual applications.

Some other highlights that stood out for me: Oliver Christie on “How big a threat is artificial intelligence to humanity?” While narrow artificial intelligence systems, such as Netflix’s recommendations, are unlikely to pose an existential threat anytime soon, as AI becomes more pervasive, capable, and powerful, we should perhaps start thinking about it a bit like the way we think about nuclear weapons. Christie pointed to the Future of Life Institute’s page on AI, which outlines some of the benefits and risks of this transformative technology.

Also a standout was Meetup.com’s Evan Estola, who gave a talk on how recommendation systems can go wrong. He discussed how, when the Ashley Madison data dump was released last year, Schenectady, NY was shown as the second most common hometown of the members, based on the zip codes they had entered into the system. The catch? Schenectady’s zip code is ‘12345’, which is what people tend to enter when they don’t want to reveal their real zip code. Estola also discussed studies revealing that women (with otherwise identical browsing histories) were less likely to be shown Google ads for jobs in the $200,000+ range, and that users with “African American-sounding” names were more likely to be shown ads for bail bondsmen, criminal lawyers, and the like. No one ever explicitly programmed this discriminatory behavior into these systems, but the training data they used carried a social prejudice along with it. It is a problem, said Estola, that can only be countered by human vigilance and intervention.
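The Schenectady anecdote is easy to reproduce in miniature. Here is a small sketch (with invented data) of how a placeholder value like ‘12345’ can dominate an aggregate statistic, and how filtering known sentinels before aggregating changes the answer:

```python
# Minimal sketch with invented data, in the spirit of the '12345' anecdote:
# a junk default value wins the popularity contest until it is filtered out.
from collections import Counter

zip_codes = ["10001", "12345", "90210", "12345", "12345", "10001", "60614"]

print(Counter(zip_codes).most_common(2))
# [('12345', 3), ('10001', 2)]  -> the placeholder looks like the top hometown

# Drop known sentinel/placeholder values before computing the statistic.
SENTINELS = {"12345", "00000", "99999"}
cleaned = [z for z in zip_codes if z not in SENTINELS]
print(Counter(cleaned).most_common(2))
# [('10001', 2), ('90210', 1)]  -> a more honest ranking
```

The sentinel set here is my own guess at plausible junk values; the real lesson, per Estola, is that someone has to notice the anomaly in the first place.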

Finally, I’d like to mention Professor Bud Mishra from NYU. Although highly technical, with a background in artificial intelligence, data science, and biology (amongst other things), Mishra gave a talk that was almost entirely philosophical. He asked the questions that the AI community needs to consider, such as “Should we make systems that are capable of deception?” It is this kind of introspection, a deeply pragmatic application of the philosophical, that has really begun to endear the AI community to me.
