Stanford HAI 2019 Fall Conference

2019-10-31 · Pedro HC Avelar · Journalling

My professor Luis Lamb was invited to the Stanford HAI 2019 Fall Conference and kindly allowed me to accompany him. I'll give an overview of what I saw each day and what the highlights were for me. I'll probably edit these posts later to add information from my notes as well.

Day 1

Welcoming Session

The welcoming session was nicely done and really set the tone for the discussions ahead, calling out the importance of interdisciplinary research as AI moves forward.

AI and the Economy

This session was really good. Both speakers had convergent ideas, and their concerns rested mainly on how to distribute the benefits of AI to the broader population. An interesting point Erik Brynjolfsson made is that with "the great decoupling" there has also come a decoupling in life expectancy: it fell for poor people for the first time in a while. Susan also commented on how full autonomy of agents is difficult, and that in the near future AI will possibly augment humans instead of replacing them; designing systems to work with humans, however, is quite a different task from designing a system that is supposed to make decisions autonomously.

There was also talk about how companies need to completely change how they work to benefit from adopting AI systems: "You can't put a transistor on a mechanical watch and expect it to be more digital". The discussion also covered metrics of utility in the big tech age, since GDP no longer gives a good measure due to the free digital economy, as well as what contributes to well-being (things such as cost of living, commute time, etc.) and how to use AI to improve the availability of such commodities to everyone.

Also, since nowadays “we have more power to change the world”, “our values are more important”.

Regulating Big Tech

This session was more of a debate than a collaboration. Alphabet's representative Eric Schmidt raised that we don't yet understand how humans and AI will coexist, stated that image and audio are solved problems, and pointed out that deep scene understanding and techniques for low-resource problems are now the main topics of research. He also briefly commented on fake videos and misinformation, saying that we don't want a world with misinformation, and that something like China's surveillance system isn't exactly what we should want to be people's first impression of AI.

Following this, Marietje Schaake gave a truly impressive talk on the importance of regulating big tech companies, arguing that most of our current problems come not from over-regulation but from under-regulation, to counter the argument that governments shouldn't regulate big tech companies so as not to stifle innovation. She gave examples of how much power tech companies currently hold, commented on the internet's influence on democracy today, and discussed how to keep fair competition and human rights in the current race for AI power.

She also noted that while some tech companies fight against regulation on the grounds that they would regulate themselves, some of those companies have open data-sharing policies with Chinese companies (whose government is known for privacy practices unaligned with Western ideals of privacy), and that European policies should be seen as protection not only from big tech companies but also from governments. We need transparency of data and algorithms to be able to assess which biases may be contained in the systems we use, and trade secrets can't be perpetual shields against transparency on data.

On whether we are too early or too late to regulate AI, she argued that we should be proactive in doing so, especially as tech companies have lost trust with their self-regulation efforts. This follows the EU's precautionary principle, which is ridiculed by entities in the USA as "unscientific" but has protected people from unforeseen adverse effects before. We should see internet users not as products or consumers but as citizens, and the US, EU, Japanese and Indian governments should join to define a democratic governance model for AI. (No Latin America/Africa, though?)

(Other parts of the debate will be added)

The Coded Gaze: Dangers of Supremely White Data and Ignoring Intersectionality

An amazing and inspiring talk by Joy Buolamwini on how to fight against machine bias and how to strive for Algorithmic Justice, following her work on detecting and measuring bias against black people, in particular black women, in face recognition systems.

(Other parts of the talk will be added)

Ethical Product Development: The Paths Ahead

(Other parts of the breakout session will be added)

Conversation with Reid Hoffman and DJ Patil

(Other parts of the conversation will be added)

Day 2

Conversation with Michael Kratsios and Eileen Donahoe

What do we (really) want from AI?

Artificially Intelligent Associations: AI, Civil Society, and Human Rights

Day 3 — AI Index Roundtable

Measuring AI

Closing Remarks