AI, Social Media and Democratic Challenges: Critical Thinking and Values-Based Leadership Seminar

Participants from diverse backgrounds joined international experts from Singularity University, Stuff Limited and Victoria University of Wellington for our most recent seminar on AI, Social Media and Democratic Challenges. The seminar was a clear demonstration that Aspen Institute seminars are not about delivering simple answers or a set of neat conclusions at the end.

With the complex and interconnected topics of AI, social media and democratic challenges on the table, participants engaged in an open and disciplined discussion, enabling them to discover varying perspectives on somewhat polarised issues.

Moderators Neil Jacobstein (Chair, AI & Robotics, Singularity University), Sinead Boucher (CEO and owner, Stuff Limited) and Miriam Lips (Professor of Digital Government, Victoria University of Wellington) posed insightful questions drawn from current readings on the topic, chosen to form the basis for a 360-degree discussion. The questions challenged participants’ opinions on the subjects, including the impact and implications of large language models (e.g. ChatGPT), whether AI is a threat to democracy, and the impact of policy and regulation on the innovation of AI technologies.

While there were no 'neat conclusions', the seminar highlighted that there are ways to accept differences and work towards common understanding and potential solutions. James O'Toole's Executive Compass, referred to in the seminar readings, provides an effective framework for facilitating productive conversations. The four poles of the Compass represent the values of liberty, efficiency, community and equality, which O’Toole believes form the basis of a good society. People may favour a particular region between the poles of the Compass, but most prefer a mix of values. While there are no correct positions, the Compass highlights how leaders need to balance these values to make good decisions that benefit and represent everyone. Many of our seminar participants expressed the need for a mixed approach, representative of the different poles of O'Toole's Compass.

Overall, participants recognised the need for personal ‘agency’: getting involved in creating policy, however difficult, to mitigate downside risks while still encouraging innovation, given that the rate of AI development is increasing exponentially. As one participant observed,

“I'm concerned we have a false sense of security”.

Here is what some other participants had to say after the seminar:

“Thanks for organising the interesting discussion on AI over the last couple of days, the papers were very good I thought – as was the discussion.” Catherine Beard, Director, Advocacy, BusinessNZ. 

“I had a very enjoyable time participating in this event and hearing discussions from many other industry members. I really appreciate this opportunity from the Aspen Institute! Thank you.” Eric Shen, Biomedical Engineering student, Auckland University.

 “Thank you for having me for the Seminar, it was a great experience.”  Kieran Madden, Director, Maxim Institute. 

“I thought it was a very good seminar with some first rate participants ... I was pleased to participate and learned a great deal.” Sir Maarten Wevers, Chair, Aspen Institute New Zealand. 


Below is the material prepared by the moderators and sent to participants in advance.

Introduction

This seminar is the eighth in the Aspen Institute New Zealand series on Critical Thinking and Values-Based Leadership. We anticipate continuing the series with a set of challenging topics, including Conservation, Education, Public Health, Resilient Emergency Response, and Polarization and Civility, among others.

Why focus on technology challenges to democracy? Recent polls have indicated that, given the meanness, gridlock, controversy, and violence associated with democracy as practiced (in the US at least), many young people are not convinced that democracy is the best system of government, or even that it is worth protecting. Perhaps a dictatorship with a strong, visionary, charismatic leader might be better? Winston Churchill famously said: “democracy is the worst form of government – except for all the others that have been tried.” Weak civics and history education around the world is only one of many challenges to democracy. Even if people were fully informed of the lessons of history and were convinced that the messiness of democracy is worth enduring and embracing, we now face a new combination of social media and AI technologies that represents an unprecedented challenge to democracy and social order. These technologies could lead to disaster, to human flourishing, or plausibly to some of both.

What distinguishes this series is that it is inherently interdisciplinary. While many programs claim interdisciplinary approaches, they mostly stay inside relatively narrow frameworks - either social or technical. Discussions on topics around social media and technology tend to devolve into partisan squabbling, with each "side" attempting to score points rather than acknowledge common ground. These conflicts are often fueled by the AI algorithms social media platforms use to select for extreme positions, which generate strong emotional responses that drive "user engagement" - clicks that can be monetized through targeted advertising.

In one of our readings, Reid Hoffman asks: "Is there a future where the massive proliferation of robots ushers in a new era of human flourishing, not human marginalization? Where AI-driven research helps us safely harness the power of nuclear fusion in time to help avert the worst consequences of climate change? It’s only natural to peer into the dark unknown and ask what could possibly go wrong. It’s equally necessary—and more essentially human—to do so and envision what could possibly go right."

The practice of democratic governance does not come naturally or easily. It is messy and more convenient to delegate to others. After all, "we the people" are busy. However, once we give up control, it is difficult, sometimes impossible, to get it back. If we are going to trust our representatives and technologies, we need to be able to verify that their actions on our behalf are worthy of our trust.

AI-based technologies are often embedded in news, search, and other services by corporations that are attempting to maximize profit - not human flourishing. The algorithms are often described as black boxes with weak or nonexistent explanatory capabilities; even experts are not always sure what these systems are doing. Democracy with powerful, innovative technologies is not a "set it and forget it" proposition.

Critical thinking about AI, social media and democracy is about systematically avoiding the bugs and biases that permeate ideology-driven systems of thought. It attempts, however imperfectly, to substitute evidence and rationality for unexamined assumptions and closed-minded evaluations. Both humans and machines use models to represent the world around them. When these models are deep, informed, and nuanced, the resulting outputs tend to reflect that. When the models are shallow, unidimensional, randomly informed, or systematically misinformed, the resulting outputs reflect that as well. People can disagree strongly about the means to achieve common human values, but if there seem to be no common values to serve as ends, we will have other problems.

Aspen thought leader and moderator Jim O'Toole observed that when polarized people talk past each other, it is often because they are emphasizing only one subset of values in a whole system of values that they all actually care about in different measure. He referred to an Executive Compass with legitimate tensions, or value tradeoffs, between poles representing economic efficiency, community (the common good), liberty (freedom), and equality.

Few people who emphasize the economic efficiency that comes with competition and capitalism are willing to completely destroy our community or environment in the process. Those who emphasize social equality are rarely in favor of trashing all individual freedoms to achieve their goals. People may well favor a particular region between the poles in this Compass, but most want a mix of values, for example wealth opportunities and a clean environment. Others may fight for social equality but also want to preserve essential individual freedoms. There are no final or correct positions here. However, rather than systematically drive people to angry extremes, there are ways to accept the differences and work towards common understanding and potential solutions.

The Aspen Institute has sponsored over 60 years of productive and civil dialog about the necessary and nuanced trade-offs between these values. Aspen Institute Seminars are not about delivering simple answers or a set of neat conclusions at the end. Rather, they are an opportunity to read relevant materials for common grounding, engage in open and disciplined inquiry, and discover a 360-degree perspective on currently polarized issues. This requires thoughtful deliberation about the values underlying these differences. You may be challenged to respect, honour, and work with people who seem to be on the "other side" of the issues you are most passionate about. We need to take the edge off our polarized political squabbling and examine some foundations for effective solutions to our common problems.

The seminar takes place in three 2.5-hour sessions over two days.

Session One: Evaluating Large Language Models (e.g., ChatGPT) - Capabilities, Adoption, Social Risks and Benefits

There are many types of AI, but this session focuses on the unprecedented explosion in AI development and the adoption of large language models like OpenAI's ChatGPT. What are large language models? How fast are they developing worldwide? How quickly are they being adopted? What were the initial concerns about risks, and have they been solved or not? Do they actually know what they are talking about? What hallmarks of human intelligence are they missing? Will they likely take our jobs and/or enhance them? Is AI's influence in society likely to grow? What could possibly go right - or wrong? Which values need to be balanced in the mix?

Session Two: The Current State of Independent Journalism and Democracy in the Age of Social Media and AI

Independent journalism and democracy - are they just a quaint preference, or do they actually matter? What elements of independent journalism are already affected by social media and AI? What does it mean to manage a significant media property in the age of social media and AI, with its mix of benefits and threats?

Session Three: Democracy, Digital Media, and AI Policy Tradeoffs

What are the possibilities of digital governance? What are the tradeoffs between the freedom to innovate and the need to regulate? What are we trying to do - avoid risks or capture competitive opportunities? If both, how? What have other jurisdictions, such as the EU and the US, done so far? What actions should NZ take - and what might happen if we do nothing?

Each reading in the seminar Table of Contents is worthy of your attention. Some may be a bit long or complex, and it is okay to skim parts of them if you feel the need. However, we will refer to them specifically in each session. We will be asking you first about what the readings say, and then what you think about them. None of the articles included in this seminar's readings is offered as final or gospel, but rather each serves as a common point of departure in a lively and informative dialog.

Readings - Table of Contents

INTRODUCTION

SESSION ONE: EVALUATING LARGE LANGUAGE MODELS (e.g., ChatGPT) - CAPABILITIES, ADOPTION, SOCIAL RISKS AND BENEFITS

What Is ChatGPT? And Will It Steal Our Jobs? Adam Smith

Yes, This Time It’s Different, ChatGPT And Automation Come To Knowledge Work, Mark P. Mills

Be Very Scared Of AI + Social Media In Politics, Carlos Santamaria

ChatGPT Is Just A Taste Of A “Monster” GPT-4, Matthias Bastian

Technology Makes Us More Human, Reid Hoffman

SESSION TWO: THE CURRENT STATE OF INDEPENDENT JOURNALISM AND DEMOCRACY IN THE AGE OF SOCIAL MEDIA AND AI

This Newspaper Doesn’t Exist: How ChatGPT Can Launch Fake News Sites In Minutes, Alex Mahadevan

How ChatGPT Hijacks Democracy, Nathan E. Sanders and Bruce Schneier

Dangerous Speech, Misogyny, And Democracy: A Review Of The Impacts Of Dangerous Speech Since The End Of The Parliament Protest, Kayli Taylor, Kate Hannah, Dr Sanjana Hattotuwa

Spirals Of Delusion: How AI Distorts Decision-Making And Makes Dictators More Dangerous, Henry Farrell, Abraham Newman, and Jeremy Wallace

There's No Going Back On A.I.: 'The Genie Is Out Of The Bottle', Daniel Howley

We Need To Preserve American Democracy. Here’s How To Do It, Richard Haass

SESSION THREE: DEMOCRACY, DIGITAL MEDIA, AND AI POLICY TRADE-OFFS

Can The U.S. And Europe Agree On Rules For AI? Marc Rotenberg, Merve Hickok

EU, US Step Up AI Cooperation Amid Policy Crunchtime, Luca Bertuzzi

Social Media Companies Promise To Reduce Harmful Content In New Zealand, Eva Corlett

Platforms Are Testing Self-Regulation In New Zealand. It Needs A Lot Of Work, Curtis Barnes, Tom Barraclough, Allyn Robins

Ten Principles For Regulation That Does Not Harm AI Innovation, Daniel Castro
