
I’m on the Meta oversight board. We need AI protections now | Suzanne Nossel


The speed with which AI is transforming our lives is head-spinning. Unlike in previous technological revolutions – radio, nuclear fission or the internet – governments are not leading the way. We know that AI can be dangerous; chatbots advise teens on suicide and may soon be capable of instructing users on how to create biological weapons. Yet there is no equivalent of the Food and Drug Administration testing new models for safety before public release. Unlike in the nuclear industry, companies often don’t have to disclose dangerous breaches or accidents. The tech industry’s lobbying muscle, Washington’s paralyzing polarization, and the sheer complexity of such a potent, fast-moving technology have kept federal regulation at bay. European officials are facing pushback against rules that some claim hobble the continent’s competitiveness. Although several US states are piloting AI laws, they form a tentative patchwork, and Donald Trump has attempted to render them invalid.

The heads of AI platforms like OpenAI’s ChatGPT and Google’s Gemini say they care about safety. But owning the future of AI means pouring billions into models that not even their creators fully understand, and making choices – like adding ads, or developing the capabilities that the Pentagon is now seeking from Anthropic – that raise risk. Anthropic, which styles itself as the most conscientious frontier AI company, says its model is trained to “imagine how a thoughtful senior Anthropic employee” would weigh helpfulness against possible harm. The directive echoes criticisms leveled years ago at Silicon Valley companies that shaped the lives of users worldwide from insular boardrooms. Consumers don’t believe they are in good hands: fully 77% of Americans surveyed last year think AI could pose a threat to humanity.

We are not stuck between the elusive hope of robust government regulation and having the most powerful companies in history police themselves. At least until legislators act, independent oversight offers the potential to adjudicate between AI’s potential and its perils. By embracing independent oversight, AI companies can demonstrate that they are serious enough about public trust to be willing to fight for it.

The logic behind independent oversight is straightforward. No matter the good intentions of corporate executives, their duties to shareholders and investors shape how they approach trade-offs between cost and safety, incentivizing revenue and profits. While long-term considerations of corporate reputation, customer loyalty and ethics can act as speed bumps, winning the AI race demands an appetite for risk. Belated reckonings with how social media could fuel killings, throw elections and impair youth mental health illustrate how the intoxicating power of technology can obscure flashing warning signals.

Independent oversight of AI offers the potential to surface, analyze and address its risks, giving advocates and communities a bit more control over how these technologies remake society. Social media provides an example. In 2020, bruised by accusations it helped fuel the Rohingya crisis in Myanmar, Meta (then Facebook) created an oversight board, hoping to get the company out of the hot seat. Early the following year the company adopted a policy committing it to follow human rights law. While the board, now five years old, has fallen short of what some people hoped might serve as a “supreme court of Facebook”, its record offers key lessons as to the prospects for effective independent oversight of AI, and why it matters.

Oversight demands diverse perspectives. Like other frontier AI companies, Meta has users on every populated continent. Deciding what they can and cannot post from the safety of a Menlo Park campus left blind spots and stoked resentments. The oversight board’s 21 members bring broad cultural and professional expertise to the adjudication of sensitive questions of content moderation (such as whether a violent video should be sharable as news or removed as an affront to the victim’s dignity). The board, with members who have lived in more than 27 countries, includes conservatives and liberals, journalists, legal scholars, a former prime minister of Denmark and a Nobel peace prize laureate.

The oversight board uses Meta’s own “community standards” to assess whether posts violate rules including prohibitions against bullying or support for terrorists. The board holds Meta to its vow to uphold international human rights law, including Article 19 of the International Covenant on Civil and Political Rights, which enshrines freedom of expression. AI companies should make the same commitment and establish oversight to hold them to it. Unlike the first amendment in the US or the European Union’s “right to be forgotten” online, human rights law offers a common currency across borders. Its norms provide methods of reasoning to guide decisions on AI, such as whether a bot’s refusal to answer a question unjustifiably denies a user’s right to information, or whether the repurposing of user data violates privacy rights.

Accessibility, consultation and transparency are key. The oversight board accepts appeals from the public, announces the cases it chooses to review, invites public comments, and convenes sessions with experts and relevant communities. It has issued more than 200 decisions in detailed written opinions that have been cited by courts around the world.

A voluntary oversight body is only as strong as the powers vested in it by its originating company. While the oversight board would like broader powers, it has given Meta credit for going well beyond the lightweight advisory councils that other tech players have periodically convened and dissolved. Meta’s oversight board has jurisdiction to decide whether a specific piece of content stays up or comes down, though using that authority over individual posts can feel like fighting a wildfire by blowing out embers. Its more consequential impact lies in choosing emblematic cases of errant content, offering public reasoning for its decisions, and issuing recommendations to which Meta must respond. As of December, Meta had implemented 75% of the board’s more than 300 recommendations, leading to significant changes for billions of users.

These include notifying users of which policy they are alleged to have violated when content disappears, ensuring that rhetorical taunts and satire don’t get removed as threats, and ensuring that the company surges resources in crises like natural disasters and armed conflict. The board also issues detailed advisory opinions on larger policy issues, such as Meta’s extension of leniency for policy violations by high-profile posters, or how much Covid-related misinformation should be removed as the pandemic waned. Although the board operates independently in making its decisions and recommendations, it relies on Meta for crucial information, such as whether specific content determinations are made by human beings or by automation, and what precisely went wrong when content was mistakenly removed. AI companies will have to offer at least as much visibility for oversight to have any meaning.

As ever, money matters. Meta periodically puts the oversight board’s funding in a trust so that it cannot be cut off overnight. But more diversified and assured resources would enhance the board’s independence. Oversight of cutting-edge tech costs money. It requires funding for an expert staff to support analysis and decision-making and consultants who bring specific cultural and linguistic expertise. Given the hundreds of billions being invested in AI, however, the price of even robust oversight is negligible.

AI is taking over our classrooms, colleges and corporations. Independent oversight is the least AI companies can do to make sure that, wittingly or not, they do not take over our rights as well.


