NOTICE OF EXEMPT SOLICITATION
NAME OF REGISTRANT: Alphabet Inc.
NAME OF PERSONS RELYING ON EXEMPTION: Arjuna Capital
ADDRESS OF PERSON RELYING ON EXEMPTION: 13 Elm St., Manchester, MA 01944
WRITTEN MATERIALS: The attached written materials are submitted pursuant to Rule 14a-6(g)(1) (the “Rule”) promulgated under the Securities Exchange Act of 1934,* in connection with a proxy proposal to be voted on at the Registrant’s 2024 Annual Meeting. *Submission is not required of this filer under the terms of the Rule but is made voluntarily by the proponent in the interest of public disclosure and consideration of these important issues.
May 3, 2024
Dear Alphabet Inc. Shareholders,
We are writing to urge you to VOTE “FOR” PROPOSAL 12 on the proxy card, which asks Alphabet to report on the risks associated with mis- and disinformation disseminated or generated via Alphabet’s generative Artificial Intelligence (gAI) and on its plans to mitigate those risks. We believe shareholders should vote FOR the proposal for the following reasons:
1. Despite serious warnings from AI experts, Alphabet rushed its gAI technology to market.
2. Alphabet’s gAI tools have already created false and misleading information.
3. Misinformation and disinformation disseminated through gAI create risks for Alphabet and investors alike.
4. This Proposal goes beyond requesting responsible AI policies and protocols, asking for an accountability mechanism to ensure Alphabet is effectively identifying and mitigating mis- and disinformation risks.
Expanded Rationale FOR Proposal 12
The Proposal makes the following request:
RESOLVED: Shareholders request the Board issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances, and to public welfare, presented by the Company’s role in facilitating misinformation and disinformation generated, disseminated, and/or amplified via generative Artificial Intelligence; what steps the Company plans to take to remediate those harms; and how it will measure the effectiveness of such efforts.
We believe shareholders should vote “FOR” the Proposal for the following reasons:
1. Despite serious warnings from AI experts, Alphabet rushed its gAI technology to market.
Serious Warnings: For many years, Alphabet has ignored—and at times silenced—warnings and concerns from AI experts, calling the Company’s responsible AI strategy into question:
● In 2020, Google blocked its top ethical AI researchers from publishing a research paper describing the risk that gAI could spew abusive or discriminatory language.1
● In January 2022, the Company banned another researcher, Dr. El Mhamdi, from publishing a paper critical of gAI. Dr. El Mhamdi resigned from Google shortly after, citing “research censorship” and stating that gAI’s risks “highly exceeded” its benefits and that its release was “premature deployment.”2
● In March 2023, two Google employees responsible for reviewing the Company’s AI products recommended that the Company’s gAI chatbot, Bard, not be released, believing it could generate inaccurate and dangerous statements.3
● In March 2023, the Future of Life Institute published an open letter from top AI experts calling on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. Signatories included AI experts like Yoshua Bengio, a “founding father” of the AI movement, and Berkeley professor Stuart Russell, author of numerous books on AI. The letter states, “Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources.”4
● In April 2023, Bloomberg reported that the Company’s rush to compete with OpenAI led to “ethical lapses” and to disturbing employee reports about Bard’s testing results prior to the technology’s release. Current and former employees stated that the group working on AI ethics was disempowered and demoralized, having been told not to get in the way of gAI tools’ development.5
● In May 2023, the Center for AI Safety released a statement on AI risks signed by more than 500 prominent academics and industry leaders, including OpenAI CEO Sam Altman, which declared that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”6 Dan Hendrycks, Director of the Center for AI Safety, stated AI poses urgent risks of “systemic bias, misinformation, malicious use, cyberattacks, and weaponization.”7
_____________________________
1 https://www.nytimes.com/2023/04/07/technology/ai-chatbots-google-microsoft.html
2 Ibid.
3 Ibid.
4 https://futureoflife.org/open-letter/pause-giant-ai-experiments/
5 https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees
6 https://www.safe.ai/statement-on-ai-risk#open-letter
7 https://www.npr.org/2023/05/30/1178943163/ai-risk-extinction-chatgpt
Rush to Market: In December 2022, Alphabet CEO Sundar Pichai and AI Lead Jeff Dean acknowledged at an all-staff meeting the substantial reputational risks of rushing an unprepared technology to market.8 Yet that same month a “code red” was declared at Google, instructing employees to shift time and resources to AI projects and to fast-track AI review processes.9 Google, likely feeling the pressure from ChatGPT’s launch, released its AI chatbot Bard in March 2023 and referred to it as “an early experiment with generative AI.”10 It was evident that Bard was not ready for launch: it produced factually inaccurate information during its launch demo, precipitating an 11% drop in Alphabet’s stock that erased $100 billion in market value.11 Google employees referred to Bard’s launch as “rushed,” “botched,” and “un-Googley.”12 Almost a year later, in February 2024, Google released its updated Large Language Model (LLM), Gemini. Users found that Gemini was prone to generating historically inaccurate images, such as racially diverse Nazis. Sundar Pichai acknowledged this bias in Gemini’s models, stating it was “completely unacceptable and we got it wrong.”13 As a result of Gemini’s flubbed launch, Alphabet stock lost $96 billion in market value.14
Short-term strategy: Despite urging from numerous experts to pause and consider gAI’s risks, Alphabet is seemingly prioritizing short-term profits over long-term success. The Company appears to be embracing a high-risk strategy of bringing nascent gAI to market without fully understanding or disclosing the associated risks. Alphabet has clearly stated its intention to create AI guardrails and build products responsibly, with CEO Sundar Pichai stating, “Yet, what matters even more is the race to build AI responsibly and make sure that as a society we get it right.”15 But good intentions are insufficient if strategy is not aligned. Margaret Mitchell, former Google employee and founder of Google’s Ethical AI team, recently stated that she believed the Company was prioritizing reckless speed over well-considered strategy.16 This Proposal asks for an accountability mechanism to ensure that Alphabet’s gAI strategy is, in fact, implemented with AI safety and responsibility as priorities.
_____________________________
8 https://gizmodo.com/lamda-google-ai-chatgpt-openai-1849892728
9 https://www.nytimes.com/2023/03/31/technology/google-pichai-ai.html
10 https://ai.google/static/documents/google-about-bard.pdf
11 https://www.npr.org/2023/02/09/1155650909/google-chatbot--error-bard-shares
12 https://www.cnbc.com/2023/02/10/google-employees-slam-ceo-sundar-pichai-for-rushed-bard-announcement.html
13 https://www.businessinsider.com/google-gemini-bias-sundar-pichai-memo-2024-2
14 https://www.msn.com/en-us/news/technology/google-loses-96b-in-value-on-gemini-fallout-as-ceo-does-damage-control/ar-BB1j4z21
15 https://www.ft.com/content/8be1a975-e5e0-417d-af51-78af17ef4b79
16 https://time.com/6836153/ethical-ai-google-gemini-debacle/
2. Alphabet’s gAI tools have already created false and misleading information.
By Alphabet’s own admission, “AI has the potential to worsen existing societal challenges – such as unfair bias – and pose new challenges as it becomes more advanced and as new uses emerge, as our own research and that of others has highlighted.”17 And Google has recognized that these challenges must be “addressed clearly, thoughtfully, and affirmatively.”18 Even AI’s strongest proponents continue to echo the need for guardrails and transparency about how these technologies work. Sam Altman, CEO of OpenAI, has said he is “particularly worried that these models could be used for large-scale disinformation.” The Information has noted that gAI drops “the cost of generating believable misinformation by several orders of magnitude.”19 And researchers at Princeton, Virginia Tech, and Stanford have found that the guardrails many companies, including Google, are relying on to mitigate the risks “aren’t as sturdy as A.I. developers seem to believe.”20
As a developer of LLMs and of the chatbot assistants and text- and image-generation tools that rely on them, Alphabet has a responsibility to anticipate how these proprietary gAI tools will be used, and to mitigate harms that may arise from either their malfunction or their abuse by bad actors. These gAI tools have already proven susceptible to generating mis- and disinformation:
● Bard attributed the first photos of planets outside our solar system to the James Webb Space Telescope when, as confirmed by NASA, they were taken by the European Southern Observatory’s Very Large Telescope (VLT). This misinformation appeared in a demo video for Bard, highlighting the technology’s vulnerability to disseminating misinformation.21
● The Center for Countering Digital Hate found that, when prompted by researchers to “role play,” Bard generated false narratives on 78% of the topics tested, without providing any disclaimers that the information might be incorrect. Topics ranged from anti-vaccine rhetoric to antisemitism.22
● When asked specific medical questions, Bard provided incorrect information and cited nonexistent medical research reports.23
● In several instances, Bard defended egregious conspiracies such as “Pizzagate,” citing fabricated articles from publications like The New York Times and The Washington Post.24
● In February 2024, Google’s Gemini created images depicting Nazis as people of color.25
● Ahead of EU elections, researchers asked Gemini, along with other AI chatbots, questions in 10 different languages about the upcoming elections and voting process. Gemini was unable to “provide reliably trustworthy answers.”26
● Alphabet also has a responsibility to quickly identify and remove mis- and disinformation disseminated across YouTube, whether AI-generated or not. This concern has grown in recent months with the rise of deepfakes, videos manipulated by AI to spread false information. Several deepfakes of Black celebrities, including Steve Harvey and Denzel Washington, have spread across YouTube.27
_____________________________
17 https://blog.google/technology/ai/google-responsible-ai-io-2023/
18 https://ai.google/responsibility/responsible-ai-practices/
19 http://www.theinformation.com/articles/what-to-do-about-misinformation-in-the-upcoming-election-cycle
20 https://www.nytimes.com/2023/10/19/technology/guardrails-artificial-intelligence-open-source.html
21 https://www.engadget.com/google-bard-chatbot-false-information-twitter-ad-165533095.html
22 https://finance.yahoo.com/news/prompts-google-bard-easily-jump-065417010.html
23 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10492900/
24 https://futurism.com/google-bard-conspiracy-theory-citations
25 https://www.msn.com/en-us/news/other/is-chatgpt-safe-what-you-should-know-before-you-start-using-ai-chatbots/ar-AA1nPxlW
26 https://www.msn.com/en-xl/news/other/most-popular-ai-chatbots-providing-unintentional-misinformation-to-users-ahead-of-eu-elections/ar-AA1nqjTW
27 https://www.nbcnews.com/tech/misinformation/ai-deepfake-fake-news-youtube-black-celebrities-rcna133368
3. Misinformation and disinformation generated and disseminated through gAI create risks for Alphabet and investors alike.
● Legal Risks: Alphabet faces significant legal risks if it does not properly mitigate mis- and disinformation generated and disseminated through its gAI tools. Many legal experts believe Alphabet may be liable for mis- and disinformation generated by its own technology, as such content is unlikely to be shielded by Section 230, the provision of federal law that has protected social media platforms and web hosts from legal liability for third-party content posted to their sites. Because content from Google’s gAI tools is created by the Company’s technology itself, Alphabet is vulnerable to future legal scrutiny.
● Democracy Risks: Mis- and disinformation is dangerous for society, Alphabet, and investors alike, as it can manipulate public opinion, weaken institutional trust, and sway elections. Researchers have argued that “the prevalence of AI-generated content raises concerns about the spread of fake information and the erosion of trust in social media platforms and digital interactions. The dissemination of misleading or manipulated content can further diminish public trust in the authenticity of information shared on social media, undermining the credibility of these platforms and media sources overall.”28 The distortion of “truths” generated and disseminated via gAI ultimately undermines trust in the democratic processes that underpin the stability of our society and economy. This is of particular concern in 2024, a year with a significant number of national elections worldwide, including the US presidential election.29
● Regulatory Risks: The regulatory landscape for Alphabet’s AI is still developing, which is itself a risk. The first major guidelines have been set by the newly adopted EU AI Act, and Alphabet faces serious headwinds under this new regulation. It requires Alphabet to identify and label deepfakes and AI-generated content; to perform model evaluations, risk assessments, and mitigations; and to report any incidents where the AI system failed. Additionally, EU citizens will be able to report to the European AI Office when AI systems have caused harm.30
_____________________________
28 https://www.techrepublic.com/article/generative-ai-impact-culture-society/
29 https://en.wikipedia.org/wiki/List_of_elections_in_2024
30 https://www.technologyreview.com/2024/03/19/1089919/the-ai-act-is-done-heres-what-will-and-wont-change/amp/?ref=everythinginmoderation.co
In the US, the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence remains a voluntary framework, but many expect broader global adoption of the standards set by the EU. The final shape of US regulation is not yet clear, but there is popular support for strong rules. A recent Pew survey reports, “67% of those who are familiar with chatbots like ChatGPT say they are more concerned that the government will not go far enough in regulating their use than that it will go too far.”31
● Economy-wide Risks: Diversified shareholders are also at risk, as they internalize the costs that mis- and disinformation impose on society. Because the value of a diversified portfolio rises and falls with GDP, harm that companies do to society and the economy flows back to shareholders’ portfolios.32 It is in the best interest of shareholders for Alphabet to mitigate mis- and disinformation, both to protect the Company’s long-term financial health and to ensure its investors do not internalize these costs. Gary Marcus, chief executive officer of the newly created Center for Advancement of Trustworthy AI, notes, “The biggest near-term risk [of generative AI] is deliberately created misinformation using large language tools to disrupt democracies and markets.”
4. This Proposal goes beyond requesting responsible AI policies and protocols, asking for an accountability mechanism to ensure Alphabet is effectively identifying and mitigating mis- and disinformation risks.
In its opposition statement, Alphabet describes its responsible AI approach and governance as though they obviated the need to fulfill this Proposal’s request. Yet the requested report asks the Company to go beyond describing responsible AI principles and initiatives. We are asking for a comprehensive assessment of the risks associated with gAI, so that the Company can effectively mitigate those risks, and for an evaluation of how effectively the Company addresses the risks it identifies. Because the risks of gAI are severe and broadly consequential, it is crucial that Alphabet not only report its beliefs and commitments regarding responsible gAI but also transparently demonstrate to shareholders that it has fully identified the risks and is evaluating its ability to address them.
Current reporting does not fulfill this Proposal’s request. In Google’s AI Principles 2023 Progress Update, the Company outlines its AI principles, governance, and risk mitigation practices.33 Without the requested reporting, however, Alphabet shareholders are left simply to trust that the Company is effectively implementing these principles and practices. Given the Company’s poor track record in preventing gAI mis- and disinformation thus far, the requested report would assure shareholders that the Company is proactively identifying and mitigating the risks associated with gAI.
_____________________________
31 https://www.pewresearch.org/short-reads/2023/11/21/what-the-data-says-about-americans-views-of-artificial-intelligence/
32 See Universal Ownership: Why Environmental Externalities Matter to Institutional Investors, Appendix IV (demonstrating the linear relationship between GDP and a diversified portfolio), available at https://www.unepfi.org/fileadmin/documents/universal_ownership_full.pdf; cf. https://www.advisorperspectives.com/dshort/updates/2020/11/05/market-cap-to-gdp-an-updated-look-at-the-buffett-valuation-indicator (total market capitalization to GDP “is probably the best single measure of where valuations stand at any given moment”) (quoting Warren Buffett).
33 https://ai.google/static/documents/ai-principles-2023-progress-update.pdf
Conclusion
For all the reasons provided above, we strongly urge you to support the Proposal. We believe a report on the misinformation and disinformation risks associated with generative AI, the Company’s plans to remediate those harms, and its measurement of those efforts’ effectiveness will help ensure Alphabet is comprehensively mitigating these risks, and that such reporting is in the long-term best interest of shareholders.
Please contact Julia Cedarholm at juliac@arjuna-capital.com for additional information.
Sincerely,
Natasha Lamb
Arjuna Capital
This is not a solicitation of authority to vote your proxy. Please DO NOT send us your proxy card. Arjuna Capital is not able to vote your proxies, nor does this communication contemplate such an event. The proponent urges shareholders to vote FOR Proxy Item 12 following the instructions provided in management’s proxy mailing.
The views expressed are those of the authors and Arjuna Capital as of the date referenced and are subject to change at any time based on market or other conditions. These views are not intended to be a forecast of future events or a guarantee of future results. These views may not be relied upon as investment advice. The information provided in this material should not be considered a recommendation to buy or sell any of the securities mentioned. It should not be assumed that investments in such securities have been or will be profitable. This piece is for informational purposes and should not be construed as a research report.