Notice of Exempt Solicitation

 

 

Name of Registrant: Alphabet, Inc.

Name of Person Relying on Exemption: Shareholder Association for Research and Education (SHARE)

Address of Person Relying on Exemption: Unit 401, 401 Richmond Street West, Toronto, ON M5V 3A8, Canada

Date: May 01, 2024

 

 

 

This is not a solicitation of authority to vote your proxy.

Please DO NOT send us your proxy card as it will not be accepted

 

 

 

 

Shareholder Proposal Number 13 Regarding Human Rights Impact Assessment of Targeted Ad Policies

 

We, the Proponents, urge shareholders to vote FOR Proposal Number 13 – Shareholder Proposal regarding a Human Rights Assessment of Targeted Ad Policies (the “Proposal”) – at the Alphabet, Inc. (“Alphabet”, “Google” or the “Company”) Shareholder Meeting on June 7, 2024.

 

The Proposal asks Alphabet’s Board of Directors to:

 

Publish an independent third-party Human Rights Impact Assessment, examining the actual and potential human rights impacts of Google’s artificial intelligence-driven targeted advertising policies and practices. This Assessment should be conducted at a reasonable cost; omit proprietary and confidential information, as well as information relevant to litigation or enforcement actions; and be published on the company’s website by June 1, 2025.

 

 

Summary

 

·      Advertising accounts for a significant portion of Alphabet’s revenue. Despite growing scrutiny from regulators, civil society organizations, and investors, we believe Alphabet has not implemented sufficient human rights safeguards to identify, assess, address, and mitigate actual or potential human rights risks that may stem from its Artificial Intelligence (“AI”)-driven targeted advertising practices.

 

·      Regulatory developments in the U.S. and the EU will impact Alphabet’s advertising practices, particularly as they relate to privacy, transparency, and accountability. Non-compliance with relevant laws may lead to significant regulatory, financial, and legal risks for the Company and its shareholders.

 

·      Alphabet has not demonstrated how it ensures alignment with its stated human rights commitment, which explicitly references the UN Guiding Principles on Business and Human Rights (“UNGPs”). Alphabet has a longstanding, public commitment endorsing the UNGPs. The UNGPs explicitly state that companies must conduct human rights due diligence on their products and services, particularly if the scale and scope of the impacts are likely to be important. As Alphabet’s targeted advertising practices have global ramifications and impacts, a third-party Human Rights Impact Assessment (“HRIA”) is the first step in this process.

 

·      As shareholders, the Proponents are requesting reassurance that the Company is living up to its public commitments around human rights and AI and is taking all necessary steps to identify and mitigate actual or potential risks that may stem from a core aspect of its business.

 

 

   
 

 

Since 2021, the Proponents1 have been engaging with Alphabet on AI-driven targeted advertising and the existing and potential risks that such technology may pose to the Company and its shareholders. In 2022, the initial Proponents2 filed a similar shareholder proposal at Alphabet. Despite the proposal’s wide support (47.30% support from Class A shareholders), Alphabet has given no visible indication that it intends to implement the requested assessment.

 

Google’s online advertising accounted for more than 75% of Alphabet’s revenue in 2023.3 Alphabet’s ad business, including Google Search, YouTube Ads, and Google Network, has grown significantly in recent years, reaching more than $237 billion in 2023.4 Algorithmic systems are deployed to deliver targeted advertisements, determining what users see. However, targeted advertising often results in, and exacerbates, systemic discrimination and other human rights violations. Alphabet itself recognizes that “evolving AI-related efforts may give rise to risks related to harmful content, inaccuracies, discrimination, intellectual property infringement or misappropriation, defamation, data privacy, cybersecurity, and other issues [...] [O]ur implementation of AI systems could subject us to competitive harm, regulatory action, legal liability (including under new and proposed legislation and regulations), new applications of existing data protection, privacy, intellectual property, and other laws, and brand or reputational harm. Some uses of AI will present ethical issues and may have broad effects on society.”5 Notably, Google’s current ad infrastructure is driven by third-party cookies, which enable other companies to track users across the internet by accumulating vast troves of personal and behavioral data on Google users. This may further expose Google to violations of user privacy.

 

The Company recognizes that “new and evolving products and services, including those that use AI, raise ethical, technological, legal, regulatory, and other challenges, which may negatively affect our brands and demand for our products and services.”6 Despite such risks, in its 2023 Annual Report, Alphabet confirmed that the Company is “expanding [its] investment in AI across the entire company. This includes generative AI and continuing to integrate AI capabilities into [its] products and services.”7 Although targeted advertising plays a significant role in Google’s business model, there are well-documented human rights risks associated with AI-driven targeted advertising. Yet in our view, Alphabet has not demonstrated a sufficiently robust and transparent due diligence system to identify, address, and prevent the adverse human rights impacts stemming from its AI-driven targeted advertising technology.

 

Google has previously published a summary of a third-party HRIA of a celebrity facial recognition algorithm.8 Its targeted ad systems, which affect billions, merit at least the same level of due diligence and public disclosure, particularly as Google’s peers9,10 develop new approaches to targeting advertisements.

 

_____________________________

1 The Proponents include The United Church of Canada Pension Plan represented by SHARE, CommonSpirit Health, and Mercy Investments.

2 At that time, the initial Proponents included The United Church of Canada Pension Plan represented by SHARE and CommonSpirit Health.

3 “2023 Annual Report,” Alphabet, April 26, 2024, p. 8, https://abc.xyz/assets/52/88/5de1d06943cebc569ee3aa3a6ded/goog023-alphabet-2023-annual-report-web-1.pdf

4 “Alphabet Inc. Form 10-K,” Alphabet Inc., January 31, 2024, p. 63, https://abc.xyz/assets/4b/01/aae7bef55a59851b0a2d983ef18f/596de1b094c32cf0592a08edfe84ae74.pdf.

5 “2023 Annual Report,” p. 13.

6 Ibid., p. 9.

7 “2023 Annual Report,” p. 8.

8 “Google Celebrity Recognition API Human Rights Assessment,” BSR, October 2019, https://services.google.com/fh/files/blogs/bsr-google-cr-api-hria-executive-summary.pdf.

9 “Updates to Detailed Targeting,” Meta Platforms, Inc., accessed April 29, 2024, https://www.facebook.com/business/help/458835214668072.

10 “Changes coming to Targeting,” TikTok, February 2024, https://ads.tiktok.com/help/article/changes-coming-to-targeting?lang=en.

 

   
 

 

As mentioned in Alphabet’s opposition statement, as part of Google’s AI Principles, the Company has committed to not design or deploy AI technologies “whose purpose contravenes widely accepted principles of international law and human rights.”11 However, Google has not demonstrated how it ensures alignment with this stated commitment. Shareholders need reassurance that the Company is living up to its public commitments around AI. Because third-party HRIAs are aligned with and grounded in international human rights law, conducting an HRIA would be wholly in line with Alphabet’s existing AI Principles.

 

1. Targeted advertising technologies can negatively impact human rights

 

Alphabet’s 2023 Annual Report confirms that the “unintended consequences, uses, or customization of [the Company’s] AI tools and systems may negatively affect human rights, privacy, employment, or other social concerns.”12

 

Targeted advertising is a form of online advertising that uses the traits, interests, and preferences of a consumer to display customized ads. Advertisers procure this information by tracking a person’s activity across the Internet,13 most notably through snippets of code known as third-party cookies. Companies and advertisers use cookies and other technological levers to algorithmically infer users’ interests. They can also acquire data through direct purchases, data-sharing agreements, and other contractual relationships that potentially put users’ human rights in jeopardy.14 Ads are predominantly delivered to consumers through automated auctions that factor in the advertiser’s targeting parameters. These bidding processes take place within seconds after a consumer clicks on a link.

 

As targeted advertising has become more widespread and sophisticated, consumers’ awareness of how these systems can compromise their privacy has grown.15 According to a 2023 report published by the Pew Research Center, “among those who’ve heard about AI, 70% have little to no trust in companies to make responsible decisions about how they use it in their products...[and] 81% say the information companies collect will be used in ways that people are not comfortable with.”16

 

Gender and Racial Discrimination

 

There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because AI systems are trained on data in which societal biases are embedded. For example, a 2022 study found that gender-neutral internet searches yield results that nonetheless produce “male-dominated” output.17 Pernicious errors in targeting can lead to algorithmic bias, in which automated systems create consistently unfair outcomes, such as privileging one group over another, often aggravating existing inequities.18

 

_____________________________

11 Sundar Pichai, “AI at Google: our principles,” Alphabet Inc., June 7, 2018, https://blog.google/technology/ai/ai-principles/.

12 “2023 Annual Report,” p. 13.

13 “What is targeted advertising?,” GCFGlobal, accessed April 29, 2024, https://edu.gcfglobal.org/en/thenow/what-is-targeted-advertising/1/.

14 Michelle Boorstein and Heather Kelly, “Catholic group spent millions on app data that tracked gay priests,” The Washington Post, March 9, 2023, https://www.washingtonpost.com/dc-md-va/2023/03/09/catholics-gay-priests-grindr-data-bishops/.

15 Leslie K. John, Tami Kim, and Kate Barasz, “Ads That Don’t Overstep,” Harvard Business Review, February 2018, https://hbr.org/2018/01/ads-that-dont-overstep.

16 Michelle Faverio, “Key findings about Americans and data privacy,” Pew Research Center, October 18, 2023, https://www.pewresearch.org/short-reads/2023/10/18/key-findings-about-americans-and-data-privacy/.

17 “Gender Bias in Search Algorithms Has Effect on Users, New Study Finds,” New York University, July 12, 2022, https://www.nyu.edu/about/news-publications/news/2022/july/gender-bias-in-search-algorithms-has-effect-on-users--new-study-.html.

18 Melba Newsome, “Biased Algorithms Exacerbate Racial Inequality in Health Care,” UC Berkeley, August 12, 2020, https://alumni.berkeley.edu/california-magazine/online/biased-algorithms-exacerbate-racial-inequality-health-care/.

 

   
 

 

Another study found that targeted ads contribute to larger systems of racial discrimination. In particular, the study found that such technology “often, and will likely continue to, discriminate, reproducing new and old forms of social and racial sorting within communities and society.”19

 

Machine algorithms can treat similarly situated people differently, and prevailing business models provide very little transparency about where personal information ends up.20 Research has highlighted numerous examples of algorithmic decision-making replicating and even amplifying human biases.21 Although the right to privacy is crucial to everyone, privacy violations have particularly negative impacts on demographic groups who are at a higher risk of exclusion.22

 

2. Google’s existing policies and practices are insufficient in identifying, addressing, and mitigating potential or existing human rights impacts

 

In Alphabet’s opposition statement, the Company argues that the 2023 Ads Safety Report, the Political content policy, and its personalized advertising policies are existing mechanisms that protect user privacy and safety. However, these policies and practices leave significant gaps unaddressed.

 

·2023 Ads Safety Report: The report does not provide clarity on which platforms generated the bad ads, what “inappropriate content” entails in the context of this reporting, and whether such ads could generate or have generated human rights harms, such as discrimination. While this report provides insight into how Google’s existing policies may be enforced, it does not provide shareholders with information on how those policies are preventing adverse human rights impacts.

 

·Political content policy: Google’s political content policies set requirements for political and election advertising based on the region. However, recent reporting from civil society organizations suggests that the policy is not effective. In April 2024, Access Now and Global Witness reported that Google’s current policies and practices for YouTube Ads may be insufficient in identifying ads that fuel election misinformation and political misrepresentation ahead of India’s general election.23 The report found that the review process for ads does not have the level of friction required to ensure effective review and to prevent impermissible ads from being published. According to Access Now and Global Witness, “the election season is underway in the largest democratic exercise on earth – and yet the video sharing and social media platform YouTube is failing to detect and restrict content designed to disenfranchise some voters and incite others to block particular groups from voting…YouTube has again shown its policy enforcement to be unreliable at best, negligent at worst.”24 The report’s authors say “the findings point to a growing divide between countries in the global south, where platforms often fail to prevent the spread of election disinformation, and countries in the global north where platforms have invested more resources.”25 As the report suggests, failure by the Company or its affiliates to effectively enforce its own political ads policies to prevent disinformation, combined with the reach enabled by ad targeting, may result in significant harms to human rights on a global scale.26 On April 9, 2024, over 200 civil society organizations, researchers, and journalists sent a letter to tech companies, including Google, urging them to combat AI-driven disinformation and to reinforce content moderation.27,28 One week later, the Global Coalition for Tech Justice, a group of over 160 civil society organizations, also called on tech companies, including Google, “to urgently adopt greater measures to safeguard people and elections amid rampant online disinformation and hate speech.”29,30

 

_____________________________

19 Ho-Chun Herbert Chang, Matt Bui, and Charlton McIlwain, “Targeted Ads and/as Racial Discrimination: Exploring Trends in New York City Ads for College Scholarships,” IEEE Computer Society (Sept. 2021): 12-13.

20 Arwa Mahdawi, “Targeted ads are one of the world's most destructive trends. Here's why,” The Guardian, November 5, 2019, https://www.theguardian.com/world/2019/nov/05/targeted-ads-fake-news-clickbait-surveillance-capitalism-data-mining-democracy.

21 James Manyika, Jake Silberg, and Brittany Presten, “What do we do about the biases in AI?,” Harvard Business Review, October 25, 2019, https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.

22 Samantha Lai and Brooke Tanner, “Examining the intersection of data privacy and civil rights,” Brookings Institution, July 18, 2022, https://www.brookings.edu/blog/techtank/2022/07/18/examining-the-intersection-of-data-privacy-and-civil-rights/.

23 “‘Votes will not be counted’: Indian election disinformation ads and YouTube,” Access Now and Global Witness, April 2, 2024, https://www.globalwitness.org/en/campaigns/digital-threats/votes-will-not-be-counted-indian-election-disinformation-ads-and-youtube/.

24 “‘Votes will not be counted’: Indian election disinformation ads and YouTube.”

25 Billy Perrigo, “Exclusive: YouTube Approved Ads Promoting Disinformation on India’s Election,” TIME, April 2, 2024, https://time.com/6961504/youtube-ads-disinformation-india-election/.

26 Perrigo, “YouTube Approved Ads Promoting Disinformation on India’s Election.”

27 “Letter to Tech Companies 2024,” April 9, 2024, https://www.freepress.net/sites/default/files/2024-04/coalition_letter_urging_tech_companies_to_strengthen_election_integrity_policies_final_april_9.pdf.

28 Yasmeen Serhan, “Exclusive: Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say,” TIME, April 16, 2024, https://time.com/6967334/ai-elections-disinformation-meta-tiktok/.

29 Serhan, “Tech Companies Are Failing to Keep Elections Safe, Rights Groups Say.”

30 “A hundred days into the elections megacycle and Tech Platforms are failing the biggest test of 2024,” Global Coalition for Tech Justice, April 16, 2024, https://yearofdemocracy.org/a-hundred-days-into-the-elections-megacycle-and-tech-platforms-are-failing-the-biggest-test-of-2024/.

 

   
 

 

·Personalized advertising policies in the U.S. and Canada: These policies aim to prohibit employment, housing, credit, and consumer finance advertisers from targeting or excluding ads based on gender, age, parental status, marital status, or zip code. Although Alphabet has other policies prohibiting personalization based on sensitive categories, such as race and ethnicity, its existing personalized advertising policies do not indicate whether employment, housing, credit, and consumer finance advertisers are prohibited from targeting or excluding ads based on race. In addition, these personalized advertising policies are limited to only four sectors, although other sectors, such as the tobacco sector,31 have historically targeted marketing towards Black, Latine, and Indigenous communities.32 As mentioned above, targeted ads can also exacerbate racial discrimination, which can lead to adverse human rights risks and impacts.

 

An independent third-party HRIA examining the actual and potential human rights impacts of Google’s AI-driven targeted advertising policies and practices will provide the Company and its shareholders with a more comprehensive analysis of potential or existing gaps of relevant policies, and recommendations on how to address them.

 

3. Failure to safeguard human rights exposes shareholders to material risks

 

3.1. Regulatory risks

 

There is growing consensus among civil society experts, academics, and policymakers that targeted advertising can lead to the erosion of human rights. Legislation in Europe33,34,35 and the U.S.36 is poised to severely restrict or even ban targeted ads37 largely due to concerns about underlying algorithms. Given the importance of advertising for Alphabet’s business model, the failure to implement and demonstrate effective human rights due diligence may expose shareholders to regulatory risks. Google has also confirmed that new or changing laws and regulations on the development, use, and provision of AI technologies and other digital products and services may subject the Company to regulatory action and legal liability.38

 

_____________________________

31 “Stopping menthol, saving lives. Ending Big Tobacco’s predatory marketing to Black communities,” Campaign for Tobacco-Free Kids, February 21, 2023, https://assets.tobaccofreekids.org/content/what_we_do/industry_watch/menthol-report/2021_02_tfk-menthol-report.pdf.

32 Mayuri Chandran and Kevin A. Schulman, “Racial disparities in healthcare and health,” Health Services Research 57, no. 2 (April 2022): 218-222.

33 “The Digital Services Act package,” European Commission, accessed April 29, 2024, https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package.

34 “AI Act,” European Commission, accessed April 29, 2024, https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

35 Relevant regulations include: Digital Markets Act; AI Act; Political Ads Regulation; ePrivacy Regulation; Platform Workers Directive; Regulation on child sexual abuse material.

36 “Year in Review: The Top 10 US Data Privacy Developments From 2023,” WilmerHale LLP, January 5, 2024, https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240105-year-in-review-the-top-10-us-data-privacy-developments-from-2023.

37 “Questions and answers on the Digital Services Act,” European Commission, February 23, 2024, https://ec.europa.eu/commission/presscorner/detail/en/QANDA_20_2348.

38 “2023 Annual Report,” p.16-17.

 

   
 

 

·Several pieces of legislation drafted in the U.S. Congress have focused on enforcing algorithmic accountability to better control targeted advertising.39 In January 2022, Congresswoman Eshoo (D-CA) introduced the Banning Surveillance Advertising Act,40 which would outright prevent advertising platforms from targeting individuals based on certain forms of personal information and behavioral data. In February 2023, President Joe Biden’s State of the Union address called for legislation to stop tech companies from collecting data on kids and teenagers.41
·At the beginning of 2023, only five states (California, Colorado, Virginia, Utah, and Connecticut) possessed comprehensive data privacy legislation. By the end of 2023, eight additional states had enacted their own comprehensive laws.42,43,44 These laws will give consumers more control over how their data is processed and stored, such as enabling them to opt out of the processing of personal data for targeted advertising purposes.
·In October 2023, the Biden Administration released the Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, setting out the Administration’s legal, regulatory, and policy stance toward the advancement and deployment of artificial intelligence within the United States.45
·On April 17, 2024, the American Privacy Rights Act (“APRA”) was brought before a House committee.46 This proposed legislation is designed to establish the first comprehensive federal data privacy law in the U.S.
·The Digital Services Act (“DSA”) prevents online platforms from using sensitive information, such as sexual orientation, race, and religion, for targeted ads.47 The DSA outlines obligations for Very Large Online Platforms (“VLOPs”) and Very Large Online Search Engines (“VLOSEs”). Alphabet’s platforms, including Google Maps, Google Play, Google Shopping, and YouTube, are considered VLOPs, and Google Search is considered a VLOSE. In other words, much of Google’s business is affected by the DSA. With greater regulatory scrutiny under the DSA, Google faces stricter requirements regarding transparency and accountability in its targeted ads operations. For non-compliance with orders from supervisory authorities, companies can face fines of up to 6% of their global annual turnover.48,49,50

 

_____________________________

39 H.R. 5596 - Justice Against Malicious Algorithms Act; S. 3572 / H.R. 6580 - Algorithmic Accountability Act of 2022; S. 2024 / H.R. 5951 - Filter Bubble Transparency Act; S. 3029 / H.R. 2154 - Protecting Americans from Dangerous Algorithms Act; S. 2918 / H.R. 5439 - Kids Internet Design and Safety Act; S. 1896 / H.R. 3611 - Algorithmic Justice and Online Platform Transparency Act; S. 3663 - Kids Online Safety Act; H.R. 6796 - Digital Services Oversight and Safety Act of 2022

40 S. 3520 / H.R. 6416 - Banning Surveillance Advertising Act; H.R. 3451 - Social Media DATA Act

41 Alfred Ng, “Biden calls for ban of online ads targeting children,” Politico, February 7, 2023, https://www.politico.com/news/2023/02/07/biden-calls-for-ban-of-online-ads-targeting-children-00081731.

42 “2023 Consumer Data Privacy Legislation,” National Conference of State Legislatures, September 28, 2023, https://www.ncsl.org/technology-and-communication/2023-consumer-data-privacy-legislation.

43 “Year in Review: The Top 10 US Data Privacy Developments From 2023.”

44 “Which States have consumer data privacy laws?,” Bloomberg Law, March 18, 2024, https://pro.bloomberglaw.com/insights/privacy/state-privacy-legislation-tracker/.

45 “Quick Take: Biden Administration Seeks to Shape Domestic and International Approach to AI Through Executive Order,” WilmerHale LLP, October 30, 2023, https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20231030-quick-take-biden-administration-seeks-to-shape-domestic-and-international-approach-to-ai-through-executive-order.

46 Lauren Feiner, “A real privacy law? House lawmakers are optimistic this time,” The Verge, April 17, 2024, https://www.theverge.com/2024/4/17/24133323/american-privacy-rights-act-house-lawmakers-legislative-hearing.

47 “Questions and answers on the Digital Services Act.”

48 “Questions and answers on the Digital Services Act.”

49 Jet Klokgieters, “Guest blog: General applicability of the Digital Services Act,” UCD Centre for Digital Policy, February 17, 2024, https://digitalpolicy.ie/general-applicability-of-the-digital-services-act/.

50 “The enforcement framework under the Digital Services Act,” European Commission, accessed April 29, 2024, https://digital-strategy.ec.europa.eu/en/policies/dsa-enforcement.

 

   
 

 

·In April 2024, the European Data Protection Board, the EU privacy watchdog, published an opinion on the “consent or pay” models adopted by large online platforms. The opinion stated that “if controllers choose to charge a fee for access to the ‘equivalent alternative’, controllers should consider also offering a further alternative, free of charge, without behavioural advertising.”51 This alone could cause a monumental shift in how consumers interact with ads.
·In March 2024, the European Parliament approved the Artificial Intelligence Act (“AI Act”), deemed the world’s first all-encompassing legal framework for AI. It establishes regulations across the EU concerning data quality, transparency, human oversight, and accountability.52 The AI Act was unanimously endorsed by all 27 member states.
·Since the AI Act applies to providers and developers of AI systems marketed or used within the EU, and given Google’s market share within the EU, Google’s business operations will be affected. The AI Act aims to prohibit AI practices that pose unacceptable risks, address risks specifically created by AI applications, and set clear requirements for AI systems used in high-risk applications.53 It prohibits companies from using AI to target specific demographic groups based on sensitive attributes such as race or religion. As Google’s targeted ads leverage AI to deliver personalized ads, Google needs to ensure that its products and services comply with the specific requirements of the EU AI Act.

 

As the DSA and the AI Act demonstrate, there is a growing global trend for governments to improve protection for users, establish powerful transparency and accountability frameworks, and ensure companies respect the fundamental rights of users online.

 

Non-compliance with, or violation of, relevant regulations may lead to significant financial and legal risk for the Company. For example, in 2019, Google and YouTube agreed to pay US$170 million in a settlement with the Federal Trade Commission (“FTC”) over alleged violations of the Children’s Online Privacy Protection Act Rule. According to the complaint filed by the FTC and the New York Attorney General, YouTube allegedly collected personal information from children without their parents’ consent to deliver targeted ads on child-directed channels.54

 

_____________________________

51 “Opinion 08/2024 on Valid Consent in the Context of Consent or Pay Models Implemented by Large Online Platforms,” European Data Protection Board, April 17, 2024, https://www.edpb.europa.eu/system/files/2024-04/edpb_opinion_202408_consentorpay_en.pdf.

52 “The European Parliament Adopts the AI Act,” WilmerHale LLP, March 14, 2024, https://www.wilmerhale.com/en/insights/blogs/wilmerhale-privacy-and-cybersecurity-law/20240314-the-european-parliament-adopts-the-ai-act.

53 “AI Act.”

54 “Google and YouTube Will Pay Record $170 Million for Alleged Violations of Children’s Privacy Law,” Federal Trade Commission, September 4, 2019, https://www.ftc.gov/news-events/news/press-releases/2019/09/google-youtube-will-pay-record-170-million-alleged-violations-childrens-privacy-law.

 

   
 

 

3.2. Legal risks

 

Alphabet’s failure to comply with laws aimed at protecting users’ rights, or to align with the requirements set by internationally recognized human rights standards, may expose the Company to material legal risks. As public scrutiny of privacy rights has increased in recent years, Alphabet has faced legal challenges over its data collection practices and policies. The Company acknowledges that “there are ongoing investigations and litigation in the U.S. and the EU, including those relating to our collection and use of location information and advertising practices, which could result in significant fines, judgments, and product changes.”55 For example:

 

·In 2024, “Google agreed to destroy billions of data records to settle a lawsuit claiming it secretly tracked the internet use of people who thought they were browsing privately.”56 Under the settlement, Google will update disclosures about what it collects in "private" browsing, a process that the Company has already begun. Google will also let Incognito users block third-party cookies for five years. The court said that “this settlement ensures real accountability and transparency from the world’s largest data collector and marks an important step toward improving and upholding our right to privacy on the internet.”57
·In 2022, a bipartisan group of Attorneys General from Texas, Indiana, Washington State, and the District of Columbia filed lawsuits against Google over “deceptive location tracking practices” invading users’ privacy.58 The Texas petition alleges that “Google has systematically misled, deceived, and withheld material facts from users in Texas about how their location is tracked and used and how to stop Google from monetizing their movements.”59,60 In three settlements, Google agreed to pay Indiana, Washington, and the District of Columbia $20 million, $39.9 million, and $9.5 million, respectively.61
·In 2022, a coalition of forty Attorneys General entered into a record $391.5 million settlement agreement with Google over its location tracking practices. The investigation found that “Google violated state consumer protection laws by misleading consumers about its location tracking practices in various ways since at least 2014.”62 Under the settlement, Google agreed to increase consumer transparency on how location data are tracked and how to opt out of location tracking. As part of the settlement, the Company will also limit its use and storage of certain types of location information.63

 

3.3. Reputational risk

 

As one of the world’s largest technology companies, Alphabet has an outsized influence on society. This status exposes the Company to significant scrutiny from the public as well as from governments, regulators, and lawmakers. In the past decade, Alphabet subsidiaries, including Google and YouTube, have been subject to high-profile controversies and criticisms over human rights-related issues, including data privacy and misinformation. These issues have resulted in regulatory scrutiny, public backlash, and negative media coverage, all of which can erode the Company’s reputation over the long run. Alphabet recognizes that “expectations relating to [environmental, social, and governance] considerations could expose [Alphabet] to potential liabilities [and] reputational harm.”64

 

_____________________________

55 “2023 Annual Report,” p. 78.

56 Jonathan Stempel, “Google to destroy browsing data to settle consumer privacy lawsuit,” Reuters, April 1, 2024, https://www.reuters.com/technology/google-destroy-browsing-data-settle-consumer-privacy-lawsuit-2024-04-01/.

57 “Google to destroy billions of private browsing records to settle lawsuit,” The Guardian, April 1, 2024, https://www.theguardian.com/technology/2024/apr/01/google-destroying-browsing-data-privacy-lawsuit.

58 “Google accused of ‘deceptive’ location tracking in fresh round of lawsuits,” The Guardian, January 25, 2022, https://www.theguardian.com/technology/2022/jan/24/google-sued-privacy-texas-district-of-columbia.

59 “The State of Texas, Plaintiff v. Google LLC, Plaintiff’s Original Petition,” Texas Attorney General, January 24, 2022, https://www.texasattorneygeneral.gov/sites/default/files/images/executive-management/Google%20Geolocation%20Original%20Petition-fm.pdf.

60 “AG Paxton Sues Google for Deceptively Tracking Users’ Location Without Consent,” Ken Paxton Attorney General of Texas, January 24, 2022, https://www.texasattorneygeneral.gov/news/releases/ag-paxton-sues-google-deceptively-tracking-users-location-without-consent.

61 Andrew Serwin and Matt Dhaiti, “Google to pay $29.5 million to Indiana and District of Columbia to settle location privacy suits,” DLA Piper, January 9, 2023, https://www.dlapiper.com/en-us/insights/publications/2023/01/google-to-pay-295-million-to-indiana-and-district-of-columbia-to-settle-location-privacy-suits.

62 “Attorney General Josh Shapiro announces $391 million settlement with Google over location tracking practices,” Pennsylvania Attorney General, November 14, 2022, https://www.attorneygeneral.gov/taking-action/attorney-general-josh-shapiro-announces-391-million-settlement-with-google-over-location-tracking-practices/.

63 “Attorney General Josh Shapiro announces $391 million settlement with Google over location tracking practices.”

64 “2023 Annual Report,” p. 19.

 

   
 

 

In 2023, the Brookings Institution published survey research, based on data collected between 2018 and 2021, on consumer confidence in U.S. entities, including U.S. government agencies, non-profits, and commercial institutions. The tech sector showed the most significant loss of confidence, with Google dropping significantly over that three-year period.65

 

Although Alphabet is considered a dominant player in the industry, growing awareness and concern among consumers and regulators about the potential human rights impacts of its products may create greater opportunities for other technology actors that offer alternative revenue models allowing users to retain greater control over their data.

 

4. A Human Rights Impact Assessment is necessary to reinforce Google’s due diligence and protect long-term shareholder value

 

Alphabet recognizes the risks that stem from its AI technologies.66 Given the importance of targeted advertising to Alphabet’s business model, studies reflecting inconsistent enforcement of ad policies and a high risk of policy-violating ads being published, and the well-documented human rights risks associated with targeted advertising, a robust and transparent HRIA in line with internationally recognized human rights standards is necessary.

 

An independent third-party assessment would help inform Alphabet’s management, the Board of Directors, and shareholders about the human rights risks that the Company faces in its ads business and the merits of its human rights approach, including its policies and practices. In addition, such an assessment would help management and the Board of Directors manage the risks associated with failure to respect these human rights and guide management’s approach to protecting the human rights of its users, including the steps to remedy any negative human rights impacts stemming from its technologies.

 

Considering the material nature of the regulatory, legal, and reputational risks that Alphabet, and by extension its shareholders, faces, it is key for the Company to increase the degree of transparency it provides so that investors can make informed investment decisions. The Proponents believe that with the fast pace of technological change and product upgrades,67 there is a heightened need for greater transparency on these issues. Google’s 2022 rollback of Federated Learning of Cohorts (“FLoC”), following criticism from human rights and tech experts, and its replacement of FLoC with the Topics API illustrate the risks associated with AI-driven targeted advertising policies and practices and the fact that such technology does not always account for human rights implications.68 The pace of technological change and the human rights issues associated with certain technologies make it incumbent on companies like Google to ensure that shareholders understand what these technologies are, what their human rights implications are, and, most importantly, what the Company is doing to mitigate the risks.

 

_____________________________

65 Sean Kates, Jonathan Ladd, and Joshua A. Tucker, “How Americans’ confidence in technology firms has dropped,” Brookings Institution, June 14, 2023, https://www.brookings.edu/articles/how-americans-confidence-in-technology-firms-has-dropped-evidence-from-the-second-wave-of-the-american-institutional-confidence-poll/.

66 “2023 Annual Report,” p.7-9.

67 “How has technology changed – and changed us – in the past 20 years?,” World Economic Forum, November 18, 2020, https://www.weforum.org/agenda/2020/11/heres-how-technology-has-changed-and-changed-us-over-the-past-20-years/.

68 David Nield, “What’s Google FLoC? And How Does It Affect Your Privacy?,” Wired, May 9, 2021, https://www.wired.com/story/google-floc-privacy-ad-tracking-explainer/.

 

   
 

 

Alphabet explicitly endorses the UNGPs69 — the authoritative global standard on the role of businesses in ensuring respect for human rights in their own operations and through their business relationships. The UNGPs explicitly state that companies must conduct human rights due diligence on their products and services, particularly if the scale and scope of the impacts are likely to be important.70 According to Principle 21 of the UNGPs, “in order to account for how [business enterprises] address their human rights impacts, business enterprises should be prepared to communicate this externally, particularly when concerns are raised by or on behalf of affected stakeholders. Business enterprises whose operations or operating contexts pose risks of severe human rights impacts should report formally on how they address them.”71 Such reports are expected to be published and accessible to the public, and in all instances the UNGPs highlight that “communications should provide information that is sufficient to evaluate the adequacy of an enterprise’s response to the particular human rights impact involved.”72 An HRIA, therefore, is the first step in the human rights due diligence process and is wholly aligned with Alphabet’s commitment to the UNGPs.

 

The Proponents believe that the limited steps Alphabet has taken to mitigate risks associated with targeted advertising remain insufficient relative to the scale and materiality of the risks mentioned above. A third-party HRIA would provide an assessment with the proper expertise, objectivity, and comprehensiveness73,74 necessary to address the wide and varied range of human rights risks faced by Alphabet’s billions of global users.

 

5. Conclusion

 

Alphabet has one of the largest footprints of any entity in the world. According to Alphabet’s 2023 Annual Report, the Company boasts 15 Google products that each serve more than half a billion people and businesses, and six that serve more than 2 billion users each.75 As reported by The Economist, “humans collectively spend 22 [billion] hours a day on Alphabet’s platforms.”76

 

This unmatched reach and influence require an equally unmatched commitment to preserving and respecting human rights across all parts of the business model. Given concerns around the fairness, accountability, and transparency of the underlying algorithmic systems, targeted advertising has been heavily scrutinized for its adverse impacts on human rights and will likely face increasing regulatory and legal risks.

 

A robust HRIA will enable Alphabet to better identify, mitigate, and prevent such adverse human rights impacts that expose the Company to regulatory, legal, and reputational risks while protecting long-term shareholder value.

 

_____________________________

69 “Human Rights,” Alphabet Inc., accessed April 29, 2024, https://about.google/intl/ALL_us/human-rights/.

70 “Guiding Principles on Business and Human Rights,” United Nations, 2011, p.15, https://www.ohchr.org/sites/default/files/Documents/Publications/GuidingPrinciplesBusinessHR_EN.pdf.

71 “Guiding Principles on Business and Human Rights,” p.23.

72 Ibid., p. 23-24.

73 Désirée Abrahams and Yann Wyss, “The UN Global Compact Guide to Human Rights Impact Assessment and Management (HRIAM),” International Finance Corporation, 2010, https://d306pr3pise04h.cloudfront.net/docs/issues_doc%2Fhuman_rights%2FGuidetoHRIAM.pdf.

74 “Human rights impact assessment guidance and toolbox,” The Danish Institute for Human Rights, August 25, 2020, https://www.humanrights.dk/tools/human-rights-impact-assessment-guidance-toolbox.

75 “2023 Annual Report,” p.2.

76 “Is there more to Alphabet than Google Search?,” The Economist, July 30, 2023, https://www.economist.com/business/2023/07/30/is-there-more-to-alphabet-than-google-search.

 

   
 

 

For these reasons, we urge Alphabet’s shareholders to vote FOR PROPOSAL NUMBER 13 Regarding Human Rights Impact Assessment of Targeted Ad Policies.

 

Any questions regarding this exempt solicitation or Proposal Number 13 should be directed to Juana Lee, Associate Director, Shareholder Advocacy at SHARE at jlee@share.ca.

 

 

 

THE FOREGOING INFORMATION MAY BE DISSEMINATED TO SHAREHOLDERS VIA TELEPHONE, U.S. MAIL, EMAIL, CERTAIN WEBSITES AND CERTAIN SOCIAL MEDIA VENUES, AND SHOULD NOT BE CONSTRUED AS INVESTMENT ADVICE OR AS A SOLICITATION OF AUTHORITY TO VOTE YOUR PROXY. THE COST OF DISSEMINATING THE FOREGOING INFORMATION TO SHAREHOLDERS IS BEING BORNE ENTIRELY BY THE FILERS. PROXY CARDS WILL NOT BE ACCEPTED BY ANY FILER. PLEASE DO NOT SEND YOUR PROXY TO ANY FILER. TO VOTE YOUR PROXY, PLEASE FOLLOW THE INSTRUCTIONS ON YOUR PROXY CARDS.

 

 

 

This is not a solicitation of authority to vote your proxy.

Please DO NOT send us your proxy card as it will not be accepted