Arts (MECO)/Law IV
"Technology is nothing. What’s important is that you have a faith in people, that they’re basically good and smart, and if you give them tools, they’ll do wonderful things with them."
- Steve Jobs
In 1994, Steve Jobs called on people to trust in the tech world, and we responded enthusiastically. The result was a forceful wave of technological utopianism that has since prevailed as a hallmark of modern society. But far too often, technologists fall short of their promise of “wonderful things”. Just this year, European Union antitrust regulators fined Google a record $3.6 billion for manipulating search engine results to favour its own shopping service. In 2014, Facebook secretly altered 700,000 of its users’ news feeds to study how it could influence their emotions and consumption. The Snowden leaks of 2013 exposed that the powerhouses of the tech industry knowingly permitted the U.S. National Security Agency (NSA) to piggyback on their real-time mining of customer data for untenable surveillance purposes. Yet people appear to be largely unfussed: a recent study found that in most of the countries surveyed, people trust corporations more than their own governments on matters of social responsibility. These attitudes were markedly more pervasive in Western democratic societies.
But no matter how polished or persuasive their corporate social responsibility agendas might be, commercial enterprises are, by default, unqualified to act as guardians of the public interest. The tech world is an oligopoly of powerful, profit-driven companies that are unaccountable to the public and opaque in their commercial operations, leaving much to be desired in the sphere of fair and democratic governance. As Milton Friedman famously wrote in Capitalism and Freedom:
"There is one and only one social responsibility of business – to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud."
Friedman highlights two important points—firstly, that a corporation’s only “social” responsibility is to appease shareholder interests; and secondly, that this function must be qualified by strict adherence to “the rules of the game”. As Internet intermediaries like Google and Facebook assume increasing control over public information flows, it is critical to democracy that these “rules” are (1) unambiguously defined, and (2) capable of being checked.
The purpose of this article is to evaluate the regulatory landscape of the Internet against these two normative goals. Analysing how Google and Facebook have responded to “fake news” as a case study, I argue that Internet intermediaries, as modern gatekeepers of public information, must aim to be as unbiased as possible. However, I also contend that the overreaching application of trade secrecy laws has created a “black box society” in which breaches of the rules are almost impossible to detect. In response, this article discusses the doctrine of “information fiduciaries” as grounds for potential reform, and suggests procedural changes to enable the enforceability of the arising legal duties. Together, these measures aim to uphold the importance and seriousness of corporate accountability in the exciting, but uncertain, age of big data.
Google, Facebook and fake news
For its critics, Google’s fiery “Don’t be Evil” motto was a gift that kept on giving. Its irony peaked in 2013 with Julian Assange’s New York Times article, “The Banality of ‘Don’t be Evil’”, followed shortly by the explosive chain of media controversies arising from the Snowden leaks. After its 2015 corporate restructure under new holding parent Alphabet Inc., Google at last abandoned its problem child for a less-than-exciting substitute: “Do the Right Thing”. This positive reinterpretation of Google’s mantra was held up as a beacon of the Company’s maturing corporate ethics. “[D]o the right thing – follow the law, act honourably, and treat each other with courtesy and respect”, it reads in full. But just as “evil” was defined by “what Sergey [Brin] says is evil”, the ambit of what is “right” remains equally murky.
The strategy behind Google’s subtle semantic shift has begun to surface. In the aftermath of U.S. President Trump’s surprise election, Google CEO Sundar Pichai declared that the Company had a moral obligation to combat “fake news”. “There should just be no situation where fake news gets distributed,” Pichai stated. “I don’t think we should debate it as much as work hard to make sure we drive news to its more trusted sources, have more fact checking and make our algorithms work better.” Fake news is undoubtedly one of the greatest societal dilemmas of our times, and urgent solutions are needed. However, the reality is that the problem resists a clear distinction between “real” and “fake” information: we live in an era of “post-truth politics” in which communications are infiltrated by spin, emotion and opinion. This means that for many questions, the answers are contestable and possess no objective “truth value”.
The challenge we face is politicised and discursive; it involves reconciling the need to maintain accurate information online with the democratic importance of protecting fair comment. But as American journalist Robert Bridge notes, buried under the Google CEO’s conviction for “facts” and “trust” is an unambiguously stated belief that there should not be debate. Whilst there are legitimate reasons for ranking search results to prioritise reputable sources, these processes have the potential to be unduly politicised if left unchecked. If an algorithm goes so far as to eliminate undesirable or unpopular opinions, this poses a serious threat to the “neutrality” of the Internet, the normative principle that Internet intermediaries should be as unbiased as possible. After all, Google was never elected to be the arbiter of what information should and should not enter the public sphere. Given its recent breach of European antitrust laws, how can the company be trusted with such a task if it fails to uphold its own Code to provide users with “unbiased access to information”? Rather, this violation raises the possibility that Google might dress up its political interests as the public interest by making self-serving assessments of what information should not be accessible. This is not to accuse Google of in fact harbouring such intentions, but unjustified censorship of this kind would pose a real and serious threat to democracy.
Facebook, which has also faced mountainous post-election criticism, has trialled a different approach to addressing fake news by introducing a “context” button that provides descriptions of articles’ sources. This approach is arguably better than Google’s proposal of outright censorship: instead of leaving power with potentially biased corporate employees, Facebook’s new tool puts the discretionary filtering process in the hands of the user, and is therefore more consistent with principles of individual autonomy and democratic debate. Yet whilst Zuckerberg’s belief that Facebook “must be extremely cautious about becoming arbiters of truth” is a forceful counterpoint to Pichai’s activist approach, his words are disappointingly empty. Last year, Facebook was alleged to have suppressed conservative news articles from its “trending” module. This incident occurred less than two years after the Social Network failed to give notice of, or disclose, its experimentation with 700,000 news feeds to study and manipulate users’ purchasing behaviours.
Trade secrecy laws and the black box society
Today, Internet intermediaries like Google and Facebook have become integral to the way we access, consume and interpret information. Whilst the Internet was once celebrated for decentralising power and diversifying democratic participation, these intermediaries now own the “control points” over information flows, resulting in a relentless cycle of user reliance and corporate growth. Although democracy requires these companies to be as unbiased as possible, we have witnessed corporate conduct that has threatened the public interest, whether through inadvertent bias or through deliberate deception and exploitation. Even if there are laws to penalise these behaviours, there is an absence of effective regulatory measures to detect them. In other words, any obligation imposed on Internet intermediaries to comply with the rules of “open and free competition without deception or fraud” is rendered meaningless without an effective system of adjudication.
In recent years, academics have expressed their discomfort with the rise of the “black box society” in which private corporations are empowered by laws to withhold information from the scrutiny of courts, even where there is a strong public interest justification for disclosure. US trade secrecy laws, for example, offer companies absolute protection from disclosure where a formula, pattern, device or compilation of information satisfies the following three criteria: (1) it confers a form of competitive advantage, (2) secrecy of the information is maintained, and (3) the information is not publicly available. Should a trade secret be leaked, its owner may seek an injunction restraining the misappropriator from using the information for the life of the secret.
Search engines and social media platforms currently have the right to protect the secrecy of their ranking and indexing algorithms in litigation, and there are legitimate reasons for doing so. Firstly, trade secrecy laws have an important function in encouraging business ethics and innovation by protecting competitive advantage. Secondly, trade secrets defend the public interest by ensuring access to relevant, high-quality information. For example, if indexing algorithms were made public, outsiders could compromise the quality of search results and news feeds by using link farms and spam blogs to “game” the system. This has the potential to harm the public interest by favouring the communications of well-financed or fraudulent players, thus compromising equal accessibility and participation in the public sphere.
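The gaming risk described above can be made concrete with a toy model. The sketch below is illustrative only: it is not Google’s actual algorithm, but a minimal PageRank-style scorer (the `pagerank` function, the graph and the page names are all hypothetical) showing how a cluster of spam pages pointing at a target site can inflate that site’s score once the ranking logic is publicly known.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Score pages by the rank that flows in from their in-links
    (a simplified PageRank; pages with no out-links simply leak rank)."""
    pages = set(links)
    for targets in links.values():
        pages.update(targets)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page gets a small "teleport" share, plus rank from in-links.
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# An honest web of three mutually linking sites.
web = {"news": ["blog"], "blog": ["news", "shop"], "shop": ["news"]}
honest = pagerank(web)

# A spammer who knows the algorithm adds 20 farm pages linking to "shop".
farmed = dict(web)
for i in range(20):
    farmed[f"farm{i}"] = ["shop"]
gamed = pagerank(farmed)

assert gamed["shop"] > honest["shop"]  # the link farm inflates "shop"'s score
```

Because the manipulation exploits only the structure of the scoring rule, keeping that rule secret raises the cost of this kind of attack, which is the pro-disclosure-limiting rationale the article describes.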
On the other hand, if secrecy is maintained to hide product deficiencies, limit criticism or deceive the public, the rationale behind the doctrine is severely undermined. In this respect, trade secrecy laws have the potential to cloak unethical or illegal operations from detection. To this end, David Levine argues that the trade secrecy doctrine must give way to values of transparency and accountability in contexts relating to the provision of public infrastructure. Whether we can go so far as to classify Internet intermediaries as “public infrastructure” is a difficult question, and requiring full public disclosure of their algorithms is not only unfairly burdensome but highly uncommercial.
However, consider a scenario where a user’s personal data is classified among a company’s trade secrets. This was the case in 2011, when privacy activist Max Schrems requested access to all data held about him by the Social Network in accordance with European law. Facebook complied, but withheld some categories of personal data as being exempt from subject access requests, including “information which is a trade secret or intellectual property of Facebook Ireland Limited or its licensor”. These highly problematic scenarios, in combination with the unsatisfactory track records of the tech giants, warrant conferring on at least someone the power to “look under the hood”. This article now turns to a proposal for law reform which aims to balance legitimate commercial interests with the protection of end users.
Looking into the black box
Professor Jack Balkin’s concept of the “information fiduciary” persuasively reconciles the conflict between protecting users and the right of businesses to preserve commerciality by safeguarding their trade secrets. Balkin’s proposal extends the common law doctrine of fiduciary relationships—those wherein a principal reposes trust and confidence in a fiduciary who undertakes to put the principal’s interests ahead of their own—to relationships involving the collection, use and disclosure of information. He helpfully explains this using the following example:
"Suppose that a doctor, lawyer, or accountant sold personal information about their clients to a data broker. Suppose that they used personal information to manipulate a client’s actions for the doctor, lawyer, or accountant’s benefit. Or suppose that they simply disclosed it in order to gain a business advantage at the expense of their client. If they did any of these things, they would likely be liable for a violation of professional conduct … Even absent an express promise not to reveal, use or sell information, there is a duty not to do so in ways that will harm the interests of the client, or that pose a conflict of interest."
Critically, Balkin argues that the digital age has produced a new category of fiduciary relationships between online service providers and their end users. However, not all online service providers should be information fiduciaries; for such a relationship to be found, the following four criteria must be satisfied:
- The end user is placed in a position of significant vulnerability, for example, where online service providers are able to monitor the user’s activity and collect increasing amounts of information about the user;
- The end user is reasonably dependent on the online service provider for access to important online services;
- The online service provider represents itself as a trustworthy expert, and requires the disclosure of the end user’s personal information in exchange for services; and
- The online service provider holds a disproportionate volume of data about the end user, which has the potential to be used against the user’s interest.
Ultimately, it is the nature of a relationship that should inform judgments on what a certain fiduciary’s duties are, and the extent to which they apply. For example, Google and Facebook’s strong reliance on user data for commercial success creates an inherent conflict of interest between the service provider and the end user. In this context, where users are unable to assess whether their interests are being protected or how their data is being used, an information fiduciary relationship should be recognised. However, we should be careful not to impose on these companies a positive obligation to, for example, prevent bad decisions or censor all content that has the potential to be harmful. For instance, a company like Facebook should have a duty to enable end users to control the disclosure of their personal information through effective privacy settings, but not to prevent them from making regrettable choices about how much they reveal about themselves on social media; such a requirement would curtail users’ personal autonomy and undermine the purpose of the platform.
Whilst Balkin’s evolved category of the information fiduciary may represent a sound legal doctrine for informing regulatory guidance, its application is toothless absent an effective system of accountability and enforceability. When it is suspected that an Internet intermediary has breached its fiduciary duty, courts or regulators should be able to define and investigate the suspected breach by looking into the “black box” of the company’s algorithms or commercial operations without being constrained by trade secrecy defences. These powers, however, need not entail the total sacrifice of legitimate private and public interests in secrecy. Consider the American case of Search King, Inc. v Google Technology, Inc., wherein the federal District Court for the Western District of Oklahoma rejected the plaintiff’s request for the disclosure of Google’s source code as part of an application for a preliminary injunction. A more sensible approach, it is argued, would have been to have Google file its code “under seal”, and allow Search King’s lawyers to view its contents during pre-trial discovery. This method would facilitate judicial scrutiny of protected information, whilst minimising the potential harm arising from full disclosure to the public or to the plaintiff competitor.
The difficulty of adapting this approach to the realm of information fiduciaries is that Internet intermediaries’ trade secrets are likely to involve complex coding, algorithms and operational structures which are beyond the expertise of an ordinary lawyer. To overcome this difficulty whilst preventing the disclosure of legitimate trade interests, I propose that an impartial government agency with specific expertise in Internet affairs be established to bring cases on behalf of individual complainants, with access to what would otherwise be protected information in pre-trial discovery. If a breach is suspected, the government’s case should be heard in an adversarial, closed court system, where representatives of the Internet intermediary can have the opportunity to explain complex information and contest misguided understandings. On hearing submissions from both sides, the judges would then independently decide whether there has been a breach of a fiduciary obligation. If breaches are found, complainants could take further action for relief, without gaining access to the company’s legitimate trade secrets.
The concentration of communicative power in the hands of Internet intermediaries puts at risk our fundamental values of fairness and equality in democratic participation. Crucially, the generous operation of trade secrecy laws has given rise to a black box society where corporate accountability is emptied of its meaning. In this article, I have advocated law reform that is grounded in Professor Balkin’s concept of the information fiduciary, a proposal which aims to balance the conflict between business secrecy and transparency with regard to the protection of users’ interests. Importantly, I have also introduced judicial and administrative mechanisms to ensure that breaches of fiduciary duties are detectable. However, even these proposed measures are futile if we remain wilfully blind to the unacceptable trespasses of the tech giants. Protecting society’s interests entails the accountability of all parties, including end users as recipients of Internet services. This means that, when faced with new technologies, we should pause to think critically and carefully before taking up Steve Jobs’s call to trust in his wonderful world.
 Jeff Goodell, Interview with Steve Jobs (Rolling Stone, 16 June 1994).
 Helen Kennedy, Post, Mine, Repeat: Social Media Data Mining Becomes Ordinary (Palgrave Macmillan, 2016).
 Goodell, above n 1.
 ‘Google Fined Record $3.57 billion by European Union Over Shopping Service’, Australian Broadcasting Corporation (online), 27 June 2017 <http://www.abc.net.au/news/2017-06-27/google-fined-record-$3.57-billion-by-european-union/8657470>.
 Robert Booth, ‘Facebook Reveals News Feed Experiment to Control Emotions’, The Guardian (online), 30 June 2014 <https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds>.
 Glenn Greenwald, No Place to Hide: Edward Snowden, the NSA and the Surveillance State (Penguin Books, 2014) 20; Garry Robson and CM Olavarria, ‘Big Collusion: Corporations, Consumers, and the Digital Surveillance State’ in Robert A Cropf and Timothy C Bagwell (eds), Ethical Issues and Citizen Rights in the Era of Digital Government Surveillance (Information Science Reference, 2016) 127, 132.
 Edelman, ‘2016 Edelman Trust Barometer’ (Global Report, 2016) 27.
 Milton Friedman, Capitalism and Freedom (University of Chicago Press, first published 1962, 2002 ed) 133.
 Tim Wu, The Master Switch: The Rise and Fall of Information Empires (Vintage, 2010).
 This article looks predominantly at US jurisdictions since most of the Internet intermediaries concerned are American-owned multinational companies with ownership of intellectual property that has been developed in the US.
 See generally, Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).
 Julian Assange, ‘The Banality of ‘Don’t be Evil’’, The New York Times (online), 1 June 2013 <http://www.nytimes.com/2013/06/02/opinion/sunday/the-banality-of-googles-dont-be-evil.html>.
 Mirren Gidda, ‘Edward Snowden and the NSA Files’, The Guardian (online), 22 August 2013 <https://www.theguardian.com/world/2013/jun/23/edward-snowden-nsa-files-timeline>.
 Tanya Basu, ‘New Google Parent Company Drops ‘Don’t Be Evil’ Motto’, Time (online), 4 October 2015 <http://time.com/4060575/alphabet-google-dont-be-evil/>.
 Alphabet, Inc., Code of Conduct (updated 21 September 2017) Alphabet Investor Relations <https://abc.xyz/investor/other/code-of-conduct.html>.
 Josh McHugh, ‘Google vs. Evil’, Wired (online), 1 January 2003 <https://www.wired.com/2003/01/google-10/>.
 Kamal Ahmed, Interview with Sundar Pichai (BBC Interview, 15 November 2016) <http://www.bbc.com/news/business-37988095>.
 Nick Enfield, ‘Navigating the Post-truth Debate: Some Key Co-ordinates’, The Conversation (online) <https://theconversation.com/navigating-the-post-truth-debate-some-key-co-ordinates-77000>.
 See Edwin Baker, ‘Scope of the First Amendment Freedom of Speech’ (1978) 25 UCLA Law Review 964, 965. Contra William P Marshall, ‘In Defense of the Search for Truth as a First Amendment Justification’ (1996) 35 Georgia Law Review 1, 1.
 Robert Bridge, ‘Welcome to 1984: Big Brother Google Now Watching Your Every Political Move’, RT (online), 9 September 2017 <https://www.rt.com/op-edge/402588-google-eric-schmidt-republicans/>.
 See Oren Bracha and Frank Pasquale, ‘Federal Search Commission? Access, Fairness, and Accountability in the Law of Search’ (2007) 93 Cornell Law Review 1149, 1167.
 See Tim Wu, ‘Network Neutrality, Broadband Discrimination’ (2003) 2 Journal on Telecommunications and High Technology Law 141.
 Alphabet, Inc., Google Code of Conduct (updated 7 August 2017) Alphabet Investor Relations <https://abc.xyz/investor/other/google-code-of-conduct.html>.
 See e.g. Paul Smith, ‘Can Facebook Kill Fake News After the Fantasy US Election Campaign?’, Australian Financial Review (online), 18 November 2016 <http://www.afr.com/technology/social-media/facebook/can-facebook-kill-fake-news-after-the-fantasy-us-election-campaign-20161117-gsry9f>.
 Josh Constine, ‘Facebook Tries Fighting Fake News with Publisher Info Button on Links’, Tech Crunch (online), 5 October 2017 <https://techcrunch.com/2017/10/05/facebook-article-information-button/>.
 Mark Zuckerberg, Facebook status update (13 November 2016) Facebook <https://www.facebook.com/zuck/posts/10103253901916271?pnref=story>.
 Nellie Bowles and Sam Thielman, ‘Facebook Accused of Censoring Conservatives, Report Says’, The Guardian (online), 10 May 2016 <https://www.theguardian.com/technology/2016/may/09/facebook-newsfeed-censor-conservative-news>.
 Booth, above n 5.
 See Yochai Benkler, The Wealth of Networks: How Social Production Transforms Markets and Freedom (Yale University Press, 2006) 1.
 Bracha and Pasquale, above n 23, 1161.
 Friedman, above n 9.
 See generally Tarleton Gillespie, Wired Shut: Copyright and the Shape of Digital Culture (MIT Press, 2007); Dan L Burk and Julie E Cohen, ‘Fair Use Infrastructure for Rights Management Systems’ (2001) 15 Harvard Journal of Law and Technology 41, 54-70.
 American Law Institute, Restatement (First) of Torts (1939) §757 cmt B; Robert G Bone, ‘A New Look at Trade Secret Law: Doctrine in Search of Justification’ (1998) 86 California Law Review 241, 248.
 This, however, does not exclude the permissibility of reverse engineering and independent discovery; American Law Institute, above n 35.
 See James Grimmelmann, ‘The Structure of Search Engine Law’ (2007) 93 Iowa Law Review 1, 48.
 Consider, for example, Google’s current lawsuit over a former employee’s transfer of self-driving car technology trade secrets to new employer, Uber. See David S Levine, ‘Secrecy and Unaccountability: Trade Secrets in our Public Infrastructure’ (2007) 59 Florida Law Review 135, 157-62.
 Grimmelmann, above n 37, 13-4.
 Levine, above n 38, 135.
 Gianclaudio Malgieri, ‘Trade Secrets v Personal Data: A Possible Solution for Balancing Rights’ (2016) 6(2) International Data Privacy Law 102, 103.
 Bracha and Pasquale, above n 23, 1202.
 Jack M Balkin, ‘Information Fiduciaries and the First Amendment’ (2016) 49(4) UC Davis Law Review 1183.
 Ibid, 1186.
 Ibid, 1205-1206.
 Ibid, 1221.
 Ibid, 1222.
 Ibid, 1229.
 Bracha and Pasquale, above n 23, 1203-4.
 Search King, Inc. v Google Technology, Inc., No. CIV-02-1457-M, 2003 U.S. Dist. LEXIS 27193 (W.D. Okla., 17 October 2002).
 James Grimmelmann, ‘The Google Dilemma’ (2008) 53 New York Law School Law Review 939, 945-7.