
What WikiLeaks Never Told You About Google


Why Our World Is Changing With Google

It is well known that Google has infiltrated almost every aspect of technology and, by way of technology, almost every aspect of our lives. Some ways are straightforward – people who visit Google.com are well aware that they are using the search engine. It is the more covert ways in which Google sustains its omnipresence that prove problematic. From GPS tracking to scanning personal emails to learn about your favorite restaurants, Google permeates countless specific elements of our lives in ways most people never realize are possible(1). With the obvious exception of AdWords, one has to question the reason behind this widespread saturation. Why is Google collecting information about us that isn’t useful for its advertising sales, and what is it doing with that information(2)? Ultimately, the question becomes: what is Google’s endgame?

The answer to that question is surprisingly easy to find with, yes, a Google search – Google has made no secret of its aim from the get-go: the creation of artificial intelligence(3), essentially a program-based continuation of human thought and behavior. Prospectively, the A.I. will be able to interact with our lives flawlessly, as Google’s ideal prototype would “serve as a cross between a personal assistant and a brain extension.” Thus, collecting what may seem like arbitrary information about our personal lives is actually an attempt at “reading our data so that it will be able to read our minds.”(4)

One has to question the methods behind the conception of Google’s A.I. Is collecting consumer data really the best way to formulate what is supposed to represent a “brain extension” of humanity?(5) Beyond the distinctly behaviorist approach to determining what makes up human nature, what about the people who exist outside the framework of Google’s user base? If their data isn’t included, and the A.I. doesn’t represent an extension of their thoughts and needs, are they simply excluded from reaping the benefits of the new technology, or are they expected to assimilate in order to be included? In this post, we will analyze the significance and implications of Google’s goals for A.I., look at who is behind it, and consider what this means for those left behind.

What Is Google’s Lifeblood?

Engineers and algorithms are Google’s lifeblood, and any technological advance within the company can be traced back to them. To understand the impending A.I., it is important to look at the profiles of these engineers and consider what that information means for the resulting algorithmic determinations. In 2014, Google released charts of its workplace diversity, and as a reporter for the PBS NewsHour website put it, it’s not good(6). The grim, yet completely predictable, numbers show that men amount to 70 percent of the workforce. The racial demographics are no more encouraging: employees are 61 percent white, with Asians a distant second at 30 percent(7). Less than 10 percent of the staff was black or Latino, and the mixed-race – or aptly named “other” – category fell in at less than 1 percent(8) of the Google populace. There is a strong likelihood that the person behind this breakdown of collected public information, as well as the resulting algorithms used to decide what human behavior should look like, is a white male. One could conceivably add “privileged” to that description, given that the starting base salary for even the lowest-ranking software engineers at Google is around $100,000(9). The Atlantic reports that, for Americans under the age of 31 (the age of the majority of Google’s engineers), this puts them in the 1 percent(10).

This presents an inherent flaw in Google’s reliance on an algorithm to help predict human thought and reaction – while the users from whom the data is gleaned may represent a large share of the world’s population, the algorithms deciphering the meaning of that data are essentially the creation of someone who relates to exactly one percent of the public, and that’s solely in America. Go a little further and that percentage shrinks drastically. There is always the chance, in the development of new technologies, of a disconnect between the creators and the public (case in point: Google Glass). In this case, however, it is particularly dangerous, because A.I. is touted as being built specifically to react symbiotically with our brains – raising the question of whose brains, exactly, our new brains are being fashioned after.

Google’s A.I. Mistake and the Lessons They’re Learning

Google Glass, though it wasn’t embraced by the public, stands as an example of how naïve it is to think that A.I. is a far-off aspiration; that technology itself was a form of “soft A.I.”(11) In the same vein, video games that can play with us, and even Siri, are all considered forms of soft A.I., with hard A.I. – essentially a robot – being the goal Google is working toward. The title of an article in Forbes magazine even asked the pertinent question, “What’s Driving Google’s Obsession With Artificial Intelligence and Robots?” While we have seen plenty of videos of Larry Page holding forth on Google’s position on just about everything, it is a valid question, because it is difficult to swallow Page’s here-to-save-the-world routine whole.

Forbes’ interpretation is this: “What drives the Google founders is an acute understanding of the possibilities that long-term developments in information technology have deposited in mankind’s lap. Computing power has been doubling every 18 months since 1956. Bandwidth has been tripling and electronic storage capacity has been quadrupling every year. Put those trends together and the only reasonable inference is that our assumptions about what networked machines can and cannot do need urgently to be updated.(12)” While these things may be true, they don’t entirely explain the motivation, and they certainly don’t touch upon the implications of throwing a constant stream of technological advances out into the world without considering the consequences.

Google Glass was also a prime example of Google’s failure to connect with people’s real-world needs. In fact, any recent Bay Area denizen can tell you that the devices, which were essentially operating systems strapped to your face, were widely reviled. They showed not only a privileged class’s detachment from the public, but also represented a physical manifestation of the displacement of so many people in San Francisco due to the gentrification for which tech culture is repeatedly blamed. Beyond these projected characteristics, the glasses posed a direct privacy violation to anyone who came into contact with them, as the wearer could be recording everything they saw at any time. This posed a problem in institutions like museums, where copyright issues with art mean that, depending on the artist, if the museum doesn’t own the pieces being showcased, the public cannot take pictures or video. This led to Google Glass being banned from most of the museums in San Francisco for a time. There was also the issue of recording people in situations or places they might not want the world to know about, such as bars or strip clubs. With the rise of YouTube and similar sites, there is always a chance that any video in which you appear could go public. One highly publicized altercation at a bar last year involved a woman wearing Google Glass, who ended up having the glasses ripped from her face and was essentially assaulted for wearing them(13).
One witness to the altercation described the crowd as angry toward the woman because they were “just upset that she would be recording outside of a bar this late with obvious, embarrassing behavior going on… just rather insulted that someone thinks it’s okay to record them the entire time they’re in public.” Others said that “she was running around very excited…and people were telling her, ‘you’re being an ***, take those glasses off,’” and “you know, the crowd at Molotov’s is not a tech-oriented crowd for the most part, it’s probably one of the more punk-rock bars in the city. So, you know, it’s not really Google Glass country.(14)” That last phrase – “Google Glass country” – sums up the situation perfectly: those who use the device are not really a part of the general population. They live in their own country, and, according to some, they should stay there.

All of this may show how removed Google is from the average person. However, it does not necessarily mean A.I. will end up the same way. First of all, Google is nothing if not a fluid company, one that changes with the times and the needs of its customers. It more than likely learned a lesson from Google Glass, which was a financial loss for the company(15). Yet Glass does show just how badly Google miscalculated what this early version of a “brain extension” might entail for the majority of the population. It is difficult not to believe that this is due, at least in part, to the demographics of the engineers behind the product. They may have collected the data from us, but how they interpreted that data was unique to their perspectives. Even when they attempt to hide behind the neutrality of algorithms and claim a lack of involvement, what they may be forgetting is that they are the ones who created the algorithms.

And looking even further at the data-collecting process, who among us do the stats and numbers really represent? An algorithm can’t pick up on the nuances that define human behavior, or the reasoning behind it. This is really where the biases of the designers have the opportunity to shine through. An article in Wired details a study that “finds racial bias in online ad delivery”(16). The article describes how Harvard computer scientist Latanya Sweeney began to notice her name coming up on InstantCheckMate a lot, and wondered whether it was because her name sounded too “black”(17). Sweeney launched an investigation, only to discover that searches for names traditionally considered “black” produced criminal-related ads, along with ads for arrest records and background checks, 60 percent of the time, while searches for names traditionally found among white communities produced these ads less than 50 percent of the time(18). A representative from InstantCheckMate vehemently denied creating an algorithm that discriminates against users based on race, stating: “As a point of fact, InstantCheckmate would like to state unequivocally that it has never engaged in racial profiling in Google AdWords. We have absolutely no technology in place to even connect a name with a race and have never made any attempt to do so. The very idea is contrary to our company’s most deeply held principles and values.” Of course, this is fairly meaningless, seeing as the AdWords technology is run by Google and not by the advertisers themselves; even if InstantCheckMate never intended its ads to be shown based on race, that does not mean it could not happen. It is worth noting that Google, of course, has denied the accusations of racial profiling as well(19).
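The kind of disparity the study measured can be illustrated with a toy calculation (the counts below are invented for illustration, merely shaped like the reported 60-percent-versus-under-50-percent rates; this is a standard two-proportion z-test, not the study’s actual methodology):

```python
from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z-statistic for the difference between two observed proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)          # pooled success rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: criminal-record ads shown on 60 of 100 "black-sounding"
# name searches vs. 48 of 100 "white-sounding" name searches.
z = two_proportion_z(60, 100, 48, 100)
print(round(z, 2))
```

A z-value above roughly 1.64 would already be suggestive at the 5 percent level for a one-sided test; with the study’s much larger sample of names, the same gap becomes far harder to dismiss as chance.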

Issues like these arise from the fact that there is a serious divide among classes in general, and technology is no different, even if the designers and engineers behind it are oblivious to their own slants and micro-aggressions. At least in the case of the Harvard study, there was a measurable disconnect, one that could be acknowledged and exposed. What happens to those who aren’t even recognized enough to be discriminated against? What isn’t being considered is that a huge part of the world’s population is not involved in creating any of the data being collected to develop A.I. Last year, the Los Angeles Times reported that 60 percent of the world’s population does not have Internet access; for Africa alone, that number jumps to 82 percent(20). That is a huge number of people not being data-mined to create what is intended to be a universal extension of ourselves. That Google does not take these facts into account is all the more disconcerting when the founders appear on TV shows, TED talks and various interviews raving about their technology’s power to change lives, when they are well aware that they mean only a select set of lives.


Google and the Evolution of Artificial Intelligence

The problem with Google’s A.I., besides the unintentionally elitist foundation upon which it is built, is the very process by which they are building it. Using behaviorism as a method could be considered extraordinarily misguided. Behaviorists proceed from the premise that behavior can be measured, collected, and therefore predicted(21). While this is technically true, in that predictions can always be made, those predictions may simply not be correct. None of this explains the actual behavior, however, or the reasons it occurs. In cognitivism, the reasoning stems from cognitive processes: if you want to truly understand and predict human behavior, you need to study the inner workings of the mind.

Noam Chomsky gave a particularly good example of precisely why behaviorism is an ill-advised model for artificial intelligence. In an interview with the Atlantic on “where artificial intelligence went wrong,” Chomsky chided Google and its efforts, using meteorology as an analogy for the flaw in its approach, and spoke of the “black box” of behavior. The opposite of behaviorism, or learned behavior, is the study of our instinctual construct – a universal human trait that exists outside of our circumstances or behaviors. By basing its model of the human brain on behavioral traits, Google is actually doing the opposite of deciphering a “collective consciousness, and is instead basing intrinsic characteristics on external factors and designing a code based on those and the bias of the collected data.”(22) As Chomsky put it:

I’ll get my statistical priors, if you like, there’s a high probability that tomorrow’s weather here will be the same as it was yesterday in Cleveland, so I’ll stick that in, and where the sun is will have some effect, so I’ll stick that in, and you get a bunch of assumptions like that, you run the experiment, you look at it over and over again, you correct it by Bayesian methods, you get better priors. You get a pretty good approximation of what tomorrow’s weather is going to be. That’s not what meteorologists do — they want to understand how it’s working. And these are just two different concepts of what success means, of what achievement is. In my own field, language fields, it’s all over the place. Like computational cognitive science applied to language, the concept of success that’s used is virtually always this. So if you get more and more data, and better and better statistics, you can get a better and better approximation to some immense corpus of text, like everything in The Wall Street Journal archives — but you learn nothing about the language.
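The forecaster Chomsky is lampooning can be sketched as a toy model (invented weather data, nothing resembling real meteorology): count how often one day’s weather follows another’s, then predict tomorrow by the highest observed transition frequency. It yields a “pretty good approximation” while encoding nothing about how weather actually works – which is exactly his point about data-driven A.I.

```python
from collections import Counter

def fit_persistence_model(history):
    """Estimate P(tomorrow | today) from observed day-to-day transitions."""
    transitions = Counter(zip(history, history[1:]))   # count (today, tomorrow) pairs
    totals = Counter(history[:-1])                     # how often each state was "today"
    return {pair: n / totals[pair[0]] for pair, n in transitions.items()}

def predict(model, today):
    """Pick the most probable next-day state given today's state."""
    candidates = {nxt: p for (cur, nxt), p in model.items() if cur == today}
    return max(candidates, key=candidates.get)

history = ["sun", "sun", "rain", "sun", "sun", "sun", "rain", "rain", "sun", "sun"]
model = fit_persistence_model(history)
print(predict(model, "sun"))
```

More history and better priors would sharpen the approximation, but the model would still contain no physics – no fronts, no pressure systems – only correlations in its corpus of past days.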

Even if one subscribes to the behaviorist school of thought, it is difficult to say that Chomsky is incorrect. There is no certainty in statistics, and statistics do nothing for an innate understanding of human nature. Furthermore, it is strange that, in being so driven toward a model of A.I., Google isn’t interested in human nature. This alone is a reason Google’s A.I. will never be the connective force the company claims it will be; it has nothing to do with the people it’s designed for – which, as shown, means only the first-world segment of the planet – and is simply a result of their actions.

If anyone wanted to see just how far off this approach is, all they would need to do is look at their own Internet usage and contemplate how that information could be misinterpreted. For example, if your husband tells you about a coworker who broke his leg and finds it impossible to get relief, you may find it strange that he can’t find a solution, so, out of boredom and curiosity, you Google “how to alleviate leg pain” – and, months later, you will find yourself still being inundated with ads and offers from physical therapists, even though your own health is perfectly fine. It may not be particularly harmful, but it is annoying, compounded by the fact that none of it relates to your own life or anything you care about. That will likely be the case with the burgeoning A.I. technology.

The Simplest Way for Google to Develop Artificial Intelligence

What Google really should be doing in developing an artificial intelligence, Chomsky proposes, is to ask, “How does intelligence actually work? How does our brain give rise to our cognitive abilities, and could this ever be implemented in a machine?(23)” Pondering these questions would serve a far greater good in the creation and purpose of A.I. than analyzing data compiled from search engines and perceived habits. As for a preferred psychological school of thought, cognitivism is the direction Chomsky suggests(24): “Cognitivism is the doctrine that knowledge is a collection of abstract symbolic representations that exist in the mind of the learner.”

Cognitivists see our functions as intrinsic, with cognitive functions based upon universal archetypes and symbolism. There is an unconscious-projection exercise that professors are known to give their first-year psychology students: give everyone a pen and paper and have them clear their minds and, without looking at the page, continuously draw whatever comes to them. More often than not, a large part of the group ends up drawing almost identical cyclical scribbles. These are considered universal symbols that exist within all of us in the collective unconscious described by Carl Jung(25). There is something wonderful in the connectivity of the collective unconscious that is immediately disregarded by Google’s A.I. and data-mining.

Unfortunately, the very essence of Google is data, from the data it derives from advertising to the data it collects from our everyday lives, tracking our every move. As we move toward hard A.I., the current manifestation of Google’s soft A.I. is supposed to be Web 3.0, the semantic web, which is how Google justifies much of its information tracking. The semantic web is purportedly supposed to saunter into the realm of “knowing” what a person would wish, and in what context. But one has to wonder precisely whose desires it will draw those conclusions from. It certainly won’t be those of someone who doesn’t even have access to a computer. This just serves to further push away those who lack access to the most basic necessities, much less the Internet. There are consequences in this that go completely ignored by Google’s executives. What are the psychological factors in play when new technological advances are considered without taking millions of people into account? These people are essentially being told that their lives don’t matter, that they are not a part of this revolution. Either that, or they are expected to assimilate to Google’s conception of “life.”

On the flip side lies cognitivism. If cognitive functions are based on archetypes and symbolism created within all of our brains, then they are universal – no one is left out. This concept is where all symbols come from, all dream interpretations; it is basically the deconstruction of everything that makes us human and, essentially, the same. In a sense, our collective unconscious is like a giant cloud drive – an ironic comparison, yet one that holds true. This is where we initially grounded our idea of A.I. as a beautiful concept. Potentially and hypothetically, A.I. could serve as an extension of that collective unconscious and, in keeping with the cloud-drive metaphor, as a sort of attached software.

Global Universal Warehouses Could Save the World

In considering Google, we kept thinking back to Paul Otlet and Henri La Fontaine. A quote from Alex Wright’s book on Otlet expresses the scientist-philosopher’s idealist plans:

An ardent “internationalist,” Otlet believed in the inevitable progress of humanity toward a peaceful new future, in which the free flow of information over a distributed network would render traditional institutions — like state governments — anachronistic. Instead, he envisioned a dawning age of social progress, scientific achievement, and collective spiritual enlightenment. At the center of it all would stand the Mundaneum, a bulwark and beacon of truth for the world(26).

He and La Fontaine were late-19th-century information scientists who, like Google, sought to bring information to the masses – in their case by way of a “global universal warehouse”(27). However, their goal was world peace; in fact, throughout most of their writings, peace was a common theme. But how exactly would forming a “global universal warehouse” lead toward peace? Looking at H.G. Wells’s description of the “World Brain”(28), Pierre Teilhard de Chardin’s notion of the “collective conscious”(29), or Marshall McLuhan’s phrasing of the “global village”(30), all of these turns of phrase are indicative of unity, which could certainly be a route to peace, though we think there is more beneath the surface. The “collective conscious” parallels Jung’s idea of the “collective unconscious”; de Chardin simply supposes it on the conscious level. This idea that there is a part of an “aura” that extends from one person to another, connecting us all, seems to be the very root of empathy and understanding, and it is ultimately through these things that peace is attained. It is one thing to bring people together through knowledge, but it is another to connect them on an innate level, to let people realize that there is no “us vs. them” – there is just “us.” No matter how utopian the goal, it is one that has been attempted through technology for centuries.


Google’s A.I. may have the capacity to carry out these utopian ideals, but only if its behaviorist approach does not leave out the inner workings of what makes us human. While behaviorism might be a more practical way to collect factual data, human nature is far more complicated than the sum of its actions, and our behaviors are not what bring us together – they are essentially what set us apart from one another. Google has already proven that the definitive reality it strives to convey is actually carefully curated by algorithmic bias; it will need to break this pattern.

Google’s Unexpected Intentions

Also proven to be carefully curated are Google’s espoused intentions. If you read between the lines, it is easy to find the exclusivity in their goals to better humankind. Demis Hassabis, who heads Google’s DeepMind division, recently discussed the future of Google’s A.I. and what excited him about the technology. Unsurprisingly, his answer centered on video games, self-driving cars and helping scientists with climate change(31). While these are all valid things to work toward, they are things most of the world can hardly relate to. So, when given the chance to use A.I. to address problems, third-world matters of hunger, poverty and war are deemed unimportant, or at least not worth talking about. This is also an issue of access: for most of the world, everything mentioned by Hassabis would be literally impossible to take part in, further pushing those people and their lives out of the line of sight until they are indistinguishable on the horizon of the periphery.

The message this sends can be psychologically damaging, affecting both the self-worth and the optimism of people already in dire circumstances, when they are told they aren’t even part of what is supposedly a representation of the human race. In the paper I’ve Got Nothing to Hide and Other Misunderstandings of Privacy, Daniel Solove describes the disheartening sense of helplessness people feel when control over their situations is relinquished to powerful institutions. Though he is actually discussing the way we in the first world are rendered helpless by governments and corporations delving into our privacy, his words apply almost perfectly to those left behind by Google’s technological advances: “It affects the power balances between people and the institutions…It not only frustrates the individual by creating a sense of helplessness and powerlessness, but also affects social structure by altering the kind of relationships people have with each other,” and how the different classes interact with those on the other side of privilege(32).

Our first approach to this analysis of Google’s efforts in artificial intelligence was in defense of Google. We honestly started out thinking the concept of A.I. was quite beautiful – and we still do. However, we now know that the reality falls short. We knew all along that it was created by a very specific, privileged sector of the population, one that could perhaps never understand what it is like to go without the Internet altogether. We were aware that most new and impressive technologies become widely available through the exploitation and later disregard of the less fortunate (to quote the comedian Louis C.K., “maybe every incredible human achievement in history was done with slaves”). Our original stance wasn’t in defense of these things, but in defense of artificial intelligence, which is meant to be so in sync with our lives as to be essentially an extension of ourselves.

We were capable of justifying it by thinking that, even if it is all filtered through the mind of an upper-class white person, that person will at least be well-educated, forward-thinking and left-leaning, and the things the algorithms expose are simply aspects of humanity that are ugly but real – things we can see about ourselves that we don’t like. Which, to a certain degree, is true, in that all of those Google Search auto-fills are the direct result of users’ actions. Perhaps we could use it as a lesson on the real prevalence of racism in our society, our tendencies to stereotype, and our wish not to feel so alone in our fears.


We further wrote off our doubts about A.I. by rationalizing that negative things can sometimes offer a reflection of ourselves, and that as we become better selves, so can the technology. Wouldn’t we rather have people who understood us and our needs than a system that fails to understand us? The under-represented will always be under-represented – the unfortunate burden of being a minority. How is this any different? Should every group get equal representation? Does that even make sense from an economic point of view? As far as social justice goes, perhaps it goes against our ethical social responsibilities, but don’t we, on the whole, design things to be usable by as many people as possible (i.e., the majority)? Assimilation is a downfall, but are the rights and rituals of culture and individuality as important as we think they are? Isn’t being human about sharing a universal condition, rendering everything else petty?

Of course, this way of thinking is misguided, and while we knew it initially, the more we read about the course of A.I., the further we drifted from our first rationalizations, allowing our misgivings to take over. There is no justifying the turning of a blind eye toward 60 percent of the world, toward our losses in privacy, or toward Google’s relentless data-mining. Even if you could overlook these elements, the way Google is proceeding with A.I. fundamentally isn’t capable of producing the results they are so eager for. In studying stats and charts, they might be able to see what we’ve done, but they can’t know why we did it. Until they retool their entire approach, their A.I. attempts will likely stay in the realm of soft A.I. – and a government or new corporation may pop Google’s A.I. bubble.

Conclusion

Even if we were capable of accomplishing a more ideal A.I. than Google’s, we are essentially at the mercy of the corporations and engineers that design the technology. Maybe once the technology is in place, we can tweak it and give it new meaning; it could even become public domain, like Santa Claus. Ultimately, love, isolation, loneliness, struggle, joy – these are things everyone shares. Artificial intelligence should be an extension of those things, of the building blocks of what makes us human. It should be based on humanity, not on what we think a select few would want or like. In the most idealistic model, A.I. would bring us closer together and bridge understanding between cultures, not leave the majority of the world’s citizens in the dust.

_______________________________________

References:

  1. Vaidhyanathan, Siva. The Googlization of Everything: (and Why We Should Worry). Berkeley: U of California, 2011. Print.
  2. Levy, Steven. “Secret of Googlenomics: Data-Fueled Recipe Brews Profitability.” WIRED. N.p., 22 May 2009. Web. 20 Mar. 2015.
  3. Mahfood, Barry. “Google’s Real Goal Is Artificial Intelligence.” Google’s Real Goal Is Artificial Intelligence ~. N.p., 6 May 2013.
  4. Ibid.
  5. Ibid.
  6. Mahfood, Barry. “Google’s Real Goal Is Artificial Intelligence.” Google’s Real Goal Is Artificial Intelligence ~. N.p., 6 May 2013.
  7. Jacobson, Murrey. “Google Finally Discloses Its Diversity Record, and It’s Not Good.” PBS. PBS, 28 May 2014. Web. 20 Mar. 2015.
  8. Ibid.
  9. Thompson, Derek. “How Much Income Puts You in the 1 Percent If You’re 30, 40, or 50?” The Atlantic. Atlantic Media Company, 30 Oct. 2014. Web. 20 Mar. 2015.
  10. Ibid.
  11. Davies, Chris. “Google Glass Controls and Artificial Intelligence Detailed.” SlashGear. N.p., 16 July 2012. Web. 20 Mar. 2015.
  12. Cohen, Reuven. “What’s Driving Google’s Obsession With Artificial Intelligence And Robots?” Forbes. Forbes Magazine, 28 Jan. 2014. Web.
  13. Vazquez, Joe. “Woman Wearing Google Glass Says She Was Attacked In San Francisco Bar.” CBS San Francisco. N.p., 25 Feb. 2014.
  14. Ibid.
  15. Johnson, Lauren. “Google Exec Blames Google Glass Failure on Bad Marketing.” AdWeek. N.p., 2015.
  16. Solon, Olivia. “Study Finds Racial Bias in Online Ad Delivery (Wired UK).” Wired UK. N.p., 4 Feb. 2013.
  17. Ibid.
  18. Ibid.
  19. Ibid.
  20. Rodriguez, Salvador. “60% of World’s Population Still Won’t Have Internet by the End of 2014.” Los Angeles Times. Los Angeles Times, 7 May 2014.
  21. Katz, Yarden. “Noam Chomsky on Where Artificial Intelligence Went Wrong.” The Atlantic. Atlantic Media Company, 01 Nov. 2012. Web.
  22. Ibid.
  23. Ibid.
  24. Ibid.
  25. Shelburne, Walter A. “Existential perspective in the thought of Carl Jung.” Journal of Religion and health 22, no. 1 (1983): 58.
  26. “The Birth of the Information Age: How Paul Otlet’s Vision for Cataloging and Connecting Humanity Shaped Our World.” Brain Pickings. N.p., n.d. Web. 20 Mar. 2015.
  27. Ibid.
  28. Wells, Herbert George. World brain. Best Classic Books, 2013.
  29. Steinhart, Eric. “Teilhard de Chardin and transhumanism.” Journal of Evolution and Technology 20, no. 1 (2008): 3.
  30. McLuhan, Marshall. Understanding media: The extensions of man. MIT press, 1994.
  31. McFarland, Matt. “Google’s Artificial Intelligence Breakthrough May Have a Huge Impact on Self-driving Cars and Much More.” Washington Post. The Washington Post, n.d.
  32. Solove, Daniel J. “‘I’ve Got Nothing to Hide’ and Other Misunderstandings of Privacy.” San Diego Law Review 44 (2007): 745.

