<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>At least ten books by women about artificial intelligence (AI), both for and against</title>
<link href="./style.css" rel="stylesheet" type="text/css" media="all">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="author" content="Vane Vander">
</head>
<body>
<h1>At least ten books by women about artificial intelligence (AI), both for and against</h1>
<table class="f">
|
|
<tr class="info">
|
|
<td>The AI Mirror</td>
|
|
<td>Shannon Vallor</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet"></td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Unmasking AI</td>
|
|
<td>Joy Buolamwini</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Companies that claim to fear existential risk from AI could show a genuine commitment to safeguarding humanity by not releasing the AI tools they claim could end humanity.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Empire of AI</td>
|
|
<td>Karen Hao</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">In 2023, Stanford researchers would create a transparency tracker to score AI companies on whether they revealed even basic information about their large deep learning models, such as how many parameters they had, what data they were trained on, and whether there had been any independent verification of their capabilities. All ten of the companies they evaluated in the first year, including OpenAI, Google, and Anthropic, received an F; the highest score was 54 percent.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>The Algorithm</td>
|
|
<td>Hilke Schellmann</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">I would read a text in German, my native language. After every question Christine asked, I read in German the Wikipedia entry for psychometrics, which deals broadly with measurements in psychology.<br>Here is what I read: <i>Die Psychometrie ist das Gebiet der Psychologie, das sich allgemein mit Theorie und Methode des psychologischen Messens befasst...</i> And so on and so forth. No words in English crossed my lips.<br>I thought after answering all the questions in German I would get an error message from the system saying it couldn't compute any scores.<br>I was surprised when I got a message with the results. In fact, the AI gave me a score of 6 out of 9 for English competency, and overall my skill level in English was deemed "competent."</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Against Reduction</td>
|
|
<td>Noelani Arista, et al.</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">A typical chilling forecast of AI is that it will be smarter, stronger, and more powerful than us, but the real fear should be that it might not be better. It could be instilled with values from our past, with less nuance, more bias, and replete with reductionist tropes.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Feminist AI</td>
|
|
<td>Jude Browne, et al.</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">...such algorithms reinforce the status quo: those who have the most resources and the highest likelihood of success receive more resources. Through predictive algorithms, the past is recursively projected into the future, thus foreclosing options that could lead to more equitable distribution of resources and more diversity in the pool of those likely to succeed.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>AI Needs You</td>
|
|
<td>Verity Harding</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">As Jessica Montgomery of the University of Cambridge has highlighted, a survey from 2017 indicated that the public felt AI-enabled art was the least useful AI technology, and yet we've seen hundreds of millions of dollars invested into programs that use AI to generate images.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Future Tense</td>
|
|
<td>Martha Brockenbrough</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">A computer can multiply two 20-digit numbers with speed and accuracy. This would be hard for even a math champ.<br>But a human can button a shirt, tie shoes, or fix a bowl of cereal with milk - things that would be tough to accomplish for an AI-powered robot.<br>Human beings think the multiplication challenge is hard. We think getting dressed and eating breakfast is easy. But when it comes to the computational power each task takes, we have it backward.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>The AI Con</td>
|
|
<td>Emily M. Bender</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Corrado himself admits that he wouldn't want the tool to be a part of his family's "healthcare journey." But in the same breath, he says the large language model will take "the places in healthcare where AI can be beneficial and [expand] them by 10-fold." (We note that a tenfold increase on zero is still zero. So his statement might actually be technically true.)</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Code-Dependent</td>
|
|
<td>Madhumita Murgia</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Empathy teaches us that everyone is flawed, yet still worthy of mercy. A risk score says the opposite: this is your digitally fixed reality, you have criminality inside you waiting to burst out. Your circumstances mean you don't deserve forgiveness.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Supremacy</td>
|
|
<td>Parmy Olson</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Imagine if a large food manufacturer like Unilever made increasingly delicious snacks but refused to put the ingredients on its packaging or explain how that food was made. That's essentially what OpenAI was doing. You could learn more about what was in a pack of Doritos than you could about a large language model.</t>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>The Atlas of AI</td>
|
|
<td>Kate Crawford</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">There were categories for apples and airplanes, scuba divers and sumo wrestlers. But there were cruel, offensive, and racist labels, too: photographs of people were classified into categories like "alcoholic," "ape-man," "crazy," "hooker," and "slant eye." All of these terms were imported from WordNet's lexical database and given to crowdworkers to pair with images.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Robot Souls</td>
|
|
<td>Eve Poole</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">But I think we would all want to argue that there is still something qualitatively different between AI learning to appreciate the colour red, and a human spontaneously doing so. In French this would be the difference between the verbs for knowing, savoir and connaître. Savoir is the kind of knowing that we can give AI; connaître, that familiarity with red, comes from somewhere else.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>The New Age of Sexism</td>
|
|
<td>Laura Bates</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">When users become desensitized to venting their frustration at Siri or Alexa for offering substandard responses or for not being clever or efficient enough, there is a risk that both they and others present, such as children growing up in homes where AI assistants are regularly used, absorb the belief that it is normal and acceptable to speak to women in a similar way.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Confronting Dystopia</td>
|
|
<td>Eva Paus</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Prophecies about the devastating impact of new technologies on jobs and working conditions are not new, going back to at least the early nineteenth century, when the Luddites smashed the steam-powered looms that were threatening their jobs.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="f">
|
|
<tr class="info">
|
|
<td>Your Face Belongs to Us</td>
|
|
<td>Kashmir Hill</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">When the prototype "automated people meter" successfully detected a person sitting on the couch, their name popped up above their head in a sans-serif white font on Turk's desktop computer. It was working perfectly until the film crew surprised Turk by bringing in a black Labrador.<br>The system tagged the dog as "Stanzi," the one woman in the experiment.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<p><a class="button" href="#moids">> Show books by men too?</a></p>
|
|
<div id="moids">
|
|
<p><a class="button" href="#">> Aahh! Never mind!</a></p>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>Nexus</td>
|
|
<td>Yuval Noah Harari</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">When we engage in a political debate with a computer impersonating a human, we lose twice. First, it is pointless for us to waste time in trying to change the opinions of a propaganda bot, which is just not open to persuasion. Second, the more we talk with the computer, the more we disclose about ourselves, thereby making it easier for the bot to hone its arguments and sway our views.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>My Life as an Artificial Creative Intelligence</td>
|
|
<td>Mark Amerika</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">We need to move beyond thinking about creativity as a simple automatic process that can be "programmed" and engineered. Instead, we need to see it as a product of human creativity, a social behavior, an aspect of our nature that was not merely shaped by technology, but was instead shaped by human cultural experiences.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>How to Think About AI</td>
|
|
<td>Richard Susskind</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">It's conceivable before long that there will, at some point, be robots that can run 100 metres faster than Usain Bolt or shoot lower scores on the golf course than Tiger Woods, even at their best. But would we be interested in this? You might well be if you are fascinated by robotic performance. But most of us were thrilled by Bolt and Woods in their prime precisely because they were flesh-and-blood humans like us... When we read great literature or listen to fine music or view superb paintings, part of the thrill is precisely that another human has been involved in the work - striving, communicating, creating, and, in turn, inspiring, stimulating, and elevating our lives. Again, an indispensable and intrinsic part of that experience is the knowledge that another human is at the other end... And so, no matter how capable our systems are, it's likely that many forms of human expression, not least live performance, will continue to be valued by humans for their own sake.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>The AI Delusion</td>
|
|
<td>Gary Smith</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">The average brain has nearly 100 billion neurons, which is far, far more than can be replicated by the most powerful computers. On the other hand, compared to humans, the African elephant has three times as many neurons and one dolphin species has nearly twice as many neurons in the cerebral cortex. Yet, elephants, dolphins, and other creatures do not write poetry and novels, design skyscrapers and computers, prove theorems and make rational arguments. So it isn't just a numbers game.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>Ethical Machines</td>
|
|
<td>Reid Blackman</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Does your head of HR know about Amazon's biased hiring AI? Do they know how it happened? Do they know how bias can creep into an AI? Do they know the ethical, reputational, and legal implications of using biased AI? Does your chief medical officer know Optum's AI recommended paying more attention to white patients than to sicker Black patients? Are your doctors and nurses familiar with it? Does your advertising agency know about Facebook's AI that advertised houses for sale to white people and houses for rent to Black people?</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>Co-Intelligence</td>
|
|
<td>Ethan Mollick</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">Another consequence is that we could reduce the quality and depth of our thinking and reasoning. When we use AI to generate our first drafts, we don't have to think as hard or as deeply about what we write. We rely on the machine to do the hard work of analysis and synthesis, and we don't engage in critical and reflective thinking ourselves. We also miss the opportunity to learn from our mistakes and feedback and the chance to develop our own style.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>AI Snake Oil</td>
|
|
<td>Arvind Narayanan & Sayash Kapoor</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">To generate a single token - part of a word - ChatGPT has to perform roughly a trillion arithmetic operations. If you asked it to generate a poem that ended up having about a thousand tokens (i.e., a few hundred words), it would have required about a quadrillion calculations - a million billion. To appreciate the magnitude of that number, if every single person in the world together performed arithmetic at the rate of one calculation per minute, eight hours a day, a quadrillion calculations would take about a year. All that to generate one single response.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>If Anyone Builds It, Everyone Dies</td>
|
|
<td>Eliezer Yudkowsky & Nate Soares</td>
|
|
<td>Casual</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">And how many families would still own an original biological dog, we wonder, if with biotechnology you could make a synthetic sort of dog that was just as bouncy and cuddly and cheerful, and never threw up on your couch or got sick and tragically died? If it's just an option being offered in pure imagination and theory, it's easy to say no, when you don't have to pay for that in stained couches and crying children. But we wouldn't bet on conventional dogs being popular a hundred years later if those sorts of dogs come onto the market.<br>Similarly, human beings are not likely to be the best version of whatever the AI wants - if those preferences even involve keeping something vaguely human-shaped around, if it even has any preferences like that at all.<br>We would not be its favorite things, among all things it could create.</td>
|
|
</tr>
|
|
</table>
|
|
<br>
|
|
<table class="m">
|
|
<tr class="info">
|
|
<td>The Myth of Artificial Intelligence</td>
|
|
<td>Erik J. Larson</td>
|
|
<td>Academic</td>
|
|
</tr>
|
|
<tr>
|
|
<td class="snippet">We can summarize these positions about AI and people as follows. <i>Kurzweilians</i> (mythologists about AI, full-stop) wax mystical about machines after the Singularity having consciousness, emotions, motives, and vast intelligence... <i>Russellians</i> want to keep <i>Ex Machina</i> in movies, downsizing talk about superintelligence to more mathematically respectable ideas about general computation achieving "objectives." Unfortunately, Russellians tend to lump human beings into restricted definitions of intelligence, too. This reduces the perceived gap between human and machine, but only by reducing human possibility along with it.</td>
|
|
</tr>
|
|
</table>
|
|
</div>
</body>
</html>