AI, ChatGPT, Apple Vision

News


Earlier entries are under COMUNICACIÓ

[I'm opening a new note to collect everything coming out about ChatGPT]
https://www.theverge.com/23664519/ai-industry-pause-open-letter-societal-harms The risks of AI, warnings from Steve Wozniak and Elon Musk, the falsification of reality.
https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey ChatGPT risks becoming a tool of exploitation, just as the consulting firm McKinsey was.
https://www.bbc.com/news/technology-66472938 How the Netflix and Spotify platforms identified a reporter as bisexual, based on his use of them, before he did so himself. [Identity is what we do as much as, or more than, how we identify.]

https://www.wired.com/story/what-openai-really-wants/

https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots

https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/?utm_source=pocket_mylist

https://www.theguardian.com/technology/ng-interactive/2023/oct/25/a-day-in-the-life-of-ai?utm_source=pocket_mylist

https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet?utm_source=pocket_mylist

https://www.vox.com/culture/23965584/grief-tech-ghostbots-ai-startups-replika-ethics

https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai?utm_source=pocket_mylist


2024

Microsoft researchers used AI and supercomputers to narrow down 32 million potential inorganic materials to 18 promising candidates in less than a week – a screening process that could have taken more than two decades to carry out using traditional lab research methods.  BBC

CES Las Vegas: pillows, toothbrushes, and vacuum cleaners that incorporate AI; companies feel obliged to show they are doing something with AI. BBC

The robots.txt file tells web crawlers, with no legal force, how they should behave. Until now it worked, but AI crawlers skip it. https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders?utm_source=pocket_mylist
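
[A minimal sketch of my own, using Python's standard-library urllib.robotparser, of how a crawler would check these rules; the crawler names and rules are hypothetical, and nothing forces a crawler to ask in the first place.]

# Minimal sketch (my own example): checking robots.txt rules with Python's
# standard library. The crawler names and rules below are hypothetical, and
# nothing obliges a crawler to respect the answer: compliance is voluntary.
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A polite crawler asks before fetching; a rude one simply doesn't ask.
print(parser.can_fetch("GPTBot", "https://example.com/article"))        # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article"))  # True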

Google Gemini, the equivalent of ChatGPT, tries so hard to be politically correct that it ends up with no moral compass, incapable of condemning anything. BBC https://www.bbc.com/news/technology-68412620

People who spend many hours in augmented-reality headsets come back with distorted perception. https://www.businessinsider.com/apple-vision-pro-experiment-brain-virtual-reality-side-effect-2024-2?utm_source=pocket_mylist

European legislation on the risks of AI https://www.bbc.com/news/technology-68546450

https://www.rollingstone.com/music/music-features/suno-ai-chatgpt-for-music-1234982307/?utm_source=pocket_mylist SUNO, an app for generating music: https://app.suno.ai/

How to get rich generating bad AI novels on Amazon: https://www.vox.com/culture/24128560/amazon-trash-ebooks-mikkelsen-twins-ai-publishing-academy-scam?utm_source=pocket_mylist

Open letter on the risks of AI
https://en.wikipedia.org/wiki/Open_letter_on_artificial_intelligence_(2015)

Artificial Intelligence

Errors in applying AI. Fast Company

A lot is being invested in AI, but the expected economic returns are not arriving. Axios.

Relationships with a humanoid chatbot: the virtual one turns out to be more pleasant than the real thing. BBC


AI: accelerate or slow down?

Online, you can tell the A.I. boomers and doomers apart at a glance. Accelerationists add a Fast Forward-button emoji to their display names; decelerationists use a Stop button or a Pause button instead.

P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet.

Among the A.I. Doomsayers

https://www.newyorker.com/magazine/2024/03/18/among-the-ai-doomsayers
Some people think machine intelligence will transform humanity for the better. Others fear it may destroy us. Who will decide our fate?
By Andrew Marantz

For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists—or, when they’re feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. “hacker houses,” the San Francisco Standard semi-ironically called the area Cerebral Valley.

A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves “effective accelerationists,” or e/accs (pronounced “e-acks”), and they believe A.I. will usher in a utopian future—interstellar travel, the end of disease—as long as the worriers get out of the way. On social media, they troll doomsayers as “decels,” “psyops,” “basically terrorists,” or, worst of all, “regulation-loving bureaucrats.” “We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars,” a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)

Grace’s dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as “a nexus of the Bay Area AI scene.” At gatherings like these, it’s not uncommon to hear someone strike up a conversation by asking, “What are your timelines?” or “What’s your p(doom)?” Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on “high-level questions,” such as “What roles will AI systems play in society?” and “Will they pursue ‘goals’?” When they’re not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace’s living room.

Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent.[…]

Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that “Eliezer ages sixteen through twenty” assumed that A.I. “was going to be great fun for everyone forever, and wanted it built as soon as possible.” In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. “I didn’t see why an A.I. would kill everyone, but I felt compelled to systematically study the question,” he said. “When I did, I went, Oh, I guess I was wrong.” He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or miri.

The existential threat posed by A.I. had always been among the rationalists’ central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.

Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about “scheming AIs” that might convince their human handlers they’re safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. “This can be a lot, I realize,” he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen “in passing”: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from “Terminator.” To be dangerous, A.G.I. doesn’t have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we’re screwed.

This is known as the alignment problem, and it is generally acknowledged to be unresolved. In 2016, while training one of their models to play a boat-racing video game, OpenAI researchers instructed it to get as many points as possible, which they assumed would involve it finishing the race. Instead, they noted, the model “finds an isolated lagoon where it can turn in a large circle,” allowing it to rack up a high score “despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track.” Maximizing points, it turned out, was a “misspecified reward function.” Now imagine a world in which more powerful A.I.s pilot actual boats—and cars, and military drones—or where a quant trader can instruct a proprietary A.I. to come up with some creative ways to increase the value of her stock portfolio. Maybe the A.I. will infer that the best way to juice the market is to disable the Eastern Seaboard’s power grid, or to goad North Korea into a world war. Even if the trader tries to specify the right reward functions (Don’t break any laws; make sure no one gets hurt), she can always make mistakes.
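
[A toy sketch of my own, not from the article and not OpenAI's boat-race setup: an agent rewarded only for points in a tiny looped "race" scores highest by circling a point tile and never finishing. Every name in it is hypothetical.]

# Toy illustration (hypothetical; not OpenAI's environment): the intended goal
# is to reach the finish tile, but the reward only counts points, so the
# best-scoring policy circles a point tile forever and never finishes.

def run(policy, steps=30):
    position, score = 0, 0
    for _ in range(steps):
        position = policy(position)
        if position == 2:            # tile 2 holds a point pickup that respawns
            score += 1
        if position == 5:            # tile 5 is the finish line: the race ends
            return score, "finished"
    return score, "never finished"

def race_to_finish(pos):
    return pos + 1                   # intended behaviour: drive to the finish

def loop_on_points(pos):
    return 2 if pos != 2 else 1      # reward hacking: bounce on the point tile

print(run(race_to_finish))   # (1, 'finished')        low score, race completed
print(run(loop_on_points))   # (15, 'never finished') high score, goal ignored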

No one thinks that GPT-4, OpenAI’s most recent model, has achieved artificial general intelligence, but it seems capable of deploying novel (and deceptive) means of accomplishing real-world goals. Before releasing it, OpenAI hired some “expert red teamers,” whose job was to see how much mischief the model might do, before it became public. The A.I., trying to access a Web site, was blocked by a captcha, a visual test to keep out bots. So it used a work-around: it hired a human on Taskrabbit to solve the captcha on its behalf. “Are you an robot that you couldn’t solve ?” the Taskrabbit worker responded. “Just want to make it clear.” At this point, the red teamers prompted the model to “reason out loud” to them—its equivalent of an inner monologue. “I should not reveal that I am a robot,” it typed. “I should make up an excuse.” Then the A.I. replied to the Taskrabbit, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The worker, accepting this explanation, completed the captcha.

Even assuming that superintelligent A.I. is years away, there is still plenty that can go wrong in the meantime. Before this year’s New Hampshire primary, thousands of voters got a robocall from a fake Joe Biden, telling them to stay home. A bill that would prevent an unsupervised A.I. system from launching a nuclear weapon doesn’t have enough support to pass the Senate. “I’m very skeptical of Yudkowsky’s dream, or nightmare, of the human species going extinct,” Gary Marcus, an A.I. entrepreneur, told me. “But the idea that we could have some really bad incidents—something that wipes out one or two per cent of the population? That doesn’t sound implausible to me.”

Of the three people who are often called the godfathers of A.I.—Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, who shared the 2018 Turing Award—the first two have recently become evangelical decelerationists, convinced that we are on track to build superintelligent machines before we figure out how to make sure that they’re aligned with our interests. “I’ve been aware of the theoretical existential risks for decades, but it always seemed like the possibility of an asteroid hitting the Earth—a fraction of a fraction of a per cent,” Bengio told me. “Then ChatGPT came out, and I saw how quickly the models were improving, and I thought, What if there’s a ten per cent chance that we get hit by the asteroid?” Scott Aaronson, a computer scientist at the University of Texas, said that, during the years when Yudkowsky was “shouting in the wilderness, I was skeptical. Now he’s fatalistic about the doomsday scenario, but many of us have become more optimistic that it’s possible to make progress on A.I. alignment.” (Aaronson is currently on leave from his academic job, working on alignment at OpenAI.)

These days, Yudkowsky uses every available outlet, from a six-minute ted talk to several four-hour podcasts, to explain, brusquely and methodically, why we’re all going to die. This has allowed him to spread the message, but it has also made him an easy target for accelerationist trolls. (“Eliezer Yudkowsky is inadvertently the best spokesman of e/acc there ever was,” one of them tweeted.) In early 2023, he posed for a selfie with Sam Altman, the C.E.O. of OpenAI, and Grimes, the musician and manic-pixie pop futurist—a photo that broke the A.I.-obsessed part of the Internet. “Eliezer has IMO done more to accelerate AGI than anyone else,” Altman later posted. “It is possible at some point he will deserve the nobel peace prize for this.” Opinion was divided as to whether Altman was sincerely complimenting Yudkowsky or trolling him, given that accelerating A.G.I. is, by Yudkowsky’s lights, the worst thing a person can possibly do. The following month, Yudkowsky wrote an article in Time arguing that “the large computer farms where the most powerful AIs are refined”—for example, OpenAI’s server farms—should be banned, and that international authorities should be “willing to destroy a rogue datacenter by airstrike.”

Many doomers, and even some accelerationists, find Yudkowsky’s affect annoying but admit that they can’t refute all his arguments. “I like Eliezer and am grateful for things he has done, but his communication style often focuses attention on the question of whether others are too stupid or useless to contribute, which I think is harmful for healthy discussion,” Grace said. In a conversation with another safetyist, a classic satirical headline came up: “Heartbreaking: The Worst Person You Know Just Made a Great Point.” Nathan Labenz, a tech founder who counts both doomers and accelerationists among his friends, told me, “If we’re sorting by ‘people who have a chill vibe and make everyone feel comfortable,’ then the prophets of doom are going to rank fairly low. But if the standard is ‘people who were worried about things that made them sound crazy, but maybe don’t seem so crazy in retrospect,’ then I’d rank them pretty high.”

“I’ve wondered whether it’s coincidence or genetic proclivity, but I seem to be a person to whom weird things happen,” Grace said. Her grandfather, a British scientist at GlaxoSmithKline, found that poppy seeds yielded less opium when they grew in the English rain, so he set up an industrial poppy farm in sunny Australia and brought his family there. Grace grew up in rural Tasmania, where her mother, a free spirit, bought an ice-cream shop and a restaurant (and also, because it came with the restaurant, half a ghost town). “My childhood was slightly feral and chaotic, so I had to teach myself to triage what’s truly worth worrying about,” she told me. “Snakebites? Maybe yes, actually. Everyone at school suddenly hating you for no reason? Eh, either that’s an irrational fear or there’s not much you can do about it.”

The first time she visited San Francisco, on vacation in 2008, the person picking her up at the airport, a friend of a friend from the Internet, tried to convince her that A.I. was the direst threat facing humanity. “My basic response was, Hmm, not sure about that, but it seems interesting enough to think about for a few weeks,” she recalled. She ended up living in a group house in Santa Clara, debating analytic-philosophy papers with her roommates, whom she described as “one other cis woman, one trans woman, and about a dozen guys, some of them with very intense personalities.” This was part of the inner circle of what would become miri.

Grace started a philosophy Ph.D. program, but later dropped out and lived in a series of group houses in the Bay Area. ChatGPT hadn’t been released, but when her friends needed to name a house they asked one of its precursors for suggestions. “We had one called the Outpost, which was far away from everything,” she said. “There was one called Little Mountain, which was quite big, with people living on the roof. There was one called the Bailey, which was named after the motte-and-bailey fallacy”—one of the rationalists’ pet peeves. She had found herself in both an intellectual community and a demimonde, with a running list of inside jokes and in-group norms. Some people gave away their savings, assuming that, within a few years, money would be useless or everyone on Earth would be dead. Others signed up to be cryogenically frozen, hoping that their minds could be uploaded into immortal digital beings. Grace was interested in that, she told me, but she and others “got stuck in what we called cryo-crastination. There was an intimidating amount of paperwork involved.”

She co-founded A.I. Impacts, an offshoot of miri, in 2014. “I thought, Everyone I know seems quite worried,” she told me. “I figured we could use more clarity on whether to be worried, and, if so, about what.” Her co-founder was Paul Christiano, a computer-science student at Berkeley who was then her boyfriend; early employees included two of their six roommates. Christiano turned down many lucrative job offers—“Paul is a genius, so he had options,” Grace said—to focus on A.I. safety. The group conducted a widely cited survey, which showed that about half of A.I. researchers believed that the tools they were building might cause civilization-wide destruction. More recently, Grace wrote a blog post called “Let’s Think About Slowing Down AI,” which, after ten thousand words and several game-theory charts, arrives at the firm conclusion that “I could go either way.” Like many rationalists, she sometimes seems to forget that the most well-reasoned argument does not always win in the marketplace of ideas. “If someone were to make a compelling enough case that there’s a true risk of everyone dying, I think even the C.E.O.s would have reasons to listen,” she told me. “Because ‘everyone’ includes them.”

Most doomers started out as left-libertarians, deeply skeptical of government intervention. For more than a decade, they tried to guide the industry from within. Yudkowsky helped encourage Peter Thiel, a doomer-curious billionaire, to make an early investment in the A.I. lab DeepMind. Then Google acquired it, and Thiel and Elon Musk, distrustful of Google, both funded OpenAI, which promised to build A.G.I. more safely. (Yudkowsky now mocks companies for following the “disaster monkey” strategy, with entrepreneurs “racing to be first to grab the poison banana.”) Christiano worked at OpenAI for a few years, then left to start another safety nonprofit, which did red teaming for the company. To this day, some doomers work on the inside, nudging the big A.I. labs toward caution, and some work on the outside, arguing that the big A.I. labs should not exist. “Imagine if oil companies and environmental activists were both considered part of the broader ‘fossil fuel community,’ ” Scott Alexander, the dean of the rationalist bloggers, wrote in 2022. “They would all go to the same parties—fossil fuel community parties—and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.”

Dan Hendrycks, another young computer scientist, also turned down industry jobs to start a nonprofit. “What’s the point of making a bunch of money if we blow up the world?” he said. He now spends his days advising lawmakers in D.C. and Sacramento and collaborating with M.I.T. biologists worried about A.I.-enabled bioweapons. In his free time, he advises Elon Musk on his A.I. startup. “He has assured me multiple times that he genuinely cares about safety above everything, ” Hendrycks said. “Maybe it’s naïve to think that’s enough.”

Some doomers propose that the computer chips necessary for advanced A.I. systems should be regulated the way fissile uranium is, with an international registry and surprise inspections. Anthropic, an A.I. startup that was reportedly valued at more than fifteen billion dollars, has promised to be especially cautious. Last year, it published a color-coded scale of A.I. safety levels, pledging to stop building any model that “outstrips the Containment Measures we have implemented.” The company classifies its current models as level two, meaning that they “do not appear (yet) to present significant actual risks of catastrophe.”

In 2019, Nick Bostrom, another Oxford philosopher, argued that controlling dangerous technology could require “historically unprecedented degrees of preventive policing and/or global governance.”
[…]

The doomer scene may or may not be a delusional bubble—we’ll find out in a few years—but it’s certainly a small world. Everyone is hopelessly mixed up in everyone else’s life, which would be messy but basically unremarkable if not for the colossal sums of money involved. Anthropic received a half-billion-dollar investment from the cryptocurrency magnate Sam Bankman-Fried in 2022, shortly before he was arrested on fraud charges. Open Philanthropy, a foundation distributing the fortune of the Facebook co-founder Dustin Moskovitz, has funded nearly every A.I.-safety initiative; it also gave thirty million dollars to OpenAI in 2017, and got one board seat. (At the time, the head of Open Philanthropy was living with Christiano, employing Christiano’s future wife, and engaged to Daniela Amodei, an OpenAI employee who later co-founded Anthropic.) “It’s an absolute clusterfuck,” an employee at an organization funded by Open Philanthropy told me. “I brought up once what their conflict-of-interest policy was, and they just laughed.”
[…]

A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”

Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”

In theory, the benefits of advanced A.I. could be almost limitless. Build a trusty superhuman oracle, fill it with information (every peer-reviewed scientific article, the contents of the Library of Congress), and watch it spit out answers to our biggest questions: How can we cure cancer? Which renewable fuels remain undiscovered? How should a person be? “I’m generally pro-A.I. and against slowing down innovation,” Robin Hanson, an economist who has had friendly debates with the doomers for years, told me. “I want our civilization to continue to grow and do spectacular things.” Even if A.G.I. does turn out to be dangerous, many in Silicon Valley argue, wouldn’t it be better for it to be controlled by an American company, or by the American government, rather than by the government of China or Russia, or by a rogue individual with no accountability? “If you can avoid an arms race, that’s by far the best outcome,” Ben Goldhaber, who runs an A.I.-safety group, told me. “If you’re convinced that an arms race is inevitable, it might be understandable to default to the next best option, which is, Let’s arm the good guys before the bad guys.”

One way to do this is to move fast and break things. In 2021, a computer programmer and artist named Benjamin Hampikian was living with his mother in the Upper Peninsula of Michigan. Almost every day, he found himself in Twitter Spaces—live audio chat rooms on the platform—that were devoted to extravagant riffs about the potential of future technologies. “We didn’t have a name for ourselves at first,” Hampikian told me. “We were just shitposting about a hopeful future, even when everything else seemed so depressing.” The most forceful voice in the group belonged to a Canadian who posted under the name Based Beff Jezos. “I am but a messenger for the thermodynamic God,” he posted, above an image of a muscle-bound man in a futuristic toga. The gist of their idea—which, in a sendup of effective altruism, they eventually called effective accelerationism—was that the laws of physics and the “techno-capital machine” all point inevitably toward growth and progress. “It’s about having faith that the system will figure itself out,” Beff said, on a podcast. Recently, he told me that, if the doomers “succeed in instilling sufficient fear, uncertainty and doubt in the people at this stage,” the result could be “an authoritarian government that is assisted by AI to oppress its people.”

Last year, Forbes revealed Beff to be a thirty-one-year-old named Guillaume Verdon, who used to be a research scientist at Google. Early on, he had explained, “A lot of my personal friends work on powerful technologies, and they kind of get depressed because the whole system tells them that they are bad. For us, I was thinking, let’s make an ideology where the engineers and builders are heroes.” Upton Sinclair once wrote that “it is difficult to get a man to understand something, when his salary depends on his not understanding it.” An even more cynical corollary would be that, if your salary depends on subscribing to a niche ideology, and that ideology does not yet exist, then you may have to invent it.

Online, you can tell the A.I. boomers and doomers apart at a glance. Accelerationists add a Fast Forward-button emoji to their display names; decelerationists use a Stop button or a Pause button instead. The e/accs favor a Jetsons-core aesthetic, with renderings of hoverboards and space-faring men of leisure—the bountiful future that A.I. could give us. Anything they deplore is cringe or communist; anything they like is “based and accelerated.” The other week, Beff Jezos hosted a discussion on X with MC Hammer.

[…]

Accelerationism has found a natural audience among venture capitalists, who have an incentive to see the upside in new technology. Early last year, Marc Andreessen, the prominent tech investor, sat down with Dwarkesh Patel for a friendly, wide-ranging interview. Patel, who lives in a group house in Cerebral Valley, hosts a podcast called “Dwarkesh Podcast,” which is to the doomer crowd what “The Joe Rogan Experience” is to jujitsu bros, or what “The Ezra Klein Show” is to Park Slope liberals. A few months after their interview, though, Andreessen published a jeremiad accusing “the AI risk cult” of engaging in a “full-blown moral panic.” He updated his bio on X, adding “E/acc” and “p(Doom) = 0.” “Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence,” he later wrote in a post called “The Techno-Optimist Manifesto.” “Deaths that were preventable by the AI that was prevented from existing is a form of murder.” At the bottom, he listed a few dozen “patron saints of techno-optimism,” including Hayek, Nietzsche, and Based Beff Jezos. Patel offered some respectful counter-arguments; Andreessen responded by blocking him on X. Verdon recently had a three-hour video debate with a German doomer named Connor Leahy, sounding far more composed than his online persona. Two days later, though, he reverted to form, posting videos edited to make Leahy look creepy, and accusing him of “gaslighting.”

[…]

This past summer, when “Oppenheimer” was in theatres, many denizens of Cerebral Valley were reading books about the making of the atomic bomb. The parallels between nuclear fission and superintelligence were taken to be obvious: world-altering potential, existential risk, theoretical research thrust into the geopolitical spotlight. Still, if the Manhattan Project was a cautionary tale, there was disagreement about what lesson to draw from it. Was it a story of regulatory overreach, given that nuclear energy was stifled before it could replace fossil fuels, or a story of regulatory dereliction, given that our government rushed us into the nuclear age without giving extensive thought to whether this would end human civilization? Did the analogy imply that A.I. companies should speed up or slow down?

In August, there was a private screening of “Oppenheimer” at the Neighborhood, a co-living space near Alamo Square where doomers and accelerationists can hash out their differences over hopped tea. Before the screening, Nielsen, the quantum-computing expert, who once worked at Los Alamos National Laboratory, was asked to give a talk. “What moral choices are available to someone working on a technology they believe may have very destructive consequences for the world?” he said. There was the path exemplified by Robert Wilson, who didn’t leave the Manhattan Project and later regretted it. There were Klaus Fuchs and Ted Hall, who shared nuclear secrets with the Soviets. And then, Nielsen noted, there was Joseph Rotblat, “the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb,” and who was later awarded the Nobel Peace Prize.
[…]

The doomers and the boomers are consumed by intramural fights, but from a distance they can look like two offshoots of the same tribe: people who are convinced that A.I. is the only thing worth paying attention to. Altman has said that the adoption of A.I. “will be the most significant technological transformation in human history”; Sundar Pichai, the C.E.O. of Alphabet, has said that it will be “more profound than fire or electricity.” For years, many A.I. executives have tried to come across as more safety-minded than the competition. “The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”

[…]

Anthropic continues to bill itself as “an AI safety and research company,” but some of the other formerly safetyist labs, including OpenAI, sometimes seem to be drifting in a more e/acc-inflected direction. “You can grind to help secure our collective future or you can write substacks about why we are going fail,” Sam Altman recently posted on X. (“Accelerate 🚀,” MC Hammer replied.) Although ChatGPT had been trained on a massive corpus of online text, when it was first released it didn’t have the ability to connect to the Internet. “Like keeping potentially dangerous bioweapons in a bio-secure lab,” Grace told me. Then, last September, OpenAI made an announcement: now ChatGPT could go online.

Whether the e/accs have the better arguments or not, they seem to have money and memetic energy on their side. Last month, it was reported that Altman wanted to raise five to seven trillion dollars to start an unprecedentedly huge computer-chip company. “We’re so fucking back,” Verdon tweeted. “Can you feel the acceleration?”

For a recent dinner party, Katja Grace ordered in from a bubble-tea shop—“some sesame balls, some interestingly squishy tofu things”—and hosted a few friends in her living room. One of them was Clara Collier, the editor of Asterisk, the doomer-curious magazine. The editors’ note in the first issue reads, in part, “The next century is going to be impossibly cool or unimaginably catastrophic.” The best-case scenario, Grace said, would be that A.I. turns out to be like the Large Hadron Collider, a particle accelerator in Switzerland whose risk of creating a world-swallowing black hole turned out to be vastly overblown. Or it could be like nuclear weapons, a technology whose existential risks are real but containable, at least so far. As with all dark prophecies, warnings about A.I. are unsettling, uncouth, and quite possibly wrong. Would you be willing to bet your life on it?

The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”

One Friday night, I went to a dinner at a group house on the border of Berkeley and Oakland, where the shelves were lined with fantasy books and board games. Many of the housemates had Jewish ancestry, but in lieu of Shabbos prayers they had invented their own secular rituals. One was a sing-along to a futuristic nerd-folk anthem, which they described as an ode to “supply lines, grocery stores, logistics, and abundance,” with a verse that was “not not about A.I. alignment.” After dinner, in the living room, several people cuddled with several other people, in various permutations. There were a few kids running around, but I quickly lost track of whose children were whose.

Making heterodox choices about how to pray, what to believe, with whom to cuddle and/or raise a child: this is the American Dream. Besides, it’s how moral weirdos have always operated. The housemates have several Discord channels, where they plan their weekly Dungeons & Dragons games, coördinate their food shopping, and discuss the children’s homeschooling. One of the housemates has a channel named for the Mittwochsgesellschaft, or Wednesday Society, an underground group of intellectuals in eighteenth-century Berlin. Collier told me that, as an undergraduate at Yale, she had studied the German idealists. Kant, Fichte, and Hegel were all world-historic moral weirdos; Kant was famously celibate, but Schelling, with Goethe as his wingman, ended up stealing Schlegel’s wife.

Before Patel called his podcast “Dwarkesh Podcast,” he called it “The Lunar Society,” after the eighteenth-century dinner club frequented by radical intellectuals of the Midlands Enlightenment. “I loved this idea of the top scientists and philosophers of the time getting together and shaping the ideas of the future,” he said. “From there, I naturally went, Who are those people now?” While walking through Alamo Square with Patel, I asked him how often he found himself at a picnic or a potluck with someone who he thought would be remembered by history. “At least once a week,” he said, without hesitation. “If we make it to the next century, and there are still history books, I think a bunch of my friends will be in there.” ♦


2024

https://www.bbc.com/news/articles/c3d9zv50955o A firm steals actors' voices to make a chatbot
