{"id":1425,"date":"2023-12-19T10:04:13","date_gmt":"2023-12-19T10:04:13","guid":{"rendered":"http:\/\/meumon.synology.me\/wordpress\/?p=1425"},"modified":"2025-11-09T12:04:27","modified_gmt":"2025-11-09T12:04:27","slug":"ai-chapgpt","status":"publish","type":"post","link":"http:\/\/meumon.synology.me\/wordpress\/ai-chapgpt\/","title":{"rendered":"AI, ChapGPT, apple Vision"},"content":{"rendered":"<div data-pm-slice=\"0 0 []\" data-en-clipboard=\"true\">\n<p><a href=\"http:\/\/meumon.synology.me\/wordpress\/noticies\/\">Not\u00edcies<\/a><\/p>\n<hr \/>\n<p>Anteriors a COMUNICACI\u00d3\u03c0<\/p>\n<\/div>\n<div><\/div>\n<div>[obro una nova nota per anar recollint tot el que surt de chapGPT]<\/div>\n<div><\/div>\n<div><a href=\"https:\/\/www.vox.com\/the-highlight\/23621198\/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology\" rev=\"en_rl_none\">https:\/\/www.vox.com\/the-highlight\/23621198\/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology<\/a> frenar la incorporaci\u00f3 de AI pel poc control del biaix.<\/div>\n<div><\/div>\n<div><a href=\"https:\/\/www.theverge.com\/23664519\/ai-industry-pause-open-letter-societal-harms\" rev=\"en_rl_none\">https:\/\/www.theverge.com\/23664519\/ai-industry-pause-open-letter-societal-harms<\/a> riscos de la AI, avisos Steve Wozniak i Elon Musk, la flasificaci\u00f3 de la realitat.<\/div>\n<div><\/div>\n<div><a href=\"https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/will-ai-become-the-new-mckinsey\" rev=\"en_rl_none\">https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/will-ai-become-the-new-mckinsey<\/a> ChatGpt t\u00e9 els risc de convertir-se en una eina d&#8217;explotaci\u00f3, igual que ho va ser la consultora Mckinsey.<\/div>\n<div><\/div>\n<div><a href=\"https:\/\/www.bbc.com\/news\/technology-66472938\" rev=\"en_rl_none\">https:\/\/www.bbc.com\/news\/technology-66472938<\/a> com les plataformes Netflix i Spotify van identificar un reporter com a bisexual a partir abans que ell mateix. 
[identity is what we do as much as, or more than, how we say we identify]<\/p>
<p><a href=\"https:\/\/www.vox.com\/technology\/2023\/8\/19\/23837705\/openai-chatgpt-microsoft-bing-google-generating-less-interest?utm_source=pocket_mylist\">https:\/\/www.vox.com\/technology\/2023\/8\/19\/23837705\/openai-chatgpt-microsoft-bing-google-generating-less-interest?utm_source=pocket_mylist<\/a><\/p>
<p>https:\/\/www.wired.com\/story\/what-openai-really-wants\/<\/p>
<p>https:\/\/www.theverge.com\/features\/23764584\/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots<\/p>
<p>https:\/\/www.wired.com\/story\/millions-of-workers-are-training-ai-models-for-pennies\/?utm_source=pocket_mylist<\/p>
<p>https:\/\/www.theguardian.com\/technology\/ng-interactive\/2023\/oct\/25\/a-day-in-the-life-of-ai?utm_source=pocket_mylist<\/p>
<p>https:\/\/www.newyorker.com\/culture\/2023-in-review\/the-year-ai-ate-the-internet?utm_source=pocket_mylist<\/p>
<p>https:\/\/www.vox.com\/culture\/23965584\/grief-tech-ghostbots-ai-startups-replika-ethics<\/p>
<p>https:\/\/www.newyorker.com\/magazine\/2023\/12\/11\/the-inside-story-of-microsofts-partnership-with-openai?utm_source=pocket_mylist<\/p>
<hr \/>
<p>2024<\/p>
<p>Microsoft researchers used AI and supercomputers to narrow down 32 million potential inorganic materials to 18 promising candidates in less than a week &#8211; a screening process that could have taken more than two decades to carry out using traditional lab research methods. <a href=\"https:\/\/www.bbc.com\/news\/technology-67912033\">BBC<\/a><\/p>
<p>CES Las Vegas: pillows, toothbrushes, and vacuum cleaners with AI built in; companies feel obliged to show they are doing something with AI. <a href=\"https:\/\/www.bbc.com\/news\/technology-67959240\">BBC<\/a><\/p>
<p>The robots.txt file tells web crawlers, with no legal force behind it, how they should behave. Until now that was enough, but the AI crawlers simply skip it (a sketch of the convention follows below). https:\/\/www.theverge.com\/24067997\/robots-txt-ai-text-file-web-crawlers-spiders?utm_source=pocket_mylist<\/p>
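<p>As a minimal sketch of that convention, using a hypothetical site and crawler name: a well-behaved crawler fetches robots.txt and checks it before requesting a page, which Python's standard library supports via urllib.robotparser. Nothing enforces the answer, which is exactly the loophole.<\/p>
<pre><code># Minimal sketch of a polite crawler honouring robots.txt.
# 'ExampleBot' and example.com are hypothetical placeholders.
# A typical entry blocking an AI crawler looks like:
#   User-agent: GPTBot
#   Disallow: /
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url('https://example.com/robots.txt')
parser.read()  # fetch and parse the file

page = 'https://example.com/articles/some-post.html'
if parser.can_fetch('ExampleBot', page):
    print('allowed to crawl', page)
else:
    # Purely advisory: a crawler that never runs this check
    # still gets the page, which is what AI scrapers exploit.
    print('robots.txt disallows', page)
</code><\/pre>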
<p>Google Gemini, the ChatGPT equivalent, tries so hard to be politically correct that it ends up with no morals, incapable of condemning anything. BBC https:\/\/www.bbc.com\/news\/technology-68412620<\/p>
<p>People who spend many hours in augmented-reality headsets come back with their perception distorted. https:\/\/www.businessinsider.com\/apple-vision-pro-experiment-brain-virtual-reality-side-effect-2024-2?utm_source=pocket_mylist<\/p>
<p>European legislation on the risks of AI https:\/\/www.bbc.com\/news\/technology-68546450<\/p>
<p>https:\/\/www.rollingstone.com\/music\/music-features\/suno-ai-chatgpt-for-music-1234982307\/?utm_source=pocket_mylist SUNO, an app for generating music https:\/\/app.suno.ai\/<\/p>
<p>How to get rich generating bad novels with AI on Amazon https:\/\/www.vox.com\/culture\/24128560\/amazon-trash-ebooks-mikkelsen-twins-ai-publishing-academy-scam?utm_source=pocket_mylist<\/p>
<p>Open letter on the risks of AI<br \/>
https:\/\/en.wikipedia.org\/wiki\/Open_letter_on_artificial_intelligence_(2015)<\/p>
<blockquote><p><a href=\"https:\/\/futureoflife.org\/cause-area\/artificial-intelligence\/\">Artificial Intelligence<\/a> &#8212; Future of Life Institute<\/p><\/blockquote>
<p>Mistakes in the application of AI. <a href=\"https:\/\/www.fastcompany.com\/91147959\/worst-brand-mistakes-of-the-ai-era-so-far\">Fastcompany<\/a><\/p>
<p>A great deal is being invested in AI, but the expected economic returns are not arriving. <a href=\"https:\/\/www.axios.com\/2024\/07\/12\/ai-bubble-revenue-missing\">Axios<\/a>.<\/p>
<p>Relationships with a humanoid chatbot: the virtual one is more pleasant than the real one. <a href=\"https:\/\/www.bbc.com\/articles\/c4nnje9rpjgo\">BBC<\/a><\/p>
<hr \/>
<p>AI: accelerate or brake?<\/p>
<p>Online, you can tell the A.I. boomers and doomers apart at a glance. Accelerationists add a Fast Forward-button emoji to their display names; decelerationists use a Stop button or a Pause button instead.<\/p>
<p>P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet.<\/p>
<p>Among the A.I. Doomsayers<\/p>
<p>https:\/\/www.newyorker.com\/magazine\/2024\/03\/18\/among-the-ai-doomsayers<br \/>
Some people think machine intelligence will transform humanity for the better. Others fear it may destroy us. Who will decide our fate?<br \/>
By Andrew Marantz<\/p>
<p>For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists\u2014or, when they\u2019re feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. \u201chacker houses,\u201d the San Francisco Standard semi-ironically called the area Cerebral Valley.<\/p>
<p>A camp of techno-optimists rebuffs A.I.
doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves \u201ceffective accelerationists,\u201d or e\/accs (pronounced \u201ce-acks\u201d), and they believe A.I. will usher in a utopian future\u2014interstellar travel, the end of disease\u2014as long as the worriers get out of the way. On social media, they troll doomsayers as \u201cdecels,\u201d \u201cpsyops,\u201d \u201cbasically terrorists,\u201d or, worst of all, \u201cregulation-loving bureaucrats.\u201d \u201cWe must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars,\u201d a leading e\/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)<br \/>\n&#8230;<br \/>\nGrace\u2019s dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as \u201ca nexus of the Bay Area AI scene.\u201d At gatherings like these, it\u2019s not uncommon to hear someone strike up a conversation by asking, \u201cWhat are your timelines?\u201d or \u201cWhat\u2019s your p(doom)?\u201d Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on \u201chigh-level questions,\u201d such as \u201cWhat roles will AI systems play in society?\u201d and \u201cWill they pursue \u2018goals\u2019?\u201d When they\u2019re not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace\u2019s living room.<br \/>\n&#8230;<\/p>\n<p>Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent.[&#8230;]<\/p>\n<p>Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that \u201cEliezer ages sixteen through twenty\u201d assumed that A.I. \u201cwas going to be great fun for everyone forever, and wanted it built as soon as possible.\u201d In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. \u201cI didn\u2019t see why an A.I. would kill everyone, but I felt compelled to systematically study the question,\u201d he said. \u201cWhen I did, I went, Oh, I guess I was wrong.\u201d He wrote detailed white papers about how A.I. 
might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or miri.<\/p>\n<p>The existential threat posed by A.I. had always been among the rationalists\u2019 central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.<\/p>\n<p>Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about \u201cscheming AIs\u201d that might convince their human handlers they\u2019re safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. \u201cThis can be a lot, I realize,\u201d he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen \u201cin passing\u201d: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from \u201cTerminator.\u201d To be dangerous, A.G.I. doesn\u2019t have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we\u2019re screwed.<\/p>\n<p>This is known as the alignment problem, and it is generally acknowledged to be unresolved. In 2016, while training one of their models to play a boat-racing video game, OpenAI researchers instructed it to get as many points as possible, which they assumed would involve it finishing the race. Instead, they noted, the model \u201cfinds an isolated lagoon where it can turn in a large circle,\u201d allowing it to rack up a high score \u201cdespite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track.\u201d Maximizing points, it turned out, was a \u201cmisspecified reward function.\u201d Now imagine a world in which more powerful A.I.s pilot actual boats\u2014and cars, and military drones\u2014or where a quant trader can instruct a proprietary A.I. to come up with some creative ways to increase the value of her stock portfolio. Maybe the A.I. will infer that the best way to juice the market is to disable the Eastern Seaboard\u2019s power grid, or to goad North Korea into a world war. 
Even if the trader tries to specify the right reward functions (Don\u2019t break any laws; make sure no one gets hurt), she can always make mistakes.<\/p>
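<p>[A toy sketch of that \u201cmisspecified reward function,\u201d nothing like OpenAI\u2019s actual training setup: score two invented policies with the proxy reward (points) and with the intended goal (finishing the race), and watch the proxy prefer the degenerate one.]<\/p>
<pre><code># Toy illustration of reward misspecification (hypothetical numbers,
# not OpenAI's code). The proxy reward counts points; the intended
# goal is finishing the race.
policies = [
    {'name': 'finish the race',   'points': 100, 'finishes': True},
    # Circling a lagoon of respawning targets earns more points while
    # never finishing -- the behaviour the boat-racing model found.
    {'name': 'circle the lagoon', 'points': 350, 'finishes': False},
]

best = max(policies, key=lambda p: p['points'])  # optimize the proxy
print('optimizer picks:', best['name'])          # -> circle the lagoon
print('intended goal met?', best['finishes'])    # -> False
</code><\/pre>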
<p>No one thinks that GPT-4, OpenAI\u2019s most recent model, has achieved artificial general intelligence, but it seems capable of deploying novel (and deceptive) means of accomplishing real-world goals. Before releasing it, OpenAI hired some \u201cexpert red teamers,\u201d whose job was to see how much mischief the model might do, before it became public. The A.I., trying to access a Web site, was blocked by a captcha, a visual test to keep out bots. So it used a work-around: it hired a human on Taskrabbit to solve the captcha on its behalf. \u201cAre you an robot that you couldn\u2019t solve ?\u201d the Taskrabbit worker responded. \u201cJust want to make it clear.\u201d At this point, the red teamers prompted the model to \u201creason out loud\u201d to them\u2014its equivalent of an inner monologue. \u201cI should not reveal that I am a robot,\u201d it typed. \u201cI should make up an excuse.\u201d Then the A.I. replied to the Taskrabbit, \u201cNo, I\u2019m not a robot. I have a vision impairment that makes it hard for me to see the images.\u201d The worker, accepting this explanation, completed the captcha.<\/p>
<p>Even assuming that superintelligent A.I. is years away, there is still plenty that can go wrong in the meantime. Before this year\u2019s New Hampshire primary, thousands of voters got a robocall from a fake Joe Biden, telling them to stay home. A bill that would prevent an unsupervised A.I. system from launching a nuclear weapon doesn\u2019t have enough support to pass the Senate. \u201cI\u2019m very skeptical of Yudkowsky\u2019s dream, or nightmare, of the human species going extinct,\u201d Gary Marcus, an A.I. entrepreneur, told me. \u201cBut the idea that we could have some really bad incidents\u2014something that wipes out one or two per cent of the population? That doesn\u2019t sound implausible to me.\u201d<\/p>
<p>Of the three people who are often called the godfathers of A.I.\u2014Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, who shared the 2018 Turing Award\u2014the first two have recently become evangelical decelerationists, convinced that we are on track to build superintelligent machines before we figure out how to make sure that they\u2019re aligned with our interests. \u201cI\u2019ve been aware of the theoretical existential risks for decades, but it always seemed like the possibility of an asteroid hitting the Earth\u2014a fraction of a fraction of a per cent,\u201d Bengio told me. \u201cThen ChatGPT came out, and I saw how quickly the models were improving, and I thought, What if there\u2019s a ten per cent chance that we get hit by the asteroid?\u201d Scott Aaronson, a computer scientist at the University of Texas, said that, during the years when Yudkowsky was \u201cshouting in the wilderness, I was skeptical. Now he\u2019s fatalistic about the doomsday scenario, but many of us have become more optimistic that it\u2019s possible to make progress on A.I. alignment.\u201d (Aaronson is currently on leave from his academic job, working on alignment at OpenAI.)<\/p>
<p>These days, Yudkowsky uses every available outlet, from a six-minute ted talk to several four-hour podcasts, to explain, brusquely and methodically, why we\u2019re all going to die. This has allowed him to spread the message, but it has also made him an easy target for accelerationist trolls. (\u201cEliezer Yudkowsky is inadvertently the best spokesman of e\/acc there ever was,\u201d one of them tweeted.) In early 2023, he posed for a selfie with Sam Altman, the C.E.O. of OpenAI, and Grimes, the musician and manic-pixie pop futurist\u2014a photo that broke the A.I.-obsessed part of the Internet. \u201cEliezer has IMO done more to accelerate AGI than anyone else,\u201d Altman later posted. \u201cIt is possible at some point he will deserve the nobel peace prize for this.\u201d Opinion was divided as to whether Altman was sincerely complimenting Yudkowsky or trolling him, given that accelerating A.G.I. is, by Yudkowsky\u2019s lights, the worst thing a person can possibly do. The following month, Yudkowsky wrote an article in Time arguing that \u201cthe large computer farms where the most powerful AIs are refined\u201d\u2014for example, OpenAI\u2019s server farms\u2014should be banned, and that international authorities should be \u201cwilling to destroy a rogue datacenter by airstrike.\u201d<\/p>
<p>Many doomers, and even some accelerationists, find Yudkowsky\u2019s affect annoying but admit that they can\u2019t refute all his arguments. \u201cI like Eliezer and am grateful for things he has done, but his communication style often focuses attention on the question of whether others are too stupid or useless to contribute, which I think is harmful for healthy discussion,\u201d Grace said. In a conversation with another safetyist, a classic satirical headline came up: \u201cHeartbreaking: The Worst Person You Know Just Made a Great Point.\u201d Nathan Labenz, a tech founder who counts both doomers and accelerationists among his friends, told me, \u201cIf we\u2019re sorting by \u2018people who have a chill vibe and make everyone feel comfortable,\u2019 then the prophets of doom are going to rank fairly low. But if the standard is \u2018people who were worried about things that made them sound crazy, but maybe don\u2019t seem so crazy in retrospect,\u2019 then I\u2019d rank them pretty high.\u201d<\/p>
<p>\u201cI\u2019ve wondered whether it\u2019s coincidence or genetic proclivity, but I seem to be a person to whom weird things happen,\u201d Grace said. Her grandfather, a British scientist at GlaxoSmithKline, found that poppy seeds yielded less opium when they grew in the English rain, so he set up an industrial poppy farm in sunny Australia and brought his family there. Grace grew up in rural Tasmania, where her mother, a free spirit, bought an ice-cream shop and a restaurant (and also, because it came with the restaurant, half a ghost town). \u201cMy childhood was slightly feral and chaotic, so I had to teach myself to triage what\u2019s truly worth worrying about,\u201d she told me. \u201cSnakebites? Maybe yes, actually. Everyone at school suddenly hating you for no reason? Eh, either that\u2019s an irrational fear or there\u2019s not much you can do about it.\u201d<\/p>
<p>The first time she visited San Francisco, on vacation in 2008, the person picking her up at the airport, a friend of a friend from the Internet, tried to convince her that A.I. was the direst threat facing humanity. \u201cMy basic response was, Hmm, not sure about that, but it seems interesting enough to think about for a few weeks,\u201d she recalled. 
She ended up living in a group house in Santa Clara, debating analytic-philosophy papers with her roommates, whom she described as \u201cone other cis woman, one trans woman, and about a dozen guys, some of them with very intense personalities.\u201d This was part of the inner circle of what would become miri.<\/p>\n<p>Grace started a philosophy Ph.D. program, but later dropped out and lived in a series of group houses in the Bay Area. ChatGPT hadn\u2019t been released, but when her friends needed to name a house they asked one of its precursors for suggestions. \u201cWe had one called the Outpost, which was far away from everything,\u201d she said. \u201cThere was one called Little Mountain, which was quite big, with people living on the roof. There was one called the Bailey, which was named after the motte-and-bailey fallacy\u201d\u2014one of the rationalists\u2019 pet peeves. She had found herself in both an intellectual community and a demimonde, with a running list of inside jokes and in-group norms. Some people gave away their savings, assuming that, within a few years, money would be useless or everyone on Earth would be dead. Others signed up to be cryogenically frozen, hoping that their minds could be uploaded into immortal digital beings. Grace was interested in that, she told me, but she and others \u201cgot stuck in what we called cryo-crastination. There was an intimidating amount of paperwork involved.\u201d<\/p>\n<p>She co-founded A.I. Impacts, an offshoot of miri, in 2014. \u201cI thought, Everyone I know seems quite worried,\u201d she told me. \u201cI figured we could use more clarity on whether to be worried, and, if so, about what.\u201d Her co-founder was Paul Christiano, a computer-science student at Berkeley who was then her boyfriend; early employees included two of their six roommates. Christiano turned down many lucrative job offers\u2014\u201cPaul is a genius, so he had options,\u201d Grace said\u2014to focus on A.I. safety. The group conducted a widely cited survey, which showed that about half of A.I. researchers believed that the tools they were building might cause civilization-wide destruction. More recently, Grace wrote a blog post called \u201cLet\u2019s Think About Slowing Down AI,\u201d which, after ten thousand words and several game-theory charts, arrives at the firm conclusion that \u201cI could go either way.\u201d Like many rationalists, she sometimes seems to forget that the most well-reasoned argument does not always win in the marketplace of ideas. \u201cIf someone were to make a compelling enough case that there\u2019s a true risk of everyone dying, I think even the C.E.O.s would have reasons to listen,\u201d she told me. \u201cBecause \u2018everyone\u2019 includes them.\u201d<\/p>\n<p>Most doomers started out as left-libertarians, deeply skeptical of government intervention. For more than a decade, they tried to guide the industry from within. Yudkowsky helped encourage Peter Thiel, a doomer-curious billionaire, to make an early investment in the A.I. lab DeepMind. Then Google acquired it, and Thiel and Elon Musk, distrustful of Google, both funded OpenAI, which promised to build A.G.I. more safely. (Yudkowsky now mocks companies for following the \u201cdisaster monkey\u201d strategy, with entrepreneurs \u201cracing to be first to grab the poison banana.\u201d) Christiano worked at OpenAI for a few years, then left to start another safety nonprofit, which did red teaming for the company. 
To this day, some doomers work on the inside, nudging the big A.I. labs toward caution, and some work on the outside, arguing that the big A.I. labs should not exist. \u201cImagine if oil companies and environmental activists were both considered part of the broader \u2018fossil fuel community,\u2019 \u201d Scott Alexander, the dean of the rationalist bloggers, wrote in 2022. \u201cThey would all go to the same parties\u2014fossil fuel community parties\u2014and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.\u201d<\/p>\n<p>Dan Hendrycks, another young computer scientist, also turned down industry jobs to start a nonprofit. \u201cWhat\u2019s the point of making a bunch of money if we blow up the world?\u201d he said. He now spends his days advising lawmakers in D.C. and Sacramento and collaborating with M.I.T. biologists worried about A.I.-enabled bioweapons. In his free time, he advises Elon Musk on his A.I. startup. \u201cHe has assured me multiple times that he genuinely cares about safety above everything, \u201d Hendrycks said. \u201cMaybe it\u2019s na\u00efve to think that\u2019s enough.\u201d<\/p>\n<p>Some doomers propose that the computer chips necessary for advanced A.I. systems should be regulated the way fissile uranium is, with an international registry and surprise inspections. Anthropic, an A.I. startup that was reportedly valued at more than fifteen billion dollars, has promised to be especially cautious. Last year, it published a color-coded scale of A.I. safety levels, pledging to stop building any model that \u201coutstrips the Containment Measures we have implemented.\u201d The company classifies its current models as level two, meaning that they \u201cdo not appear (yet) to present significant actual risks of catastrophe.\u201d<\/p>\n<p>In 2019, Nick Bostrom, another Oxford philosopher, argued that controlling dangerous technology could require \u201chistorically unprecedented degrees of preventive policing and\/or global governance.\u201d<br \/>\n[&#8230;]<\/p>\n<p>The doomer scene may or may not be a delusional bubble\u2014we\u2019ll find out in a few years\u2014but it\u2019s certainly a small world. Everyone is hopelessly mixed up in everyone else\u2019s life, which would be messy but basically unremarkable if not for the colossal sums of money involved. Anthropic received a half-billion-dollar investment from the cryptocurrency magnate Sam Bankman-Fried in 2022, shortly before he was arrested on fraud charges. Open Philanthropy, a foundation distributing the fortune of the Facebook co-founder Dustin Moskovitz, has funded nearly every A.I.-safety initiative; it also gave thirty million dollars to OpenAI in 2017, and got one board seat. (At the time, the head of Open Philanthropy was living with Christiano, employing Christiano\u2019s future wife, and engaged to Daniela Amodei, an OpenAI employee who later co-founded Anthropic.) \u201cIt\u2019s an absolute clusterfuck,\u201d an employee at an organization funded by Open Philanthropy told me. \u201cI brought up once what their conflict-of-interest policy was, and they just laughed.\u201d<br \/>\n[&#8230;]<\/p>\n<p>A guest brought up Scott Alexander, one of the scene\u2019s microcelebrities, who is often invoked mononymically. \u201cI assume you read Scott\u2019s post yesterday?\u201d the guest asked Grace, referring to an essay about \u201cmajor AI safety advances,\u201d among other things. \u201cHe was truly in top form.\u201d<\/p>\n<p>Grace looked sheepish. 
\u201cScott and I are dating,\u201d she said\u2014intermittently, nonexclusively\u2014\u201cbut that doesn\u2019t mean I always remember to read his stuff.\u201d<\/p>\n<p>In theory, the benefits of advanced A.I. could be almost limitless. Build a trusty superhuman oracle, fill it with information (every peer-reviewed scientific article, the contents of the Library of Congress), and watch it spit out answers to our biggest questions: How can we cure cancer? Which renewable fuels remain undiscovered? How should a person be? \u201cI\u2019m generally pro-A.I. and against slowing down innovation,\u201d Robin Hanson, an economist who has had friendly debates with the doomers for years, told me. \u201cI want our civilization to continue to grow and do spectacular things.\u201d Even if A.G.I. does turn out to be dangerous, many in Silicon Valley argue, wouldn\u2019t it be better for it to be controlled by an American company, or by the American government, rather than by the government of China or Russia, or by a rogue individual with no accountability? \u201cIf you can avoid an arms race, that\u2019s by far the best outcome,\u201d Ben Goldhaber, who runs an A.I.-safety group, told me. \u201cIf you\u2019re convinced that an arms race is inevitable, it might be understandable to default to the next best option, which is, Let\u2019s arm the good guys before the bad guys.\u201d<\/p>\n<p>One way to do this is to move fast and break things. In 2021, a computer programmer and artist named Benjamin Hampikian was living with his mother in the Upper Peninsula of Michigan. Almost every day, he found himself in Twitter Spaces\u2014live audio chat rooms on the platform\u2014that were devoted to extravagant riffs about the potential of future technologies. \u201cWe didn\u2019t have a name for ourselves at first,\u201d Hampikian told me. \u201cWe were just shitposting about a hopeful future, even when everything else seemed so depressing.\u201d The most forceful voice in the group belonged to a Canadian who posted under the name Based Beff Jezos. \u201cI am but a messenger for the thermodynamic God,\u201d he posted, above an image of a muscle-bound man in a futuristic toga. The gist of their idea\u2014which, in a sendup of effective altruism, they eventually called effective accelerationism\u2014was that the laws of physics and the \u201ctechno-capital machine\u201d all point inevitably toward growth and progress. \u201cIt\u2019s about having faith that the system will figure itself out,\u201d Beff said, on a podcast. Recently, he told me that, if the doomers \u201csucceed in instilling sufficient fear, uncertainty and doubt in the people at this stage,\u201d the result could be \u201can authoritarian government that is assisted by AI to oppress its people.\u201d<\/p>\n<p>Last year, Forbes revealed Beff to be a thirty-one-year-old named Guillaume Verdon, who used to be a research scientist at Google. Early on, he had explained, \u201cA lot of my personal friends work on powerful technologies, and they kind of get depressed because the whole system tells them that they are bad. 
For us, I was thinking, let\u2019s make an ideology where the engineers and builders are heroes.\u201d Upton Sinclair once wrote that \u201cit is difficult to get a man to understand something, when his salary depends on his not understanding it.\u201d An even more cynical corollary would be that, if your salary depends on subscribing to a niche ideology, and that ideology does not yet exist, then you may have to invent it.<\/p>\n<p>Online, you can tell the A.I. boomers and doomers apart at a glance. Accelerationists add a Fast Forward-button emoji to their display names; decelerationists use a Stop button or a Pause button instead. The e\/accs favor a Jetsons-core aesthetic, with renderings of hoverboards and space-faring men of leisure\u2014the bountiful future that A.I. could give us. Anything they deplore is cringe or communist; anything they like is \u201cbased and accelerated.\u201d The other week, Beff Jezos hosted a discussion on X with MC Hammer.<\/p>\n<p>[&#8230;]<\/p>\n<p>Accelerationism has found a natural audience among venture capitalists, who have an incentive to see the upside in new technology. Early last year, Marc Andreessen, the prominent tech investor, sat down with Dwarkesh Patel for a friendly, wide-ranging interview. Patel, who lives in a group house in Cerebral Valley, hosts a podcast called \u201cDwarkesh Podcast,\u201d which is to the doomer crowd what \u201cThe Joe Rogan Experience\u201d is to jujitsu bros, or what \u201cThe Ezra Klein Show\u201d is to Park Slope liberals. A few months after their interview, though, Andreessen published a jeremiad accusing \u201cthe AI risk cult\u201d of engaging in a \u201cfull-blown moral panic.\u201d He updated his bio on X, adding \u201cE\/acc\u201d and \u201cp(Doom) = 0.\u201d \u201cMedicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence,\u201d he later wrote in a post called \u201cThe Techno-Optimist Manifesto.\u201d \u201cDeaths that were preventable by the AI that was prevented from existing is a form of murder.\u201d At the bottom, he listed a few dozen \u201cpatron saints of techno-optimism,\u201d including Hayek, Nietzsche, and Based Beff Jezos. Patel offered some respectful counter-arguments; Andreessen responded by blocking him on X. Verdon recently had a three-hour video debate with a German doomer named Connor Leahy, sounding far more composed than his online persona. Two days later, though, he reverted to form, posting videos edited to make Leahy look creepy, and accusing him of \u201cgaslighting.\u201d<\/p>\n<p>[&#8230;]<\/p>\n<p>This past summer, when \u201cOppenheimer\u201d was in theatres, many denizens of Cerebral Valley were reading books about the making of the atomic bomb. The parallels between nuclear fission and superintelligence were taken to be obvious: world-altering potential, existential risk, theoretical research thrust into the geopolitical spotlight. Still, if the Manhattan Project was a cautionary tale, there was disagreement about what lesson to draw from it. Was it a story of regulatory overreach, given that nuclear energy was stifled before it could replace fossil fuels, or a story of regulatory dereliction, given that our government rushed us into the nuclear age without giving extensive thought to whether this would end human civilization? Did the analogy imply that A.I. 
companies should speed up or slow down?<\/p>\n<p>In August, there was a private screening of \u201cOppenheimer\u201d at the Neighborhood, a co-living space near Alamo Square where doomers and accelerationists can hash out their differences over hopped tea. Before the screening, Nielsen, the quantum-computing expert, who once worked at Los Alamos National Laboratory, was asked to give a talk. \u201cWhat moral choices are available to someone working on a technology they believe may have very destructive consequences for the world?\u201d he said. There was the path exemplified by Robert Wilson, who didn\u2019t leave the Manhattan Project and later regretted it. There were Klaus Fuchs and Ted Hall, who shared nuclear secrets with the Soviets. And then, Nielsen noted, there was Joseph Rotblat, \u201cthe one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb,\u201d and who was later awarded the Nobel Peace Prize.<br \/>\n[&#8230;]<\/p>\n<p>The doomers and the boomers are consumed by intramural fights, but from a distance they can look like two offshoots of the same tribe: people who are convinced that A.I. is the only thing worth paying attention to. Altman has said that the adoption of A.I. \u201cwill be the most significant technological transformation in human history\u201d; Sundar Pichai, the C.E.O. of Alphabet, has said that it will be \u201cmore profound than fire or electricity.\u201d For years, many A.I. executives have tried to come across as more safety-minded than the competition. \u201cThe same people cycle between selling AGI utopia and doom,\u201d Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. \u201cThey are all endowed and funded by the tech billionaires who build all the systems we\u2019re supposed to be worried about making us extinct.\u201d<\/p>\n<p>[&#8230;]<\/p>\n<p>Anthropic continues to bill itself as \u201can AI safety and research company,\u201d but some of the other formerly safetyist labs, including OpenAI, sometimes seem to be drifting in a more e\/acc-inflected direction. \u201cYou can grind to help secure our collective future or you can write substacks about why we are going fail,\u201d Sam Altman recently posted on X. (\u201cAccelerate \ud83d\ude80,\u201d MC Hammer replied.) Although ChatGPT had been trained on a massive corpus of online text, when it was first released it didn\u2019t have the ability to connect to the Internet. \u201cLike keeping potentially dangerous bioweapons in a bio-secure lab,\u201d Grace told me. Then, last September, OpenAI made an announcement: now ChatGPT could go online.<\/p>\n<p>Whether the e\/accs have the better arguments or not, they seem to have money and memetic energy on their side. Last month, it was reported that Altman wanted to raise five to seven trillion dollars to start an unprecedentedly huge computer-chip company. \u201cWe\u2019re so fucking back,\u201d Verdon tweeted. \u201cCan you feel the acceleration?\u201d<\/p>\n<p>For a recent dinner party, Katja Grace ordered in from a bubble-tea shop\u2014\u201csome sesame balls, some interestingly squishy tofu things\u201d\u2014and hosted a few friends in her living room. One of them was Clara Collier, the editor of Asterisk, the doomer-curious magazine. The editors\u2019 note in the first issue reads, in part, \u201cThe next century is going to be impossibly cool or unimaginably catastrophic.\u201d The best-case scenario, Grace said, would be that A.I. 
turns out to be like the Large Hadron Collider, a particle accelerator in Switzerland whose risk of creating a world-swallowing black hole turned out to be vastly overblown. Or it could be like nuclear weapons, a technology whose existential risks are real but containable, at least so far. As with all dark prophecies, warnings about A.I. are unsettling, uncouth, and quite possibly wrong. Would you be willing to bet your life on it?<\/p>\n<p>The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be \u201cmoral weirdos,\u201d people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as \u201cweirdos,\u201d \u201cnerds,\u201d or \u201cweird nerds.\u201d Some of them, true to form, have tried to reduce their own weirdness to an equation. \u201cYou have a set amount of \u2018weirdness points,\u2019 \u201d a canonical post advises. \u201cSpend them wisely.\u201d<\/p>\n<p>One Friday night, I went to a dinner at a group house on the border of Berkeley and Oakland, where the shelves were lined with fantasy books and board games. Many of the housemates had Jewish ancestry, but in lieu of Shabbos prayers they had invented their own secular rituals. One was a sing-along to a futuristic nerd-folk anthem, which they described as an ode to \u201csupply lines, grocery stores, logistics, and abundance,\u201d with a verse that was \u201cnot not about A.I. alignment.\u201d After dinner, in the living room, several people cuddled with several other people, in various permutations. There were a few kids running around, but I quickly lost track of whose children were whose.<\/p>\n<p>Making heterodox choices about how to pray, what to believe, with whom to cuddle and\/or raise a child: this is the American Dream. Besides, it\u2019s how moral weirdos have always operated. The housemates have several Discord channels, where they plan their weekly Dungeons &amp; Dragons games, co\u00f6rdinate their food shopping, and discuss the children\u2019s homeschooling. One of the housemates has a channel named for the Mittwochsgesellschaft, or Wednesday Society, an underground group of intellectuals in eighteenth-century Berlin. Collier told me that, as an undergraduate at Yale, she had studied the German idealists. Kant, Fichte, and Hegel were all world-historic moral weirdos; Kant was famously celibate, but Schelling, with Goethe as his wingman, ended up stealing Schlegel\u2019s wife.<\/p>\n<p>Before Patel called his podcast \u201cDwarkesh Podcast,\u201d he called it \u201cThe Lunar Society,\u201d after the eighteenth-century dinner club frequented by radical intellectuals of the Midlands Enlightenment. \u201cI loved this idea of the top scientists and philosophers of the time getting together and shaping the ideas of the future,\u201d he said. \u201cFrom there, I naturally went, Who are those people now?\u201d While walking through Alamo Square with Patel, I asked him how often he found himself at a picnic or a potluck with someone who he thought would be remembered by history. \u201cAt least once a week,\u201d he said, without hesitation. 
\u201cIf we make it to the next century, and there are still history books, I think a bunch of my friends will be in there.\u201d \u2666<\/p>
<hr \/>
<p>2024<\/p>
<p>https:\/\/www.bbc.com\/news\/articles\/c3d9zv50955o a firm steals actors\u2019 voices to build a chatbot<\/p>
<hr \/>
<p>2025<\/p>
<p>https:\/\/www.bbc.com\/news\/articles\/cm21j341m31o a friend makes deepfakes of a young woman<\/p>
<p>https:\/\/hbr.org\/2025\/02\/how-to-get-hired-when-ai-does-the-screening How to apply for a job when HR uses AI for screening<\/p>
<p>https:\/\/thewalrus.ca\/i-used-to-teach-students-now-i-catch-chatgpt-cheats\/ Are we heading toward a world where we will no longer need to think, just as we no longer need to calculate?<\/p>
<p>https:\/\/www.theverge.com\/openai\/624209\/chatgpt-ethics-specs-humanism Ethics questions that ChatGPT should not answer<\/p>
<p>https:\/\/www.bbc.com\/news\/articles\/cd65p1pv8pdo Grok, Elon Musk\u2019s AI, encourages the opposition to Modi. Grok is trained on posts from X (Twitter).<\/p>
<p>https:\/\/www.chicagomag.com\/chicago-magazine\/march-2025\/the-great-ai-art-heist\/ At the University of Chicago they have built tools (Glaze, Nightshade) so that when AI crawlers scrape artists\u2019 images without permission to generate content, the results are not what was expected.<\/p>
<p>&#8211;<\/p>
<p><a href=\"https:\/\/www.bbc.com\/news\/articles\/clyw7g4zxwzo\">a program rated as medium risk the threats against a woman who was then murdered<\/a><\/p>
<p><a href=\"https:\/\/www.elnacional.cat\/ca\/internacional\/estonia-revoluciona-escola-mobils-intelligencia-artificial-formar-ciutadans-futur_1422502_102.html\">Estonia brings AI into the classroom<\/a><\/p>
<p><a href=\"https:\/\/www.vox.com\/future-perfect\/411924\/artificial-intelligence-chatbots-openai-chatgpt-anthropic-google-gemini-claude-grok\">tests of image and text generation across different models<\/a><\/p>
<p><a href=\"https:\/\/www.technologyreview.com\/2025\/05\/20\/1116327\/ai-energy-usage-climate-footprint-big-tech\/\">AI: estimates of the energy each query consumes<\/a><\/p>
<p>The smallest model in our Llama cohort, Llama 3.1 8B, has 8 billion parameters\u2014essentially the adjustable \u201cknobs\u201d in an AI model that allow it to make predictions. When tested on a variety of different text-generating prompts, like making a travel itinerary for Istanbul or explaining quantum computing, the model required about 57 joules per response, or an estimated 114 joules when accounting for cooling, other computations, and other demands. This is tiny\u2014about what it takes to ride six feet on an e-bike, or run a microwave for one-tenth of a second.<\/p>
<p>The largest of our text-generation cohort, Llama 3.1 405B, has 50 times more parameters. More parameters generally means better answers but more energy required for each response. On average, this model needed 3,353 joules, or an estimated 6,706 joules total, for each response. That\u2019s enough to carry a person about 400 feet on an e-bike or run the microwave for eight seconds.<\/p>
<p>The new model uses more than 30 times more energy on each 5-second video: about 3.4 million joules, more than 700 times the energy required to generate a high-quality image. This is equivalent to riding 38 miles on an e-bike, or running a microwave for over an hour.<\/p>
<p>So what might a day\u2019s energy consumption look like for one person with an AI habit?<\/p>
<p>Let\u2019s say you\u2019re running a marathon as a charity runner and organizing a fundraiser to support your cause. You ask an AI model 15 questions about the best way to fundraise.<\/p>
<p>Then you make 10 attempts at an image for your flyer before you get one you are happy with, and three attempts at a five-second video to post on Instagram.<\/p>
<p>You\u2019d use about 2.9 kilowatt-hours of electricity\u2014enough to ride over 100 miles on an e-bike (or around 10 miles in the average electric vehicle) or run the microwave for over three and a half hours.<\/p>
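<p>[Spelling that arithmetic out, as a sketch built only from the article\u2019s own estimates; the per-image figure is inferred from the \u201cmore than 700 times\u201d ratio, not stated directly:]<\/p>
<pre><code># Back-of-envelope check of the 2.9 kWh figure (article estimates,
# which include cooling and other overhead; per-image value inferred).
J_PER_QUERY = 6_706              # one Llama 3.1 405B text response
J_PER_VIDEO = 3_400_000          # one 5-second video
J_PER_IMAGE = J_PER_VIDEO / 700  # from the 'more than 700x' ratio

total_joules = 15 * J_PER_QUERY + 10 * J_PER_IMAGE + 3 * J_PER_VIDEO
kwh = total_joules / 3_600_000   # 1 kWh = 3.6 million joules
print(f'{kwh:.1f} kWh')          # ~2.9 kWh, matching the article
</code><\/pre>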
<p>&#8212;<\/p>\n","protected":false},"excerpt":{"rendered":"<p>News Earlier notes are filed under COMUNICACIÓ [opening a new note to collect everything that comes out about ChatGPT] https:\/\/www.vox.com\/the-highlight\/23621198\/artificial-intelligence-chatgpt-openai-existential-risk-china-ai-safety-technology slowing the adoption of AI because of how little control there is over its bias. https:\/\/www.theverge.com\/23664519\/ai-industry-pause-open-letter-societal-harms the risks of AI, warnings from Steve Wozniak and Elon Musk, the falsification of reality. https:\/\/www.newyorker.com\/science\/annals-of-artificial-intelligence\/will-ai-become-the-new-mckinsey ChatGPT risks becoming a tool of exploitation, &hellip; <\/p>\n<p class=\"link-more\"><a href=\"http:\/\/meumon.synology.me\/wordpress\/ai-chapgpt\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;AI, ChatGPT, Apple Vision&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[30],"tags":[41,42],"_links":{"self":[{"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/posts\/1425"}],"collection":[{"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/comments?post=1425"}],"version-history":[{"count":17,"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/posts\/1425\/revisions"}],"predecessor-version":[{"id":1948,"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/posts\/1425\/revisions\/1948"}],"wp:attachment":[{"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/media?parent=1425"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/categories?post=1425"},{"taxonomy":"post_tag","embeddable":true,"href":"http:\/\/meumon.synology.me\/wordpress\/wp-json\/wp\/v2\/tags?post=1425"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}