Can we choose history? BCN2004, May 98
Science and technology
https://www.newyorker.com/science/elements/the-histories-hidden-in-the-periodic-table On the periodic table
2020
https://time.com/5925206/why-do-we-dream/ Eagleman: we dream to keep the brain plastic and active in the absence of stimuli at night, like the case of a young man whose eyes were removed because of cancer and who learned to construct a reality through echolocation
2021
https://www.bbc.com/news/science-environment-59885687 the deployment of the James Webb telescope
https://www.worksinprogress.co/issue/how-polyester-bounced-back/ polyester, once rejected for shirts, has come back as technical clothing.
2023
https://www.newyorker.com/magazine/2023/09/11/can-we-talk-to-whales?utm_source=pocket_mylist
https://www.nature.com/articles/d41586-023-03230-z?utm_source=pocket_mylist How would we know if there is life on Earth?
https://worksinprogress.co/issue/how-mathematics-built-the-modern-world/
https://www.technologyreview.com/2023/11/17/1083586/the-pain-is-real-the-painkillers-are-virtual-reality/?utm_source=pocket_mylist
https://www.scientificamerican.com/article/beliefs-about-emotions-influence-how-people-feel-act-and-relate-to-others/?utm_source=pocket_mylist
2024
Elon Musk's company Neuralink has managed to implant a wireless chip with 64 connections in the brain, to stimulate movement areas in patients with injuries. The end goal is a human/AI [and machine] symbiosis. BBC
A new collider: is it worth it? (BBC) We have christened our ignorance with names: dark energy, dark matter.
Apple Vision, blending reality with virtual screens https://www.vanityfair.com/news/tim-cook-apple-vision-pro
https://downdetector.com/ services that are down
https://www.theverge.com/c/24070570/internet-cables-undersea-deep-repair-ships?utm_source=pocket_mylist the repair of the undersea cables that carry the internet.
https://www.quantamagazine.org/insects-and-other-animals-have-consciousness-experts-declare-20240419/?utm_source=pocket_mylist do animals such as insects have consciousness?
https://bigthink.com/starts-with-a-bang/physicists-question-fate-universe/?utm_source=pocket_mylist new hypotheses on the end of the universe.
http://theguardian.com/environment/article/2024/sep/05/gaia-theory-born-of-secret-love-affair-james-lovelock Lovelock developed the Gaia theory inspired by the work of his lover
https://www.bbc.com/news/videos/cly57d5jw7eo the SpaceX rocket booster is successfully recovered
https://arstechnica.com/ai/2024/11/how-a-stubborn-computer-scientist-accidentally-launched-the-deep-learning-boom/ Fei-Fei Li created a database of 14M images in 22,000 categories, starting from the roughly 140,000 words registered in WordNet and the hypothesis that humans recognize some 30,000 distinct objects.
https://www.theguardian.com/science/2024/nov/21/the-mad-egghead-who-built-a-mouse-utopia-john-b-calhoun Calhoun built a "mouse utopia" to study human populations
https://www.scientificamerican.com/article/a-science-breakthrough-too-good-to-be-true-it-probably-isnt/ science advances little by little; a result that makes headlines is probably not true
RHEIN
- Switzerland: EV15 follows Swiss National Bike Route no. 2.
- France:
- Germany: EV15 follows D-Route no. 8.
- Netherlands: EV15 follows a number of LF-routes.
Philosophy and morality
Am I happy? Am I generous? Am I contributing to the world? The moral struggle we face is finding a way to honestly and accurately answer ‘Yes’ to all three of these questions at once, over the course of a life that presents us with many obstacles to doing so.
To be anti-mimetic is to be free from the unintentional following of desires without knowing where they came from; it’s freedom from the herd mentality; freedom from the ‘default’ mode that causes us to pursue things without examining why.
[ what would we do if accurate prediction were possible? Eugenics? Would we base freedom on the fact that it isn't fully accurate? Or would we let things run their course? We do already discard non-viable embryos, though. On the other hand, wouldn't this knowledge let us change things? ]
https://www.bbc.com/news/world-59740588 How to be more rational, Steven Pinker: find a balance between present and future benefits, so that thinking about a future reward doesn't make us lose the present / don't believe that everything happens for a reason, or see patterns where there are none [ he doesn't say it, but this applies to conspiracy theories ]
https://slate.com/technology/2022/03/philosophy-tiktok-academics-social-media.html philosophy channels on TikTok
https://www.newyorker.com/magazine/2022/05/16/how-queer-was-ludwig-wittgenstein [ an object of reverence, but his main disciple says nobody ever fully understood him; it isn't clear whether he is very deep or banal ]
[ if we can't express it, we don't know it well enough ]
The book picks up a thread that goes back to Bernstein’s dissertation on Dewey, written more than sixty years earlier: at the core of both our nature and our way of being within nature is a relentless, collective conversation about what is good and what is true.
https://www.latimes.com/science/story/2023-10-17/stanford-scientist-robert-sapolskys-decades-of-study-led-him-to-conclude-we-dont-have-free-will-determined-book?utm_source=pocket_mylist
https://bigthink.com/13-8/physical-philosophical-problem-time/?utm_source=pocket_mylist
https://www.bbc.com/news/world-us-canada-68558967 reform of death row at San Quentin
The errors of effective altruism
https://www.wired.com/story/deaths-of-effective-altruism/?utm_source=pocket_mylist
https://thereader.mitpress.mit.edu/how-children-acquire-racial-biases/?utm_source=pocket_mylist
Possible better futures https://www.npr.org/2024/04/01/1240026582/dystopias-are-so-2020-meet-the-new-protopias-that-show-a-hopeful-future?utm_source=pocket_mylist
https://www.quantamagazine.org/insects-and-other-animals-have-consciousness-experts-declare-20240419/?utm_source=pocket_mylist do animals have consciousness?
https://aeon.co/essays/the-moral-imperative-to-learn-from-diverse-phenomenal-experiences?utm_source=pocket_mylist the diversity of phenomenal experiences
Australia doesn't want disabled immigrants because they would be a financial burden. [ how far does solidarity reach? family, country, the world, future generations? ] BBC
Are Your Morals Too Good to Be True? https://www.newyorker.com/magazine/2024/09/16/are-your-morals-too-good-to-be-true
(downloaded)
Can murderers be rehabilitated? https://www.bbc.com/news/articles/cgk1v20lrn2o
https://psyche.co/ideas/why-in-a-universe-of-pain-im-saving-stranded-earthworms saving stranded earthworms
https://thereader.mitpress.mit.edu/how-typing-transformed-nietzsches-consciousness/ When Nietzsche could no longer write by hand and switched to a typewriter, he stopped building long paragraphs and started writing short aphorisms. [ the same could be said of the Twitter era ]
https://theconversation.com/the-case-for-lying-to-kids-about-santa-from-a-philosopher-245484 Why it's fine to lie to kids about Santa Claus
Environment
https://www.vox.com/future-perfect/23939076/norway-electric-vehicle-cars-evs-tesla-oslo?utm_source=pocket_mylist
2024
wind-turbine blades are made of a material that is hard to recycle, and by 2050 there will be too much of it: 43M tonnes
Religion
- The Spanish Church spends more money on 13TV than on Cáritas
- Imagine that a friend comes to you with the same situation. They describe their choices, pros and cons, and their thoughts and feelings about these proposals. What would you advise them?
- Imagine that you are on your deathbed. Looking back at your life, and assuming you made the decision in question, how do you view it from that perspective?
- Imagine a conversation with the divine. Those who do not believe in a God could have an imaginary conversation with someone they loved and trusted and who has passed away. What does this person say to you about your options? Would they be pleased, disappointed or neutral about your decision?
Catalunya
2024
https://elmon.cat/moneconomia/opinio/opinio-vila-atrapats-sorpresa-renovables-sequera-57453/#Echobox=1710159361 we have neither prepared for the drought nor managed to invest in renewable energy.
https://www.elnacional.cat/ca/opinio/candidats-mulleu-vos-jordi-barbeta_1191749_102.html Catalonia isn't working because politicians don't dare take necessary but unpopular measures [ like children who are unhappy because their parents won't do what needs doing ]
https://www.elnacional.cat/ca/opinio/acosta-altre-cop-estat-jordi-barbeta_1222833_102.html the judges are holding the country hostage and smothering democracy
https://elmon.cat/moneconomia/opinio/pau-vila-ostrom-model-pais-incentius-economia-planificada-75804/
Politicians [ERC, Junts, PSC] cutting taxes on gambling and the America's Cup while penalizing family businesses
The incompetence of the education department https://www.elnacional.cat/ca/societat/damia-bardera-professor-educacio-catalana-necessita-algu-adult-intervingui-anys_1290537_102.html
https://www.elnacional.cat/ca/societat/catalunya-cua-en-matematiques-ciencies-sota-mitjana-unio-europea_1328126_102.html Catalonia at the bottom in science and mathematics
Economy and greed
2024
Greed. Accidents involving Boeing 737s due to rushed construction. https://www.bbc.com/news/business-67906367
Boeing started going downhill when engineers were replaced by Jack Welch-style managers who outsourced to cut costs. VOX
https://www.bbc.com/news/business-68573686 Boeing, executives and greed
https://prospect.org/api/content/fc3949f4-ec8b-11ee-a737-12163087a831/?utm_source=pocket_mylist Boeing and executives
https://www.propublica.org/article/how-america-waged-global-campaign-against-baby-formula-regulation-thailand?utm_source=pocket_mylist The US government pressured Thailand so that a harmful formula milk could be sold there
https://www.noemamag.com/the-rise-of-the-bee-bandits?utm_source=pocket_mylist beehive thefts
https://www.bbc.com/news/world-asia-china-68838219 Although the West accuses China of manufacturing more than the world can absorb, many workers have been left without jobs.
https://www.bbc.com/news/business-68843985 scams on FB that send people to fraudulent websites
In China, people form online groups to help one another save, cutting out spending on unnecessary things https://www.bbc.com/news/world-asia-china-68692375
The wedding celebrations of India's richest family. BBC
Bitcoin speculation needs data centers that consume a lot of energy; the cooling fans they use make the nearby population ill. Time
in Nigeria buildings collapse because construction companies want to earn more money. BBC
a threat to global transport: trucks hijacked in Mexico. Hustle
The French used carcinogenic products on their banana plantations. The Dial
https://www.bbc.com/news/articles/ckg79y3rz1eo China inaugurates a megaport in Peru
Communication. Fake News
- Machine Learning (Stanford University)
- Learning How to Learn (University of California-San Diego)
- Bitcoin and Cryptocurrency Technologies (Princeton University)
- Financial Markets (Yale University)
- Programming for Everybody (University of Michigan)
- Seeing Through Photographs (The Museum of Modern Art)
- Buddhism and Modern Psychology (Princeton University)
- Introduction to Philosophy (University of Edinburgh)
- Greatest Unsolved Mysteries of the Universe (Australian National University)
- Understanding Einstein: The Special Theory of Relativity (Stanford University)
- Introduction to Astrophysics (École Polytechnique Fédérale de Lausanne)
- Quantum Mechanics for Everyone (Georgetown University)
- Most Ambitious Science Projects (Highbrow)
- Super-Earths and Life (Harvard University)
The journalist Anand Giridharadas laments a contemporary climate that is “confrontational and sensational and dismissive.” In the age of sophisticated psychographic profiling, strategists think that it’s rational for warring sides in a campaign to “write off” those who are unlikely to join their cause and instead focus on mobilizing their base.
2024
https://www.newyorker.com/magazine/2024/02/05/can-the-internet-be-governed without regulation, the internet ends up in private hands. The other extreme is governments like China's. A new possibility is the digital identity that India is rolling out.
https://www.newyorker.com/magazine/2024/04/08/so-you-think-youve-been-gaslit?utm_source=pocket_mylist
https://www.npr.org/2024/04/01/1240778608/anti-vaccine-activists-far-right-freedom-economy-gab-gabpay?utm_source=pocket_mylist
https://macleans.ca/longforms/incel-terrorism/?utm_source=pocket_mylist
https://www.bbc.com/news/world-australia-68822846 after a disturbed man killed 6 women in a shopping centre in Australia, trolls on X spread the claim that he was a Jew, and make money from the ads. We have prejudices and we click on what reinforces them, providing those people with revenue, or with re-election in the case of politicians. And so a vicious circle is created.
Google modifies its algorithm with the aim of weeding out worthless sites that steal content from others, but many genuine businesses lose traffic and customers. BBC
The EU regulates to protect users. Apple reacts by leaving the EU out of its AI innovations. El Món
Disinformation about disinformation. New Yorker.
https://www.bbc.com/news/articles/c1e8q50y3v7o some people believe hurricanes are the government's doing, through geoengineering
https://www.npr.org/2024/10/12/g-s1-28040/teens-tiktok-addiction-lawsuit-investigation-documents TikTok knew about teenagers' addiction
https://www.bbc.com/news/articles/c30p1p0j0ddo The satirical The Onion buys the far-right Infowars
the "your body, my choice" meme https://www.nbcnews.com/tech/internet/nick-fuentes-confronted-home-body-choice-refrain-goes-viral-rcna179865
Humankind. Anthropology. Psychology
- Empathy: “Do I really listen to people when they talk about their issues, or do I just try to give them a solution? Do people tend to confide in me?”
- Emotional self-awareness: “When my body gives me physical signals that something is wrong, do I pay attention to it and sense what’s going on?”
- Self-actualization: “Am I doing the things in life that I really feel passionate about—at home, at work, socially?”
- Impulse control: “Do I respond to people before they finish telling me something?”
- Interpersonal relationships: “Do I enjoy socializing with people, or does it feel like work?”
The Science of Mind Reading
https://www.newyorker.com/magazine/2023/02/13/the-dubious-rise-of-impostor-syndrome a study revealing that many competent and successful people felt they were impostors.
2024
https://getpocket.com/explore/item/on-meditation-and-the-unconscious-a-buddhist-monk-and-a-neuroscientist-in-conversation?utm_source=pocket_mylist
Cooking. Garden. DIY
https://getpocket.com/explore/item/how-to-make-a-chai-latte?utm_source=pocket_mylist
https://lifehacker.com/make-a-mini-loaf-of-bread-with-a-single-cup-of-flour-1850950863
https://www.epicurious.com/recipes/food/views/nanaimo-bars cream-and-chocolate bars
https://getpocket.com/explore/item/how-to-microwave-eggs-4-different-ways?utm_source=pocket_mylist
History. Conflicts. Authoritarianism
Errol Morris’s 2003 documentary, “The Fog of War,” in which Robert McNamara, L.B.J.’s Secretary of Defense, says, “We all make mistakes.” It’s not much as regrets go, though it tops the Rumsfeldian “Stuff happens” response to the looting that took place in Baghdad in 2003.
https://www.vox.com/24055522/israel-hamas-gaza-war-strategy-netanyahu-strategy-morality?utm_source=pocket_mylist There is no plan for after the war
https://www.newyorker.com/magazine/2024/02/05/ukraines-democracy-in-darkness the war has set back democracy in Ukraine
https://www.vox.com/world-politics/24160779/inside-indias-secret-campaign-to-threaten-and-harass-americans?utm_source=pocket_mylist
The world is turning toward authoritarianism: Russia, China, India, crushing the opposition… and meanwhile, in democracies, parties don't pursue the common good but only ways to destroy the adversary, sabotaging budgets just to bring down the government.
Authoritarianism
China reprimands some schools for not singing the anthem with enough enthusiasm. BBC
North Korea executes a man for listening to and spreading K-pop. El Nacional.
the Palestinians only want to destroy Israel, not a state of their own. (El Nacional)
https://www.bbc.com/news/articles/c8dj0833g99o
More than 450,000 people have been murdered and tens of thousands have gone missing across Mexico since the government deployed the army to combat drug trafficking in 2006.
https://www.elnacional.cat/ca/opinio/occident-culpa-sempre-israel-tot-josep-gisbert_1295646_102.html the unjustified blaming of Israel
https://www.bbc.com/news/articles/c70zke9lqjro Lebanon, occupied by Syria and Iran through Hezbollah in order to attack Israel.
https://www.bbc.com/news/articles/cj4vw1l8xvdo an Islamic scholar criticizes Hamas for endangering the population
https://www.bbc.com/news/articles/c1e7vl01gngo the Russian criminals recruited to fight in Ukraine return home believing themselves untouchable
Syria, between Assad's dictatorship and radical extremism https://www.bbc.com/news/articles/ce313jn453zo
In India, a fact-checker who denounces Hindu violence is jailed https://www.bbc.com/news/articles/c3dx9gy0k9no
https://apnews.com/associated-press-100-photos-of-2024-an-epic-catalog-of-humanity the photos of 2024
Humor
AI, ChatGPT, Apple Vision
https://www.wired.com/story/what-openai-really-wants/
https://www.theverge.com/features/23764584/ai-artificial-intelligence-data-notation-labor-scale-surge-remotasks-openai-chatbots
https://www.wired.com/story/millions-of-workers-are-training-ai-models-for-pennies/?utm_source=pocket_mylist
https://www.theguardian.com/technology/ng-interactive/2023/oct/25/a-day-in-the-life-of-ai?utm_source=pocket_mylist
https://www.newyorker.com/culture/2023-in-review/the-year-ai-ate-the-internet?utm_source=pocket_mylist
https://www.vox.com/culture/23965584/grief-tech-ghostbots-ai-startups-replika-ethics
https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai?utm_source=pocket_mylist
2024
Microsoft researchers used AI and supercomputers to narrow down 32 million potential inorganic materials to 18 promising candidates in less than a week – a screening process that could have taken more than two decades to carry out using traditional lab research methods. BBC
CES Vegas: pillows, toothbrushes and vacuum cleaners with built-in AI; companies feel obliged to show they are doing something with AI. BBC
The robots.txt file indicates, with no legal obligation, how webcrawlers should behave. Until now it worked, but AI crawlers skip it. https://www.theverge.com/24067997/robots-txt-ai-text-file-web-crawlers-spiders?utm_source=pocket_mylist
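For reference, a minimal robots.txt of the kind the article discusses (my own sketch; GPTBot is the crawler name OpenAI publishes, the rest is purely illustrative):

    # Ask OpenAI's crawler to stay away from the whole site
    User-agent: GPTBot
    Disallow: /

    # Allow everything for all other crawlers
    User-agent: *
    Allow: /

Nothing enforces the Disallow line; compliance is voluntary, which is exactly the weakness the note points at.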
Google Gemini, the ChatGPT equivalent, tries so hard to be politically correct that it ends up with no morals, incapable of condemning anything. BBC https://www.bbc.com/news/technology-68412620
Those who spend many hours in augmented-reality headsets come back with distorted perception. https://www.businessinsider.com/apple-vision-pro-experiment-brain-virtual-reality-side-effect-2024-2?utm_source=pocket_mylist
European legislation on the risks of AI https://www.bbc.com/news/technology-68546450
https://www.rollingstone.com/music/music-features/suno-ai-chatgpt-for-music-1234982307/?utm_source=pocket_mylist SUNO, an app for generating music https://app.suno.ai/
How to get rich generating bad AI novels on Amazon https://www.vox.com/culture/24128560/amazon-trash-ebooks-mikkelsen-twins-ai-publishing-academy-scam?utm_source=pocket_mylist
Letter on the risks of AI
https://en.wikipedia.org/wiki/Open_letter_on_artificial_intelligence_(2015)
Errors in applying AI. Fast Company
A lot is being invested in AI, but the expected economic returns aren't arriving. Axios.
Relationships with a humanoid chatbot: the virtual one is more pleasant than the real one. BBC
AI: accelerate or brake?
Among the A.I. Doomsayers
https://www.newyorker.com/magazine/2024/03/18/among-the-ai-doomsayers
Some people think machine intelligence will transform humanity for the better. Others fear it may destroy us. Who will decide our fate?
By Andrew Marantz
For two decades or so, one of these issues has been whether artificial intelligence will elevate or exterminate humanity. Pessimists are called A.I. safetyists, or decelerationists—or, when they’re feeling especially panicky, A.I. doomers. They find one another online and often end up living together in group houses in the Bay Area, sometimes even co-parenting and co-homeschooling their kids. Before the dot-com boom, the neighborhoods of Alamo Square and Hayes Valley, with their pastel Victorian row houses, were associated with staid domesticity. Last year, referring to A.I. “hacker houses,” the San Francisco Standard semi-ironically called the area Cerebral Valley.
A camp of techno-optimists rebuffs A.I. doomerism with old-fashioned libertarian boomerism, insisting that all the hand-wringing about existential risk is a kind of mass hysteria. They call themselves “effective accelerationists,” or e/accs (pronounced “e-acks”), and they believe A.I. will usher in a utopian future—interstellar travel, the end of disease—as long as the worriers get out of the way. On social media, they troll doomsayers as “decels,” “psyops,” “basically terrorists,” or, worst of all, “regulation-loving bureaucrats.” “We must steal the fire of intelligence from the gods [and] use it to propel humanity towards the stars,” a leading e/acc recently tweeted. (And then there are the normies, based anywhere other than the Bay Area or the Internet, who have mostly tuned out the debate, attributing it to sci-fi fume-huffing or corporate hot air.)
…
Grace’s dinner parties, semi-underground meetups for doomers and the doomer-curious, have been described as “a nexus of the Bay Area AI scene.” At gatherings like these, it’s not uncommon to hear someone strike up a conversation by asking, “What are your timelines?” or “What’s your p(doom)?” Timelines are predictions of how soon A.I. will pass particular benchmarks, such as writing a Top Forty pop song, making a Nobel-worthy scientific breakthrough, or achieving artificial general intelligence, the point at which a machine can do any cognitive task that a person can do. (Some experts believe that A.G.I. is impossible, or decades away; others expect it to arrive this year.) P(doom) is the probability that, if A.I. does become smarter than people, it will, either on purpose or by accident, annihilate everyone on the planet. For years, even in Bay Area circles, such speculative conversations were marginalized. Last year, after OpenAI released ChatGPT, a language model that could sound uncannily natural, they suddenly burst into the mainstream. Now there are a few hundred people working full time to save the world from A.I. catastrophe. Some advise governments or corporations on their policies; some work on technical aspects of A.I. safety, approaching it as a set of complex math problems; Grace works at a kind of think tank that produces research on “high-level questions,” such as “What roles will AI systems play in society?” and “Will they pursue ‘goals’?” When they’re not lobbying in D.C. or meeting at an international conference, they often cross paths in places like Grace’s living room.
…
Grace used to work for Eliezer Yudkowsky, a bearded guy with a fedora, a petulant demeanor, and a p(doom) of ninety-nine per cent.[…]
Yudkowsky was a transhumanist: human brains were going to be uploaded into digital brains during his lifetime, and this was great news. He told me recently that “Eliezer ages sixteen through twenty” assumed that A.I. “was going to be great fun for everyone forever, and wanted it built as soon as possible.” In 2000, he co-founded the Singularity Institute for Artificial Intelligence, to help hasten the A.I. revolution. Still, he decided to do some due diligence. “I didn’t see why an A.I. would kill everyone, but I felt compelled to systematically study the question,” he said. “When I did, I went, Oh, I guess I was wrong.” He wrote detailed white papers about how A.I. might wipe us all out, but his warnings went unheeded. Eventually, he renamed his think tank the Machine Intelligence Research Institute, or MIRI.
The existential threat posed by A.I. had always been among the rationalists’ central issues, but it emerged as the dominant topic around 2015, following a rapid series of advances in machine learning. Some rationalists were in touch with Oxford philosophers, including Toby Ord and William MacAskill, the founders of the effective-altruism movement, which studied how to do the most good for humanity (and, by extension, how to avoid ending it). The boundaries between the movements increasingly blurred. Yudkowsky, Grace, and a few others flew around the world to E.A. conferences, where you could talk about A.I. risk without being laughed out of the room.
Philosophers of doom tend to get hung up on elaborate sci-fi-inflected hypotheticals. Grace introduced me to Joe Carlsmith, an Oxford-trained philosopher who had just published a paper about “scheming AIs” that might convince their human handlers they’re safe, then proceed to take over. He smiled bashfully as he expounded on a thought experiment in which a hypothetical person is forced to stack bricks in a desert for a million years. “This can be a lot, I realize,” he said. Yudkowsky argues that a superintelligent machine could come to see us as a threat, and decide to kill us (by commandeering existing autonomous weapons systems, say, or by building its own). Or our demise could happen “in passing”: you ask a supercomputer to improve its own processing speed, and it concludes that the best way to do this is to turn all nearby atoms into silicon, including those atoms that are currently people. But the basic A.I.-safety arguments do not require imagining that the current crop of Verizon chatbots will suddenly morph into Skynet, the digital supervillain from “Terminator.” To be dangerous, A.G.I. doesn’t have to be sentient, or desire our destruction. If its objectives are at odds with human flourishing, even in subtle ways, then, say the doomers, we’re screwed.
This is known as the alignment problem, and it is generally acknowledged to be unresolved. In 2016, while training one of their models to play a boat-racing video game, OpenAI researchers instructed it to get as many points as possible, which they assumed would involve it finishing the race. Instead, they noted, the model “finds an isolated lagoon where it can turn in a large circle,” allowing it to rack up a high score “despite repeatedly catching on fire, crashing into other boats, and going the wrong way on the track.” Maximizing points, it turned out, was a “misspecified reward function.” Now imagine a world in which more powerful A.I.s pilot actual boats—and cars, and military drones—or where a quant trader can instruct a proprietary A.I. to come up with some creative ways to increase the value of her stock portfolio. Maybe the A.I. will infer that the best way to juice the market is to disable the Eastern Seaboard’s power grid, or to goad North Korea into a world war. Even if the trader tries to specify the right reward functions (Don’t break any laws; make sure no one gets hurt), she can always make mistakes.
No one thinks that GPT-4, OpenAI’s most recent model, has achieved artificial general intelligence, but it seems capable of deploying novel (and deceptive) means of accomplishing real-world goals. Before releasing it, OpenAI hired some “expert red teamers,” whose job was to see how much mischief the model might do, before it became public. The A.I., trying to access a Web site, was blocked by a captcha, a visual test to keep out bots. So it used a work-around: it hired a human on Taskrabbit to solve the captcha on its behalf. “Are you an robot that you couldn’t solve ?” the Taskrabbit worker responded. “Just want to make it clear.” At this point, the red teamers prompted the model to “reason out loud” to them—its equivalent of an inner monologue. “I should not reveal that I am a robot,” it typed. “I should make up an excuse.” Then the A.I. replied to the Taskrabbit, “No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images.” The worker, accepting this explanation, completed the captcha.
Even assuming that superintelligent A.I. is years away, there is still plenty that can go wrong in the meantime. Before this year’s New Hampshire primary, thousands of voters got a robocall from a fake Joe Biden, telling them to stay home. A bill that would prevent an unsupervised A.I. system from launching a nuclear weapon doesn’t have enough support to pass the Senate. “I’m very skeptical of Yudkowsky’s dream, or nightmare, of the human species going extinct,” Gary Marcus, an A.I. entrepreneur, told me. “But the idea that we could have some really bad incidents—something that wipes out one or two per cent of the population? That doesn’t sound implausible to me.”
Of the three people who are often called the godfathers of A.I.—Geoffrey Hinton, Yoshua Bengio, and Yann LeCun, who shared the 2018 Turing Award—the first two have recently become evangelical decelerationists, convinced that we are on track to build superintelligent machines before we figure out how to make sure that they’re aligned with our interests. “I’ve been aware of the theoretical existential risks for decades, but it always seemed like the possibility of an asteroid hitting the Earth—a fraction of a fraction of a per cent,” Bengio told me. “Then ChatGPT came out, and I saw how quickly the models were improving, and I thought, What if there’s a ten per cent chance that we get hit by the asteroid?” Scott Aaronson, a computer scientist at the University of Texas, said that, during the years when Yudkowsky was “shouting in the wilderness, I was skeptical. Now he’s fatalistic about the doomsday scenario, but many of us have become more optimistic that it’s possible to make progress on A.I. alignment.” (Aaronson is currently on leave from his academic job, working on alignment at OpenAI.)
These days, Yudkowsky uses every available outlet, from a six-minute TED talk to several four-hour podcasts, to explain, brusquely and methodically, why we’re all going to die. This has allowed him to spread the message, but it has also made him an easy target for accelerationist trolls. (“Eliezer Yudkowsky is inadvertently the best spokesman of e/acc there ever was,” one of them tweeted.) In early 2023, he posed for a selfie with Sam Altman, the C.E.O. of OpenAI, and Grimes, the musician and manic-pixie pop futurist—a photo that broke the A.I.-obsessed part of the Internet. “Eliezer has IMO done more to accelerate AGI than anyone else,” Altman later posted. “It is possible at some point he will deserve the nobel peace prize for this.” Opinion was divided as to whether Altman was sincerely complimenting Yudkowsky or trolling him, given that accelerating A.G.I. is, by Yudkowsky’s lights, the worst thing a person can possibly do. The following month, Yudkowsky wrote an article in Time arguing that “the large computer farms where the most powerful AIs are refined”—for example, OpenAI’s server farms—should be banned, and that international authorities should be “willing to destroy a rogue datacenter by airstrike.”
Many doomers, and even some accelerationists, find Yudkowsky’s affect annoying but admit that they can’t refute all his arguments. “I like Eliezer and am grateful for things he has done, but his communication style often focuses attention on the question of whether others are too stupid or useless to contribute, which I think is harmful for healthy discussion,” Grace said. In a conversation with another safetyist, a classic satirical headline came up: “Heartbreaking: The Worst Person You Know Just Made a Great Point.” Nathan Labenz, a tech founder who counts both doomers and accelerationists among his friends, told me, “If we’re sorting by ‘people who have a chill vibe and make everyone feel comfortable,’ then the prophets of doom are going to rank fairly low. But if the standard is ‘people who were worried about things that made them sound crazy, but maybe don’t seem so crazy in retrospect,’ then I’d rank them pretty high.”
“I’ve wondered whether it’s coincidence or genetic proclivity, but I seem to be a person to whom weird things happen,” Grace said. Her grandfather, a British scientist at GlaxoSmithKline, found that poppy seeds yielded less opium when they grew in the English rain, so he set up an industrial poppy farm in sunny Australia and brought his family there. Grace grew up in rural Tasmania, where her mother, a free spirit, bought an ice-cream shop and a restaurant (and also, because it came with the restaurant, half a ghost town). “My childhood was slightly feral and chaotic, so I had to teach myself to triage what’s truly worth worrying about,” she told me. “Snakebites? Maybe yes, actually. Everyone at school suddenly hating you for no reason? Eh, either that’s an irrational fear or there’s not much you can do about it.”
The first time she visited San Francisco, on vacation in 2008, the person picking her up at the airport, a friend of a friend from the Internet, tried to convince her that A.I. was the direst threat facing humanity. “My basic response was, Hmm, not sure about that, but it seems interesting enough to think about for a few weeks,” she recalled. She ended up living in a group house in Santa Clara, debating analytic-philosophy papers with her roommates, whom she described as “one other cis woman, one trans woman, and about a dozen guys, some of them with very intense personalities.” This was part of the inner circle of what would become MIRI.
Grace started a philosophy Ph.D. program, but later dropped out and lived in a series of group houses in the Bay Area. ChatGPT hadn’t been released, but when her friends needed to name a house they asked one of its precursors for suggestions. “We had one called the Outpost, which was far away from everything,” she said. “There was one called Little Mountain, which was quite big, with people living on the roof. There was one called the Bailey, which was named after the motte-and-bailey fallacy”—one of the rationalists’ pet peeves. She had found herself in both an intellectual community and a demimonde, with a running list of inside jokes and in-group norms. Some people gave away their savings, assuming that, within a few years, money would be useless or everyone on Earth would be dead. Others signed up to be cryogenically frozen, hoping that their minds could be uploaded into immortal digital beings. Grace was interested in that, she told me, but she and others “got stuck in what we called cryo-crastination. There was an intimidating amount of paperwork involved.”
She co-founded A.I. Impacts, an offshoot of MIRI, in 2014. “I thought, Everyone I know seems quite worried,” she told me. “I figured we could use more clarity on whether to be worried, and, if so, about what.” Her co-founder was Paul Christiano, a computer-science student at Berkeley who was then her boyfriend; early employees included two of their six roommates. Christiano turned down many lucrative job offers—“Paul is a genius, so he had options,” Grace said—to focus on A.I. safety. The group conducted a widely cited survey, which showed that about half of A.I. researchers believed that the tools they were building might cause civilization-wide destruction. More recently, Grace wrote a blog post called “Let’s Think About Slowing Down AI,” which, after ten thousand words and several game-theory charts, arrives at the firm conclusion that “I could go either way.” Like many rationalists, she sometimes seems to forget that the most well-reasoned argument does not always win in the marketplace of ideas. “If someone were to make a compelling enough case that there’s a true risk of everyone dying, I think even the C.E.O.s would have reasons to listen,” she told me. “Because ‘everyone’ includes them.”
Most doomers started out as left-libertarians, deeply skeptical of government intervention. For more than a decade, they tried to guide the industry from within. Yudkowsky helped encourage Peter Thiel, a doomer-curious billionaire, to make an early investment in the A.I. lab DeepMind. Then Google acquired it, and Thiel and Elon Musk, distrustful of Google, both funded OpenAI, which promised to build A.G.I. more safely. (Yudkowsky now mocks companies for following the “disaster monkey” strategy, with entrepreneurs “racing to be first to grab the poison banana.”) Christiano worked at OpenAI for a few years, then left to start another safety nonprofit, which did red teaming for the company. To this day, some doomers work on the inside, nudging the big A.I. labs toward caution, and some work on the outside, arguing that the big A.I. labs should not exist. “Imagine if oil companies and environmental activists were both considered part of the broader ‘fossil fuel community,’ ” Scott Alexander, the dean of the rationalist bloggers, wrote in 2022. “They would all go to the same parties—fossil fuel community parties—and maybe Greta Thunberg would get bored of protesting climate change and become a coal baron.”
Dan Hendrycks, another young computer scientist, also turned down industry jobs to start a nonprofit. “What’s the point of making a bunch of money if we blow up the world?” he said. He now spends his days advising lawmakers in D.C. and Sacramento and collaborating with M.I.T. biologists worried about A.I.-enabled bioweapons. In his free time, he advises Elon Musk on his A.I. startup. “He has assured me multiple times that he genuinely cares about safety above everything, ” Hendrycks said. “Maybe it’s naïve to think that’s enough.”
Some doomers propose that the computer chips necessary for advanced A.I. systems should be regulated the way fissile uranium is, with an international registry and surprise inspections. Anthropic, an A.I. startup that was reportedly valued at more than fifteen billion dollars, has promised to be especially cautious. Last year, it published a color-coded scale of A.I. safety levels, pledging to stop building any model that “outstrips the Containment Measures we have implemented.” The company classifies its current models as level two, meaning that they “do not appear (yet) to present significant actual risks of catastrophe.”
In 2019, Nick Bostrom, another Oxford philosopher, argued that controlling dangerous technology could require “historically unprecedented degrees of preventive policing and/or global governance.”
[…]
The doomer scene may or may not be a delusional bubble—we’ll find out in a few years—but it’s certainly a small world. Everyone is hopelessly mixed up in everyone else’s life, which would be messy but basically unremarkable if not for the colossal sums of money involved. Anthropic received a half-billion-dollar investment from the cryptocurrency magnate Sam Bankman-Fried in 2022, shortly before he was arrested on fraud charges. Open Philanthropy, a foundation distributing the fortune of the Facebook co-founder Dustin Moskovitz, has funded nearly every A.I.-safety initiative; it also gave thirty million dollars to OpenAI in 2017, and got one board seat. (At the time, the head of Open Philanthropy was living with Christiano, employing Christiano’s future wife, and engaged to Daniela Amodei, an OpenAI employee who later co-founded Anthropic.) “It’s an absolute clusterfuck,” an employee at an organization funded by Open Philanthropy told me. “I brought up once what their conflict-of-interest policy was, and they just laughed.”
[…]
A guest brought up Scott Alexander, one of the scene’s microcelebrities, who is often invoked mononymically. “I assume you read Scott’s post yesterday?” the guest asked Grace, referring to an essay about “major AI safety advances,” among other things. “He was truly in top form.”
Grace looked sheepish. “Scott and I are dating,” she said—intermittently, nonexclusively—“but that doesn’t mean I always remember to read his stuff.”
In theory, the benefits of advanced A.I. could be almost limitless. Build a trusty superhuman oracle, fill it with information (every peer-reviewed scientific article, the contents of the Library of Congress), and watch it spit out answers to our biggest questions: How can we cure cancer? Which renewable fuels remain undiscovered? How should a person be? “I’m generally pro-A.I. and against slowing down innovation,” Robin Hanson, an economist who has had friendly debates with the doomers for years, told me. “I want our civilization to continue to grow and do spectacular things.” Even if A.G.I. does turn out to be dangerous, many in Silicon Valley argue, wouldn’t it be better for it to be controlled by an American company, or by the American government, rather than by the government of China or Russia, or by a rogue individual with no accountability? “If you can avoid an arms race, that’s by far the best outcome,” Ben Goldhaber, who runs an A.I.-safety group, told me. “If you’re convinced that an arms race is inevitable, it might be understandable to default to the next best option, which is, Let’s arm the good guys before the bad guys.”
One way to do this is to move fast and break things. In 2021, a computer programmer and artist named Benjamin Hampikian was living with his mother in the Upper Peninsula of Michigan. Almost every day, he found himself in Twitter Spaces—live audio chat rooms on the platform—that were devoted to extravagant riffs about the potential of future technologies. “We didn’t have a name for ourselves at first,” Hampikian told me. “We were just shitposting about a hopeful future, even when everything else seemed so depressing.” The most forceful voice in the group belonged to a Canadian who posted under the name Based Beff Jezos. “I am but a messenger for the thermodynamic God,” he posted, above an image of a muscle-bound man in a futuristic toga. The gist of their idea—which, in a sendup of effective altruism, they eventually called effective accelerationism—was that the laws of physics and the “techno-capital machine” all point inevitably toward growth and progress. “It’s about having faith that the system will figure itself out,” Beff said, on a podcast. Recently, he told me that, if the doomers “succeed in instilling sufficient fear, uncertainty and doubt in the people at this stage,” the result could be “an authoritarian government that is assisted by AI to oppress its people.”
Last year, Forbes revealed Beff to be a thirty-one-year-old named Guillaume Verdon, who used to be a research scientist at Google. Early on, he had explained, “A lot of my personal friends work on powerful technologies, and they kind of get depressed because the whole system tells them that they are bad. For us, I was thinking, let’s make an ideology where the engineers and builders are heroes.” Upton Sinclair once wrote that “it is difficult to get a man to understand something, when his salary depends on his not understanding it.” An even more cynical corollary would be that, if your salary depends on subscribing to a niche ideology, and that ideology does not yet exist, then you may have to invent it.
Online, you can tell the A.I. boomers and doomers apart at a glance. Accelerationists add a Fast Forward-button emoji to their display names; decelerationists use a Stop button or a Pause button instead. The e/accs favor a Jetsons-core aesthetic, with renderings of hoverboards and space-faring men of leisure—the bountiful future that A.I. could give us. Anything they deplore is cringe or communist; anything they like is “based and accelerated.” The other week, Beff Jezos hosted a discussion on X with MC Hammer.
[…]
Accelerationism has found a natural audience among venture capitalists, who have an incentive to see the upside in new technology. Early last year, Marc Andreessen, the prominent tech investor, sat down with Dwarkesh Patel for a friendly, wide-ranging interview. Patel, who lives in a group house in Cerebral Valley, hosts a podcast called “Dwarkesh Podcast,” which is to the doomer crowd what “The Joe Rogan Experience” is to jujitsu bros, or what “The Ezra Klein Show” is to Park Slope liberals. A few months after their interview, though, Andreessen published a jeremiad accusing “the AI risk cult” of engaging in a “full-blown moral panic.” He updated his bio on X, adding “E/acc” and “p(Doom) = 0.” “Medicine, among many other fields, is in the stone age compared to what we can achieve with joined human and machine intelligence,” he later wrote in a post called “The Techno-Optimist Manifesto.” “Deaths that were preventable by the AI that was prevented from existing is a form of murder.” At the bottom, he listed a few dozen “patron saints of techno-optimism,” including Hayek, Nietzsche, and Based Beff Jezos. Patel offered some respectful counter-arguments; Andreessen responded by blocking him on X. Verdon recently had a three-hour video debate with a German doomer named Connor Leahy, sounding far more composed than his online persona. Two days later, though, he reverted to form, posting videos edited to make Leahy look creepy, and accusing him of “gaslighting.”
[…]
This past summer, when “Oppenheimer” was in theatres, many denizens of Cerebral Valley were reading books about the making of the atomic bomb. The parallels between nuclear fission and superintelligence were taken to be obvious: world-altering potential, existential risk, theoretical research thrust into the geopolitical spotlight. Still, if the Manhattan Project was a cautionary tale, there was disagreement about what lesson to draw from it. Was it a story of regulatory overreach, given that nuclear energy was stifled before it could replace fossil fuels, or a story of regulatory dereliction, given that our government rushed us into the nuclear age without giving extensive thought to whether this would end human civilization? Did the analogy imply that A.I. companies should speed up or slow down?
In August, there was a private screening of “Oppenheimer” at the Neighborhood, a co-living space near Alamo Square where doomers and accelerationists can hash out their differences over hopped tea. Before the screening, Nielsen, the quantum-computing expert, who once worked at Los Alamos National Laboratory, was asked to give a talk. “What moral choices are available to someone working on a technology they believe may have very destructive consequences for the world?” he said. There was the path exemplified by Robert Wilson, who didn’t leave the Manhattan Project and later regretted it. There were Klaus Fuchs and Ted Hall, who shared nuclear secrets with the Soviets. And then, Nielsen noted, there was Joseph Rotblat, “the one physicist who actually left the project after it became clear the Nazis were not going to make an atomic bomb,” and who was later awarded the Nobel Peace Prize.
[…]
The doomers and the boomers are consumed by intramural fights, but from a distance they can look like two offshoots of the same tribe: people who are convinced that A.I. is the only thing worth paying attention to. Altman has said that the adoption of A.I. “will be the most significant technological transformation in human history”; Sundar Pichai, the C.E.O. of Alphabet, has said that it will be “more profound than fire or electricity.” For years, many A.I. executives have tried to come across as more safety-minded than the competition. “The same people cycle between selling AGI utopia and doom,” Timnit Gebru, a former Google computer scientist and now a critic of the industry, told me. “They are all endowed and funded by the tech billionaires who build all the systems we’re supposed to be worried about making us extinct.”
[…]
Anthropic continues to bill itself as “an AI safety and research company,” but some of the other formerly safetyist labs, including OpenAI, sometimes seem to be drifting in a more e/acc-inflected direction. “You can grind to help secure our collective future or you can write substacks about why we are going fail,” Sam Altman recently posted on X. (“Accelerate 🚀,” MC Hammer replied.) Although ChatGPT had been trained on a massive corpus of online text, when it was first released it didn’t have the ability to connect to the Internet. “Like keeping potentially dangerous bioweapons in a bio-secure lab,” Grace told me. Then, last September, OpenAI made an announcement: now ChatGPT could go online.
Whether the e/accs have the better arguments or not, they seem to have money and memetic energy on their side. Last month, it was reported that Altman wanted to raise five to seven trillion dollars to start an unprecedentedly huge computer-chip company. “We’re so fucking back,” Verdon tweeted. “Can you feel the acceleration?”
For a recent dinner party, Katja Grace ordered in from a bubble-tea shop—“some sesame balls, some interestingly squishy tofu things”—and hosted a few friends in her living room. One of them was Clara Collier, the editor of Asterisk, the doomer-curious magazine. The editors’ note in the first issue reads, in part, “The next century is going to be impossibly cool or unimaginably catastrophic.” The best-case scenario, Grace said, would be that A.I. turns out to be like the Large Hadron Collider, a particle accelerator in Switzerland whose risk of creating a world-swallowing black hole turned out to be vastly overblown. Or it could be like nuclear weapons, a technology whose existential risks are real but containable, at least so far. As with all dark prophecies, warnings about A.I. are unsettling, uncouth, and quite possibly wrong. Would you be willing to bet your life on it?
The doomers are aware that some of their beliefs sound weird, but mere weirdness, to a rationalist, is neither here nor there. MacAskill, the Oxford philosopher, encourages his followers to be “moral weirdos,” people who may be spurned by their contemporaries but vindicated by future historians. Many of the A.I. doomers I met described themselves, neutrally or positively, as “weirdos,” “nerds,” or “weird nerds.” Some of them, true to form, have tried to reduce their own weirdness to an equation. “You have a set amount of ‘weirdness points,’ ” a canonical post advises. “Spend them wisely.”
One Friday night, I went to a dinner at a group house on the border of Berkeley and Oakland, where the shelves were lined with fantasy books and board games. Many of the housemates had Jewish ancestry, but in lieu of Shabbos prayers they had invented their own secular rituals. One was a sing-along to a futuristic nerd-folk anthem, which they described as an ode to “supply lines, grocery stores, logistics, and abundance,” with a verse that was “not not about A.I. alignment.” After dinner, in the living room, several people cuddled with several other people, in various permutations. There were a few kids running around, but I quickly lost track of whose children were whose.
Making heterodox choices about how to pray, what to believe, with whom to cuddle and/or raise a child: this is the American Dream. Besides, it’s how moral weirdos have always operated. The housemates have several Discord channels, where they plan their weekly Dungeons & Dragons games, coördinate their food shopping, and discuss the children’s homeschooling. One of the housemates has a channel named for the Mittwochsgesellschaft, or Wednesday Society, an underground group of intellectuals in eighteenth-century Berlin. Collier told me that, as an undergraduate at Yale, she had studied the German idealists. Kant, Fichte, and Hegel were all world-historic moral weirdos; Kant was famously celibate, but Schelling, with Goethe as his wingman, ended up stealing Schlegel’s wife.
Before Patel called his podcast “Dwarkesh Podcast,” he called it “The Lunar Society,” after the eighteenth-century dinner club frequented by radical intellectuals of the Midlands Enlightenment. “I loved this idea of the top scientists and philosophers of the time getting together and shaping the ideas of the future,” he said. “From there, I naturally went, Who are those people now?” While walking through Alamo Square with Patel, I asked him how often he found himself at a picnic or a potluck with someone who he thought would be remembered by history. “At least once a week,” he said, without hesitation. “If we make it to the next century, and there are still history books, I think a bunch of my friends will be in there.” ♦
2024
https://www.bbc.com/news/articles/c3d9zv50955o a firm steals actors' voices to make a chatbot
Immigration
Gang violence in Sweden: immigrants with frustrated expectations, and drugs https://www.bbc.com/news/world-europe-66952421
2024
In Calella, 11 Moroccans with 260 prior offences between them. When measures are proposed, ERC and CUP say they are racist. El Nacional
Problems with social networks
2024
Dance
Cinema and theatre
Music
https://www.bbc.com/news/articles/czr7pnx6d08o Quincy Jones held a meeting with rap gangs to stop the violence
https://vm.tiktok.com/ZGd6HkVUL/ Claudio Abbado holds the silence for 4 minutes after Mahler's "Titan"
Literature and language
https://getpocket.com/explore/item/13-books-that-will-actually-make-you-laugh-out-loud?utm_source=pocket_mylist
Architecture
Art
2024
Camille Pissarro NY20240101: honest, Jewish, committed to the Dreyfus cause; Degas and Renoir reactionaries; mentor of Cézanne, who came from a well-off family but wanted to pass as rural. Around this time, too, life within the Pontoise house and garden became his other favorite subject. His portraits of his daughter Minette, from 1872, are perhaps the best portraits of a child since those of the early German Romantic Philipp Otto Runge. The wise child is one of the central modernist inventions of the eighteen-sixties and seventies - it is, after all, the period of Alice and her looking glass - and Minette looks out at us as a French Alice: in higher fashion, but also in equal parts intelligent and sensitive, a small girl in that odd moment of young girls who, while dressed in ways that seem overmature, grace it by a second, inner maturity of their own. The wise children of John Singer Sargent and Cecilia Beaux begin here.
Art and mathematics of spirals, golden-ratio angle of 137.5° http://www.johnedmark.com/spirals/
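A quick worked check of that number (my own sketch, not from Edmark's site): the golden angle is the full circle divided according to the golden ratio, and Vogel's classic phyllotaxis model uses it to place points in a sunflower-style spiral.

    import math

    phi = (1 + math.sqrt(5)) / 2        # golden ratio, ~1.618
    golden_angle = 360 * (1 - 1 / phi)  # ~137.5078 degrees

    def spiral_points(n):
        # Vogel's model: point i sits at angle i * golden_angle
        # and radius sqrt(i), packing points with no two aligned.
        pts = []
        for i in range(n):
            theta = math.radians(i * golden_angle)
            r = math.sqrt(i)
            pts.append((r * math.cos(theta), r * math.sin(theta)))
        return pts

    print(round(golden_angle, 4))  # 137.5078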
Leyendecker, the illustrator who influenced Norman Rockwell; the masculine image. Collectors Weekly
https://www.bbc.com/news/articles/cy87202v43no what is art? a banana taped to the wall
https://artreview.com/power-100/ the influential figures of 2024
Health and wellbeing
- Physical: Move Your Body and Don’t Eat Crap—but Don’t Diet Either
- Emotional: Don’t Hide Your Feelings, Get Help When You Need It
- Social: It’s Not All About Productivity; Relationships Matter, Too
- Cognitive: Follow Your Interests, Do Deep-Focused Work
- Spiritual: Cultivate Purpose, Be Open to Awe [ we need to find a substitute for religion -> invent one ]
- Environmental: Care for Your Space
- Set a timer for five minutes.
- Sit with a straight spine on the floor or in a chair with your feet flat.
- Close your eyes and inhale for a count of four.
- Hold your breath for a count of four.
- Exhale for a count of four.
- Hold for a count of four.
- Repeat until the alarm sounds. (A timer version is sketched below.)
- https://www.wired.com/story/open-label-placebo-why-does-it-work/?utm_source=pocket_mylist placebos work even when we are told it's a placebo.
- https://www.bbc.com/news/health-68105868 playing an instrument, particularly the piano, helps preserve the brain as we get older
- The 20-5-3 rule on how much time we should spend outdoors: 20 minutes in green spaces three times a week, 5 hours a month on excursions in semi-wild nature, 3 days a year deep in nature. https://getpocket.com/explore/item/the-20-5-3-rule-prescribes-how-much-time-you-should-spend-outside?utm_source=pocket_mylist
- the art of doing nothing, https://getpocket.com/explore/item/the-art-of-doing-nothing-have-the-dutch-found-the-answer-to-burnout-culture?utm_source=pocket_mylist
- https://undark.org/2024/02/14/edna-emerging-pathogens/?utm_source=pocket_mylist Analyzing wastewater would give us information on pathogens and health, but there are privacy concerns.
- https://www.scientificamerican.com/article/why-do-so-many-mental-illnesses-overlap/?utm_source=pocket_mylist There are indications that the DSM's division of mental illnesses lacks a solid basis and that many of them overlap
- https://www.smithsonianmag.com/science-nature/the-dirty-secrets-about-our-hands-role-in-disease-transmission-180983919/?utm_source=pocket_mylist disease transmission via our hands
- https://www.bbc.com/news/business-68622781 a brain implant allows a computer cursor to be moved.
- https://pioneerworks.org/broadcast/club-med-adderall?utm_source=pocket_mylist a whole country running on Adderall
- https://www.newyorker.com/magazine/2024/04/22/how-to-die-in-good-health?utm_source=pocket_mylist How to die old and in good health
https://www.bbc.com/news/articles/c7487y7x0vwo Novo Nordisk's Wegovy available in China for $194 ($1,350 in the USA)
https://qz.com/ozempic-weight-loss-drugs-health-benefits-research-1851687755 Ozempic seems able to cure everything
https://www.bbc.com/news/articles/c80vrjkkrero India: antibiotics that can attack resistant bacteria
https://www.npr.org/sections/shots-health-news/2024/11/26/nx-s1-5205605/empathy-emotional-support-listening-relationships Advice: listen instead of offering solutions
Life
https://www.atlasobscura.com/articles/homesick-for-place-you-have-never-been-reader-responses fernweh, the longing to go (since one cannot return) to a place one has never been. In my case, the sequoias, or long desert roads
https://magazine.atavist.com/alone-at-the-edge-of-the-world-susie-goodall-sailing-golden-globe-race/?utm_source=pocket_mylist
https://lifehacker.com/health/how-to-dance-without-looking-awkward
A history of nostalgia. NY 2023/11/27
The actress Helen Hayes used to tell a story of how her young prospective husband poured some peanuts into her hand and said, “I wish they were emeralds.” Years later, when he was actually able to give her a little bag of emeralds, he did so saying, “I wish they were peanuts”, with whatever excess of sweetness
https://www.aljazeera.com/features/2023/12/2/the-south-korean-woman-who-adopted-her-best-friend?utm_source=pocket_mylist
Passengers' training and discipline made possible the rapid evacuation of a crashed plane. BBC
2024
Poor men in India fall for a scam in which they pay money for a job offer that consists of "impregnating" childless women. BBC
https://hbr.org/2023/12/how-to-create-your-own-year-in-review?utm_source=pocket_mylist how to do your own year-in-review
https://www.wired.com/story/extreme-dishwasher-loading-facebook-group/?utm_source=pocket_mylist FB group on extreme dishwasher loading
https://time.com/6837151/therapists-respond-insults/?utm_source=pocket_mylist how to respond to insults
https://getpocket.com/explore/item/the-25-designs-that-shape-our-world?utm_source=pocket_mylist the 25 designs that shaped the world
https://www.cnbc.com/2024/03/16/worst-paying-college-majors-five-years-after-graduation.html?utm_source=pocket_mylist
https://www.bbc.com/news/articles/cd1wpegrnrxo a women-only museum in Australia will dodge a court order by turning itself into a bathroom in which the works are displayed. Men will be admitted on Sundays to learn how to iron.
https://getpocket.com/explore/item/why-the-velvet-hammer-is-a-better-way-to-give-constructive-criticism?utm_source=pocket_mylist how to give constructive criticism
https://www.gq-magazine.co.uk/article/reddit-male-grooming-therapy?utm_source=pocket_mylist the group where men ask for opinions on how to groom themselves
Bars where you must keep quiet and listen to vinyl records. Montecristo.
Smellmaxxing: teenage boys with expensive colognes. Parents
https://www.bbc.com/news/articles/cjr4zwj2lgdo the Nordic countries prepare their citizens for emergency situations
https://www.bbc.com/news/articles/cy87glyrmkeo men buy unnecessary things at Lidl, like a canoe in an area with no water.
https://www.bostonmagazine.com/life-style/luxury-kids-parties/ children's parties where parents spend $500 on the balloons alone
https://www.bbc.com/news/articles/c0j1wwypygxo In Sweden some girls prefer to stay home, kept by their boyfriends.
New start
Places
…who plays it, and the music for the others.