Man. Anthropology. Psychology

https://www.npr.org/sections/goatsandsoda/2018/06/07/617097908/why-grandmothers-may-hold-the-key-to-human-evolution grandmothers mattered more for the survival of the species in hunter-gatherer societies than the male hunter did.
https://www.inverse.com/article/48300-why-is-it-hard-to-focus-research-humans we are only conscious four times per second; the rest of the time the brain runs on autopilot.
https://www.atlasobscura.com/articles/how-to-echolocate finding your way by sound [what is it like to be a bat]

https://aeon.co/essays/schools-love-the-idea-of-a-growth-mindset-but-does-it-work education: making children believe that everything is possible and that ability is not predetermined -> can this lead to frustrated expectations?
http://www.bbc.com/future/story/20190326-are-we-close-to-solving-the-puzzle-of-consciousness   Tononi proposes that we can identify a person’s (or an animal’s, or even a computer’s) consciousness from the level of “information integration” that is possible in the brain (or CPU). According to his theory, the more information that is shared and processed between many different components to contribute to that single experience, then the higher the level of consciousness.
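
[A rough toy illustration in Python of the intuition behind "information integration" — not Tononi's actual Φ, which requires minimizing over all partitions of a system's states; here I only compare how much information two components of a small binary system share when they are independent versus coupled.]

```python
# Toy proxy for "information integration" (NOT Tononi's phi): mutual
# information between two binary components. Coupled components share
# more information than independent ones.
import numpy as np

def mutual_information(joint):
    """Mutual information (in bits) between the two components of a 2x2 joint table."""
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of component A
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of component B
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

# Independent components: knowing A's state says nothing about B's.
independent = np.array([[0.25, 0.25],
                        [0.25, 0.25]])

# Strongly coupled components: their states almost always agree.
coupled = np.array([[0.45, 0.05],
                    [0.05, 0.45]])

print("independent:", mutual_information(independent))  # 0.0 bits shared
print("coupled:    ", mutual_information(coupled))      # ~0.53 bits shared
```
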
https://getpocket.com/explore/item/what-are-the-ethical-consequences-of-immortality-technology the longing for immortality: rejuvenation technology, and mind uploading. Like a futuristic fountain of youth, rejuvenation promises to remove and reverse the damage of ageing at the cellular level. The other option would be mind uploading, in which your brain is digitally scanned and copied onto a computer. This method presupposes that consciousness is akin to software running on some kind of organic hard-disk – that what makes you you is the sum total of the information stored in the brain’s operations, and therefore it should be possible to migrate the self onto a different physical substrate or platform. This remains a highly controversial stance. [would it be like a prison? we don't interact, we have no body]
https://www.wired.com/2016/04/susie-mckinnon-autobiographical-memory-sdam/ the woman who can remember information but not experiences; she always lives in the present.
  • Empathy: “Do I really listen to people when they talk about their issues, or do I just try to give them a solution? Do people tend to confide in me?”
  • Emotional self-awareness: “When my body gives me physical signals that something is wrong, do I pay attention to it and sense what’s going on?”
  • Self-actualization: “Am I doing the things in life that I really feel passionate about—at home, at work, socially?”
  • Impulse control: “Do I respond to people before they finish telling me something?”
  • Interpersonal relationships: “Do I enjoy socializing with people, or does it feel like work?”
https://getpocket.com/explore/item/why-you-can-t-help-but-act-your-age mental state influences epigenetics: a group of people placed in surroundings recreating a time when they were younger showed that their bodies had also grown younger. The biological age of tissues does not match chronological age; it can run ahead of it or behind it.
https://www.newyorker.com/magazine/2020/01/13/a-world-without-pain the study of a woman who has feelings but feels no pain
https://getpocket.com/explore/item/how-knowledge-about-different-cultures-is-shaking-the-foundations-of-psychology psychology's generalizations rest mostly on experiments run on students at Western universities, and therefore on people who are white, relatively well-off and of Christian background. The results are probably not generalizable.
https://www.bbc.com/future/article/20200313-how-your-personality-changes-as-you-age personality is plastic: we become more open and adaptable as we grow older, and rigid when we are very old.
https://www.nature.com/articles/d41586-020-00922-8 the DSM's categories do not reflect the reality that different symptoms almost always appear together. But because it is used for billing insurers, it has not been revised. A biology-based approach is being attempted.
https://www.bbc.com/future/article/20200929-what-your-thoughts-sound-like we hear a kind of inner voice when we think because the brain's language processing engages its auditory machinery, even when we do not say anything aloud.
https://getpocket.com/explore/item/physics-explains-why-time-passes-faster-as-you-age why time seems to pass faster as we get older: we process fewer "clicks" of information [I don't have the feeling that time goes faster]
https://www.wsj.com/articles/how-to-stop-the-negative-chatter-in-your-head-11609876801 we spend a quarter of our waking time not attending to the present, chattering away internally at a rate of 4,000 words per minute (Ethan Kross, a neuroscientist, has a book coming out this month called ‘Chatter: The Voice in Our Head, Why It Matters, and How to Harness It’)
https://getpocket.com/explore/item/susceptibility-to-mental-illness-may-have-helped-humans-adapt-over-the-millennia First of all, there are two very different categories of illness that should be kept separate. One is the emotional disorders, which are potentially normal, useful responses to situations. And in all such responses, variability and sensitivity are influenced by lots of different genes.
There are also mental disorders that are the most severe ones that are just plain old genetic diseases: bipolar disease and autism and schizophrenia. They’re genetic diseases, and whether you get them or not is overwhelmingly dependent on what genes you have. But why would a strong, inheritable trait that cuts fitness by half not be selected against? I think this is one of the deepest mysteries in psychiatry.
https://www.vice.com/en/article/v7mwa3/why-your-true-self-is-an-illusion we believe we have a deep nature that is essentially good, and that some actions pull us away from it while others bring us closer to it.
Drinking in company helps create bonds, as religion does.
As far back as his graduate work at Stanford in the 1990s, he’d found it bizarre that across all cultures and time periods, humans went to such extraordinary (and frequently painful and expensive) lengths to please invisible beings.
In 2012, Slingerland and several scholars in other fields won a big grant to study religion from an evolutionary perspective. In the years since, they have argued that religion helped humans cooperate on a much larger scale than they had as hunter-gatherers. Belief in moralistic, punitive gods, for example, might have discouraged behaviors (stealing, say, or murder) that make it hard to peacefully coexist. In turn, groups with such beliefs would have had greater solidarity, allowing them to outcompete or absorb other groups.
Around the same time, Slingerland published a social-science-heavy self-help book called Trying Not to Try. In it, he argued that the ancient Taoist concept of wu-wei (akin to what we now call “flow”) could help with both the demands of modern life and the more eternal challenge of dealing with other people. Intoxicants, he pointed out in passing, offer a chemical shortcut to wu-wei—by suppressing our conscious mind, they can unleash creativity and also make us more sociable.
At a talk he later gave on wu-wei at Google, Slingerland made much the same point about intoxication. During the Q&A, someone in the audience told him about the Ballmer Peak—the notion, named after the former Microsoft CEO Steve Ballmer, that alcohol can affect programming ability. Drink a certain amount, and it gets better. Drink too much, and it goes to hell. Some programmers have been rumored to hook themselves up to alcohol-filled IV drips in hopes of hovering at the curve’s apex for an extended time.
His hosts later took him over to the “whiskey room,” a lounge with a foosball table and what Slingerland described to me as “a blow-your-mind collection of single-malt Scotches.” The lounge was there, they said, to provide liquid inspiration to coders who had hit a creative wall. Engineers could pour themselves a Scotch, sink into a beanbag chair, and chat with whoever else happened to be around. They said doing so helped them to get mentally unstuck, to collaborate, to notice new connections. At that moment, something clicked for Slingerland too: “I started to think, Alcohol is really this very useful cultural tool.” Both its social lubrications and its creativity-enhancing aspects might play real roles in human society, he mused, and might possibly have been involved in its formation.
this is the core of Slingerland’s argument: Bonding is necessary to human society, and alcohol has been an essential means of our bonding. Compare us with our competitive, fractious chimpanzee cousins. Placing hundreds of unrelated chimps in close quarters for several hours would result in “blood and dismembered body parts,” Slingerland notes—not a party with dancing, and definitely not collaborative stone-lugging. Human civilization requires “individual and collective creativity, intensive cooperation, a tolerance for strangers and crowds, and a degree of openness and trust that is entirely unmatched among our closest primate relatives.” It requires us not only to put up with one another, but to become allies and friends.
As to how alcohol assists with that process, Slingerland focuses mostly on its suppression of prefrontal-cortex activity, and how resulting disinhibition may allow us to reach a more playful, trusting, childlike state.
Just as people were learning to love their gin and whiskey, more of them (especially in parts of Europe and North America) started drinking outside of family meals and social gatherings. As the Industrial Revolution raged, alcohol use became less leisurely. Drinking establishments suddenly started to feature the long counters that we associate with the word bar today, enabling people to drink on the go, rather than around a table with other drinkers. This short move across the barroom reflects a fairly dramatic break from tradition: According to anthropologists, in nearly every era and society, solitary drinking had been almost unheard‑of among humans.
What’s more, as Christine Sismondo writes in America Walks Into a Bar, by kicking the party out of saloons, the Eighteenth Amendment had the effect of moving alcohol into the country’s living rooms, where it mostly remained.
[the idea is that alcohol was healthy while it was something social, but harmful when drunk alone]
(As Iain Gately reports in Drink: A Cultural History of Alcohol, in the month after 60 Minutes ran a widely viewed segment on the so-called French paradox—the notion that wine might explain low rates of heart disease in France—U.S. sales of red wine shot up 44 percent.)
Although both men and women commonly use alcohol to cope with stressful situations and negative feelings, research finds that women are substantially more likely to do so. And they’re much more apt to be sad and stressed out to begin with: Women are about twice as likely as men to suffer from depression or anxiety disorders—and their overall happiness has fallen substantially in recent decades.
In the 2013 book Her Best-Kept Secret, an exploration of the surge in female drinking, the journalist Gabrielle Glaser recalls noticing, early this century, that women around her were drinking more.
Last August, the beer manufacturer Busch launched a new product well timed to the problem of pandemic-era solitary drinking. Dog Brew is bone broth packaged as beer for your pet. “You’ll never drink alone again,” said news articles reporting its debut. It promptly sold out. As for human beverages, though beer sales were down in 2020, continuing their long decline, Americans drank more of everything else, especially spirits and (perhaps the loneliest-sounding drinks of all) premixed, single-serve cocktails, sales of which skyrocketed.

 


The Science of Mind Reading

One night in October, 2009, a young man lay in an fMRI scanner in Liège, Belgium. Five years earlier, he’d suffered a head trauma in a motorcycle accident, and since then he hadn’t spoken. He was said to be in a “vegetative state.” A neuroscientist named Martin Monti sat in the next room, along with a few other researchers. For years, Monti and his postdoctoral adviser, Adrian Owen, had been studying vegetative patients, and they had developed two controversial hypotheses. First, they believed that someone could lose the ability to move or even blink while still being conscious; second, they thought that they had devised a method for communicating with such “locked-in” people by detecting their unspoken thoughts.
In a sense, their strategy was simple. Neurons use oxygen, which is carried through the bloodstream inside molecules of hemoglobin. Hemoglobin contains iron, and, by tracking the iron, the magnets in fMRI machines can build maps of brain activity. Picking out signs of consciousness amid the swirl seemed nearly impossible. But, through trial and error, Owen’s group had devised a clever protocol. They’d discovered that if a person imagined walking around her house there was a spike of activity in her parahippocampal gyrus—a finger-shaped area buried deep in the temporal lobe. Imagining playing tennis, by contrast, activated the premotor cortex, which sits on a ridge near the skull. The activity was clear enough to be seen in real time with an fMRI machine. In a 2006 study published in the journal Science, the researchers reported that they had asked a locked-in person to think about tennis, and seen, on her brain scan, that she had done so.
With the young man, known as Patient 23, Monti and Owen were taking a further step: attempting to have a conversation. They would pose a question and tell him that he could signal “yes” by imagining playing tennis, or “no” by thinking about walking around his house. In the scanner control room, a monitor displayed a cross-section of Patient 23’s brain. As different areas consumed blood oxygen, they shimmered red, then bright orange. Monti knew where to look to spot the yes and the no signals.
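
[A minimal sketch, in Python, of the yes/no decoding logic described above, assuming we already have mean activation values for the two regions of interest during the answer window; the numbers, threshold and function names are placeholders, not the actual Liège analysis pipeline.]

```python
# Minimal sketch of the yes/no decoding described above. The data, margin
# and region names are illustrative placeholders, not the real pipeline.
import numpy as np

def decode_answer(premotor_activity, parahippocampal_activity, margin=0.5):
    """Return 'yes' if tennis imagery (premotor cortex) dominates,
    'no' if spatial-navigation imagery (parahippocampal gyrus) dominates,
    or 'inconclusive' if neither clearly wins."""
    diff = np.mean(premotor_activity) - np.mean(parahippocampal_activity)
    if diff > margin:
        return "yes"       # patient imagined playing tennis
    if diff < -margin:
        return "no"        # patient imagined walking through the house
    return "inconclusive"

# Toy activation values (arbitrary units) for one question's answer window.
print(decode_answer(premotor_activity=[2.1, 2.4, 1.9],
                    parahippocampal_activity=[0.3, 0.5, 0.2]))  # -> "yes"
```
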
I first heard about these studies from Ken Norman, the fifty-year-old chair of the psychology department at Princeton University and an expert on thought decoding. Norman works at the Princeton Neuroscience Institute, which is housed in a glass structure, constructed in 2013, that spills over a low hill on the south side of campus. P.N.I. was conceived as a center where psychologists, neuroscientists, and computer scientists could blend their approaches to studying the mind; M.I.T. and Stanford have invested in similar cross-disciplinary institutes. At P.N.I., undergraduates still participate in old-school psych experiments involving surveys and flash cards. But upstairs, in a lab that studies child development, toddlers wear tiny hats outfitted with infrared brain scanners, and in the basement the skulls of genetically engineered mice are sliced open, allowing individual neurons to be controlled with lasers. A server room with its own high-performance computing cluster analyzes the data generated from these experiments.
Norman, whose jovial intelligence and unruly beard give him the air of a high-school science teacher, occupies an office on the ground floor, with a view of a grassy field. The bookshelves behind his desk contain the intellectual DNA of the institute, with William James next to texts on machine learning. Norman explained that fMRI machines hadn’t advanced that much; instead, artificial intelligence had transformed how scientists read neural data. This had helped shed light on an ancient philosophical mystery. For centuries, scientists had dreamed of locating thought inside the head but had run up against the vexing question of what it means for thoughts to exist in physical space. When Erasistratus, an ancient Greek anatomist, dissected the brain, he suspected that its many folds were the key to intelligence, but he could not say how thoughts were packed into the convoluted mass. In the seventeenth century, Descartes suggested that mental life arose in the pineal gland, but he didn’t have a good theory of what might be found there. Our mental worlds contain everything from the taste of bad wine to the idea of bad taste. How can so many thoughts nestle within a few pounds of tissue?
Now, Norman explained, researchers had developed a mathematical way of understanding thoughts. Drawing on insights from machine learning, they conceived of thoughts as collections of points in a dense “meaning space.” They could see how these points were interrelated and encoded by neurons. By cracking the code, they were beginning to produce an inventory of the mind. “The space of possible thoughts that people can think is big—but it’s not infinitely big,” Norman said. A detailed map of the concepts in our minds might soon be within reach.
Norman invited me to watch an experiment in thought decoding. A postdoctoral student named Manoj Kumar led us into a locked basement lab at P.N.I., where a young woman was lying in the tube of an fMRI scanner. A screen mounted a few inches above her face played a slide show of stock images: an empty beach, a cave, a forest.
“We want to get the brain patterns that are associated with different subclasses of scenes,” Norman said.
As the woman watched the slide show, the scanner tracked patterns of activation among her neurons. These patterns would be analyzed in terms of “voxels”—areas of activation that are roughly a cubic millimetre in size. In some ways, the fMRI data was extremely coarse: each voxel represented the oxygen consumption of about a million neurons, and could be updated only every few seconds, significantly more slowly than neurons fire.
The origins of this approach, I learned, dated back nearly seventy years, to the work of a psychologist named Charles Osgood. When he was a kid, Osgood received a copy of Roget’s Thesaurus as a gift. Poring over the book, Osgood recalled, he formed a “vivid image of words as clusters of starlike points in an immense space.” In his postgraduate days, when his colleagues were debating how cognition could be shaped by culture, Osgood thought back on this image. He wondered if, using the idea of “semantic space,” it might be possible to map the differences among various styles of thinking.
Osgood conducted an experiment. He asked people to rate twenty concepts on fifty different scales. The concepts ranged widely: BOULDER, ME, TORNADO, MOTHER. So did the scales, which were defined by opposites: fair-unfair, hot-cold, fragrant-foul. Some ratings were difficult: is a TORNADO fragrant or foul? But the idea was that the method would reveal fine and even elusive shades of similarity and difference among concepts. “Most English-speaking Americans feel that there is a difference, somehow, between ‘good’ and ‘nice’ but find it difficult to explain,” Osgood wrote. His surveys found that, at least for nineteen-fifties college students, the two concepts overlapped much of the time. They diverged for nouns that had a male or female slant. MOTHER might be rated nice but not good, and COP vice versa. Osgood concluded that “good” was “somewhat stronger, rougher, more angular, and larger” than “nice.”
Osgood became known not for the results of his surveys but for the method he invented to analyze them. He began by arranging his data in an imaginary space with fifty dimensions—one for fair-unfair, a second for hot-cold, a third for fragrant-foul, and so on. Any given concept, like TORNADO, had a rating on each dimension—and, therefore, was situated in what was known as high-dimensional space. Many concepts had similar locations on multiple axes: kind-cruel and honest-dishonest, for instance. Osgood combined these dimensions. Then he looked for new similarities, and combined dimensions again, in a process called “factor analysis.”
When you reduce a sauce, you meld and deepen the essential flavors. Osgood did something similar with factor analysis. Eventually, he was able to map all the concepts onto a space with just three dimensions. The first dimension was “evaluative”—a blend of scales like good-bad, beautiful-ugly, and kind-cruel. The second had to do with “potency”: it consolidated scales like large-small and strong-weak. The third measured how “active” or “passive” a concept was. Osgood could use these three key factors to locate any concept in an abstract space. Ideas with similar coördinates, he argued, were neighbors in meaning.
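
[A sketch of Osgood-style dimension reduction in Python, using PCA via SVD as a stand-in for his factor analysis (closely related, not identical); the ratings below are invented for illustration, not Osgood's data.]

```python
# Osgood-style reduction of concept ratings to a few factors, here done
# with PCA (via SVD) as a stand-in for factor analysis. Rows are concepts
# rated on bipolar scales from -3 to +3; the numbers are made up.
import numpy as np

#                 good-bad  kind-cruel  strong-weak  large-small  active-passive
ratings = np.array([
    [ 3,  3,  1,  1,  1],   # MOTHER
    [-2, -3,  3,  3,  3],   # TORNADO
    [ 0,  0,  2,  3, -3],   # BOULDER
    [ 1,  0,  1,  0,  2],   # ME
], dtype=float)
concepts = ["MOTHER", "TORNADO", "BOULDER", "ME"]

# Center the data and project onto the top three principal components,
# analogous to Osgood's evaluation / potency / activity factors.
centered = ratings - ratings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:3].T

for name, xyz in zip(concepts, coords):
    print(f"{name:8s} {np.round(xyz, 2)}")   # each concept as a point in 3-D space
```
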
In the end, the Bell Labs researchers made a space that was more complex than Osgood’s. It had a few hundred dimensions. Many of these dimensions described abstract or “latent” qualities that the words had in common—connections that wouldn’t be apparent to most English speakers. The researchers called their technique “latent semantic analysis,” or L.S.A.
In the following years, scientists applied L.S.A. to ever-larger data sets. In 2013, researchers at Google unleashed a descendant of it onto the text of the whole World Wide Web. Google’s algorithm turned each word into a “vector,” or point, in high-dimensional space. The vectors generated by the researchers’ program, word2vec, are eerily accurate: if you take the vector for “king” and subtract the vector for “man,” then add the vector for “woman,” the closest nearby vector is “queen.” Word vectors became the basis of a much improved Google Translate, and enabled the auto-completion of sentences in Gmail.
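
[A sketch of the "king − man + woman ≈ queen" arithmetic with tiny hand-made vectors; real word2vec embeddings have hundreds of dimensions learned from text, so this only shows the mechanics of the analogy query.]

```python
# Analogy query by vector arithmetic. These hand-made 3-D vectors (roughly:
# royalty, gender, person-ness) are toys, not learned word2vec embeddings.
import numpy as np

vectors = {
    "king":  np.array([0.9,  0.8, 0.7]),
    "queen": np.array([0.9, -0.8, 0.7]),
    "man":   np.array([0.1,  0.8, 0.9]),
    "woman": np.array([0.1, -0.8, 0.9]),
    "apple": np.array([0.0,  0.0, 0.1]),
}

def nearest(target, exclude):
    """Return the word whose vector has the highest cosine similarity to target."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    candidates = {w: v for w, v in vectors.items() if w not in exclude}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

query = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(query, exclude={"king", "man", "woman"}))  # -> "queen"
```
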
In 2001, a scientist named Jim Haxby brought machine learning to brain imaging: he realized that voxels of neural activity could serve as dimensions in a kind of thought space. Haxby went on to work at Princeton, where he collaborated with Norman. The two scientists, together with other researchers, concluded that just a few hundred dimensions were sufficient to capture the shades of similarity and difference in most fMRI data. At the Princeton lab, the young woman watched the slide show in the scanner. With each new image—beach, cave, forest—her neurons fired in a new pattern. These patterns would be recorded as voxels, then processed by software and transformed into vectors. The images had been chosen because their vectors would end up far apart from one another: they were good landmarks for making a map. Watching the images, my mind was taking a trip through thought space, too.
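
[A sketch of landmark-style decoding: each scene category gets a reference voxel vector, and a new pattern is assigned to the most similar landmark; random vectors stand in for preprocessed fMRI data, and correlation similarity is a common choice in the literature, not necessarily this lab's exact method.]

```python
# Landmark decoding sketch: assign a new voxel pattern to whichever scene
# category's reference pattern it correlates with most. Random vectors
# stand in for real (preprocessed) fMRI data.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500

# Pretend "landmark" patterns for each scene category.
landmarks = {name: rng.normal(size=n_voxels) for name in ("beach", "cave", "forest")}

def decode(pattern, landmarks):
    """Return the landmark whose pattern correlates most with the new pattern."""
    return max(landmarks, key=lambda name: np.corrcoef(pattern, landmarks[name])[0, 1])

# A new scan: mostly "cave"-like activity plus noise.
new_pattern = landmarks["cave"] + rng.normal(scale=0.8, size=n_voxels)
print(decode(new_pattern, landmarks))  # -> "cave" (with high probability)
```
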
One described a 2017 study by Christopher Baldassano, one of his postdocs, in which people watched an episode of the BBC show “Sherlock” while in an fMRI scanner. Baldassano’s guess going into the study was that some voxel patterns would be in constant flux as the video streamed—for instance, the ones involved in color processing. Others would be more stable, such as those representing a character in the show. The study confirmed these predictions. But Baldassano also found groups of voxels that held a stable pattern throughout each scene, then switched when it was over. He concluded that these constituted the scenes’ voxel “signatures.”
Through decades of experimental work, Norman told me later, psychologists have established the importance of scripts and scenes to our intelligence. Walking into a room, you might forget why you came in; this happens, researchers say, because passing through the doorway brings one mental scene to a close and opens another. Conversely, while navigating a new airport, a “getting to the plane” script knits different scenes together: first the ticket counter, then the security line, then the gate, then the aisle, then your seat. And yet, until recently, it wasn’t clear what you’d find if you went looking for “scripts” and “scenes” in the brain.
Minnery’s most fanciful idea—“Never an official focus of the program,” he said—was to change how databases are indexed. Instead of labelling items by hand, you could show an item to someone sitting in an fMRI scanner—the person’s brain state could be the label. Later, to query the database, someone else could sit in the scanner and simply think of whatever she wanted. The software could compare the searcher’s brain state with the indexer’s. It would be the ultimate solution to the vocabulary problem.
Jack Gallant, a professor at Berkeley who has used thought decoding to reconstruct video montages from brain scans—as you watch a video in the scanner, the system pulls up frames from similar YouTube clips, based only on your voxel patterns—suggested that one group of people interested in decoding were Silicon Valley investors. “A future technology would be a portable hat—like a thinking hat,” he said. He imagined a company paying people thirty thousand dollars a year to wear the thinking hat, along with video-recording eyeglasses and other sensors, allowing the system to record everything they see, hear, and think, ultimately creating an exhaustive inventory of the mind. Wearing the thinking hat, you could ask your computer a question just by imagining the words. Instantaneous translation might be possible. In theory, a pair of wearers could skip language altogether, conversing directly, mind to mind. Perhaps we could even communicate across species. Among the challenges the designers of such a system would face, of course, is the fact that today’s fMRI machines can weigh more than twenty thousand pounds. There are efforts under way to make powerful miniature imaging devices, using lasers, ultrasound, or even microwaves. “It’s going to require some sort of punctuated-equilibrium technology revolution,” Gallant said. Still, the conceptual foundation, which goes back to the nineteen-fifties, has been laid.
In some ways, the story of thought decoding is reminiscent of the history of our understanding of the gene. For about a hundred years after the publication of Charles Darwin’s “On the Origin of Species,” in 1859, the gene was an abstraction, understood only as something through which traits passed from parent to child. As late as the nineteen-fifties, biologists were still asking what, exactly, a gene was made of. When James Watson and Francis Crick finally found the double helix, in 1953, it became clear how genes took physical form. Fifty years later, we could sequence the human genome; today, we can edit it.
Thoughts have been an abstraction for far longer. But now we know what they really are: patterns of neural activation that correspond to points in meaning space. The mind—the only truly private place—has become inspectable from the outside. In the future, a therapist, wanting to understand how your relationships run awry, might examine the dimensions of the patterns your brain falls into. Some epileptic patients about to undergo surgery have intracranial probes put into their brains; researchers can now use these probes to help steer the patients’ neural patterns away from those associated with depression. With more fine-grained control, a mind could be driven wherever one liked. (The imagination reels at the possibilities, for both good and ill.) Of course, we already do this by thinking, reading, watching, talking—actions that, after I’d learned about thought decoding, struck me as oddly concrete. I could picture the patterns of my thoughts flickering inside my mind. Versions of them are now flickering in yours.
The larger goal of thought decoding is to understand how our brains mirror the world. To this end, researchers have sought to watch as the same experiences affect many people’s minds simultaneously. Norman told me that his Princeton colleague Uri Hasson has found movies especially useful in this regard. They “pull people’s brains through thought space in synch,” Norman said. “What makes Alfred Hitchcock the master of suspense is that all the people who are watching the movie are having their brains yanked in unison. It’s like mind control in the literal sense.”


https://www.inputmag.com/culture/dr-peter-scott-morgan-als-ai-cyborg how a paralyzed man becomes a cyborg and expresses himself through an artificial voice and image

The researcher who met with S. that day was twenty-seven-year-old Alexander Luria, whose fame as a founder of neuropsychology still lay before him. Luria began reeling off lists of random numbers and words and asking S. to repeat them, which he did, in ever-lengthening series. Even more remarkably, when Luria retested S. more than fifteen years later, he found those numbers and words still preserved in S.’s memory. “I simply had to admit that the capacity of his memory had no distinct limits,” Luria writes in his famous case study of S., “The Mind of a Mnemonist,” published in 1968 in both Russian and English.
Luria’s monograph became a psychology classic both in Russia and abroad, and it had considerable influence over the nascent field of memory studies. S.’s case became a parable about the pitfalls of flawless recall. Luria catalogues various difficulties that S. experienced navigating everyday life, linking them to profound deficits he identified in S.’s ability to conceive the world in abstract terms. These cognitive deficiencies, Luria suggests, were related to S.’s extraordinary episodic memory—the memory we have for personal experiences, as opposed to semantic memory (which tells us, for instance, that the dromedary has only one hump). Deriving meaning from the world requires us to relinquish some of its texture. S.’s case, as many readers have noted, resembles the Jorge Luis Borges story “Funes the Memorious,” a fictional work about a man plagued by the persistence of his memory. “To think is to forget a difference, to generalize, to abstract,” Borges writes. “In the overly replete world of Funes there were nothing but details, almost contiguous details.” Similarly, Luria writes that for S., almost every word, every thought, was freighted with excessive detail. When he heard “restaurant,” for example, he would picture an entrance, customers, a Romanian orchestra tuning up to play for them, and so on. Like Funes, S. had a sort of private language to catalogue the richness of his mental associations. The word for “roach” in Yiddish could also mean, in his mind, a dent in a metal chamber pot, a crust of black bread, and the light cast by a lamp that fails to push back all the darkness in a room.
For years now, since first reading Luria’s book as an undergraduate studying Russian, then after encountering it again as a research assistant in a memory lab, I’ve searched, on odd weekends and nights, for what information I could find about S., whose real name was Solomon Shereshevsky. Eventually, I tracked down a relative. Then, more recently, I got hold of a small, blue school notebook, preserved by Luria’s grandniece in the psychologist’s archives. It contains Shereshevsky’s own handwritten autobiographical account of how he became a mnemonist. Written not long before his death and left incomplete, it opens with his impressions of that first meeting with Luria twenty-eight years earlier. It even provides the exact list of things Luria gave him to memorize that day.
My search for Solomon Shereshevsky revealed a person who fit uneasily in the story of the Man Who Could Not Forget, as he has so often been portrayed. He did not, in fact, have perfect recall. His past was not a land he could wander through at will. For him, remembering took conscious effort and a certain creative genius.
Something else I learned that afternoon threatened to change my entire sense of who Shereshevsky was: His uncle, Reynberg said, could be forgetful. If he didn’t consciously try to commit something to memory, he didn’t always recall it later. I had imagined, based on Luria’s case study and the mythology that had grown up around it, a Soviet Funes, with flawless and involuntary recollection of his past. Reynberg told me that his uncle trained hours a day for his evening performances. Was he a mere showman after all?
As described by Luria, some of Shereshevsky’s mental operations bear a strong resemblance to the sort of garden-variety mnemonic tricks that have been known for many centuries—for example, the “memory palace,” or “method of loci,” in which an imagined physical space is used to organize information in its proper sequence. In Shereshevsky’s version of this device, he would imagine Gorky Street, Moscow’s main thoroughfare, or a village street from his childhood, mentally distributing what he wanted to remember along its length, often creating an impromptu story out of the sequence, then strolling back through later to recollect (re-collect) these items in his mind.
Luria also notes that Shereshevsky had an extraordinarily strong case of synesthesia, the heritable condition in which the senses become intermingled in the mind, and the psychologist recognized that this had something to do with Shereshevsky’s powers of recall. (Vladimir Nabokov wrote about both his “colored hearing” and exceptional recall in his memoir, “Speak, Memory,” first published in 1951.) When Luria rang a small bell, for instance, the sound would evoke in his subject’s mind “a small round object . . . something rough like а rope . . . the taste of salt water . . . and something white.” Shereshevsky thought of numbers in the same colors and fonts that he first saw them in as a child; in his unpublished notebook, he writes that “all the numbers had names, first and last, and nicknames, which changed depending on my age and mood.” The number one “is a slender man with ramrod posture and a long face; ‘two’ is a plump lady with a complicated hairdo atop her head, clad in a velvet or silk dress with a train that trails behind her.” Luria speculates that Shereshevsky used his web of multimodal associations to cross-check his memory.
There were serious drawbacks in having so many channels open to the world. Shereshevsky avoided such things as reading the newspaper over breakfast because the flavors evoked by the printed words clashed with the taste of his meal.
Luria’s famous case study of extraordinary memory turns out to be less about perfect recall and more about something at once more fundamental and more strange: our ability to conjure such sensory details even without the direct input of our senses, to swim against the usual currents of perceiving minds. It’s this same ability that allows us to daydream, or to do thought experiments in physics, or to read words on a page and hear the dim inner echo a piano makes when rolling across the cobbles of a Moscow courtyard. But what do imagination and make-believe have to do with memory—a mental faculty we value precisely for its supposed veracity?
Schacter’s interest in the connection between memory and imagination stretches back to the nineteen-eighties, when he and his mentor, Endel Tulving, interviewed a profoundly amnesiac patient known by the initials K. C. A victim of a motorcycle accident, K. C. had become incapable of forming episodic memories. He couldn’t say what he was doing a day or even an hour prior. He was also, somewhat unexpectedly, unable to speculate in any detail about what he would be doing the following day. He couldn’t call up any detailed scenes in his mind’s eye, whether these scenes lay in the actual past or in some imagined future.
The experimental evidence suggests to Schacter that our imagination draws heavily on memory, recombining bits and pieces of actual experience to model hypothetical and counterfactual scenarios. This seems intuitive. But he goes further, arguing that our all-too-fallible recollections of the past are in fact adaptive, providing the flexibility that allows us to reconfigure memory to imagine our possible futures.
Luria relates that Shereshevsky was capable of sitting in a chair and consciously modifying his heart rate from sixty-four beats per minute to a hundred by picturing himself either lying in bed or racing after a train just leaving the station, respectively. According to Luria’s experiments, Shereshevsky could alter the skin temperature of his hands by several degrees by visualizing himself touching a hot stove or a block of ice. Imagining a loud noise caused an involuntary protective reflex in his eardrums, as though the sound had actually occurred.
Instead of burning memories on scraps of paper, Shereshevsky found a different kind of erasure in his final years, according to his nephew: he turned to drinking. He did not, as it has been claimed, end up in an insane asylum, though his drinking may have been an expression of what Soviet citizens called “internal emigration.” As Luria writes, “One would be hard put to say which was more real for him: the world of imagination in which he lived, or the world of reality in which he was but a temporary guest.” Shereshevsky died in 1958 from complications related to his alcoholism. The last dated entry in his notebook is from December 11, 1957, but in it he writes only of the past, remembering experiments and performances from his earlier days as a professional mnemonist. After that, it breaks off into blank pages, as though inviting us to imagine other possible endings.

https://www.newyorker.com/magazine/2022/08/08/how-universal-are-our-emotions
In “Between Us: How Cultures Create Emotions” (Norton), the Dutch psychologist Batja Mesquita describes her puzzlement, before arriving in the United States, at the use of the English word “distress.” Was it “closer to the Dutch angst (‘anxious/afraid’),” she wondered, “or closer to the Dutch verdriet/wanhoop (‘sadness/despair’)?” It took her time to feel at home with the word: “I now no longer draw a blank when the word is used. I know both when distress is felt, and what the experience of distress can feel like. Distress has become an ‘emotion’ to me.”
For Mesquita, this is an instance of a larger, overlooked reality: emotions aren’t simply natural upwellings from our psyche—they’re constructions we inherit from our communities. She urges us to move beyond the work of earlier researchers who sought to identify a small set of “hard-wired” emotions, which were universal and presumably evolutionarily adaptive. (The usual candidates: anger, fear, disgust, surprise, happiness, sadness.) Mesquita herself once accepted that, as she writes, “people’s emotional lives are different, but emotions themselves are the same.”
Here, Mesquita—joining her sometime co-author Lisa Feldman Barrett and other contemporary constructionists—enlists linguistic data to undermine the universalist view of emotions. Japanese, Mesquita points out, has one word, haji, to mean both “shame” and “embarrassment”; in fact, many languages (including my own first language, Tamil) make no such distinction. The Bedouins’ word hasham covers not only shame and embarrassment but also shyness and respectability. The Ilongot of the Philippines have a word, bētang, that touches on all those, plus on awe and obedience.
In Mesquita’s book, Westerners have succumbed to a mode of thinking sufficiently widespread to be the subject of a Pixar film. In “Inside Out,” a little girl, Riley, is shown as having a mind populated by five emotions—Joy, Sadness, Fear, Disgust, and Anger—each assigned an avatar. Anger is, of course, red. A heated conversation between Riley and her parents is represented as similar red figures being activated in each of them. “Inside Out” captures, with some visual flair, what Mesquita calls the MINE model of emotion, a model in which emotions are “Mental, INside the person, and Essentialist”—that is, always having the same properties.
What all this established, for Mesquita, is that “cultural differences go beyond semantics”; that emotions lived “ ‘between’ people rather than ‘within.’ ”
[I remember Wagens saying that language tended to simplify its spelling, and it seemed very clear to me that it tended to simplify its phonemes, because language is basically spoken]
[the conclusions seem hasty; we all express ourselves differently]

https://www.newyorker.com/magazine/2023/01/16/how-should-we-think-about-our-different-styles-of-thinking
In “Thinking in Pictures,” Grandin suggested that the world was divided between visual and verbal thinkers.
The imagistic minds in “Visual Thinking” can seem glamorous compared with the verbal ones depicted in “Chatter: The Voice in Our Head, Why It Matters, and How to Harness It,” by Ethan Kross, a psychologist and neuroscientist who teaches at the University of Michigan. Kross is interested in what’s known as the phonological loop—a neural system, consisting of an “inner ear” and an “inner voice,” that serves as a “clearinghouse for everything related to words that occurs around us in the present.” If Grandin’s visual thinkers are attending Cirque du Soleil, then Kross’s verbal thinkers are stuck at an Off Broadway one-man show. It’s just one long monologue.
In the nineteen-seventies, Russell T. Hurlburt, a professor at the University of Nevada, Las Vegas, came up with the idea of giving people devices that would beep at certain times and asking them to record what was going on in their heads at the sound of the beep. In theory, if they responded quickly enough, they’d offer an unvarnished look at what he called “pristine inner experience”—thought as it happens spontaneously. After spending decades working with hundreds of subjects, Hurlburt concluded that, broadly speaking, inner experience is made of five elements, which each of us mix in different proportions. Some thoughts are rendered in “inner speech,” and others appear through “inner seeing”; some make themselves felt through our emotions (I’ve got a bad feeling about this!), while others manifest as a kind of “sensory awareness” (The hairs on my neck stood on end!). Finally, some people make use of “unsymbolized thinking.” They often have “an explicit, differentiated thought that does not include the experience of words, images, or any other symbols.”
Quantum physicists confront a problem with observation. Whenever they look at a particle, they alter and fix its quantum state, which otherwise would have remained indeterminate. A similar issue afflicts our attempts to understand how we think; thinking about our thinking risks forcing it into a form it does not have.
Hurlburt would say that describing one’s inner life is hard. Schwitzgebel would say that our inner lives are not necessarily describable. On a deep level, he contends, our own thinking is a little like bat sonar. We’ll never know what it’s really like. [Nagel, what is it like to be a bat]
If we can’t say exactly how we think, then how well do we know ourselves? In an essay titled “The Self as a Center of Narrative Gravity,” the philosopher Daniel Dennett argued that a layer of fiction is woven into what it is to be human. In a sense, fiction is flawed: it’s not true. But, when we open a novel, we don’t hurl it to the ground in disgust, declaring that it’s all made-up nonsense; we understand that being made up is actually the point. Fiction, Dennett writes, has a deliberately “indeterminate” status: it’s true, but only on its own terms. The same goes for our minds. We have all sorts of inner experiences, and we live through and describe them in different ways—telling one another about our dreams, recalling our thoughts, and so on. Are our descriptions and experiences true or fictionalized? Does it matter? It’s all part of the story.

https://aeon.co/essays/how-infant-temperament-extends-its-reach-into-young-adulthood innate aspects of children: reactivity, self-regulation, sociability

Kreiner compares the minds of medieval monastics to construction sites, describing the machinery they employed “to reorganize their past thoughts, draw themselves deeper into present thoughts, and establish new cognitive patterns for the future.” Some of this is World Memory Championships territory, with monks using mnemonic devices and multisensory prompts to stuff their brains with Biblical texts and holy meditations. Today, we think mostly of memory palaces, but many medieval monks turned to images of trees or ladders to create elaborate visualizations, meant not only to encode good knowledge but also to override bad impulses and sinful memories. Other imagery flourished, too. By the twelfth century, the six-winged angel described by the prophet Isaiah doubled as what Kreiner calls an “organizational avatar,” with monks inscribing holy subtopics on each wing and feather, while other monks filled an imaginary Noah’s Ark twosie-twosie with sacred history and theology.
Whether monks built arks, angels, or palaces, vigilance was expected of them all, and metacognition was one of their most critical duties, necessary for determining whether any given thought served God or the Devil. For the truly devout, there was no such thing as overthinking it; discernment required constantly monitoring one’s mental activity and interrogating the source of any distraction. Some monasteries encouraged monks to use checklists for reviewing their thoughts throughout the day, and one of the desert fathers was said to keep two baskets for tracking his own. He put a stone in one basket whenever he had a virtuous thought and a stone in the other whenever he had a sinful thought; whether he ate dinner depended on which basket had more stones by the end of the day.
Such careful study of the mind yielded gorgeous writing about it, and Kreiner collects centuries’ worth of metaphors for concentration (fish swimming peaceably in the depths, helmsmen steering a ship through storms, potters perfecting their ware, hens sitting atop their eggs) and just as many metaphors for distraction (mice taking over your home, flies swarming your face, hair poking you in the eyes, horses breaking out of your barn). These earthy, analog metaphors, though, betray the centuries between us and the monks who wrote them. For all that “The Wandering Mind” helps to collapse the differences between their world and ours, it also illuminates one very profound distinction. We inherited the monkish obsession with attention, and even inherited their moral judgments about the capacity, or failure, to concentrate. But most of us did not inherit their clarity about what is worthy of our concentration.
Medieval monks shared a common cosmology that depended on their attention. Justinian the Great claimed that if monks lived holy lives they could bring God’s favor upon the whole of the Byzantine Empire, and the prayers of Simeon Stylites were said to be like support beams, holding up all of creation. “Distraction was not just a personal problem, they knew; it was part of the warp of the world,” Kreiner writes. “Attention would not have been morally necessary, would not have been the objective of their culture of conflict and control, were it not for the fact that it centered on the divine order.”
Perhaps that is why so many of us have half-done tasks on our to-do lists and half-read books on our bedside tables, scroll through Instagram while simultaneously semi-watching Netflix, and swipe between apps and tabs endlessly, from when we first open our eyes until we finally fall asleep. One uncomfortable explanation for why so many aspects of modern life corrode our attention is that they do not merit it. The problem for those of us who don’t live in monasteries but hope to make good use of our days is figuring out what might. That is the real contribution of “The Wandering Mind”: it moves beyond the question of why the mind wanders to the more difficult, more beautiful question of where it should rest.

https://www.newyorker.com/magazine/2023/02/13/the-dubious-rise-of-impostor-syndrome a study that found that many competent, successful people felt like impostors.


2024

https://getpocket.com/explore/item/on-meditation-and-the-unconscious-a-buddhist-monk-and-a-neuroscientist-in-conversation?utm_source=pocket_mylist

