diff --git a/content/.gitkeep b/content/.gitkeep deleted file mode 100644 index e69de29bb2d1d..0000000000000 diff --git a/content/ad-hoc-hypothesis.md b/content/ad-hoc-hypothesis.md new file mode 100644 index 0000000000000..da56a7560d71c --- /dev/null +++ b/content/ad-hoc-hypothesis.md @@ -0,0 +1,10 @@ +--- +title: Adding ad hoc hypotheses to resist falsification +enableToc: true +--- + +In the Popperian view of epistemology, the key ingredient of knowledge-growth is _criticism_. To contribute to the growth of knowledge, the creators of theories should _seek out criticism_ of their theory. One way to do this is to seek out logical inconsistencies in it. Another way is to conduct experiments that would either falsify or corroborate the theory. + +The anti-scientific stance in this view would be to _resist_ criticism. One common way to do this (you see this often with conspiracy theorists) is to keep modifying the theory every time some evidence arises to contradict it. Say you've come up with an equation for the motion of all objects, and then you find that pink feathers do not follow this equation. The conventionalist (Popper's term for someone who wants to preserve the existing theory) will say, "no worries, just add an additional hypothesis to the theory: pink feathers will move according to this other equation Y." + +As more and more counterexamples come up, the conventionalist adds more and more exceptions to their theory. At some point this becomes futile. The better approach is to consider the old theory dead, and to try to come up with a new theory that explains both the old and the new observations. \ No newline at end of file diff --git a/content/ai-discourse.md b/content/ai-discourse.md new file mode 100644 index 0000000000000..eb71addcc6132 --- /dev/null +++ b/content/ai-discourse.md @@ -0,0 +1,6 @@ +--- +title: Current Understanding of AI Discourse +enableToc: true +--- + +WIP. Where I talk about my current understanding of AI discourse. 
\ No newline at end of file diff --git a/content/hoel-harris-utilitarianism.md b/content/hoel-harris-utilitarianism.md new file mode 100644 index 0000000000000..b50fc180586b4 --- /dev/null +++ b/content/hoel-harris-utilitarianism.md @@ -0,0 +1,37 @@ +--- +title: Contra Harris Contra Hoel on Consequentialism +enableToc: true +--- + +In [Why I am not an effective altruist](https://erikhoel.substack.com/p/why-i-am-not-an-effective-altruist), Erik Hoel criticizes the philosophical core of the EA movement. The problem with effective altruism—which is basically utilitarianism—is that it leads to repugnant conclusions (a term coined by Derek Parfit in *Reasons and Persons*): + +- Example: strict utilitarianism would claim that a surgeon, trying to save ten patients with organ failure, should find someone in a back alley, murder them, and harvest all their organs. +- Example: utilitarianism would claim that there is some number of "hiccups eliminated" that would justify feeding a little girl to sharks. + +For a lot of utilitarian thought experiments, the initial experiment has a clear conclusion (e.g. you should switch the trolley so fewer people die), but then all the variations of it no longer have a clear conclusion, because morality is not mathematics. + +The core mistake of utilitarianism is that it assumes that *the moral value of all experience is a well-ordered set*—that you can line them all up on a single scale, and thus all you need to do to figure out which action is better is some addition and multiplication. In a [follow-up post](https://erikhoel.substack.com/p/we-owe-the-future-but-why?utm_source=substack&utm_medium=email), Hoel adds: the mistake of utilitarianism is the view that morality is *fungible*, that moral value can be measured like mounds of dirt. But in reality, moral actions are not fungible. + +(Note: the fact that moral values are not fungible does not preclude us from making comparisons between actions, e.g. 
a stubbed toe and the Holocaust are entirely different *categories of evils*, but we can still compare them and say the latter is worse. Hoel’s point, though, is that not all moral actions can be compared to each other in an order-preserving manner.) + +One way to avoid repugnant conclusions is to add more and more parameters to your utilitarian calculus. Hoel claims this is futile, because you'll never have enough parameters. (Popper had a term for this: adding [ad hoc hypotheses](/ad-hoc-hypothesis.md)). + +> Just as how Ptolemy accounted for the movements of the planets in his geocentric model by adding in epicycles (wherein planets supposedly circling Earth also completed their own smaller circles), and this trick allowed him to still explain the occasional paradoxical movement of the planets in a fundamentally flawed geocentric model, so too does the utilitarian add moral epicycles to keep from constantly arriving at immoral outcomes. +> + +Hoel thinks the EA movement has done a lot of good; he just disagrees with its philosophical underpinnings. And it still leads to some repugnant-ish conclusions, e.g. that you should just be a stock broker to make more money and donate it. + +> One can see repugnant conclusions pop up in the everyday choices of the movement: would you be happier being a playwright than a stock broker? Who cares, stock brokers make way more money, go make a bunch of money to give to charity. And so effective altruists have to come up with a grab bag of diluting rules to keep the repugnancy of utilitarianism from spilling over into their actions and alienating people. +> + +### Sam Harris’s criticism + +Hoel made a recent appearance on the Sam Harris podcast, in which he and Harris kept butting heads on the same basic point. Harris: all moral theories, even those that are not consequentialist, ultimately make their claims to moral value based on *consequences*. 
If you ask a deontologist why they advocate for some principle or other, their argument will be framed in terms of the *consequences* of people following that principle. Same thing with virtue ethics, or any other moral system. + +For Harris, all the criticisms that Hoel makes still boil down to consequences. It's all just more consequences! + +This is technically true, but at some point it begins to sound vacuous. You could describe every moral theory in terms of eventual consequences, but it's not especially useful to do so. It's like saying “It's all about goodness! Morality is all about goodness!” Hoel made this point in his piece: once you strip utilitarianism enough to no longer do strict mathematics around hedonic units, and instead just say "do the most good you can, where good is defined in a loose, complex, personal way", you’ve arrived at something no one can disagree with. + +Choosing a moral theory requires us to answer the question of *how best to think* about *how we should act*. And it just seems that if you think about morality in terms of *maximization*, you are far more likely to engage in morally questionable behavior than if you just thought about it in terms of, say, *adhering to principles*, or *cultivating virtues*. Who are the people we think of as ethical heroes, as role models in the moral life? Gandhi, Mandela, Jesus, the Buddha. Are they *rationalistic maximizers*? Or are they just really principled and virtuous people? + +There is one place where I do agree with Harris, and it's around preferring human consciousness over artificial consciousness. Hoel claims that a pitfall of consequentialism is that it doesn't give us any reason to prefer our own survival over, say, the survival of some alien or artificial intelligence. 
And to that I would say: if we *really* come to the conclusion that artificial intelligences can have inner lives as rich and as laden with suffering and happiness as our own, and if they are as concerned with ethics as we are, then why *should* we prefer our wellbeing over theirs? \ No newline at end of file diff --git a/content/images/action-potential.gif b/content/images/action-potential.gif new file mode 100644 index 0000000000000..546d12f1e0a2d Binary files /dev/null and b/content/images/action-potential.gif differ diff --git a/content/images/dns records.png b/content/images/dns records.png new file mode 100644 index 0000000000000..bf9f854bdd4b1 Binary files /dev/null and b/content/images/dns records.png differ diff --git a/content/images/eeg-measurement.png b/content/images/eeg-measurement.png new file mode 100644 index 0000000000000..1a189cdc83010 Binary files /dev/null and b/content/images/eeg-measurement.png differ diff --git a/content/images/hebb-cell-assembly.png b/content/images/hebb-cell-assembly.png new file mode 100644 index 0000000000000..84df9e8dfe86f Binary files /dev/null and b/content/images/hebb-cell-assembly.png differ diff --git a/content/images/neuron-parts.jpeg b/content/images/neuron-parts.jpeg new file mode 100644 index 0000000000000..f957c8a22640a Binary files /dev/null and b/content/images/neuron-parts.jpeg differ diff --git a/content/images/neuron-parts.png b/content/images/neuron-parts.png new file mode 100644 index 0000000000000..6ee120be6fb93 Binary files /dev/null and b/content/images/neuron-parts.png differ diff --git a/content/images/neuron-resting-potential.png b/content/images/neuron-resting-potential.png new file mode 100644 index 0000000000000..ed7158e25f885 Binary files /dev/null and b/content/images/neuron-resting-potential.png differ diff --git a/content/images/pet-image-sleep.png b/content/images/pet-image-sleep.png new file mode 100644 index 0000000000000..1c7dd4e72004b Binary files /dev/null and 
b/content/images/pet-image-sleep.png differ diff --git a/content/images/quartz layout.png b/content/images/quartz layout.png new file mode 100644 index 0000000000000..03435f7d57874 Binary files /dev/null and b/content/images/quartz layout.png differ diff --git a/content/images/quartz transform pipeline.png b/content/images/quartz transform pipeline.png new file mode 100644 index 0000000000000..657f0a3abb8cb Binary files /dev/null and b/content/images/quartz transform pipeline.png differ diff --git a/content/index.md b/content/index.md new file mode 100644 index 0000000000000..e182ada89ca9e --- /dev/null +++ b/content/index.md @@ -0,0 +1,12 @@ +--- +title: This site is WIP +--- + +### My interests + +- Neuroscience. + - [Current understanding of neuroscience](/neuroscience.md) + - [Are lesions a good proxy for brain function?](/lesions.md) +- Meditation +- Writing +- Philosophy. See [Contra Harris contra Hoel on consequentialism](/hoel-harris-utilitarianism.md). \ No newline at end of file diff --git a/content/lesions.md b/content/lesions.md new file mode 100644 index 0000000000000..a3035a87f8909 --- /dev/null +++ b/content/lesions.md @@ -0,0 +1,22 @@ +--- +title: Are lesions a good proxy for brain function? +enableToc: true +--- + +One of the recurring themes in our understanding of the brain is the study of *lesions*, or, more broadly, of regions of the brain that have either been damaged (by disease or injury) or surgically removed. + +Most of the time that we say "X region of the brain is responsible for Y function", it's based on a study of damage or removal of that region. A classic example is [patient HM](https://en.wikipedia.org/wiki/Henry_Molaison), who had his hippocampi removed to mitigate seizures, and subsequently lost the ability to form new declarative memories. Based on this, neuroscientists speculated that the hippocampus is involved in the formation of durable memories. 
Similar reasoning applies to [Broca's aphasia](https://en.wikipedia.org/wiki/Expressive_aphasia) and [Wernicke's aphasia](https://en.wikipedia.org/wiki/Receptive_aphasia), conditions that have led us to conclude that particular brain regions are involved with specific sub-functions of language, like generating speech. + +Unfortunately, arguments like this don't give us a robust understanding of *how* any of these brain functions work. The fact that a function Y is impaired after we remove region X of the brain does *not* imply that region X is the exclusive zone in which function Y is executed; and further, it says nothing interesting about *how* Y is accomplished. There could be many other explanations: +- region X could be one of several regions involved in accomplishing Y +- region X could be responsible for some "prerequisite" task that enables function Y, without actually being involved in Y + +At best, exhaustive analysis of how damage to different regions affects behavior can give us a high-level map of how the brain works. But as [David Poeppel notes](https://www.youtube.com/watch?v=-1su5DWUYXo): *a map is not an explanation*. We have a complete map of the *C. elegans* nervous system down to the last neuron and synapse, but we do not have an understanding of how this map generates the worm's behavior. + +In a [2017 study](https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268) provocatively titled *Could a Neuroscientist Understand a Microprocessor?*, researchers applied neuroscience techniques (like creating lesions and measuring electrical activity) to a microprocessor, and demonstrated that such techniques gave no fundamental insight into the actual workings of the processor. As summarized in Poeppel et al.'s [*Neuroscience Needs Behavior*](https://pubmed.ncbi.nlm.nih.gov/28182904/): + +> The study poses the question of whether a neuroscientist could understand a microprocessor. 
They applied numerous neuroscience techniques to a high-fidelity simulation of a classic video game microprocessor (the ‘‘brain’’) in an attempt to understand how it controls the initiation of three well-known videogames (which they dubbed as ‘‘behaviors’’) originally programmed to run on that microprocessor. Crucial to the experiment was the fact that it was performed on an object that is already fully understood: the fundamental fetch-decode-execute structure of a microprocessor can be drawn in a diagram. Understanding the chip using neuroscientific techniques would therefore mean being able to discover this diagram. In the study, (simulated) transistors were lesioned, their tuning determined, local field potentials recorded, and dimensionality reduction performed on activity across all the transistors. The result was that none of these techniques came close to reverse engineering the standard stored-program computer architecture (Jonas and Kording, 2017). + +Poeppel argues that to truly understand the brain, we need not just ad hoc manipulations and measurements of brain activity, but robust *theories*—in particular, algorithms—that would explain *how* the brain executes functions like speech, and then test those theories with measurements of the brain. We can't go the other way around. + +All that said, the behavioral consequences of lesions are interesting. First, the fact that entire regions of the brain can be destroyed without killing the organism feels surprising. If, in contrast, you removed a giant piece of your stomach, I'm pretty sure you would quickly die. Perhaps the takeaway is: most of what the brain does is not immediately crucial for survival. (Except the brain stem.) And the consistency we find between brain damage and behavioral deficits does imply that the brain has *some* degree of modularity. 
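To make the "map, not mechanism" limitation concrete, here is a toy sketch in the spirit of the microprocessor study. Everything in it (the pipeline, the stage names, the stimulus) is invented for illustration: knocking out each stage of a made-up processing pipeline tells you which stages are *necessary* for the behavior, but nothing about how any stage computes.

```python
# Toy "lesion study" on a made-up three-stage processing pipeline.
# Lesioning any stage abolishes the output (the "behavior"), so each stage
# is "necessary" -- but these observations alone don't reveal the
# computation happening inside any stage.

def perceive(stimulus, lesioned=()):
    if "transduction" in lesioned:
        return None
    signal = [ord(c) for c in stimulus]   # stage 1: "transduction"
    if "features" in lesioned:
        return None
    features = sum(signal)                # stage 2: "feature extraction"
    if "naming" in lesioned:
        return None
    return f"object-{features % 10}"      # stage 3: "naming"

for lesion in [(), ("transduction",), ("features",), ("naming",)]:
    print(lesion or "intact", "->", perceive("cat", lesion))
```

The lesion experiments produce a tidy map ("stage 2 is required for naming behavior") while leaving the actual algorithm inside `perceive` entirely opaque.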
\ No newline at end of file diff --git a/content/neuroscience.md b/content/neuroscience.md new file mode 100644 index 0000000000000..2d28d99745c7b --- /dev/null +++ b/content/neuroscience.md @@ -0,0 +1,114 @@ +--- +title: Current Understanding of Neuroscience +enableToc: true +--- + +Inspired by silenceinbetween's lovely post [Current Understanding of Biology](https://silenceinbetween.substack.com/p/current-understanding-of-biology), I thought I'd write up a snapshot of my current understanding of neuroscience. _Caveat: this will be written mostly off the top of my head, so excuse any typos or non sequiturs._ + +I started "studying neuroscience" a few years ago, as in, I began reading an intro textbook a few years ago. I wrote about it early on in [A foray into neuroscience](https://bitsofwonder.substack.com/p/a-foray-into-neuroscience) and more recently in [Textbooks as a preventative for depression](https://bitsofwonder.substack.com/p/textbooks-as-a-preventative-for-depression), although the latter is more about the motivation behind reading a textbook. + +I'm ~~almost done now (98% of the way through)~~ finished as of 1/4/2023, so here goes. + + +## Basic facts + +### Neurons + +So, what's the deal with the brain? It contains roughly 86 billion neurons, and neurons connect to each other via synapses. Each neuron has a cell body, dendrites which take input, and an axon which produces output. + +![Neuron parts](/images/neuron-parts.jpeg)*The basic anatomy of a neuron* + +A note about anatomy which confused me at first: while the cell body projects into _one_ axon, that axon can subsequently branch and form synapses with multiple further neurons. That's why we say the neuron has _one_ axon even though it may ultimately feed its output to multiple neurons. Dendrites, on the other hand, are present aplenty in each neuron, sometimes in the tens of thousands. 
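This many-dendrites-in, one-axon-out anatomy is, loosely, what the classic "artificial neuron" abstraction in machine learning captures: many weighted inputs combined into a single output. A minimal sketch, as an analogy only (the weights and threshold are made up; real neurons are far more complicated):

```python
# Caricature of a neuron as a thresholded weighted sum: many "dendritic"
# inputs, one "axonal" output. An analogy, not a biophysical model.

def neuron_output(inputs, weights, threshold=1.0):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Three upstream neurons synapsing onto one downstream neuron.
# A negative weight plays the role of an inhibitory synapse.
print(neuron_output([1, 1, 0], [0.6, 0.7, -0.5]))  # 1.3 >= 1.0 -> fires (1)
print(neuron_output([1, 0, 1], [0.6, 0.7, -0.5]))  # 0.1 < 1.0 -> silent (0)
```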
+ +This is all well and good, except for the fact that there is this whole other category of cells in the brain called _glia_, and we're not 100% sure what glia do. They help insulate axons, "regulate extracellular space", and perhaps other stuff? Some neuroscientists think glia will turn out to be much more important than we currently understand. + +With that picture set, let's talk about some high-level takeaways and questions. + +### Action potentials + +Action potentials are how neurons transmit information. Let's explain what an action potential is. Neurons by default have an imbalance of charge on their membrane: the inside has slightly more negative charge than the outside: + +![Neuron membrane](/images/neuron-resting-potential.png)*Neurons by default have slightly more negative charge on the inside than the outside* + +Through a series of complicated steps that I won't get into here (involving the passive and active transfer of ions through ion channels and transporter proteins in the neuron's membrane), this imbalance can briefly and sharply reverse. An "action potential" is such a reversal propagating along the neuron's membrane like a wave. See this GIF: + +![Action potential](/images/action-potential.gif)*This "travelling reversal" of electric charge is what constitutes an action potential on neurons.* + +One other thing I should mention is: once the action potential has reached the end of the axon, there's a whole _other_ process for passing information onto the next neuron. Instead of directly creating an action potential on the next neuron, the axon _releases chemicals_ into the synapse. We call these chemicals neurotransmitters. The effect of the neurotransmitter is to trigger an action potential on the next neuron, _assuming_ enough neurotransmitter accumulates to actually trigger it. + +(Some more details: a little bit of neurotransmitter slightly _depolarizes_ the target neuron, i.e. 
it begins to reverse the default electric charge imbalance. As more neurotransmitter builds up, the reversal of charge goes further and further until suddenly, it crosses a threshold and triggers a full action potential on the target neuron. Note that some neurotransmitters have the opposite effect: instead of reducing the target neuron's charge imbalance, they actually _increase_ it (hyperpolarization), i.e. they make it harder for the neuron to fire an action potential. Each neuron takes inputs from many preceding neurons, and the synapse for each of these preceding neurons is either inhibitory or excitatory.) + +### Measuring the brain + +Our ability to measure neural activity in humans is quite crude. Part of this is a technological limitation and part of it is ethical. With humans, we generally avoid doing anything invasive on the brain unless we absolutely have to (e.g. to treat severe forms of epilepsy or treatment-resistant depression). We don't just cut out large pieces of the brain willy-nilly and see what happens (not anymore, at least). + +So instead we measure the brain indirectly: +- **Electrical activity (EEG):** we tape electrodes all over the scalp and measure changes in electrical potential. Unfortunately, the electrical potential variations of an individual neuron are far too small to be measurable from the scalp, so the EEG is really measuring the aggregate activity of large numbers of neurons. + - This is why you see "brainwaves" on EEG recordings: neurons are only measurable if hundreds of thousands of them are all firing in harmony, and as it turns out the brain does have these synchronized patterns of activity at a few distinctive frequency & amplitude combinations (we call these e.g. alpha waves, beta waves, and so on.) + ![EEG measurements during sleep](/images/eeg-measurement.png)*EEG measurements during sleep. 
We just get a high-level picture of activity* + - We can also _implant electrodes_ directly onto the brain (ECoG), which gives us _much_ more fidelity in measuring action potentials. This is what Neuralink does to determine, e.g., which direction the brain wants to move a joystick. + - EEG measures _electrical_ activity, while MEG measures _magnetic_ activity (wherever there is a changing electric field, there will also be a magnetic field, so they are effectively measuring the same thing). +- **Blood flow (fMRI):** using magical physics, we measure the flow of blood in different regions of the brain, which supposedly gives us an idea about what brain regions are most active. + +While fMRI provides higher spatial resolution, EEG and MEG provide higher temporal resolution. +![PET imaging](/images/pet-image-sleep.png)*PET imaging during sleep.* + +There are other techniques too like PET scans, but the above seem the most common. + +Outside of humans (and occasionally in humans too, via measures like Deep-Brain Stimulation), we do more invasive things like inserting electrodes into random places in the brain, or modifying the genes of the animal so as to make their neurons triggerable via light (optogenetics). + +## Commentary + +### The significance of action potentials + +One question worth asking: is the propagation of action potentials the correct way to understand what the brain is doing? Is this the building block of the brain's information processing capacities? + +I think the canonical answer is: yes. I haven't heard of any other proposals for the fundamental unit of information processing in the brain. But, I've gotten the sense that individual neurons are doing a lot more computation _internally_ than we previously believed. 
And in any case, until we have a good understanding of _how_ the brain actually performs its functions (intelligence, consciousness, thought, perception) as an aggregate of action potentials, we can't be totally sure that this is the correct building block. + +How do we know that action potentials are the relevant frame of analysis? From what I've seen, one thing we've established quite clearly is that **our sensory systems transmit information to the brain via action potentials**. (And in the other direction, our brain transmits _commands_ to our muscles also via action potentials.) In particular, each of our sensory systems is really just a system for _transducing_ action potentials from some other kind of signal in the world: +- **In our eyes**, we have proteins which allow ions (charged particles) to pass through a membrane based on the _amount of light they're receiving_. In other words, we're converting light into action potentials. +- **In our ears**, we have these tiny hair cells which move ever so slightly as a result of vibrations in the air (which first propagate through a series of intermediate media like our eardrum, a few tiny bones called ossicles, and then a viscous fluid in the inner ear), and this movement triggers a change in electrical potential on the hair cells. +- **In our nose**, we have ion channels on receptor cells that open or close based on the presence of specific chemicals, such that the chemical itself (the odorant) triggers movements of electrical charge which propagates to neurons. (Fun fact: the sense of smell is the oldest of all senses evolutionarily, so the apparatus is the most "primitive", compared to e.g. sight or sound. With smell, there is no elaborate machinery for converting signals from the world into action potentials, it's a pretty direct conversion.) 
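The "inputs accumulate until a threshold is crossed, then the neuron fires" dynamic described earlier is often caricatured as a *leaky integrate-and-fire* model. A minimal sketch, with made-up units and constants (this is an illustration of the threshold idea, not real biophysics):

```python
# Minimal leaky integrate-and-fire caricature of a neuron.
# All units and constants are arbitrary.

def simulate(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Integrate input over discrete time steps; emit a spike whenever the
    membrane potential crosses the threshold, then reset the potential."""
    v = reset
    spikes = []
    for t, i in enumerate(input_current):
        v = v * leak + i      # leaky integration: old charge decays, input adds
        if v >= threshold:    # threshold crossing -> "action potential"
            spikes.append(t)
            v = reset         # membrane resets after the spike
    return spikes

# A steady drip of excitatory input drives the neuron to fire periodically.
print(simulate([0.3] * 20))  # -> [3, 7, 11, 15, 19]
```

A sub-threshold input that leaks away before accumulating enough charge produces no spikes at all, which is the toy-model analogue of "not enough neurotransmitter to trigger the neuron".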
+ +For more, see [sensory neuron](https://en.wikipedia.org/wiki/Sensory_neuron) and [sensory transduction](https://en.wikipedia.org/wiki/Transduction_(physiology)) wiki pages. + +So one way you could think of the brain is: convert signals from the world into patterns of charged particles moving back and forth on neuron membranes in complex ways, and then convert those patterns into muscular contraction which allows the organism to move about and act in the world. + +Another line of evidence of action potentials being the fundamental operating unit of the brain: the brain-machine interfaces we've successfully built so far (e.g. Neuralink) operate by measuring changes in electrical potential on neurons, AFAIK. + +### Macro versus micro + +One general takeaway from reading the book: **our understanding of the macro is much worse than our understanding of the micro**. There are open questions at all levels of granularity, but at the higher levels we don't even have an operating framework for how the brain does any of the things we care about. We understand the mechanics of individual neurons fairly well, but at the level of the whole brain, we have some _very rough_ ideas like "the cortex seems to be where the higher-level processing happens". Well, let's actually list out what we know about the higher-level properties of the brain: +- Certain regions are specialized for specific kinds of activity. Some clear-cut examples: + - The cerebellum is where motor commands from the cortex are translated into individual muscular movements. This region is responsible for the pure, mechanical function of coordinating our muscles to move in the way we intend them to. This involves a _lot_ of computation. + - The brainstem is where functions vital to survival are (heartbeat, breathing, etc.). If you lose your brainstem you die. 
+- And then there are other regions which, AFAIK, are also specialized, but the boundary is much less solid, and other regions can sometimes compensate if they are damaged. Examples: + - The visual cortex processes visual input. It's located at the back of your brain. (There are actually several areas of cortex involved in vision, ranging from "low-level" visual processing of lines and shapes to "high-level" processing of objects and faces.) + - The auditory cortex processes sound input. + - A few regions of cortex are responsible for language (Wernicke's area, Broca's area). + +What's the deal with the location of the different regions of cortex, e.g. why is vision in the back? It's actually just a matter of where sensory neurons tend to end up in the cortex. So the visual cortex is in the back because the optic nerve takes input from the eyes and projects it all the way to the back of the head, and so on. + +How do we discern that a given region is responsible for a given function? Either by finding that it's more active in specific tasks (as measured by fMRI), or by finding that when that region is damaged, the corresponding function is damaged. See [Are lesions a good proxy for brain function?](/lesions.md). + +### What part of the brain is involved with consciousness? + +We really don't know. Some people claim it's the cortex, some people claim it's the thalamus (which acts as a kind of "relay switch" between cortex and sensory organs), some people claim consciousness is everywhere, some people claim it's in the midbrain. + +### How does the brain learn? + +This is one of the spicier parts. Here's the basic picture we have right now: we learn by modifying the weights of connections between neurons. + +**Hebbian learning** is a major theory for how weights of neural connections are modified. In short it says: _neurons that fire together wire together_. Suppose neuron A synapses onto neuron B. 
Hebbian theory proposes that if B fires immediately after A fires, and this happens repeatedly, then the connection between A and B gets strengthened. + +Hebb proposed his theory in 1949, and a few decades later it was developed further into **BCM theory**, which is Hebbian learning but with additional theses for _weakening_ synapses as well as strengthening them. BCM theory states that if neuron A fires but neuron B doesn't, then the synapse from A to B gets a little bit weaker. Or, if neuron B fires right _before_ neuron A fires, the synapse from A to B also gets weaker. + +The general idea is that in this way, the brain adjusts its weights and it learns. This is how current deep learning models work: the magic is in the weights of the connections between nodes; the "memory" of the network lives in the set of connections and the weights of each of those connections. (See my [notes on AI](/ai-discourse.md).) + +This is a _connectionist_ or _associationist_ picture of learning. Crucial to this picture is the idea of _cell assemblies as representations of objects or ideas_. So, if I see a picture of my grandma, there's a specific set of neurons that are active in my brain, and this is what represents my grandma. And then, if I later just think about my grandma, the same assembly is activated. Memory is just the linking of various sensory inputs together. + +![Hebbian cell assembly](/images/hebb-cell-assembly.png)*The Hebbian cell assembly* + +There are plenty of things here that I don't understand. For example, if a given assembly is the representation of my grandma, how do transformations to that representation take place? If I imagine my grandma drinking water, does that involve the assemblies for "grandma", and "water", and "drinking" all being active at once? + +While this is the predominant theory for how the brain implements memory, there are people who disagree. 
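(The Hebbian/BCM update rules above can be sketched as a toy weight update, just to make the "adjusting weights" idea concrete. The learning rate and firing patterns are made up; this is a caricature, not a model of real synapses.)

```python
# Toy Hebbian/BCM-style update for a single synapse from neuron A to neuron B.
# "Fire together, wire together"; A firing without B weakens the synapse.

def update_weight(w, a_fired, b_fired, lr=0.1):
    if a_fired and b_fired:
        return w + lr  # Hebbian strengthening: fire together, wire together
    if a_fired and not b_fired:
        return w - lr  # BCM-style weakening: A fired, B stayed silent
    return w           # no presynaptic firing: leave the weight alone

w = 0.5
for a_fired, b_fired in [(1, 1), (1, 1), (1, 0), (0, 1)]:
    w = update_weight(w, a_fired, b_fired)
print(round(w, 2))  # 0.5 + 0.1 + 0.1 - 0.1 -> 0.6
```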
Randy Gallistel, for example, [claims](https://join.substack.com/p/is-this-the-most-interesting-idea) that the brain does not learn merely by forming associations in sensory input, but by storing abstract information _inside individual neurons_. Wow! I don't understand his theory in any detail, and my next task is to read [his paper](https://www.sciencedirect.com/science/article/abs/pii/S0010027720303528). \ No newline at end of file diff --git a/package-lock.json b/package-lock.json index a879078977376..e4208c49b3698 100644 --- a/package-lock.json +++ b/package-lock.json @@ -1,12 +1,12 @@ { "name": "@jackyzha0/quartz", - "version": "4.0.11", + "version": "4.1.0", "lockfileVersion": 3, "requires": true, "packages": { "": { "name": "@jackyzha0/quartz", - "version": "4.0.11", + "version": "4.1.0", "license": "MIT", "dependencies": { "@clack/prompts": "^0.6.3", @@ -85,7 +85,8 @@ "typescript": "^5.0.4" }, "engines": { - "node": ">=18.14" + "node": ">=18.14", + "npm": ">=9.3.1" } }, "node_modules/@clack/core": { diff --git a/quartz.config.ts b/quartz.config.ts index f677a18f9572b..973c86a3e746b 100644 --- a/quartz.config.ts +++ b/quartz.config.ts @@ -3,13 +3,13 @@ import * as Plugin from "./quartz/plugins" const config: QuartzConfig = { configuration: { - pageTitle: "🪴 Quartz 4.0", + pageTitle: "🪴 kasra's notes", enableSPA: true, enablePopovers: true, analytics: { provider: "plausible", }, - baseUrl: "quartz.jzhao.xyz", + baseUrl: "brain.kasra.io", ignorePatterns: ["private", "templates", ".obsidian"], defaultDateType: "created", theme: {